This document presents research on using the Homotopy Analysis Method (HAM) to solve a time-fractional diffusion equation with a moving boundary condition. HAM is a semi-analytic technique used to solve nonlinear differential equations by generating a convergent series solution. The author applies HAM to obtain approximate analytic solutions for the concentration of a drug in a matrix and the diffusion front over time. Comparisons with exact solutions show good agreement for different parameter values. The author concludes that HAM can accurately predict drug distribution and the diffusion front in this problem.
He-Laplace method for special nonlinear partial differential equations (Alexander Decker)
This document presents the He-Laplace method for solving nonlinear partial differential equations. The method combines the Laplace transform, the homotopy perturbation method, and He's polynomials. It is shown that the He-Laplace method can easily handle nonlinear terms through the use of He's polynomials and provides better results than traditional methods. An example demonstrates the application of the He-Laplace method to a nonlinear parabolic-hyperbolic partial differential equation.
This document summarizes a presentation on quantifying uncertainty. It discusses basic probability notation including sample spaces, unconditional and conditional probabilities, and the semantics of propositions using full joint distributions. It also covers inference using full joint distributions, independence, Bayes' rule, and uses the Wumpus world as an example of acting under uncertainty.
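The joint-distribution machinery this summary mentions is easy to make concrete. A minimal sketch with a hypothetical two-variable joint table (the Cavity/Toothache names are illustrative, not taken from the slides), computing a conditional probability by enumeration and checking it against Bayes' rule:

```python
# Inference by enumeration over a full joint distribution, plus a check of
# Bayes' rule. The joint table is hypothetical, chosen only for illustration.
joint = {  # keys: (cavity, toothache)
    (True, True): 0.12, (True, False): 0.08,
    (False, True): 0.08, (False, False): 0.72,
}

def marginal(var_index, value):
    """Sum out all other variables: P(variable = value)."""
    return sum(p for k, p in joint.items() if k[var_index] == value)

def conditional(cavity, toothache):
    """P(Cavity = cavity | Toothache = toothache), by enumeration."""
    return joint[(cavity, toothache)] / marginal(1, toothache)

# Bayes' rule recovers the same number: P(C|T) = P(T|C) P(C) / P(T)
p_c, p_t = marginal(0, True), marginal(1, True)
bayes = (joint[(True, True)] / p_c) * p_c / p_t
assert abs(bayes - conditional(True, True)) < 1e-12
```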
Arthur Charpentier's presentation covered perspectives on predictive modeling. He discussed prediction versus estimation, parametric versus nonparametric models, linear models and least squares, modeling categorical variables, and prediction using covariates. Key points included defining prediction as estimating the expected value, providing confidence intervals to quantify uncertainty, using maximum likelihood to estimate parameters, and modeling conditional distributions based on covariates.
This document summarizes several methods for estimating copula densities from sample data in a nonparametric way, including using kernel density estimation with different types of kernels and variable transformations. It describes the standard kernel estimate, issues with it near boundaries, a mirror kernel estimate, using beta kernels, a probit transformation of variables, and improved probit transformation estimators that use local polynomial approximations. The goal is to find estimators that are consistent along the boundaries of the copula support and improve inference about the copula density.
The document discusses techniques for estimating quantiles and risk measures from sample data. It begins with an introduction to risk measures such as value-at-risk and expected shortfall. It then covers classical techniques for quantile estimation including parametric, semiparametric and nonparametric methods. Parametric methods assume a distribution and estimate parameters, while nonparametric methods do not assume a distribution and instead estimate the cumulative distribution function directly from the data. The document focuses on nonparametric quantile estimation using kernel methods and explores smoothing techniques to estimate quantiles from sample order statistics. It concludes with an agenda for presenting quantile estimation using beta kernels and a simulation-based study.
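As a baseline for the smoothed estimators the summary refers to, a nonparametric quantile estimate can simply interpolate between sample order statistics. A minimal sketch (the function name and interpolation rule are one common choice, not the slides'):

```python
# Nonparametric quantile estimation from order statistics, with linear
# interpolation between adjacent sorted values (one simple smoothing choice;
# kernel and beta-kernel estimators refine this idea).
def empirical_quantile(sample, p):
    """Estimate the p-quantile by interpolating the sorted sample."""
    xs = sorted(sample)
    n = len(xs)
    h = (n - 1) * p          # fractional position among the order statistics
    i = int(h)
    if i >= n - 1:
        return xs[-1]
    return xs[i] + (h - i) * (xs[i + 1] - xs[i])

data = [1.0, 3.0, 2.0, 5.0, 4.0]
print(empirical_quantile(data, 0.5))   # → 3.0
```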
The document discusses tail dependence in risk management using Archimedean copulas. It shows how Pickands-Balkema-de Haan's theorem can be used to model tail behavior and price excess-of-loss reinsurance contracts. It also discusses how extreme value theory can be extended from univariate to bivariate distributions to model the dependence structure between componentwise maxima and between variables given that one exceeds a threshold.
The document discusses using the programming language R for actuarial science applications. It presents R as a vector-based language suitable for working with life tables and performing actuarial calculations. Examples are given of how to model life contingencies like life expectancies, annuities, and insurance values using vectors and matrices in R. The document also discusses using R to fit prospective mortality models like the Lee-Carter model to data matrices.
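The slides do these calculations with R vectors; the same life-contingency arithmetic can be sketched in Python. The mortality rates `qx` below are made up purely for illustration:

```python
# Life-table calculations in the vectorized style the slides use in R.
# Hypothetical one-year death probabilities q_x; the last age is certain death.
qx = [0.01, 0.015, 0.02, 0.03, 1.0]

# survival probabilities k_p_0 = prod_{j<k} (1 - q_j)
px = [1.0]
for q in qx:
    px.append(px[-1] * (1 - q))

# curtate life expectancy: e_0 = sum_{k>=1} k_p_0
e0 = sum(px[1:])

# present value of a life annuity-due of 1 per year at interest rate i
i = 0.03
v = 1 / (1 + i)
annuity = sum(px[k] * v ** k for k in range(len(px)))
```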
This document discusses time series decomposition and forecasting methods. It begins with an overview of qualitative and quantitative forecasting techniques, including short and long term forecasting and regression methods. It then focuses on Box-Jenkins ARIMA time series modeling, demonstrating decomposition of a time series into trend, seasonal, and random components. Forecasting involves modeling these components and generating predictions. Practical issues with forecasting in Excel are also mentioned. Overall the document provides an introduction to time series analysis and forecasting techniques.
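The decomposition step can be sketched directly: estimate the trend with a centered moving average, then average the detrended values by season. This is the classical additive scheme applied to a synthetic series, not the document's own data:

```python
# Classical additive decomposition: centered moving-average trend, then
# per-season means of the detrended series. Synthetic data: linear trend
# plus a period-4 seasonal pattern, purely for illustration.
period = 4
series = [10 + 0.5 * t + [3, -1, -2, 0][t % period] for t in range(24)]

def centered_ma(xs, m):
    """2 x m centered moving average (window end points get weight 1/2)."""
    half = m // 2
    out = [None] * len(xs)
    for t in range(half, len(xs) - half):
        w = xs[t - half:t + half + 1]
        out[t] = (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / m
    return out

trend = centered_ma(series, period)

# seasonal component: average detrended values at each position in the cycle
by_season = {j: [] for j in range(period)}
for t, tr in enumerate(trend):
    if tr is not None:
        by_season[t % period].append(series[t] - tr)
seasonal = {j: sum(v) / len(v) for j, v in by_season.items()}
# remainder = series - trend - seasonal (zero here by construction)
```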
This document discusses distorting risk measures and copulas in actuarial sciences. It introduces distorted risk measures as expectations under a distorted probability measure induced by a distortion function. Common distortion functions and their associated risk measures are presented, including Value-at-Risk, Tail Value-at-Risk, and the proportional hazard measure. Archimedean copulas are defined using a generator function and can model dependence through a latent factor. Hierarchical and distorted Archimedean copulas are discussed as ways to flexibly model multivariate dependence structures.
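The "expectation under a distorted probability" reading has a direct empirical version: weight the order statistics by increments of the distorted survival function. A sketch with a hypothetical proportional-hazard distortion g(s) = s^0.7 and made-up losses:

```python
# Empirical distortion risk measure: order statistics weighted by increments
# of g(S(x)), where S is the empirical survival function. The distortion
# g(s) = s**0.7 (proportional hazard) and the losses are illustrative.
def distorted_mean(sample, g):
    xs = sorted(sample)
    n = len(xs)
    return sum(
        x * (g((n - i) / n) - g((n - i - 1) / n))
        for i, x in enumerate(xs)
    )

def identity(s):
    return s

def prop_hazard(s):
    return s ** 0.7   # concave distortion: inflates tail probabilities

data = [1.0, 2.0, 3.0, 10.0]   # hypothetical losses
# identity recovers the plain mean; the concave distortion loads the tail
```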
This document discusses quantile estimation techniques, including parametric, semiparametric, and nonparametric approaches. Parametric estimation assumes a distribution like Gaussian and estimates quantiles based on parameters of that distribution. Semiparametric estimation uses extreme value theory to model upper tails with a generalized Pareto distribution. Nonparametric estimation estimates quantiles directly from the data without assuming a particular distribution. The document presents several techniques for quantile estimation and compares their performance.
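The parametric/nonparametric contrast is easy to see side by side. A sketch on simulated Gaussian data (sample size and seed are arbitrary choices for illustration):

```python
# Parametric vs nonparametric quantile estimates on the same sample.
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(10_000)]

p = 0.95
# parametric: assume a Gaussian model and plug in estimated parameters
parametric = NormalDist(mean(sample), stdev(sample)).inv_cdf(p)
# nonparametric: read the quantile off the sorted sample directly
nonparametric = sorted(sample)[int(p * len(sample)) - 1]
# both approximate the true N(0,1) quantile, about 1.645
```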
The document discusses unifying standard multivariate copula families that exhibit tail dependence properties. It begins with an overview of common copula families, including elliptical distributions (e.g. normal or spherical), Archimedean copulas, and extreme value distributions. It then discusses the concept of tail dependence, including tail indexes and limiting distributions. The goal is to provide a framework for understanding relationships between different copula families in terms of their tail behavior.
This document discusses distorting probabilities in actuarial sciences. It introduces concepts like distorted risk measures, which can be seen as expectations under a distorted probability measure induced by a distortion function. Distorted risk measures include value-at-risk and proportional hazard measures. Archimedean copulas are introduced as a way to model dependence between risks, and can be distorted using distortion functions to alter the level of dependence. Hierarchical Archimedean copulas are also discussed as a way to model nested dependence structures.
This document discusses kernel-based estimation methods for inequality indices and risk measures. It begins with an overview of stochastic dominance and related indices like first-order, convex, and second-order stochastic dominance. It then discusses nonparametric estimation of densities and copula densities using kernel methods. Specifically, it proposes using beta kernels and transformed kernels to improve estimation at the boundaries. The document explores combining these approaches and using mixtures of distributions like beta distributions within the kernels. It concludes by discussing applications to heavy-tailed distributions.
This document discusses several nonparametric methods for estimating copula densities from data, which are useful for modeling multivariate dependence. It first provides background on copulas and density estimation. It then describes several techniques for handling boundary issues that arise when estimating densities supported on [0,1], including the mirror image method, transformed kernels, beta kernels, and averaging histograms. Examples are given comparing the performance of these different approaches. The goal is to provide flexible, data-driven estimates of copula densities without imposing a parametric copula model.
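Beta kernels are one of the boundary fixes listed above: since the kernel itself is a density on [0,1], no mass leaks outside the support. A one-dimensional sketch (the copula setting uses a product of such kernels in each coordinate; bandwidth and sample are illustrative):

```python
# Chen-style beta-kernel density estimate on [0, 1].
import math
import random

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

def beta_kernel_density(data, u, h=0.05):
    """At evaluation point u, the kernel is a Beta(u/h + 1, (1-u)/h + 1)
    density evaluated at the data, so it is supported on [0, 1] by design."""
    a, b = u / h + 1, (1 - u) / h + 1
    return sum(beta_pdf(x, a, b) for x in data) / len(data)

random.seed(0)
uniform_sample = [random.random() for _ in range(5_000)]
# true density is 1 on [0, 1], including at the boundary u = 0
```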
This document summarizes a presentation on machine learning and data science techniques for actuaries using R. It introduces classification models where the target variable is binary, using logistic regression. It provides an example using medical data to predict patient survival. The document discusses model interpretation, predicted probabilities, and using thresholds to classify observations. Visualization of higher dimensional classification is also briefly mentioned.
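The scoring-and-threshold step can be sketched with a fitted logistic model; the coefficients below are invented, not the ones estimated from the medical data in the slides:

```python
# Logistic-regression scoring and threshold classification.
import math

# hypothetical fitted coefficients: intercept, age, treatment indicator
beta = [-1.0, 0.04, 0.8]

def predict_proba(x):
    """P(Y = 1 | x) under the logistic model: sigmoid of the linear score."""
    z = beta[0] + sum(b * v for b, v in zip(beta[1:], x))
    return 1 / (1 + math.exp(-z))

def classify(x, threshold=0.5):
    """Turn a predicted probability into a class label via a cutoff."""
    return int(predict_proba(x) >= threshold)

p = predict_proba([50, 1])   # age 50, treated: z = 1.8, p ≈ 0.86
```

Raising the threshold trades sensitivity for specificity; the same observation can flip from class 1 to class 0 as the cutoff moves.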
The document discusses Archimax copulas and other copula families. It begins with an overview of copulas in general and defines them in dimension 2 and in dimensions greater than 2. It then discusses standard copula families such as the independence copula and the comonotonic copula. It introduces elliptical and spherical distributions, which give rise to elliptical copulas. Finally, it defines Archimax copulas and discusses their properties in dimension 2 and above.
This document discusses regression with frailty in survival analysis using the Cox proportional hazards model. It introduces survival analysis concepts like the hazard function and survival function. It then describes how to incorporate frailty, a random effect, into the Cox model to account for clustering in survival times. The Newton-Raphson method is used to estimate model parameters by maximizing the penalized partial likelihood. A simulation study applies this approach to data on infections in kidney patients.
This document provides an overview and agenda for a master's level course on probability and statistics. It covers key topics like statistical models, probability distributions, conditional distributions, convergence theorems, sampling, confidence intervals, decision theory, and testing procedures. Examples of common probability distributions and functions are also presented, including the cumulative distribution function, probability density function, independence, and conditional independence. Additional references for further reading are included.
This document discusses various methods for estimating normalizing constants that arise when evaluating integrals numerically. It begins by noting there are many computational methods for approximating normalizing constants across different communities. It then lists the topics that will be covered in the upcoming workshop, including discussions on estimating constants using Monte Carlo methods and Bayesian versus frequentist approaches. The document provides examples of estimating normalizing constants using Monte Carlo integration, reverse logistic regression, and Xiao-Li Meng's maximum likelihood estimation approach. It concludes by discussing some of the challenges in bringing a statistical framework to constant estimation problems.
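The plainest Monte Carlo estimate of a normalizing constant averages the unnormalized density over a proposal; the reverse-logistic and maximum-likelihood estimators the document covers refine this. A sketch for Z = ∫ exp(−x²/2) dx, whose true value is √(2π) ≈ 2.5066:

```python
# Plain Monte Carlo estimate of a normalizing constant, using a uniform
# proposal on [a, b]. A generic sketch, not the workshop's estimators.
import math
import random

random.seed(42)
a, b, n = -6.0, 6.0, 200_000
total = sum(math.exp(-random.uniform(a, b) ** 2 / 2) for _ in range(n))
z_hat = (b - a) * total / n   # (b - a) * E[f(U)] approximates the integral
# z_hat should be close to math.sqrt(2 * math.pi)
```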
The document summarizes quantile and expectile regression models. It defines quantiles and expectiles, and how they are estimated from sample data. It also discusses quantile and expectile regression, including extensions for fixed effects and random effects panels. Key points include:
- Quantiles minimize an asymmetric absolute loss function, while expectiles minimize an asymmetric squared loss function.
- Quantile regression parameters are estimated by minimizing the weighted sum of losses. Expectile regression parameters are estimated similarly using weighted squared losses.
- Panel data models include penalized quantile regression with fixed effects, and quantile/expectile regression with random effects and their asymptotic properties.
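The two loss functions in the bullets above can be checked numerically. A minimal sketch: grid search over the data points for the quantile (the piecewise-linear loss attains its minimum at one), and fixed-point iteration of the first-order condition (a weighted mean) for the expectile:

```python
# Quantiles minimize an asymmetric absolute loss; expectiles minimize an
# asymmetric squared loss.
def quantile_loss(m, xs, tau):
    return sum(tau * (x - m) if x >= m else (1 - tau) * (m - x) for x in xs)

def quantile(xs, tau):
    return min(xs, key=lambda m: quantile_loss(m, xs, tau))

def expectile(xs, tau, iters=100):
    m = sum(xs) / len(xs)                     # tau = 0.5 gives back the mean
    for _ in range(iters):
        w = [tau if x >= m else 1 - tau for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

data = [1.0, 2.0, 3.0, 4.0, 100.0]
# the median resists the outlier; the 0.5-expectile (the mean) does not
```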
This document summarizes Arthur Charpentier's presentation on econometrics and statistical learning techniques. It discusses different perspectives on modeling data, including the causal story, conditional distribution story, and explanatory data story. It also covers topics like high dimensional data, computational econometrics, generalized linear models, goodness of fit, stepwise procedures, and testing in high dimensions. The presentation provides an overview of various statistical and econometric modeling techniques.
1) The document discusses probit transformation for nonparametric kernel estimation of copulas. It introduces a standard kernel estimator for copulas that is inconsistent on boundaries.
2) It then presents a "naive" probit transformation kernel copula density estimator that transforms data to standard normal using the probit function to address boundary issues.
3) It further improves upon this by introducing local log-linear and log-quadratic approximations for the transformed density, yielding two new estimators with better asymptotic properties.
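The "naive" estimator of step 2 is short to write down. A one-dimensional sketch (the copula case is bivariate, and the local log-polynomial refinements of step 3 are not included; bandwidth and sample are illustrative):

```python
# Naive probit-transformed kernel density estimate on (0, 1): push the data
# through the probit (inverse normal CDF), run ordinary Gaussian KDE on the
# real line, then map the density back with the Jacobian of the transform.
import random
from statistics import NormalDist

std = NormalDist()

def probit_kde(data, u, h=0.15):
    s = std.inv_cdf(u)                       # evaluation point on the real line
    ts = [std.inv_cdf(x) for x in data]      # probit-transformed data
    f_hat = sum(std.pdf((s - t) / h) for t in ts) / (len(data) * h)
    return f_hat / std.pdf(s)                # Jacobian of the back-transform

random.seed(7)
uniform_sample = [random.random() for _ in range(5_000)]
# for uniform data the true density is 1 everywhere, boundary region included
```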
This document summarizes a presentation on testing for volatility transmission between international markets using high frequency data. It discusses using realized volatility to estimate true latent volatility processes while controlling for jumps and microstructure noise. The presentation focuses on testing for transmission of only extreme or large volatility values between markets. A quantile model is used to define extreme periods, and cross-covariances are computed to test for non-causality between markets' extreme periods using Ljung-Box statistics. Simulations are performed based on a three-regime smooth-transition model to assess the test in finite samples.
This document discusses granularity issues that arise when analyzing climatic time series data. It begins by discussing the concept of the "period of return" in the context of climate change. It then examines models for flood event data that account for the duration and timing of individual flood events. The document proposes a two-duration model for flood data that is analogous to models used for high-frequency financial data. Finally, it discusses long-range dependence and seasonality in climatic variables like wind speed, and methods for estimating return periods from long memory models.
This document provides an introduction to multivariate and dynamic risk measures. It begins with an overview of probabilistic and measurable spaces, including finite and infinite dimensional probability spaces. It then discusses univariate functional analysis and convexity, including definitions of convex functions and the Legendre-Fenchel transformation. Several examples are provided to illustrate these concepts. The document aims to establish the necessary foundations for understanding multivariate and dynamic risk measures.
This document discusses compactness estimates for nonlinear partial differential equations (PDEs), specifically Hamilton-Jacobi equations. It provides background on Kolmogorov entropy measures of compactness and covers recent results estimating the Kolmogorov entropy of solutions to scalar conservation laws and Hamilton-Jacobi equations, showing it is on the order of 1/ε. The document outlines applications of these estimates and open questions regarding extending the estimates to non-convex fluxes and non-uniformly convex Hamiltonians.
"reflections on the probability space induced by moment conditions with impli...Christian Robert
This document discusses using moment conditions to perform Bayesian inference when the likelihood function is intractable or unknown. It outlines some approaches that have been proposed, including approximating the likelihood using empirical likelihood or pseudo-likelihoods. However, these approaches do not guarantee the same consistency as a true likelihood. Alternative approximate Bayesian methods are also discussed, such as Approximate Bayesian Computation, Integrated Nested Laplace Approximation, and variational Bayes. The empirical likelihood method constructs a likelihood from generalized moment conditions, but its use in Bayesian inference requires further analysis of consistency in each application.
This document provides an introduction to homotopy type theory, including:
- Types represent objects, propositions, functions, and proofs.
- Equalities are represented as paths between types, with properties like reflexivity and transitivity.
- Dependent types allow the output of a function to depend on the input.
- Identity types represent proofs of equality between elements of a type.
- Function extensionality and univalence are axioms in homotopy type theory.
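The bullet points can be made concrete in a proof assistant. A Lean 4 sketch (the slides work in full homotopy type theory, where `Eq` is the identity type and paths carry more structure than Lean's):

```lean
-- Lean 4 sketch of the bullet points above.

-- reflexivity: the constant path at a
example (a : Nat) : a = a := rfl

-- symmetry and transitivity: inverting and composing paths
example (a b : Nat) (p : a = b) : b = a := p.symm
example (a b c : Nat) (p : a = b) (q : b = c) : a = c := p.trans q

-- a dependent type: the result type depends on the input value
def Vec (α : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => α × Vec α n
```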
This document provides an agenda and materials for an Earth Day workshop focused on planning Earth Day events. The agenda covers introductions, an event overview, educator resources, ideas for expanding events, event planning, and resource gathering. The materials section provides event timelines, outreach strategies, volunteer engagement, material options, classroom lessons, story/movie suggestions, and evaluation resources to help attendees plan a successful Earth Day event.
This document contains definitions and proofs about natural numbers, functions, and theorems in Coq. It defines natural numbers as inductive types, defines addition as a recursive function, and proves theorems like 1+1=2. It also contains definitions for monads and proofs about monad bind being associative in Coq.
Discretization of a Mathematical Model for Tumor-Immune System Interaction wi...mathsjournal
The present study deals with the analysis of a Lotka-Volterra model describing competition between tumor and immune cells. The model consists of differential equations with piecewise constant arguments and is based on a metamodel constructed by Stepanova. Using the method of reduction to discrete equations, a system of difference equations is obtained from the system of differential equations. To derive local and global stability conditions for the positive equilibrium point of the system, the Schur-Cohn criterion and a constructed Lyapunov function are used. Moreover, it is shown that periodic solutions occur as a consequence of Neimark-Sacker bifurcation.
Maximum likelihood estimation of regularisation parameters in inverse problem...Valentin De Bortoli
This document discusses an empirical Bayesian approach for estimating regularization parameters in inverse problems using maximum likelihood estimation. It proposes the Stochastic Optimization with Unadjusted Langevin (SOUL) algorithm, which uses Markov chain sampling to approximate gradients in a stochastic projected gradient descent scheme for optimizing the regularization parameter. The algorithm is shown to converge to the maximum likelihood estimate under certain conditions on the log-likelihood and prior distributions.
This document presents a modified q-homotopy analysis method (mq-HAM) for solving high-order nonlinear partial differential equations. The mq-HAM improves upon the standard q-HAM by avoiding repeated computations at each iteration step, making it more efficient. As an illustrative example, the method is applied to solve second- and third-order nonlinear cases. The key steps of the mq-HAM include transforming the high-order PDE into a system of first-order equations, then obtaining a series solution through a zero-order deformation equation and its derivatives with respect to an embedded parameter.
In this paper, a modified q-homotopy analysis method (mq-HAM) is proposed for solving high-order nonlinear partial differential equations. This method improves the convergence of the series solution and overcomes the computational difficulty encountered in the q-HAM, so it is more accurate than the nHAM proposed in Hassan and El-Tawil and in Saberi-Nik and Golchaman. Second- and third-order cases are solved as illustrative examples of the proposed method.
A semi analytic method for solving two-dimensional fractional dispersion equa...Alexander Decker
This document presents a semi-analytic method called the modified decomposition method for solving two-dimensional fractional dispersion equations. The method is applied to solve a two-dimensional fractional dispersion equation subject to initial and boundary conditions. The numerical results obtained from the modified decomposition method are shown to closely match the exact solution, demonstrating the accuracy of this approach. The method provides an efficient means of obtaining analytical solutions to fractional differential equations.
The document applies the variational iteration method (VIM) to solve linear and nonlinear ordinary differential equations (ODEs) with variable coefficients. It emphasizes the power of the method by using it to solve a variety of ODE models of different orders and coefficients. The document also uses VIM to solve four scientific models - the hybrid selection model, Thomas-Fermi equation, Kidder equation for unsteady gas flow through porous media, and the Riccati equation. The VIM provides efficient iterative approximations for both analytic solutions and numeric simulations of real-world applications in science and engineering.
International Journal of Mathematics and Statistics Invention (IJMSI) inventionjournals
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews across the whole field of Mathematics and Statistics, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides examples of time series data and introduces the AR(1) model. The document describes an algorithm for calculating a bootstrap confidence interval for forecasting from an AR(1) model. It then discusses a simulation study comparing empirical coverage rates of bootstrap confidence intervals under different parameters. Finally, it applies the bootstrap method to forecasting Gross National Product growth, comparing the results to a parametric approach.
This document discusses using bootstrap methods to create confidence intervals for time series forecasts. It provides background on time series models and the autoregressive (AR) process. It then presents an algorithm for calculating a bootstrap confidence interval for forecasts from an AR(1) model. A simulation study compares coverage rates for bootstrap confidence intervals under different parameters. Finally, the method is applied to US Gross National Product data to forecast and construct confidence intervals.
Compatible discretizations in our hearts and mindsMarie E. Rognes
This document discusses a total pressure augmented formulation for simulating fluid flow in porous media, such as modeling cerebral fluid flow in the brain. The formulation introduces total pressure as a variable to overcome issues with Poisson locking in the incompressible limit. The formulation results in a coupled system of equations that describes solid displacement, total pressure, and fluid pressures. Finite element methods are developed using this formulation that achieve optimal convergence rates, including in the incompressible limit, using Taylor-Hood elements. Numerical experiments demonstrate the improved convergence rates over standard formulations.
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...Chiheb Ben Hammouda
The document describes a multilevel hybrid split-step implicit tau-leap method for simulating stochastic reaction networks. It begins with background on modeling biochemical reaction networks stochastically. It then discusses challenges with existing simulation methods like the chemical master equation and stochastic simulation algorithm. The document introduces the split-step implicit tau-leap method as an improvement over explicit tau-leap for stiff systems. It proposes a multilevel Monte Carlo estimator using this method to efficiently estimate expectations of observables with near-optimal computational work.
This document discusses deconvolution methods and their applications in analyzing gamma-ray spectra. It provides background on several deconvolution algorithms, including Tikhonov-Miller regularization, Van Cittert, Janson, Gold, Richardson-Lucy, and Muller algorithms. These algorithms aim to improve spectral resolution by mathematically removing instrument smearing effects. They are based on solving systems of linear equations using direct or iterative methods, with regularization techniques to produce stable solutions. Examples are given to illustrate the sensitivity of non-regularized solutions to noise and the need for regularization.
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,... SIAMSEAS2015
1. The application of Homotopy Analysis Method for the solution of time-fractional diffusion equation with a moving boundary
Ogugua N. Onyejekwe
Department of Mathematics
Indian River State College
39th Annual SIAM Southeastern Atlantic Section Conference
March 20-22, 2015
Ogugua N. Onyejekwe Homotopy Analysis Method
2. Abstract
It is difficult to obtain exact solutions to most moving boundary problems. In this presentation we employ the Homotopy Analysis Method (HAM) to solve a time-fractional diffusion equation with a moving boundary condition.
HAM is a semi-analytic technique used to solve ordinary, partial, algebraic, fractional, and delay differential equations. The method employs the concept of homotopy from topology to generate a convergent series solution for nonlinear systems. The homotopy Maclaurin series is utilized to deal with nonlinearities in the system.
3. Abstract
HAM was first developed by Dr. Shijun Liao in 1992 in his PhD dissertation at Shanghai Jiao Tong University. He further modified the method in 1997 by introducing a convergence-control parameter h, which helps ensure convergence of the series solution for both linear and nonlinear differential equations.
4. Abstract
There are several advantages to using HAM [4]:
- It is independent of any small/large physical parameters.
- When parameters are chosen well, the results obtained show high accuracy because of the convergence-control parameter h.
- It is computationally efficient and has a strong rate of convergence.
- It offers flexibility in the choice of base functions and the initial guess of the solution.
5. Parameters
s(t) - diffusion front
C0 - initial concentration of drug distributed in the matrix
Cs - solubility of the drug in the matrix
C(x, t) - concentration of drug in the matrix
℘ - diffusivity of the drug in the matrix (assumed to be constant)
D^α_t - Caputo derivative
R - scale of the polymer matrix
6. Problem Definition
Figure 1: Concentration profiles. The first picture shows the initial drug loading; the second shows the concentration profile of the drug in the matrix at time t. [10]
7. Assumptions
We consider only the early stages of drug loss, before the diffusion front moves close to R, and assume that C0 > Cs.
A perfect sink is assumed.
8. Introduction
Given the domain
W_T = {(ξ, t) : 0 < ξ < s(t), 0 < α ≤ 1, t > 0}, (1)
the following problem is considered:
D^α_t C(ξ, t) = ℘ ∂²C(ξ, t)/∂ξ², (2)
with the initial condition
C(ξ, 0) = 0, (3)
and the following boundary conditions
C(s(1), 1) = k1, C(s(t), t) = Cs, t > 0, (4)
where k1 is any constant.
9. (C0 − Cs) D^α_t s(t) = ℘ ∂C(ξ, t)/∂ξ, (ξ = s(t), t > 0), (5)
s(1) = k2, (6)
where k2 is a constant that depends on the value of α, and D^α_t is the Caputo derivative, defined as
D^α_t f(t) = (1/Γ(n − α)) ∫₀ᵗ (t − τ)^(n−α−1) f⁽ⁿ⁾(τ) dτ, (α > 0), (7)
for n − 1 < α < n, n ∈ ℕ, where Γ(·) denotes the Gamma function.
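As a sanity check on definition (7), the Caputo derivative of a power function can be computed numerically and compared with the closed-form power rule D^α_t t^p = Γ(p+1)/Γ(p−α+1) t^(p−α). The sketch below is ours, not part of the presentation: it assumes 0 < α < 1 (so n = 1) and takes f(t) = t². The substitution v = (t − τ)^(1−α) removes the weak endpoint singularity, so a plain midpoint rule converges quickly.

```python
# Numerical check of the Caputo derivative (7) for 0 < alpha < 1 (n = 1),
# using f(t) = t^2 and the power rule D^a t^p = G(p+1)/G(p-a+1) t^(p-a).
from math import gamma

def caputo(fprime, t, alpha, n=20000):
    """Caputo derivative of order 0 < alpha < 1 at time t, given f'.

    Uses the substitution v = (t - tau)^(1 - alpha), under which
    (t - tau)^(-alpha) d tau = dv / (1 - alpha), so the integrand is smooth.
    """
    vmax = t ** (1.0 - alpha)
    h = vmax / n
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * h                      # midpoint in v
        tau = t - v ** (1.0 / (1.0 - alpha))   # back-substitute
        total += fprime(tau)
    return total * h / ((1.0 - alpha) * gamma(1.0 - alpha))

t, alpha = 2.0, 0.5
numeric = caputo(lambda tau: 2.0 * tau, t, alpha)   # f'(tau) = 2*tau
exact = gamma(3.0) / gamma(3.0 - alpha) * t ** (2.0 - alpha)
print(abs(numeric - exact) < 1e-5)  # True
```

For α = 0.5 and t = 2, both expressions reduce to (8/3)t^(3/2)/√π ≈ 4.2554, so the midpoint rule agrees with the power rule to well below the tolerance.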
10. Reducing Governing Equations to Dimensionless Variables
The reduced dimensionless variables are defined as
x = ξ/R, τ = (℘/R²)^(1/α) t, u = C/Cs, S(τ) = s(t)/R. (8)
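The scaling (8) can be applied directly to dimensional data before solving. A minimal sketch (the function name and the parameter values are our own illustrative placeholders):

```python
# Converting dimensional variables (xi, t, C) to the dimensionless
# (x, tau, u) of equation (8); wp stands for the diffusivity symbol.
def to_dimensionless(xi, t, C, wp, R, Cs, alpha):
    """Return (x, tau, u) from dimensional (xi, t, C) via the scaling (8)."""
    x = xi / R
    tau = (wp / R ** 2) ** (1.0 / alpha) * t
    u = C / Cs
    return x, tau, u

# For alpha = 1 this reduces to the classical diffusion scaling tau = wp*t/R^2.
x, tau, u = to_dimensionless(xi=0.5, t=2.0, C=0.25, wp=0.25, R=1.0, Cs=0.5, alpha=1.0)
print(x, tau, u)  # 0.5 0.5 0.5
```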
11. Reducing Governing Equations to Dimensionless Variables
The governing equation (2), subject to conditions (3)-(5), can be reduced to the dimensionless form
D^α_τ u(x, τ) = ∂²u(x, τ)/∂x², (0 < x < S(τ), τ > 0), (9)
u(S(1), 1) = 1, (10)
where S(1) varies for different values of α and η,
u(x, τ) = 1, (x = S(τ), τ > 0), (11)
∂u(x, τ)/∂x = η D^α_τ S(τ), (x = S(τ), τ > 0), (12)
where η = (C0 − Cs)/Cs.
12. Solution by HAM
To solve equation (9) by the homotopy analysis method, the initial guess for u(x, τ) is chosen as
u0(x, τ) = (a0)⁻¹ x τ^(γ1), (13)
where a0 = [Γ(1 − α/2) / (η Γ(1 + α/2))]^(1/2) and γ1 = −α/2.
The initial guess for the diffusion front is chosen as
S0 = a0 τ^(α/2). (14)
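The initial guesses (13) and (14) are easy to evaluate, and they satisfy u0(S0(τ), τ) = 1 identically, consistent with condition (11). A minimal sketch (the helper names a0, u0, S0 are ours):

```python
# Evaluating the initial guesses (13)-(14).  For alpha = 1 and eta = 0.5,
# a0 = [Gamma(1/2) / (0.5 * Gamma(3/2))]^(1/2) = [sqrt(pi)/(sqrt(pi)/4)]^(1/2) = 2.
from math import gamma

def a0(alpha, eta):
    """Constant a0 in the initial guess (13)."""
    return (gamma(1.0 - alpha / 2.0) / (eta * gamma(1.0 + alpha / 2.0))) ** 0.5

def u0(x, tau, alpha, eta):
    """Initial guess u0(x, tau) = a0^(-1) * x * tau^(-alpha/2)."""
    return x * tau ** (-alpha / 2.0) / a0(alpha, eta)

def S0(tau, alpha, eta):
    """Initial guess for the diffusion front, S0 = a0 * tau^(alpha/2)."""
    return a0(alpha, eta) * tau ** (alpha / 2.0)

print(round(a0(1.0, 0.5), 12))                    # 2.0
print(u0(S0(1.0, 1.0, 0.5), 1.0, 1.0, 0.5))       # 1.0, matching condition (11)
```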
13. Solution by HAM
The auxiliary linear operator is
L[φ(x, τ; q)] = ∂²φ(x, τ; q)/∂x², (15)
with the property
L[k] = 0, (16)
where k is an integration constant and φ(x, τ; q) is an unknown function.
The nonlinear operator is given as
N[φ(x, τ; q)] = ∂²φ(x, τ; q)/∂x² − ∂^α φ(x, τ; q)/∂τ^α. (17)
14. Solution by HAM
By means of HAM, as defined by Liao, we construct the zeroth-order deformation equation
(1 − q) L[φ(x, τ; q) − u0(x, τ)] = q h N[φ(x, τ; q)], (18)
where q ∈ [0, 1] is the embedding parameter, h ≠ 0 is the convergence-control parameter, and u0(x, τ) is the initial guess of u(x, τ).
15. Solution by HAM
Expanding φ(x, τ; q) in a Taylor series with respect to q, we obtain
φ(x, τ; q) = u0(x, τ) + Σ_{m=1}^{∞} u_m(x, τ) q^m. (19)
Clearly, when q = 0 and q = 1, equation (19) becomes
φ(x, τ; 0) = u0(x, τ), φ(x, τ; 1) = u(x, τ). (20)
If the auxiliary linear operator L, the initial guess u0(x, τ), and the convergence-control parameter h are properly chosen so that the series (19) converges at q = 1, then u(x, τ) will be one of the solutions of the problem under consideration.
16. Solution by HAM
Differentiating the zeroth-order deformation equation (18) m times with respect to q, dividing by m!, and finally setting q = 0, we obtain the mth-order deformation equation
L[u_m(x, τ) − χ_m u_{m−1}(x, τ)] = h V_m(→u_{m−1}(x, τ)), (21)
where
χ_m = 0 for m ≤ 1, χ_m = 1 for m > 1,
and
V_m(→u_{m−1}(x, τ)) = ∂²u_{m−1}(x, τ)/∂x² − ∂^α u_{m−1}(x, τ)/∂τ^α. (22)
17. Solution by HAM
We have
u_m(x, τ) = χ_m u_{m−1}(x, τ) + h L⁻¹[V_m(→u_{m−1}(x, τ))] + k, (23)
where the integration constant k is determined by the boundary condition (10).
From equation (23), the values of u_m(x, τ) for m = 1, 2, 3, ... can be obtained and the series solution constructed.
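One step of the recursion (21)-(23) can be sketched by storing each solution term c·x^k·τ^g as a dictionary entry. This is our own illustrative implementation, not code from the presentation: it assumes the power rule ∂^α/∂τ^α [τ^g] = Γ(g+1)/Γ(g+1−α)·τ^(g−α) for the fractional time derivative of power-function terms (arguments of Γ are assumed to stay off the non-positive integers), and it drops the integration constant k, which in the actual method is fixed by condition (10).

```python
# One m-th order deformation step: a term c*x^k*tau^g is stored as {(k, g): c}.
from math import gamma

def V(u, alpha):
    """V_m of (22): d^2 u/dx^2 - d^alpha u/d tau^alpha, term by term."""
    out = {}
    for (k, g), c in u.items():
        if k >= 2:  # second x-derivative of x^k
            key = (k - 2, g)
            out[key] = out.get(key, 0.0) + c * k * (k - 1)
        key = (k, g - alpha)  # fractional power rule in tau (our assumption)
        out[key] = out.get(key, 0.0) - c * gamma(g + 1.0) / gamma(g + 1.0 - alpha)
    return out

def Linv(u):
    """L^(-1): integrate twice in x (integration constant k omitted)."""
    return {(k + 2, g): c / ((k + 1) * (k + 2)) for (k, g), c in u.items()}

def step(u_prev, alpha, h, chi):
    """u_m = chi_m*u_(m-1) + h*L^(-1)[V_m(u_(m-1))], eq. (23) without k."""
    out = {key: chi * c for key, c in u_prev.items() if chi}
    for key, c in Linv(V(u_prev, alpha)).items():
        out[key] = out.get(key, 0.0) + h * c
    return out

# u0 = a0^(-1)*x*tau^(-alpha/2) for alpha = 1, eta = 0.5 (so a0 = 2), h = -1
alpha, h = 1.0, -1.0
u0 = {(1, -alpha / 2.0): 0.5}
u1 = step(u0, alpha, h, chi=0)   # chi_1 = 0
print(u1)  # a single term: -(1/24) * x^3 * tau^(-3/2)
```

Hand-checking the single step: ∂²u0/∂x² vanishes, the power rule gives Γ(1/2)/Γ(−1/2) = −1/2 on the τ-part, and double integration in x divides by 6, so u1 = −x³τ^(−3/2)/24 for these parameters.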
18. Solution by HAM
The approximate analytic solution is obtained by truncating the series
u(x, τ) = Σ_{i=0}^{m} u_i(x, τ). (24)
Equation (24) contains the convergence-control parameter h, which determines the convergence region and rate of the homotopy-series solution.
The convergence-control parameter h is obtained by setting (u(S(1), 1))_HAM = (u(S(1), 1))_exact.
The diffusion front S(τ) is obtained by setting (u(S(τ), τ))_HAM = 1.
19. Comparison between Approximate and Exact Solutions when x = 0.75
The exact solutions for u(x, τ) and S(τ) are given as follows [10]:
u(x, τ) = H Σ_{n=0}^{∞} (x/τ^(α/2))^(2n+1) / [(2n+1)! Γ(1 − (2n+1)α/2)], (25)
S(τ) = p τ^(α/2), (26)
where H and p are constants.
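The terms of the series (25) shrink factorially, so a short partial sum suffices for comparison plots. A small sketch (the function name is ours; H is the undetermined constant from [10], set to 1 here purely to examine convergence of the series):

```python
# Partial sums of the exact-solution series (25) at x = 0.75.
from math import gamma, factorial

def u_exact_series(x, tau, alpha, H=1.0, terms=25):
    """Partial sum of (25): H * sum over n of
    (x/tau^(alpha/2))^(2n+1) / [(2n+1)! * Gamma(1 - (2n+1)*alpha/2)]."""
    z = x / tau ** (alpha / 2.0)
    total = 0.0
    for n in range(terms):
        m = 2 * n + 1
        total += z ** m / (factorial(m) * gamma(1.0 - m * alpha / 2.0))
    return H * total

# Successive partial sums agree almost immediately (factorial decay of terms)
a = u_exact_series(0.75, 1.0, 0.5, terms=15)
b = u_exact_series(0.75, 1.0, 0.5, terms=25)
print(abs(a - b) < 1e-12)  # True
```

Note that for odd m = 2n+1 the Gamma argument 1 − mα/2 never hits a non-positive integer for the α values used here (α = 1, 0.75, 0.5), so every term is well defined.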
20. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 2: Drug distribution in tissue when η = 0.5 and α = 1; u(x, τ)_HAM is shown in red and u(x, τ)_EXACT in green.
21. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 3: Diffusion front in tissue when η = 0.5 and α = 1; S(τ)_HAM is shown in red and S(τ)_EXACT in green.
22. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 4: Drug distribution in tissue when η = 0.5 and α = 0.75; u(x, τ)_HAM is shown in red and u(x, τ)_EXACT in green.
23. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 5: Diffusion front in tissue when η = 0.5 and α = 0.75; S(τ)_HAM is shown in red and S(τ)_EXACT in green.
24. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 6: Drug distribution in tissue when η = 0.5 and α = 0.5; u(x, τ)_HAM is shown in red and u(x, τ)_EXACT in green.
25. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 7: Diffusion front in tissue when η = 0.5 and α = 0.5; S(τ)_HAM is shown in red and S(τ)_EXACT in green.
26. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 8: Drug distribution in tissue when η = 1 and α = 0.5; u(x, τ)_HAM is shown in red and u(x, τ)_EXACT in green.
27. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 9: Diffusion front in tissue when η = 1 and α = 0.5; S(τ)_HAM is shown in red and S(τ)_EXACT in green.
28. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 10: Drug distribution in tissue when η = 3 and α = 0.5; u(x, τ)_HAM is shown in red and u(x, τ)_EXACT in green.
29. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 11: Diffusion front in tissue when η = 3 and α = 0.5; S(τ)_HAM is shown in red and S(τ)_EXACT in green.
30. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 12: Drug distribution in tissue when η = 9 and α = 0.5; u(x, τ)_HAM is shown in red and u(x, τ)_EXACT in green.
31. Comparison between Approximate and Exact Solutions when x = 0.75
Figure 13: Diffusion front in tissue when η = 9 and α = 0.5; S(τ)_HAM is shown in red and S(τ)_EXACT in green.
32. Conclusion
- When calculating the values of u(x, τ) for a fixed value of η and varying values of α, the higher the value of α, the smaller the relative error.
- For fixed values of η and varying values of α, the approximate and exact values of S(τ) are in close agreement.
- Similarly, for fixed values of α and varying values of η, the approximate and exact values of S(τ) are in close agreement.
- For fixed values of α and varying values of η, the values of u(x, τ) agree more closely than they do for a fixed value of η and varying values of α; the relative error is smaller.
33. Conclusion
We have shown that HAM can be used to accurately predict the drug distribution in tissue u(x, τ) and the diffusion front S(τ) for different values of α and η.
34. References I
A.K. Alomari.
Modifications of Homotopy Analysis Method for Differential Equations: Ordinary, Fractional, Delay, and Algebraic Equations.
Lambert Academic Publishing, Germany, 2012.
S. Liao.
Homotopy Analysis Method in Nonlinear Differential Equations.
Springer, New York, 2012.
S. Liao.
Beyond Perturbation: Introduction to the Homotopy Analysis Method.
Chapman and Hall/CRC, New York, 2004.
35. References II
S. Liao.
Advances in the Homotopy Analysis Method.
World Scientific Publishing Co. Pte. Ltd., 2014.
Rajeev, M.S. Kushwaha.
Homotopy perturbation method for a limit case Stefan problem governed by fractional diffusion equation.
Applied Mathematical Modelling, 37 (2013), 3589-3599.
S. Das, Rajeev.
Solution of the fractional diffusion equation with a moving boundary condition by variational iteration and Adomian decomposition methods.
Z. Naturforsch., 65a (2010), 793-799.
36. References III
S. Liao.
Notes on the homotopy analysis method: Some definitions and theorems.
Commun. Nonlinear Sci. Numer. Simulat., 14 (2009), 983-997.
V.R. Voller.
An exact solution of a limit case Stefan problem governed by a fractional diffusion equation.
International Journal of Heat and Mass Transfer, 53 (2010), 5622-5625.
V.R. Voller, F. Falcini, R. Garra.
Fractional Stefan problems exhibiting lumped and distributed latent-heat memory effects.
Physical Review E, 87 (2013), 042401.
37. References IV
X. Li, M. Xu, X. Jiang.
Homotopy perturbation method to time-fractional diffusion equation with a moving boundary condition.
Applied Mathematics and Computation, 208 (2009), 434-439.
X. Li, S. Wang, M. Zhao.
Two methods to solve a fractional single phase moving boundary problem.
Cent. Eur. J. Phys., 11 (2013), 1387-1391.