This document presents a numerical method called Piecewise-Homotopy Analysis Method (P-HAM) for solving fourth-order boundary value problems. P-HAM is based on the Homotopy Analysis Method (HAM) but uses multiple auxiliary parameters, with each parameter applied over a sub-range of the domain for improved accuracy. The document outlines the basic steps of P-HAM, including constructing the zero-order deformation equation and deriving the governing equations. It then applies P-HAM to solve two example problems and compares the results to other numerical methods.
Special second order non symmetric fitted method for singular (Alexander Decker)
The document presents a special second order non-symmetric fitted difference method for solving singularly perturbed two-point boundary value problems with boundary layers. The method introduces a fitting factor in the finite difference scheme to account for rapid changes in the boundary layer. The fitting factor is derived from singular perturbation theory. The method is applied to problems with left-end and right-end boundary layers. A tridiagonal system is obtained and solved using the discrete invariant embedding algorithm. Numerical results illustrate the accuracy of the proposed method.
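The tridiagonal system mentioned above can be solved in linear time. Below is a minimal sketch using the standard Thomas algorithm; it is a generic stand-in, not the paper's discrete invariant embedding algorithm, and the helper name `solve_tridiagonal` is ours:

```python
# Thomas algorithm for a tridiagonal system A y = d: a generic stand-in for
# the discrete invariant embedding solve described above (helper name ours).
# a: sub-diagonal (length n-1), b: main diagonal (length n),
# c: super-diagonal (length n-1), d: right-hand side (length n).
def solve_tridiagonal(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    y = [0.0] * n
    y[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        y[i] = dp[i] - cp[i] * y[i + 1]
    return y
```

For diagonally dominant systems such as those produced by fitted difference schemes, no pivoting is needed and the cost is O(n).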
This document introduces the concept of conditional expectation and stochastic calculus. It defines conditional expectation as the projection of a random variable X onto the sub-σ-algebra generated by another random variable or process Y. It must minimize the mean squared error between X and the projected variable. Properties like linearity and monotonicity are proven. Conditional expectation allows incorporating observable information to make optimal guesses about unobserved variables. Martingales, which generalize random walks, also play an important role in stochastic calculus.
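The defining property, that E[X|Y] minimizes the mean squared error among all functions of Y, can be checked numerically. A toy sketch under an assumed model X = Y + noise (the model and sample size are ours, not from the document):

```python
import random

random.seed(0)
# Toy model (ours): Y is uniform on {0, 1} and X = Y + Gaussian noise.
# The conditional expectation E[X|Y] is the function of Y minimizing the
# mean squared error E[(X - g(Y))^2].
samples = [(y, y + random.gauss(0.0, 1.0))
           for y in (random.randint(0, 1) for _ in range(100_000))]

def mse(g):
    return sum((x - g(y)) ** 2 for y, x in samples) / len(samples)

# Empirical conditional means per value of Y (close to 0 and 1 here):
means = {}
for v in (0, 1):
    xs = [x for y, x in samples if y == v]
    means[v] = sum(xs) / len(xs)

best = mse(lambda y: means[y])
# No other function of Y does better than the conditional-mean projection:
assert best <= mse(lambda y: 0.0)
assert best <= mse(lambda y: 2.0 * y)
```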
This document provides an introduction to stochastic calculus. It begins with a review of key probability concepts such as the Lebesgue integral, change of measure, and the Radon-Nikodym derivative. It then discusses information and σ-algebras, including filtrations and adapted processes. Conditional expectation is explained. The document concludes by introducing random walks and their connection to Brownian motion through the scaled random walk process. Key concepts such as martingales and quadratic variation are defined.
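The scaled random walk's connection to quadratic variation can be illustrated directly: each increment of W(t) = M_{nt}/sqrt(n) is plus or minus 1/sqrt(n), so every squared increment contributes exactly 1/n and the quadratic variation over [0, 1] equals 1 at every n, while W(1) is approximately standard normal by the central limit theorem. A small sketch (step count and seed chosen for illustration):

```python
import random

random.seed(1)
# Scaled symmetric random walk W(t) = M_{nt} / sqrt(n): each increment is
# +/- 1/sqrt(n), so every squared increment is exactly 1/n and the
# quadratic variation over [0, 1] is exactly 1 for every n.
n = 10_000
steps = [random.choice((-1.0, 1.0)) / n ** 0.5 for _ in range(n)]
w_at_1 = sum(steps)                   # approximately N(0, 1) by the CLT
quad_var = sum(s * s for s in steps)  # equals 1.0 up to rounding
```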
A current perspectives of corrected operator splitting (os) for systems (Alexander Decker)
This document discusses operator splitting methods for solving systems of convection-diffusion equations. It begins by introducing operator splitting, where the time evolution is split into separate steps for convection and diffusion. While efficient, operator splitting can produce significant errors near shocks.
The document then examines the nonlinear error mechanism that causes issues for operator splitting near shocks. When a shock develops in the convection step, it introduces a local linearization that neglects self-sharpening effects. This leads to splitting errors.
To address this, the document discusses corrected operator splitting, which uses the wave structure from the convection step to identify where nonlinear splitting errors occur. Correction terms are added to the diffusion step to compensate for these errors.
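Plain operator splitting as described, one convection substep followed by one diffusion substep per time step, can be sketched for the scalar model u_t + (u^2/2)_x = eps * u_xx. This is an illustrative uncorrected splitting with first-order upwind convection and explicit diffusion, assuming u >= 0 and periodic boundaries; it is not the document's corrected scheme:

```python
# Plain (uncorrected) operator splitting for u_t + (u^2/2)_x = eps * u_xx:
# a first-order upwind convection substep (valid for u >= 0) followed by an
# explicit central-difference diffusion substep, on a periodic grid.
def split_step(u, dx, dt, eps):
    f = lambda v: 0.5 * v * v  # convective flux
    n = len(u)
    # Convection substep (upwind; u[i-1] wraps around at i = 0):
    uc = [u[i] - dt / dx * (f(u[i]) - f(u[i - 1])) for i in range(n)]
    # Diffusion substep (explicit, periodic):
    return [uc[i] + eps * dt / dx**2 * (uc[(i + 1) % n] - 2.0 * uc[i] + uc[i - 1])
            for i in range(n)]
```

Both substeps are in conservation form, so total mass is preserved exactly; the nonlinear splitting error discussed above shows up near a developing shock front.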
11.a focus on a common fixed point theorem using weakly compatible mappings (Alexander Decker)
The document presents a common fixed point theorem that generalizes an earlier theorem by Bijendra Singh and M.S. Chauhan. It replaces the conditions of compatibility and completeness with weaker conditions of weakly compatible mappings and an associated convergent sequence. The theorem proves that if self-maps A, B, S, and T of a metric space satisfy certain conditions, including (1) A(X) ⊆ T(X) and B(X) ⊆ S(X), (2) the pairs (A,S) and (B,T) are weakly compatible, and (3) the associated sequence converges, then the maps have a unique common fixed point. An example is given where the associated sequence converges even though the space is not complete.
A focus on a common fixed point theorem using weakly compatible mappings (Alexander Decker)
The document presents a theorem that generalizes an existing fixed point theorem using weaker conditions. Specifically, it replaces the conditions of compatibility and completeness with weakly compatible mappings and an associated convergent sequence. The theorem proves that if four self-maps satisfy certain conditions, including being weakly compatible and having an associated sequence that converges, then the maps have a unique common fixed point. The conditions are shown to be weaker using an example where the associated sequence converges even though the space is not complete.
The document outlines research on developing optimal finite difference grids for solving elliptic and parabolic partial differential equations (PDEs). It introduces the motivation to accurately compute Neumann-to-Dirichlet (NtD) maps. It then summarizes the formulation and discretization of model elliptic and parabolic PDE problems, including deriving the discrete NtD map. It presents results on optimal grid design and the spectral accuracy achieved. Future work is proposed on extending the NtD map approach to non-uniformly spaced boundary data.
11.solution of linear and nonlinear partial differential equations using mixt... (Alexander Decker)
This document discusses solving linear and nonlinear partial differential equations using a combination of the Elzaki transform and projected differential transform methods.
[1] It introduces the Elzaki transform and defines its properties for handling partial derivatives.
[2] It then introduces the projected differential transform method and its fundamental theorems for decomposing nonlinear terms.
[3] As an example application, it shows how to use the two methods together to solve a general nonlinear, non-homogeneous partial differential equation with initial conditions. The Elzaki transform is used to obtain an expression in terms of the nonlinear terms, which are then decomposed using the projected differential transform method.
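The Elzaki transform used above is defined by E[f](v) = v * Integral_0^oo f(t) exp(-t/v) dt, which gives the standard table entries E[1] = v^2 and E[t] = v^3. A numeric sketch of the transform (simple trapezoid quadrature truncated at t = T; the helper name `elzaki_num` is ours):

```python
import math

# Elzaki transform E[f](v) = v * Integral_0^oo f(t) * exp(-t/v) dt,
# approximated by composite trapezoid quadrature truncated at t = T.
def elzaki_num(f, v, T=50.0, n=200_000):
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-T / v))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-t / v)
    return v * h * total

# Table entries: E[1] = v^2 and E[t] = v^3; at v = 0.5 these are 0.25 and 0.125.
e1 = elzaki_num(lambda t: 1.0, 0.5)
et = elzaki_num(lambda t: t, 0.5)
```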
Solution of linear and nonlinear partial differential equations using mixture... (Alexander Decker)
This document discusses solving linear and nonlinear partial differential equations using a combination of the Elzaki transform and projected differential transform methods.
[1] It introduces the Elzaki transform and defines its properties for solving partial differential equations. Properties include formulas for the Elzaki transform of partial derivatives.
[2] It then introduces the projected differential transform method and defines its basic properties and theorems for decomposing nonlinear terms.
[3] As an example application, it shows how to use the two methods together to solve a general nonlinear, non-homogeneous partial differential equation with initial conditions. The Elzaki transform is used to obtain an expression for the solution, then the projected differential transform method decomposes the nonlinear terms.
This document summarizes methods for estimating average treatment effects in nonlinear models with endogenous switching using variably parametric regression. It describes two estimators: 1) a minimally parametric estimator that specifies an exponential conditional mean, and 2) a fully parametric estimator that specifies a generalized gamma conditional density. A Monte Carlo study shows the fully parametric estimator has lower bias even in small samples. The methods are then applied to a real dataset on birthweight and maternal smoking.
This document proposes a new 4-point block method for direct integration of first-order ordinary differential equations. The approximate solution is a combination of power series and exponential functions. The method is constructed using interpolation and collocation techniques to generate a system of equations whose coefficients determine the block method. The new method is tested on numerical examples and is shown to perform better than some existing methods. Key properties of the new method, such as consistency, convergence, and zero-stability, are investigated.
11.solution of a singular class of boundary value problems by variation itera... (Alexander Decker)
1) The document proposes an effective methodology called the variation iteration method to find solutions to singular second order linear and nonlinear boundary value problems.
2) The variation iteration method constructs a sequence of correction functionals to iteratively solve the boundary value problem. It is shown that the limit of the convergent iterative sequence obtained from this method is the exact solution.
3) The convergence of the iterative sequence generated by the variation iteration method is analyzed. It is established that the sequence converges to the exact solution under certain continuity conditions on the problem functions.
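The flavor of the correction-functional iteration can be shown on a toy problem. For u' = u, u(0) = 1, the Lagrange multiplier lambda = -1 reduces the correction functional to u_{n+1}(x) = 1 + Integral_0^x u_n(s) ds, whose iterates are the Taylor partial sums of the exact solution e^x. This is a first-order toy of our choosing, not the singular second-order class treated in the document:

```python
import math

# Variation-iteration-style correction functional for the toy problem
# u' = u, u(0) = 1. With Lagrange multiplier lambda = -1 the functional
#   u_{n+1}(x) = u_n(x) - Integral_0^x (u_n'(s) - u_n(s)) ds
# simplifies to u_{n+1}(x) = 1 + Integral_0^x u_n(s) ds. Polynomials are
# stored as coefficient lists [c0, c1, ...].
def vim_step(coeffs):
    # Integrate term by term; the constant term 1.0 enforces u(0) = 1.
    return [1.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

u = [1.0]
for _ in range(10):
    u = vim_step(u)

# The iterates are Taylor partial sums of e^x; evaluate at x = 1:
value = sum(u)  # close to math.e
```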
NIPS2009: Sparse Methods for Machine Learning: Theory and Algorithms (zukun)
1) The document discusses sparse methods for machine learning, focusing on sparse linear estimation using the l1-norm and extensions to structured sparse methods on vectors and matrices.
2) It reviews theory and algorithms for l1-norm regularization, including optimality conditions, first-order methods like subgradient descent, and coordinate descent.
3) The document also discusses going beyond lasso to structured sparsity, non-linear kernels, and sparse methods for matrices applied to problems like multi-task learning.
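The soft-thresholding update at the heart of coordinate descent for the l1-penalized least-squares (lasso) problem can be sketched as follows; this is a generic pure-Python version with helper names of our choosing, not code from the slides:

```python
# Cyclic coordinate descent for the lasso
#   min_w 0.5 * ||X w - y||^2 + lam * ||w||_1,
# with X given as a list of columns. Each coordinate update is a
# soft-thresholding step on the partial residual correlation.
def soft_threshold(z, lam):
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(columns, y, lam, n_iter=200):
    n, p = len(y), len(columns)
    w = [0.0] * p
    residual = list(y)  # r = y - X w, maintained incrementally
    for _ in range(n_iter):
        for j in range(p):
            xj = columns[j]
            # Correlation with the partial residual (adding back coord j):
            rho = sum(xj[i] * (residual[i] + xj[i] * w[j]) for i in range(n))
            norm_sq = sum(v * v for v in xj)
            w_new = soft_threshold(rho, lam) / norm_sq
            delta = w_new - w[j]
            for i in range(n):
                residual[i] -= xj[i] * delta
            w[j] = w_new
    return w
```

On an orthonormal design the update is exact in one pass: each weight is the soft-thresholded correlation.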
This document discusses quantum modes and the correspondence between classical and quantum mechanics. It states three key principles of quantum mechanics: (1) quantum states are represented by ket vectors, (2) quantum observables are Hermitian operators, and (3) the Schrödinger equation governs the causal evolution of quantum systems. It also outlines how classical quantities like position and momentum correspond to quantum operators and how they form Lie algebras through commutation relations. Representations of quantum mechanics are discussed through examples like the energy basis of the harmonic oscillator.
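The energy-basis example can be made concrete with truncated ladder-operator matrices: since the lowering operator acts as a|n> = sqrt(n)|n-1>, the number operator a†a is diagonal with eigenvalues 0, 1, 2, ..., so H = a†a + 1/2 (taking hbar*omega = 1) has the familiar spectrum n + 1/2. A small sketch with an arbitrary truncation size:

```python
import math

# Truncated energy-basis matrices for the harmonic oscillator: the lowering
# operator acts as a|n> = sqrt(n)|n-1>, so the number operator a_dag a is
# diagonal with eigenvalues 0, 1, 2, ..., and H = a_dag a + 1/2 has
# spectrum n + 1/2 (with hbar*omega = 1). Truncation size is arbitrary.
dim = 6
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(dim)] for i in range(dim)]
a_dag = [[a[j][i] for j in range(dim)] for i in range(dim)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

number_op = matmul(a_dag, a)                            # diag(0, 1, ..., dim-1)
energies = [number_op[i][i] + 0.5 for i in range(dim)]  # 0.5, 1.5, 2.5, ...
```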
Signal Processing Course : Convex Optimization (Gabriel Peyré)
This document discusses convex optimization and proximal operators. It begins by introducing convex optimization problems with objective functions G mapping from a Hilbert space H to the real numbers. It then discusses properties of convex, lower semi-continuous, and proper functions. Examples are given of regularization problems and total variation denoising. The document covers subdifferentials, proximal operators, proximal calculus including separability and compositions, and relationships between proximal operators and subdifferentials. Gradient descent and subgradient descent algorithms are also briefly discussed.
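The proximal operator of the l1-norm, a standard example in proximal calculus, is componentwise soft-thresholding: prox_{lam*|.|}(x) = sign(x) * max(|x| - lam, 0). A sketch checking this against brute-force minimization of the defining objective (grid parameters are ours):

```python
# Proximal operator of the l1-norm:
#   prox_{lam*||.||_1}(x) = argmin_u 0.5*||u - x||^2 + lam*||u||_1,
# which separates per coordinate into soft-thresholding. Checked below
# against a brute-force grid search on the scalar objective.
def prox_l1(x, lam):
    return [max(abs(v) - lam, 0.0) * (1.0 if v > 0 else -1.0) for v in x]

# Brute-force check at x = 1.3, lam = 0.5 (grid resolution 1e-3):
grid = [i / 1000.0 - 3.0 for i in range(6001)]
best = min(grid, key=lambda u: 0.5 * (u - 1.3) ** 2 + 0.5 * abs(u))
```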
The document provides an overview of probability theory and random variables including:
1) It defines probability as a measure of the chance of obtaining a particular outcome from an event. Common properties of probability such as mutually exclusive events and conditional probability are also covered.
2) Random variables are introduced as rules that assign real numbers to possible outcomes of an experiment. Both discrete and continuous random variables are defined.
3) Key concepts related to random variables are summarized including the cumulative distribution function, probability density function, expected value, variance, and common distributions like the uniform, binomial, and Gaussian distributions.
4) Finally, random processes are defined as sets of random variables indexed by time, with properties like the mean and autocorrelation function.
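One of the common distributions listed above can be checked by direct summation: for the binomial(n, p) distribution the pmf sums to 1, the mean is n*p, and the variance is n*p*(1-p). A short sketch (the parameter values are ours):

```python
from math import comb

# Binomial(n, p): pmf P(K = k) = C(n, k) p^k (1-p)^(n-k) sums to 1,
# with mean n*p and variance n*p*(1-p). Verified for n=10, p=0.3.
def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1.0 - p) ** (n - k)

n, p = 10, 0.3
pmf = [binom_pmf(n, p, k) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))                # n*p = 3.0
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))   # n*p*(1-p) = 2.1
```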
Solution of a singular class of boundary value problems by variation iteratio... (Alexander Decker)
1. The document proposes an effective methodology called the variation iteration method to find solutions to a general class of singular second-order linear and nonlinear boundary value problems.
2. The variation iteration method generates a sequence of correction functionals that converges to the exact solution of the boundary value problem.
3. The author applies the variation iteration method to solve a specific class of boundary value problems and derives the sequence of correction functionals. Convergence of the iterative sequence is also analyzed.
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par... (IJERA Editor)
The tau method, an economized polynomial technique for solving ordinary and partial differential equations with smooth solutions, is modified in this paper for easier computation, accuracy, and speed. The modification is based on the systematic use of Catalan polynomials in the collocation tau method and on linearizing the nonlinear part with Adomian polynomials to approximate the solution of 2-dimensional nonlinear partial differential equations. The method uses Catalan polynomials directly in the solution of the linearized partial differential equation, without first rewriting them in terms of other known functions as is commonly practiced. The linearization is carried out with the Adomian polynomial technique. The results obtained are quite comparable with standard collocation tau methods for nonlinear partial differential equations.
An Introduction to HSIC for Independence Testing (Yuchi Matsuoka)
This document introduces Hilbert-Schmidt Independence Criterion (HSIC) for testing independence between random variables. HSIC embeds probability distributions into reproducing kernel Hilbert spaces and computes the distance between joint and product distributions using the Maximum Mean Discrepancy. It presents HSIC as a completely nonparametric measure of dependence that is applicable to high dimensional data. The document outlines how to compute HSIC from samples and discusses its relationship to U-statistics, providing an independence test using HSIC with permutations.
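The sample computation of HSIC mentioned above is commonly done with the biased estimator HSIC_b = (1/n^2) tr(KHLH), where K and L are Gram matrices on the two samples and H = I - (1/n) 1 1^T is the centering matrix. A pure-Python sketch; the Gaussian kernel, its bandwidth, and the helper names are our choices:

```python
import math

# Biased empirical HSIC: HSIC_b = (1/n^2) * tr(K H L H), with Gaussian Gram
# matrices K, L and centering matrix H = I - (1/n) 1 1^T.
def gaussian_gram(xs, sigma=1.0):
    return [[math.exp(-((a - b) ** 2) / (2.0 * sigma**2)) for b in xs] for a in xs]

def hsic_biased(xs, ys, sigma=1.0):
    n = len(xs)
    K = gaussian_gram(xs, sigma)
    L = gaussian_gram(ys, sigma)

    def center(M):  # returns H M H for symmetric M
        row = [sum(r) / n for r in M]
        tot = sum(row) / n
        return [[M[i][j] - row[i] - row[j] + tot for j in range(n)]
                for i in range(n)]

    Kc, Lc = center(K), center(L)
    # tr(Kc Lc); since H is idempotent, tr(KHLH) = tr((HKH)(HLH)).
    return sum(Kc[i][j] * Lc[j][i] for i in range(n) for j in range(n)) / n**2
```

For identical non-constant samples HSIC_b is strictly positive, while for a constant second sample the centered Gram matrix vanishes and HSIC_b is zero, matching its role as a dependence measure.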
11.homotopy perturbation and elzaki transform for solving nonlinear partial d... (Alexander Decker)
The document discusses using a combination of the homotopy perturbation method and Elzaki transform to solve nonlinear partial differential equations. It begins by introducing the homotopy perturbation method and Elzaki transform individually. It then presents the homotopy perturbation Elzaki transform method, which applies Elzaki transform to reformulate the problem before using homotopy perturbation method to obtain approximations of the solution as a series. Finally, it applies the new combined method to solve an example nonlinear partial differential equation.
Homotopy perturbation and elzaki transform for solving nonlinear partial diff... (Alexander Decker)
The document presents a combination of the homotopy perturbation method and Elzaki transform to solve nonlinear partial differential equations. The homotopy perturbation method is used to handle the nonlinear terms, while the Elzaki transform is applied to reformulate the equations in terms of transformed variables, obtaining a series solution via inverse transformation. The method is demonstrated to be effective for both homogeneous and non-homogeneous nonlinear partial differential equations. Key steps include using integration by parts to obtain Elzaki transforms of partial derivatives and defining a convex homotopy to reformulate the equations for the homotopy perturbation method.
This document proposes a linear programming (LP) based approach for solving maximum a posteriori (MAP) estimation problems on factor graphs that contain multiple-degree non-indicator functions. It presents an existing LP method for problems with single-degree functions, then introduces a transformation to handle multiple-degree functions by introducing auxiliary variables. This allows applying the existing LP method. As an example, it applies this to maximum likelihood decoding for the Gaussian multiple access channel. Simulation results demonstrate the LP approach decodes correctly with polynomial complexity.
This document summarizes key points from a course on recovery guarantees for inverse problems regularized with low-complexity priors. It discusses how gauges can model unions of linear subspaces corresponding to priors like sparsity, block sparsity, and low-rankness. It introduces the concept of dual certificates for characterizing solutions to noiseless inverse problems and establishes conditions under which tight dual certificates exist, ensuring stable recovery. In the compressed sensing setting, it states thresholds on the number of measurements needed to guarantee the existence of tight dual certificates for sparse vectors and low-rank matrices observed with a random measurement matrix.
The paper reports on an iteration algorithm to compute asymptotic solutions at any order for a wide class of nonlinear singularly perturbed difference equations.
A new approach to constants of the motion and the helmholtz conditions (Alexander Decker)
1) The document discusses a new approach to determining constants of motion based on Helmholtz conditions for the existence of a Lagrangian.
2) It shows that constants of motion determined by this new approach vanish when the symmetry group is related to an actual invariance of the Lagrangian, as in this case the classical Noether invariants can be used.
3) The key equations of the approach are presented, including an equation (9) that provides an expression for constants of motion associated with any symmetry of a dynamical system with a Lagrangian.
This document provides an overview of an upcoming course on inverse problems and regularization. The course will cover three topics: inverse problems, compressed sensing, and sparsity and L1 regularization. Inverse problems involve recovering an unknown signal x0 from noisy observations. Regularization is used to incorporate prior information and make the problem well-posed. Compressed sensing allows signals to be sampled below the Nyquist rate if they are sparse. The L1 norm is used as a convex relaxation of the sparsity prior, allowing sparse recovery problems to be solved as convex programs.
This document is a calculus supplement to accompany a microeconomics textbook. It introduces the concept of partial derivatives and shows how they can be used to analyze economic concepts from the textbook like demand and supply functions, substitutes and complements, and elasticities. Partial derivatives allow the slope of demand and supply functions to be determined with respect to different variables. They also allow elasticities to be defined and calculated using calculus, providing an equivalent but alternative method to the algebraic approach in the textbook. The supplement aims to illustrate the connections between calculus and microeconomic concepts.
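The elasticity computation via partial derivatives can be illustrated on a hypothetical demand function (ours, not one from the textbook): for Q(p, m) = 100 p^(-2) m^(1/2), the price elasticity (dQ/dp)(p/Q) is exactly -2 and the income elasticity (dQ/dm)(m/Q) is exactly 0.5 at every point. A numeric check with central differences:

```python
# Hypothetical constant-elasticity demand function (ours):
#   Q(p, m) = 100 * p^(-2) * m^(1/2),
# so the price elasticity (dQ/dp)*(p/Q) is -2 and the income elasticity
# (dQ/dm)*(m/Q) is 0.5 at every (p, m). Checked with central differences.
def demand(p, m):
    return 100.0 * p**-2.0 * m**0.5

def elasticity(f, args, i, h=1e-6):
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    deriv = (f(*hi) - f(*lo)) / (2.0 * h)  # partial derivative in argument i
    return deriv * args[i] / f(*args)
```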
This document presents Joe Suzuki's work on Bayes independence tests. It discusses both discrete and continuous cases. For the discrete case, it estimates mutual information using maximum likelihood and proposes a Bayesian estimation using Lempel-Ziv compression. This Bayesian estimation is shown to be consistent. For the continuous case, it constructs a generalized Bayesian estimation that is also consistent. It also discusses the Hilbert Schmidt independence criterion (HSIC) and its limitations. Experiments show the proposed method performs well on both synthetic and real data, while HSIC shows poor performance in some cases. The proposed method has significantly better execution time than HSIC.
We use stochastic methods to present a mathematically correct representation of the wave function. The informal construction was developed by R. Feynman. This approach was first introduced by H. Doss, "Sur une Resolution Stochastique de l'Equation de Schrödinger à Coefficients Analytiques", Communications in Mathematical Physics, October 1980, Volume 73, Issue 3, pp. 247–264.
The primary intention is to discuss a formal stochastic representation of the solution of the Schrödinger equation, with applications to the theory of demolition quantum measurements.
I will appreciate your comments.
This document discusses various methods for estimating normalizing constants that arise when evaluating integrals numerically. It begins by noting there are many computational methods for approximating normalizing constants across different communities. It then lists the topics that will be covered in the upcoming workshop, including discussions on estimating constants using Monte Carlo methods and Bayesian versus frequentist approaches. The document provides examples of estimating normalizing constants using Monte Carlo integration, reverse logistic regression, and Xiao-Li Meng's maximum likelihood estimation approach. It concludes by discussing some of the challenges in bringing a statistical framework to constant estimation problems.
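The simplest of these ideas, plain Monte Carlo integration of an unnormalized density, can be sketched directly. Here the constant of the standard normal kernel, Z = integral of exp(-x^2/2) = sqrt(2*pi), is estimated by averaging over uniform draws; the interval and sample size are our choices, and this is not the reverse-logistic or maximum-likelihood estimator discussed in the document:

```python
import math
import random

random.seed(0)
# Plain Monte Carlo estimate of a normalizing constant: for the standard
# normal kernel q(x) = exp(-x^2/2), Z = sqrt(2*pi) ≈ 2.5066. Draw x
# uniformly on [-10, 10] (wide enough to cover essentially all the mass)
# and average q(x) times the interval width.
n = 200_000
width = 20.0
total = sum(math.exp(-random.uniform(-10.0, 10.0) ** 2 / 2.0) for _ in range(n))
z_hat = width * total / n  # close to sqrt(2*pi)
```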
Solution of linear and nonlinear partial differential equations using mixture...Alexander Decker
This document discusses solving linear and nonlinear partial differential equations using a combination of the Elzaki transform and projected differential transform methods.
[1] It introduces the Elzaki transform and defines its properties for solving partial differential equations. Properties include formulas for the Elzaki transform of partial derivatives.
[2] It then introduces the projected differential transform method and defines its basic properties and theorems for decomposing nonlinear terms.
[3] As an example application, it shows how to use the two methods together to solve a general nonlinear, non-homogeneous partial differential equation with initial conditions. The Elzaki transform is used to obtain an expression for the solution, then the projected differential transform decomposes the
This document summarizes methods for estimating average treatment effects in nonlinear models with endogenous switching using variably parametric regression. It describes two estimators: 1) a minimally parametric estimator that specifies an exponential conditional mean, and 2) a fully parametric estimator that specifies a generalized gamma conditional density. A Monte Carlo study shows the fully parametric estimator has lower bias even in small samples. The methods are then applied to a real dataset on birthweight and maternal smoking.
This document proposes a new 4-point block method for direct integration of first-order ordinary differential equations. The approximate solution is a combination of power series and exponential functions. The method is constructed using interpolation and collocation techniques to generate a system of equations whose coefficients determine the block method. The new method is tested on numerical examples and is shown to perform better than some existing methods. Key properties of the new method like consistency, convergence and zero-stability are investigated.
11.solution of a singular class of boundary value problems by variation itera...Alexander Decker
1) The document proposes an effective methodology called the variation iteration method to find solutions to singular second order linear and nonlinear boundary value problems.
2) The variation iteration method constructs a sequence of correction functionals to iteratively solve the boundary value problem. It is shown that the limit of the convergent iterative sequence obtained from this method is the exact solution.
3) The convergence of the iterative sequence generated by the variation iteration method is analyzed. It is established that the sequence converges to the exact solution under certain continuity conditions on the problem functions.
NIPS2009: Sparse Methods for Machine Learning: Theory and Algorithmszukun
1) The document discusses sparse methods for machine learning, focusing on sparse linear estimation using the l1-norm and extensions to structured sparse methods on vectors and matrices.
2) It reviews theory and algorithms for l1-norm regularization, including optimality conditions, first-order methods like subgradient descent, and coordinate descent.
3) The document also discusses going beyond lasso to structured sparsity, non-linear kernels, and sparse methods for matrices applied to problems like multi-task learning.
This document discusses quantum modes and the correspondence between classical and quantum mechanics. It provides three key principles of quantum mechanics: (1) quantum states are represented by ket vectors, (2) quantum observables are hermitian operators, and (3) the Schrodinger equation governs the causal evolution of quantum systems. It also outlines how classical quantities like position and momentum correspond to quantum operators and how they form Lie algebras through commutation relations. Representations of quantum mechanics are discussed through examples like the energy basis of the harmonic oscillator.
Signal Processing Course : Convex OptimizationGabriel Peyré
This document discusses convex optimization and proximal operators. It begins by introducing convex optimization problems with objective functions G mapping from a Hilbert space H to the real numbers. It then discusses properties of convex, lower semi-continuous, and proper functions. Examples are given of regularization problems and total variation denoising. The document covers subdifferentials, proximal operators, proximal calculus including separability and compositions, and relationships between proximal operators and subdifferentials. Gradient descent and subgradient descent algorithms are also briefly discussed.
The document provides an overview of probability theory and random variables including:
1) It defines probability as a measure of the chance of obtaining a particular outcome from an event. Common properties of probability such as mutually exclusive events and conditional probability are also covered.
2) Random variables are introduced as rules that assign real numbers to possible outcomes of an experiment. Both discrete and continuous random variables are defined.
3) Key concepts related to random variables are summarized including the cumulative distribution function, probability density function, expected value, variance, and common distributions like the uniform, binomial, and Gaussian distributions.
4) Finally, random processes are defined as sets of random variables indexed by time, with properties like the mean
Solution of a singular class of boundary value problems by variation iteratio...Alexander Decker
1. The document proposes an effective methodology called the variation iteration method to find solutions to a general class of singular second-order linear and nonlinear boundary value problems.
2. The variation iteration method generates a sequence of correction functionals that converges to the exact solution of the boundary value problem.
3. The author applies the variation iteration method to solve a specific class of boundary value problems and derives the sequence of correction functionals. Convergence of the iterative sequence is also analyzed.
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par... (IJERA Editor)
The tau method, an economized polynomial technique for solving ordinary and partial differential equations with smooth solutions, is modified in this paper for ease of computation, accuracy and speed. The modification is based on the systematic use of Catalan polynomials in the collocation tau method and on linearizing the nonlinear part with Adomian polynomials to approximate the solution of 2-dimensional nonlinear partial differential equations. The method uses Catalan polynomials directly in the solution of the linearized partial differential equation without first rewriting them in terms of other known functions, as is commonly practiced. The linearization is carried out with the Adomian polynomial technique. The results obtained are quite comparable with those of standard collocation tau methods for nonlinear partial differential equations.
An Introduction to HSIC for Independence Testing (Yuchi Matsuoka)
This document introduces Hilbert-Schmidt Independence Criterion (HSIC) for testing independence between random variables. HSIC embeds probability distributions into reproducing kernel Hilbert spaces and computes the distance between joint and product distributions using the Maximum Mean Discrepancy. It presents HSIC as a completely nonparametric measure of dependence that is applicable to high dimensional data. The document outlines how to compute HSIC from samples and discusses its relationship to U-statistics, providing an independence test using HSIC with permutations.
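The biased empirical estimator HSIC_b = trace(KHLH)/n², with Gaussian kernels K and L on the two samples and centering matrix H = I − (1/n)11ᵀ, can be sketched directly (a toy illustration under my own naming, not the document's code):

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """Gaussian (RBF) Gram matrix for 1-D samples."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic_biased(x, y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / n^2."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(1)
x = rng.normal(size=300)
hsic_dep = hsic_biased(x, x + 0.1 * rng.normal(size=300))   # dependent pair
hsic_ind = hsic_biased(x, rng.normal(size=300))             # independent pair
```

Under independence the statistic concentrates near zero (at rate O(1/n)), while dependent pairs give a clearly larger value; the permutation test mentioned in the document calibrates this gap.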
11.homotopy perturbation and elzaki transform for solving nonlinear partial d... (Alexander Decker)
The document discusses using a combination of the homotopy perturbation method and Elzaki transform to solve nonlinear partial differential equations. It begins by introducing the homotopy perturbation method and Elzaki transform individually. It then presents the homotopy perturbation Elzaki transform method, which applies Elzaki transform to reformulate the problem before using homotopy perturbation method to obtain approximations of the solution as a series. Finally, it applies the new combined method to solve an example nonlinear partial differential equation.
Homotopy perturbation and elzaki transform for solving nonlinear partial diff... (Alexander Decker)
The document presents a combination of the homotopy perturbation method and Elzaki transform to solve nonlinear partial differential equations. The homotopy perturbation method is used to handle the nonlinear terms, while the Elzaki transform is applied to reformulate the equations in terms of transformed variables, obtaining a series solution via inverse transformation. The method is demonstrated to be effective for both homogeneous and non-homogeneous nonlinear partial differential equations. Key steps include using integration by parts to obtain Elzaki transforms of partial derivatives and defining a convex homotopy to reformulate the equations for the homotopy perturbation method.
This document proposes a linear programming (LP) based approach for solving maximum a posteriori (MAP) estimation problems on factor graphs that contain multiple-degree non-indicator functions. It presents an existing LP method for problems with single-degree functions, then introduces a transformation to handle multiple-degree functions by introducing auxiliary variables. This allows applying the existing LP method. As an example, it applies this to maximum likelihood decoding for the Gaussian multiple access channel. Simulation results demonstrate the LP approach decodes correctly with polynomial complexity.
This document summarizes key points from a course on recovery guarantees for inverse problems regularized with low-complexity priors. It discusses how gauges can model unions of linear subspaces corresponding to priors like sparsity, block sparsity, and low-rankness. It introduces the concept of dual certificates for characterizing solutions to noiseless inverse problems and establishes conditions under which tight dual certificates exist, ensuring stable recovery. In the compressed sensing setting, it states thresholds on the number of measurements needed to guarantee the existence of tight dual certificates for sparse vectors and low-rank matrices observed with a random measurement matrix.
The paper reports on an iteration algorithm to compute asymptotic solutions at any order for a wide class of nonlinear
singularly perturbed difference equations.
A new approach to constants of the motion and the Helmholtz conditions (Alexander Decker)
1) The document discusses a new approach to determining constants of motion based on Helmholtz conditions for the existence of a Lagrangian.
2) It shows that constants of motion determined by this new approach vanish when the symmetry group is related to an actual invariance of the Lagrangian, as in this case the classical Noether invariants can be used.
3) The key equations of the approach are presented, including an equation (9) that provides an expression for constants of motion associated with any symmetry of a dynamical system with a Lagrangian.
This document provides an overview of an upcoming course on inverse problems and regularization. The course will cover three topics: inverse problems, compressed sensing, and sparsity and L1 regularization. Inverse problems involve recovering an unknown signal x0 from noisy observations. Regularization is used to incorporate prior information and make the problem well-posed. Compressed sensing allows signals to be sampled below the Nyquist rate if they are sparse. The L1 norm is used as a convex relaxation of the sparsity prior, allowing sparse recovery problems to be solved as convex programs.
This document is a calculus supplement to accompany a microeconomics textbook. It introduces the concept of partial derivatives and shows how they can be used to analyze economic concepts from the textbook like demand and supply functions, substitutes and complements, and elasticities. Partial derivatives allow the slope of demand and supply functions to be determined with respect to different variables. They also allow elasticities to be defined and calculated using calculus, providing an equivalent but alternative method to the algebraic approach in the textbook. The supplement aims to illustrate the connections between calculus and microeconomic concepts.
This document presents Joe Suzuki's work on Bayes independence tests. It discusses both discrete and continuous cases. For the discrete case, it estimates mutual information using maximum likelihood and proposes a Bayesian estimation using Lempel-Ziv compression. This Bayesian estimation is shown to be consistent. For the continuous case, it constructs a generalized Bayesian estimation that is also consistent. It also discusses the Hilbert Schmidt independence criterion (HSIC) and its limitations. Experiments show the proposed method performs well on both synthetic and real data, while HSIC shows poor performance in some cases. The proposed method has significantly better execution time than HSIC.
We use stochastic methods to present a mathematically rigorous representation of the wave function. An informal construction was developed by R. Feynman. This approach was first introduced by H. Doss, "Sur une Résolution Stochastique de l'Équation de Schrödinger à Coefficients Analytiques", Communications in Mathematical Physics, October 1980, Volume 73, Issue 3, pp. 247–264.
The primary intention is to discuss a formal stochastic representation of the solution of the Schrödinger equation, with applications to the theory of quantum demolition measurements.
I will appreciate your comments.
This document discusses various methods for estimating normalizing constants that arise when evaluating integrals numerically. It begins by noting there are many computational methods for approximating normalizing constants across different communities. It then lists the topics that will be covered in the upcoming workshop, including discussions on estimating constants using Monte Carlo methods and Bayesian versus frequentist approaches. The document provides examples of estimating normalizing constants using Monte Carlo integration, reverse logistic regression, and Xiao-Li Meng's maximum likelihood estimation approach. It concludes by discussing some of the challenges in bringing a statistical framework to constant estimation problems.
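The simplest of the methods mentioned, plain Monte Carlo integration of an unnormalized density, can be sketched in a few lines (a toy example of my own, not from the workshop material):

```python
import numpy as np

rng = np.random.default_rng(3)

# Plain Monte Carlo estimate of the normalizing constant
#   Z = integral of exp(-x^2 / 2) dx = sqrt(2 * pi)
# by averaging the unnormalized density over uniform proposals on [a, b].
a, b, n = -10.0, 10.0, 500_000
x = rng.uniform(a, b, size=n)
z_hat = (b - a) * np.mean(np.exp(-x ** 2 / 2))
```

The estimator is unbiased but its variance depends heavily on how well the uniform proposal covers the mass of the integrand, which is exactly the inefficiency that importance sampling, reverse logistic regression, and bridge-sampling-style methods aim to reduce.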
The document discusses quantiles and quantile regression. It begins by defining quantiles as the inverse of a cumulative distribution function. Quantile regression models the relationship between covariates and conditional quantiles, similar to how ordinary least squares regression models the conditional mean. The document also discusses median regression, which estimates relationships using the 1-norm rather than the 2-norm used in OLS. Median regression provides consistent estimates when the error term has a symmetric distribution.
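The link between the check (pinball) loss and quantiles can be illustrated concretely: minimizing the τ = 0.5 pinball loss over a constant recovers the sample median, just as minimizing squared loss recovers the mean. A toy sketch (my own, not from the document):

```python
import numpy as np

def pinball_loss(q, y, tau):
    """Check (pinball) loss: the tau-quantile regression objective for a constant fit q."""
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(2)
y = rng.exponential(scale=1.0, size=5000)

# Minimize over a grid of candidate constants; the minimizer approximates
# the empirical tau-quantile (tau = 0.5 gives the median).
grid = np.linspace(0, 5, 2001)
losses = [pinball_loss(q, y, tau=0.5) for q in grid]
fitted_median = grid[int(np.argmin(losses))]
```

Replacing the constant with a linear function of covariates turns this into quantile regression proper; for an Exp(1) sample the fitted value lands near the theoretical median ln 2 ≈ 0.693.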
Lecture13p.pdf.pdfThedeepness of freedom are threevalues.docx (croysierkathey)
[Scanned handwritten lecture notes; the OCR text is largely unrecoverable. The legible fragments appear to concern P-unisolvence of a finite element: a set of degrees of freedom is P-unisolvent if the only polynomial in P that interpolates zero data is the zero polynomial. The argument restricts a candidate polynomial to each edge of the triangle, where the prescribed roots force it to vanish, so only the trivial solution remains; the notes close with a reference to an example from a previous lecture.]
Lecture_03_S08.pdf
LECTURE # 3:
ABSTRACT RITZ-GALERKIN METHOD
MATH610: NUMERICAL METHODS FOR PDES:
RAYTCHO LAZAROV
1. Variational Formulation

In the previous lecture we introduced the following space of functions defined on (0, 1):

\[
(1)\qquad V = \Big\{\, v \;:\; v(x) \text{ is continuous on } (0,1);\ v'(x) \text{ exists in the generalized sense and lies in } L^2(0,1);\ v(0) = v(1) = 0 \,\Big\} =: H^1_0(0,1),
\]

and equipped it with the \(L^2\) and \(H^1\) norms

\[
\|v\| = (v,v)^{1/2} \quad\text{and}\quad \|v\|_V = (v,v)_V^{1/2} = \Big(\int_0^1 \big((v')^2 + v^2\big)\,dx\Big)^{1/2}.
\]

We also introduced the following variational and minimization problems:

(V) find \(u \in V\) such that \(a(u,v) = L(v)\) for all \(v \in V\),
(M) find \(u \in V\) such that \(F(u) \le F(v)\) for all \(v \in V\),

where \(a(u,v)\) is a bilinear form that is symmetric, coercive and continuous on \(V\), \(L(v)\) is a linear form continuous on \(V\), and \(F(v) = \tfrac{1}{2}a(v,v) - L(v)\).

As an example we can take

\[
a(u,v) \equiv \int_0^1 \big(k(x)u'v' + q(x)uv\big)\,dx \quad\text{and}\quad L(v) \equiv \int_0^1 f(x)v\,dx.
\]

Here we have assumed that there are positive constants \(k_0\), \(k_1\), \(M\) such that

\[
(2)\qquad k_1 \ge k(x) \ge k_0 > 0, \qquad M \ge q(x) \ge 0, \qquad f \in L^2(0,1).
\]

These conditions are sufficient for the symmetry, coercivity and continuity of the bilinear form \(a(\cdot,\cdot)\) and the continuity of the linear form \(L(v)\). The proof of these properties follows from the following theorem:

Theorem 1. Let \(u \in V \equiv H^1_0(0,1)\). Then the following inequalities are valid:

\[
(3)\qquad |u(x)|^2 \le C_1 \int_0^1 (u'(x))^2\,dx \ \text{ for any } x \in (0,1), \qquad \int_0^1 u^2(x)\,dx \le C_0 \int_0^1 (u'(x))^2\,dx,
\]

with constants \(C_0\) and \(C_1\) that are independent of \(u\).

Proof: We give two proofs. The simple one proves the above inequalities with \(C_0 = 1/2\) and \(C_1 = 1\); the better proof establishes them with \(C_0 = 1/6\) and \(C_1 = 1/4\).

Indeed, for any \(x \in (0,1)\) we have

\[
u(x) = u(0) + \int_0^x u'(s)\,ds.
\]

Since \(u \in H^1_0(0,1)\), \(u(0) = 0\). We square this equality and apply the Cauchy-Schwarz inequality:

\[
(4)\qquad |u(x)|^2 = \Big|\int_0^x u'(s)\,ds\Big|^2 \le \int_0^x 1\,ds \int_0^x (u'(s))^2\,ds \le x \int_0^x (u'(s))^2\,ds.
\]

Taking the maximal value of x on the right-hand side of this inequality w ...
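To make the abstract method concrete: for the example bilinear form with k(x) = 1 and q(x) = 0, the Ritz-Galerkin method with piecewise-linear "hat" basis functions on a uniform mesh reduces to a small tridiagonal linear system. A minimal Python sketch (my own illustration, not the lecture's code; the load vector uses simple lumped one-point quadrature):

```python
import numpy as np

# Ritz-Galerkin approximation with piecewise-linear "hat" basis functions
# on a uniform mesh, for the model problem -u'' = f on (0,1), u(0)=u(1)=0
# (i.e. k(x) = 1, q(x) = 0 in the bilinear form a(u, v)).
n = 100                                   # number of subintervals
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = lambda t: np.pi ** 2 * np.sin(np.pi * t)   # exact solution: sin(pi*x)

# Stiffness matrix A_ij = a(phi_j, phi_i) = integral of phi_i' phi_j' dx
A = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
# Load vector L(phi_i), approximated by lumped (one-point) quadrature
b = h * f(x[1:-1])

u = np.zeros(n + 1)                       # boundary values stay zero
u[1:-1] = np.linalg.solve(A, b)           # coefficients = nodal values
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

With f = π² sin(πx) the exact solution is sin(πx), and the maximum nodal error decreases as O(h²) under mesh refinement.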
Lecture13p.pdf.pdfThedeepness of freedom are threevalues.docx (jeremylockett77)
The document discusses the abstract Ritz-Galerkin method for numerical solution of partial differential equations.
It begins by introducing the variational formulation and defining the function spaces used. It then describes the abstract form of the Ritz-Galerkin method, which finds an approximate solution in a finite dimensional subspace.
Specific examples are given to illustrate the method, including choice of basis functions for different boundary conditions. Basis functions for linear finite elements are defined on a partition of the domain into subintervals. The Ritz-Galerkin method results in a system of linear equations that can be solved for the coefficients of the approximate solution.
This document summarizes Chris Swierczewski's general exam presentation on computational applications of Riemann surfaces and Abelian functions. The presentation covered the geometry and algebra of Riemann surfaces, including bases of cycles, holomorphic differentials, and period matrices. Applications discussed include using Riemann theta functions to find periodic solutions to integrable PDEs like the Kadomtsev–Petviashvili equation. The talk also discussed linear matrix representations of algebraic curves and the constructive Schottky problem of realizing a Riemann matrix as the period matrix of a curve.
Existence of Solutions of Fractional Neutral Integrodifferential Equations wi... (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Density theorems for anisotropic point configurations (VjekoslavKovac1)
This document discusses density theorems for anisotropic point configurations. Specifically:
- It summarizes previous results on density theorems for linear configurations in Euclidean spaces.
- It then presents new results on density theorems for anisotropic power-type scalings, where points are scaled by different powers in different coordinates.
- Theorems are proven for anisotropic simplices and boxes in such spaces, showing that any set of positive density must contain scaled copies of these configurations for scales above a certain threshold.
- The proofs use a multiscale approach involving pattern counting forms, smoothed counting forms, and analyzing the structured, uniform, and error parts that arise from decomposing the counting forms.
A Tau Approach for Solving Fractional Diffusion Equations using Legendre-Cheb... (iosrjce)
In this paper, a modified numerical algorithm for solving the fractional diffusion equation is proposed. It is based on the tau idea, with shifted Legendre polynomials used in time and shifted Chebyshev polynomials used in space.
The problem is reduced to the solution of a system of linear algebraic equations. From the computational point
of view, the solution obtained by this approach is tested and the efficiency of the proposed method is confirmed.
Maximum likelihood estimation of regularisation parameters in inverse problem... (Valentin De Bortoli)
This document discusses an empirical Bayesian approach for estimating regularization parameters in inverse problems using maximum likelihood estimation. It proposes the Stochastic Optimization with Unadjusted Langevin (SOUL) algorithm, which uses Markov chain sampling to approximate gradients in a stochastic projected gradient descent scheme for optimizing the regularization parameter. The algorithm is shown to converge to the maximum likelihood estimate under certain conditions on the log-likelihood and prior distributions.
On New Root Finding Algorithms for Solving Nonlinear Transcendental Equations (AI Publications)
In this paper, we present new iterative algorithms to find a root of a given nonlinear transcendental equation. In the proposed algorithms, we use nonlinear Taylor polynomial interpolation and a modified error-correction term with a fixed-point concept. We also investigated possible extensions of the higher-order iterative algorithms from a single variable to higher dimensions. Several numerical examples are presented to illustrate the proposed algorithms.
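As a baseline for such iterative schemes, the classical Newton iteration applied to a transcendental equation looks as follows (a generic sketch of my own; the paper's algorithms add interpolation and error-correction terms not reproduced here):

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classic Newton iteration: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:       # stop once the residual is small enough
            return x
        x -= fx / fprime(x)
    return x

# Transcendental example: solve x - cos(x) = 0
root = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), x0=1.0)
```

Newton's method converges quadratically near a simple root; the higher-order variants the paper targets trade extra function evaluations per step for a higher convergence order.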
International Journal of Mathematics and Statistics Invention (IJMSI) inventionjournals
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
An Approach For Solving Nonlinear Programming Problems (Mary Montoya)
This document presents an approach for solving nonlinear programming problems using measure theory. It begins by transforming a nonlinear programming problem into an optimal control problem by treating the variables as time-varying and integrating the objective and constraint functions. It then solves the optimal control problem using measure theory by representing the control-trajectory pair as a positive Radon measure on the space of trajectories and controls. Finally, it shows that the optimal solution to the transformed optimal control problem provides an approximate optimal solution to the original nonlinear programming problem.
A Fast Numerical Method For Solving Calculus Of Variation Problems (Sara Alvarez)
This document presents a numerical method called differential transform method (DTM) for solving calculus of variation problems. DTM finds the solution of variational problems in the form of a polynomial series without discretization. The method is applied to obtain the solution of the Euler-Lagrange equation arising from variational problems by considering it as an initial value problem. Some examples are presented to demonstrate the efficiency and accuracy of DTM for solving calculus of variation problems.
This document provides an overview of Floquet theory and periodic differential equations. It begins by introducing periodic differential equations and framing their solution in terms of spectral theory and eigenvalue problems. It then covers key results of Floquet theory, including that any periodic differential equation has a non-trivial solution that is periodic up to a constant multiplier. The document defines Hill's equation and shows how general periodic differential equations can be transformed into this form. It concludes by examining cases for the discriminant of Hill's equation that determine the nature of the solutions.
This document covers key topics in seismic data processing including complex numbers, vectors, matrices, determinants, eigenvalues, singular values, matrix inversion, series, Taylor series, Fourier series, delta functions, and Fourier integrals. It provides examples of using Taylor series to approximate nonlinear systems as linear systems and using Fourier series to approximate periodic functions. The importance of Fourier transforms for spectral analysis and various geophysical applications is also discussed.
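The idea of approximating a nonlinear system by a linear one via a first-order Taylor expansion, as mentioned above, can be illustrated directly (a minimal sketch of my own, not from the document):

```python
import math

# First-order Taylor expansion of sin about x0 linearizes the function:
#   sin(x) ~ sin(x0) + cos(x0) * (x - x0)
x0 = 0.5
linearize = lambda x: math.sin(x0) + math.cos(x0) * (x - x0)

# Near x0 the linear model tracks the nonlinear one to second order in
# (x - x0); far from x0 the approximation degrades.
small_err = abs(math.sin(0.51) - linearize(0.51))
large_err = abs(math.sin(1.5) - linearize(1.5))
```

The local error is bounded by |sin''(ξ)|/2 · (x − x0)², which is why the linearization is excellent close to the expansion point and poor far from it.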
This document provides a summary of a project report on bifurcation analysis and its applications. It discusses key concepts in nonlinear systems such as equilibrium points, stability, linearization, and bifurcations including saddle node, transcritical, pitchfork and Hopf bifurcations. Examples are given to illustrate each type of bifurcation. Population models involving competition and prey-predator interactions are also discussed. The document outlines the contents which cover preliminary remarks, local theory of nonlinear systems, different types of bifurcations, and applications to population models.
11.solution of a subclass of singular second order (Alexander Decker)
This document presents a method for solving a subclass of singular second order differential equations of Lane-Emden type using Taylor series. The method identifies conditions where the equations can be solved analytically as a polynomial function. Several examples from literature are provided and solved using the proposed Taylor series method to demonstrate its effectiveness. The method produces exact solutions in polynomial form and is shown to perform well on both linear and nonlinear boundary value problems.
This document discusses using the Taylor series method to solve a subclass of singular second order differential equations of Lane-Emden type.
The method is applied to solve boundary value problems that arise from modeling physical phenomena. The Taylor series method produces an analytic solution in the form of a polynomial function.
Several examples from literature are considered to demonstrate the suitability and viability of the Taylor series method for solving these types of differential equations.
Similar to Numerical solution of boundary value problems by piecewise analysis method (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p... (Alexander Decker)
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2c e-commerce websites (Alexander Decker)
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banks (Alexander Decker)
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized d (Alexander Decker)
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistance (Alexander Decker)
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifham (Alexander Decker)
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibia (Alexander Decker)
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school children (Alexander Decker)
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banks (Alexander Decker)
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjab (Alexander Decker)
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market... (Alexander Decker)
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incremental (Alexander Decker)
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniques (Alexander Decker)
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo db (Alexander Decker)
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
20240605 QFM017 Machine Intelligence Reading List May 2024
Numerical solution of boundary value problems by piecewise analysis method
Journal of Natural Sciences Research www.iiste.org
ISSN 2224-3186 (Paper) ISSN 2225-0921 (Online)
Vol. 2, No. 4, 2012

Numerical Solution of Boundary Value Problems by Piecewise Analysis Method

O. A. Taiwo, A. O. Adewumi and R. A. Raji*
Department of Mathematics, University of Ilorin, Ilorin, Nigeria.
*Department of Mathematics and Statistics, P.M.B 301, Osun State Polytechnic, Iree, Osun State, Nigeria.
Abstract
In this paper, we use an efficient numerical algorithm, based on the homotopy analysis method (HAM), for solving two-point fourth-order linear and nonlinear boundary value problems, namely the piecewise homotopy analysis method (P-HAM). The method contains an auxiliary parameter that provides a powerful tool for analysing strongly linear and nonlinear problems directly, without linearization. Numerical examples are presented to compare the results obtained with existing results found in the literature. The results obtained by the P-HAM are better in terms of the accuracy achieved.
Keywords: Piecewise homotopy analysis, perturbation, Adomian decomposition method, variational iteration, boundary value problems.
1. Introduction.
In recent years, much attention has been given to developing analytical methods for solving boundary value problems, including perturbation methods, the decomposition method and the variational iteration method. It is well known that perturbation methods [1, 2] provide some of the most versatile tools available in the nonlinear analysis of engineering problems. The major drawback of the traditional perturbation techniques is their dependence on the existence of a small parameter. This condition is overly strict and greatly limits the application of perturbation techniques, because most nonlinear problems (especially those having strong nonlinearity) do not contain such a small parameter; moreover, the determination of the small parameter is a complicated process and requires special techniques. These facts have motivated alternative techniques such as the decomposition method, the variational iteration method and the homotopy analysis method.
The homotopy analysis method (HAM) proposed by [9, 10] is a general analytical approach for solving various types of linear and nonlinear equations, including partial differential equations, ordinary differential equations, difference equations and algebraic equations. More importantly, it differs from all perturbation and traditional non-perturbation methods: the homotopy analysis method provides a simple way to ensure the convergence of the solution in series form, and therefore the HAM is valid even for strongly nonlinear problems. The HAM is based on homotopy, a fundamental concept in topology and differential geometry. In the homotopy analysis method, one constructs a continuous mapping from an initial guess to the exact solution of the problem under consideration. An auxiliary linear operator is chosen to construct this continuous mapping, and an auxiliary parameter is used to ensure the convergence of the solution series. The method enjoys great freedom in the choice of the initial approximation and the auxiliary linear operator. By means of this freedom, a complicated nonlinear problem can be transformed into an infinite number of simpler linear sub-problems.
In this paper, we consider general fourth-order boundary value problems of the type:

y''''(x) = f(x, y, y', y'', y''')   (1)

with the boundary conditions:

y(a) = α₁,  y'(a) = α₂,  y(b) = β₁,  y'(b) = β₂   (2)

or,

y(a) = α₁,  y''(a) = α₂,  y(b) = β₁,  y''(b) = β₂   (3)

where f is a continuous function on [a, b] and the parameters αᵢ and βᵢ, i = 1, 2, are real constants. Boundary value problems of this type arise in the mathematical modelling of viscoelastic flows, the deformation of beams, plate deflection theory and other branches of mathematics, physics and engineering [3, 7, 8, 13]. Several numerical methods, including finite difference and B-spline methods, have been developed for solving fourth-order boundary value problems [3]. In this paper, we use the piecewise homotopy analysis method to obtain the numerical solution of fourth-order boundary value problems.
2. Description of the new homotopy analysis method.
In this section, the basic idea of the new homotopy analysis method is introduced. We start by considering the following differential equation:

N[y(x)] = 0   (4)

where N is an operator, y(x) is an unknown function and x is the independent variable. Let y₀(x) denote an initial guess of the exact solution y(x), h ≠ 0 an auxiliary parameter, H(x) ≠ 0 an auxiliary function, and L the auxiliary linear operator with the property L[y(x)] = 0 when y(x) = 0. Then, using q ∈ [0, 1] as an embedding parameter, we construct the homotopy

(1 − q) L[φ(x, q) − y₀(x)] − q h H(x) N[φ(x, q)] = Ĥ[φ(x, q); y₀(x), H(x), h, q]   (5)

It should be emphasized that we have great freedom to choose the initial guess y₀(x), the auxiliary linear operator L, the non-zero auxiliary parameter h and the auxiliary function H(x). Enforcing the homotopy (5) to be zero, that is,

Ĥ[φ(x, q); y₀(x), H(x), h, q] = 0   (6)

we have the so-called zero-order deformation equation

(1 − q) L[φ(x, q) − y₀(x)] = q h H(x) N[φ(x, q)]   (7)

When q = 0, the zero-order deformation equation (7) becomes

φ(x, 0) = y₀(x)   (8)

and when q = 1, since h ≠ 0 and H(x) ≠ 0, the zero-order deformation equation (7) is equivalent to

φ(x, 1) = y(x)   (9)

Thus, according to equations (8) and (9), as the embedding parameter q increases from 0 to 1, φ(x, q) varies continuously from the initial approximation y₀(x) to the exact solution y(x); such a continuous variation is called deformation in homotopy. The power series of φ(x, q) in q is as follows:

φ(x, q) = y₀(x) + Σ_{m=1}^{∞} y_m(x) qᵐ   (10)

where

y_m(x) = (1/m!) ∂ᵐφ(x, q)/∂qᵐ |_{q=0}   (11)

If the initial guess y₀(x), the auxiliary linear operator L, the non-zero auxiliary parameter h and the auxiliary function H(x) are properly chosen, the power series (10) of φ(x, q) converges at q = 1. Then, under these assumptions, we have the solution series

y(x) = φ(x, 1) = y₀(x) + Σ_{m=1}^{∞} y_m(x)   (12)

For brevity, define the vector

ȳ_m(x) = { y₀(x), y₁(x), ......, y_m(x) }   (13)
According to definition (11), the governing equation of y_m(x) can be derived from the zero-order deformation equation (7). Differentiating the zero-order deformation equation (7) m times with respect to q, then dividing by m! and finally setting q = 0, we have the so-called modified mth-order deformation equation

L[y_m(x) − (1 − χ_m) y_{m−1}(x)] = h H(x) R_m(ȳ_{m−1}(x))   (14)

y_m^(k)(0) = 0,  k = 0, 1, 2, ..., m − 1,

where

R_m(ȳ_{m−1}(x)) = (1/(m − 1)!) ∂^{m−1} N[φ(x, q)] / ∂q^{m−1} |_{q=0}

and

χ_m = { 0, m ≤ 1;  1, m > 1 }

According to (1), the auxiliary linear operator L can be chosen as

L[φ(x, q)] = ∂⁴φ(x, q)/∂x⁴   (15)

and the nonlinear operator N can be chosen as

N[φ(x, q)] = ∂⁴φ/∂x⁴ − f(x, φ, ∂φ/∂x, ∂²φ/∂x², ∂³φ/∂x³)   (16)
The initial guess y₀(x) of the solution y(x) can be determined as follows:

y₀(x) = Σ_{k=0}^{n−1} y^(k)(a) (x − a)ᵏ / k!,  k = 0, 1, 2, 3, ...   (17)

which gives

y₀(x) = y(a) + y'(a)(x − a) + (1/2!) y''(a)(x − a)² + (1/3!) y'''(a)(x − a)³

Also, let the expression

S_n(x) = Σ_{i=0}^{n−1} y_i(x)   (18)
denote the n-term approximation to y(x). Our aim is to determine the constants y''(a) and y'''(a). This can be achieved by imposing the remaining two boundary conditions (2) at x = b on S_n. Thus we have

S_n(b) = y(b),  S'_n(b) = y'(b)

Solving for y''(a) and y'''(a) yields

y''(a) = (2/(a − b)²) [ y'(b)(a − b) + 2 y'(a)(a − b) + 3( y(b) − y(a) ) ]

and

y'''(a) = (6/(a − b)³) [ y'(a)(a − b) + y'(b)(a − b) + 2( y(b) − y(a) ) ]

By substituting y''(a) and y'''(a) into (17), we have

y₀(x) = y(a) + y'(a)(x − a) + [r(x)]² [ y'(b)(a − b) + 2 y'(a)(a − b) + 3( y(b) − y(a) ) ]
        − [r(x)]³ [ y'(a)(a − b) + y'(b)(a − b) + 2( y(b) − y(a) ) ]   (19)

where

r(x) = (x − a)/(b − a)

Note that the higher-order deformation equation (14) is governed by the linear operator L, and the term R_m(ȳ_{m−1}(x)) can be expressed simply by (16) for any nonlinear operator N. Therefore, y_m(x) can easily be obtained, especially by means of computational software such as MATLAB. The solution y(x) given by the above approach depends on L, h, H(x) and y₀(x).
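As a quick check, the expressions above for y''(a) and y'''(a) are exact whenever y itself is a cubic polynomial, since the initial guess (17) is a cubic Taylor polynomial matched to the boundary data. A minimal Python sketch (not part of the paper; the test cubic is an arbitrary choice) verifying this:

```python
def second_third_derivatives(a, b, ya, ypa, yb, ypb):
    """y''(a) and y'''(a) from the boundary data y(a), y'(a), y(b), y'(b),
    per the formulas derived by imposing S_n(b) = y(b), S_n'(b) = y'(b)."""
    ypp = 2.0 / (a - b) ** 2 * (ypb * (a - b) + 2 * ypa * (a - b) + 3 * (yb - ya))
    yppp = 6.0 / (a - b) ** 3 * (ypa * (a - b) + ypb * (a - b) + 2 * (yb - ya))
    return ypp, yppp

# Test cubic y(x) = 1 + 2x + 3x^2 + 4x^3 on [0, 1]:
# y(0) = 1, y'(0) = 2, y(1) = 10, y'(1) = 20, and truly y''(0) = 6, y'''(0) = 24.
ypp, yppp = second_third_derivatives(0.0, 1.0, 1.0, 2.0, 10.0, 20.0)
print(ypp, yppp)  # 6.0 24.0
```

For a non-cubic y the recovered values are only the constants of the cubic interpolant, which is exactly what the method needs for its initial guess.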
3. The basic idea of the piecewise homotopy analysis method (P-HAM).
It is a remarkable property of the homotopy analysis method (HAM) that the value of the auxiliary parameter h can be freely chosen to increase the convergence rate of the solution series. However, the freedom in choosing h is subject to the so-called valid region of h. It was discovered that the solution series does not always give a good approximation even when h is chosen within the valid region; nevertheless, there are cases where a valid h value chosen this way does give a good approximation over a large range of x. To obtain a good approximation over a large x range, we therefore propose that many different h values should be used, each h value offering a good approximation only over a certain small range of x. The segments of the approximation corresponding to their ranges of x are then combined and joined together piece by piece to form the desired large-range approximation. This method is named the "piecewise homotopy analysis method" (P-HAM), which means that there must be many h values, each for a different range of x. The horizontal segment of each h-curve gives a valid h-region, each corresponding to one value of x. It is noteworthy that the P-HAM adopts only one approximation series y(x, h) throughout, but different h values are substituted over different intervals of x, so that a piecewise approximation series is formed. That is,

y(x) = { y(x, h₁),      x < x₁;
         y(x, h₂),      x₁ ≤ x < x₂;
         y(x, h₃),      x₂ ≤ x < x₃;
         ...
         y(x, h_{n−1}), x_{n−2} ≤ x < x_{n−1};
         y(x, h_n),     x_{n−1} ≤ x < x_n }
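The piecewise assembly above amounts to selecting, for each evaluation point, the h value whose sub-interval contains that point and evaluating the single series with it. A minimal sketch (the breakpoints, h values and one-term "series" below are illustrative stand-ins, not values from the paper):

```python
import bisect

def p_ham(x, breakpoints, h_values, series):
    """Evaluate the P-HAM approximation: pick the h for the piece containing x.
    breakpoints = [x1, ..., x_{n-1}] split the domain into n pieces;
    h_values = [h1, ..., hn] give one auxiliary parameter per piece."""
    k = bisect.bisect_right(breakpoints, x)  # index of the piece containing x
    return series(x, h_values[k])

# Illustrative stand-in series y(x, h) = 1 + x + h*x^2 (not the paper's series)
series = lambda x, h: 1 + x + h * x**2
breakpoints = [0.25, 0.5, 0.75]
h_values = [0.9, 0.7, 0.5, 0.3]
print(p_ham(0.6, breakpoints, h_values, series))  # uses h = 0.5, giving 1.78
```

`bisect_right` makes each interval half-open on the right, matching the x₁ ≤ x < x₂ pattern in the definition above.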
4. Numerical Examples.
In this section, we demonstrate the effectiveness of the P-HAM with illustrative examples. All numerical results obtained by the P-HAM are compared with the results obtained by various other numerical methods.

Example 1.
Consider the following linear problem:

y''''(x) = (1 + c) y''(x) − c y(x) + (1/2) c x² − 1,  0 < x < 1,

subject to

y(0) = 1,  y'(0) = 1,
y(1) = 3/2 + sinh(1),  y'(1) = 1 + cosh(1)

The exact solution of this problem is y(x) = 1 + (1/2) x² + sinh(x).

For each test point, the errors between the exact solution and the results obtained by the HPM, ADM, HAM and P-HAM are compared in Tables 1 and 2.
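As a sanity check (not part of the paper), one can confirm numerically that the stated exact solution satisfies the differential equation, using y''(x) = 1 + sinh(x) and y''''(x) = sinh(x):

```python
import math

c = 10.0  # any c works: the c-dependent terms cancel identically

def residual(x):
    """y''''(x) - [(1+c) y''(x) - c y(x) + (c/2) x^2 - 1] for y = 1 + x^2/2 + sinh x."""
    y = 1 + 0.5 * x**2 + math.sinh(x)
    ypp = 1 + math.sinh(x)  # y''(x)
    y4 = math.sinh(x)       # y''''(x)
    return y4 - ((1 + c) * ypp - c * y + 0.5 * c * x**2 - 1)

print(max(abs(residual(0.1 * k)) for k in range(11)))  # ~0 (rounding error only)
```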
Table 4 shows different values of x and the corresponding values of h for the cases c = 10, 100 and 1000. Our approximate solutions obtained using the P-HAM are in good agreement with the exact solutions in all cases. With only two iterations, that is, y₀ + y₁, a better approximation has been obtained than the results given by the HPM, ADM and HAM.
Using (14) and (19), we obtain:

y₀(x) = 1 + x + 0.482522945 x²

y₁(x) = 0.192678247 x³ − h ∫₀ˣ ∫₀^{τ₃} ∫₀^{τ₂} ∫₀^{τ₁} [ (1 + c) y₀''(τ) − c y₀(τ) + (1/2) c τ² − 1 ] dτ dτ₁ dτ₂ dτ₃

y_m(x) = − h ∫₀ˣ ∫₀^{τ₃} ∫₀^{τ₂} ∫₀^{τ₁} [ ( (1 + c) y''_{m−1}(τ) − c y_{m−1}(τ) ) χ_m + (1 − χ_m)( (1/2) c τ² − 1 ) ] dτ dτ₁ dτ₂ dτ₃

In all cases, we have defined the error as

Error = | Exact solution − Approximate solution |,  a ≤ x ≤ b
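To illustrate, the two-term approximation y₀ + y₁ for c = 10 can be evaluated directly: the integrand above is a polynomial, so the quadruple integral is just four successive polynomial antiderivatives. A minimal Python sketch (not from the paper), using h = 0.3835 from Table 4 at the test point x = 0.5:

```python
import math

def integrate_poly(coeffs):
    """Antiderivative (vanishing at 0) of a polynomial given by its coefficients."""
    return [0.0] + [c / (n + 1) for n, c in enumerate(coeffs)]

def eval_poly(coeffs, x):
    return sum(c * x**n for n, c in enumerate(coeffs))

c, h = 10.0, 0.3835  # h from the valid region at x = 0.5, c = 10 (Table 4)

# y0(x) = 1 + x + 0.482522945 x^2, so y0''(x) = 0.96504589 (constant)
y0 = [1.0, 1.0, 0.482522945]

# Integrand (1+c) y0''(t) - c y0(t) + (c/2) t^2 - 1 as a polynomial in t
integrand = [(1 + c) * 2 * y0[2] - c * y0[0] - 1.0,  # constant term
             -c * y0[1],                             # t term
             -c * y0[2] + c / 2]                     # t^2 term

quad = integrand
for _ in range(4):  # fourfold integration from 0 to x
    quad = integrate_poly(quad)

x = 0.5
y1 = 0.192678247 * x**3 - h * eval_poly(quad, x)
approx = eval_poly(y0, x) + y1
exact = 1 + 0.5 * x**2 + math.sinh(x)
print(abs(exact - approx))  # ~5e-9, consistent with the P-HAM column of Table 1
```

The resulting error at x = 0.5 is of order 10⁻⁹, in line with the 6.0 × 10^-9 entry reported in Table 1.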
Table 1: Errors of Example 1: c = 10.

x     Exact solution   Error of ADM   Error of HPM   Error of HAM   Error of P-HAM
0.0   1.000000000      0.0000000      0.0000000      0.0000000      0.0000000
0.1   1.105166750      1.7 x 10^-4    1.7 x 10^-4    1.5 x 10^-4    9.3 x 10^-8
0.2   1.221336002      5.7 x 10^-4    5.7 x 10^-4    4.9 x 10^-4    1.9 x 10^-8
0.3   1.349520293      1.0 x 10^-3    1.0 x 10^-3    8.9 x 10^-4    6.0 x 10^-8
0.4   1.490752326      1.4 x 10^-3    1.4 x 10^-3    1.2 x 10^-3    5.1 x 10^-8
0.5   1.646095305      1.6 x 10^-3    1.6 x 10^-3    1.4 x 10^-3    6.0 x 10^-9
0.6   1.816653582      1.6 x 10^-3    1.6 x 10^-3    1.3 x 10^-3    2.7 x 10^-7
0.7   2.003583701      1.2 x 10^-3    1.2 x 10^-3    1.7 x 10^-3    1.0 x 10^-9
0.8   2.208105982      7.6 x 10^-3    7.6 x 10^-3    6.4 x 10^-4    2.3 x 10^-6
0.9   2.431516725      2.5 x 10^-3    2.5 x 10^-3    2.7 x 10^-4    2.6 x 10^-6
1.0   2.675201193      0.0000000      0.0000000      0.0000000      0.0000000
Table 2: Errors of Example 1: c = 100.

x     Error of ADM   Error of HPM   Error of HAM   Error of P-HAM
0.0   0.0000         0.0000         0.0000         0.0000
0.1   6.5 x 10^-4    6.5 x 10^-4    1.5 x 10^-4    8.3 x 10^-6
0.2   2.6 x 10^-3    2.6 x 10^-3    4.9 x 10^-4    4.7 x 10^-7
0.3   6.3 x 10^-3    6.3 x 10^-3    8.9 x 10^-4    9.3 x 10^-7
0.4   1.2 x 10^-2    1.2 x 10^-2    1.2 x 10^-3    1.6 x 10^-6
0.5   1.8 x 10^-2    1.8 x 10^-2    1.4 x 10^-3    8.6 x 10^-6
0.6   2.4 x 10^-2    2.4 x 10^-2    1.4 x 10^-3    5.3 x 10^-6
0.7   2.6 x 10^-2    2.6 x 10^-2    1.2 x 10^-3    9.9 x 10^-6
0.8   2.0 x 10^-2    2.0 x 10^-2    9.7 x 10^-4    9.1 x 10^-6
0.9   8.6 x 10^-3    8.6 x 10^-3    7.9 x 10^-4    5.6 x 10^-6
1.0   0.0000         0.0000         0.0000         0.0000
Table 3: Errors of Example 1: c = 1000.

x     Error of ADM   Error of HPM   Error of HAM   Error of P-HAM
0.0   0.0000         0.0000         0.0000         0.0000
0.1   1.3 x 10^-2    1.3 x 10^-2    1.5 x 10^-4    1.8 x 10^-7
0.2   1.1 x 10^-1    1.1 x 10^-1    4.9 x 10^-4    3.7 x 10^-8
0.3   3.9 x 10^-1    3.9 x 10^-1    9.2 x 10^-4    6.6 x 10^-7
0.4   9.4 x 10^-1    9.4 x 10^-1    1.3 x 10^-3    4.9 x 10^-6
0.5   1.6 x 10^0     1.6 x 10^0     1.7 x 10^-3    1.2 x 10^-6
0.6   2.3 x 10^0     2.3 x 10^0     2.2 x 10^-3    8.0 x 10^-6
0.7   2.6 x 10^0     2.6 x 10^0     2.8 x 10^-3    1.2 x 10^-5
0.8   2.1 x 10^0     2.1 x 10^0     3.9 x 10^-3    2.2 x 10^-7
0.9   9.1 x 10^-1    9.1 x 10^-1    6.1 x 10^-3    4.5 x 10^-8
1.0   0.0000         0.0000         0.0000         0.0000
Table 4: Different h-values chosen within the horizontal segments of the h-curves for all x, when c = 10, 100 and 1000.

x     c = 10      c = 100     c = 1000
0.0   1.000000    1.000000    1.000000
0.1   3.500000    6.100000    0.649000
0.2   9.445000    0.983000    0.098800
0.3   2.684000    0.277000    0.027800
0.4   0.964900    0.099000    0.009900
0.5   0.383500    0.039000    0.003930
0.6   0.155400    0.015800    0.001580
0.7   0.059500    0.006000    0.000600
0.8   0.018900    0.001900    0.000193
0.9   0.003500    0.000350    0.000036
1.0   1 x 10^-9   1 x 10^-9   1 x 10^-9
Example 2: Consider the following nonlinear boundary value problem:

with boundary conditions

y(0) = 1,  y(1) = 1 + e

The analytical solution of this problem is y(x) = x² + e^x.

This problem is solved by the same method applied in Example 1; therefore its iteration formulation reads:

and for m = 2, 3, ...
Table 5: Errors of the first-order approximate solution obtained by HAM and P-HAM for Example 2.

x     Analytical solution   Error of HAM   Error of P-HAM
0.0   1.000000000           0.0000         0.0000
0.1   1.115170918           5.2 x 10^-4    5.1 x 10^-7
0.2   1.261402758           1.7 x 10^-4    7.4 x 10^-7
0.3   1.439858808           2.9 x 10^-3    2.3 x 10^-7
0.4   1.651824698           3.7 x 10^-3    5.4 x 10^-7
0.5   1.898721271           4.4 x 10^-3    7.7 x 10^-8
0.6   2.182118800           4.1 x 10^-3    4.6 x 10^-7
0.7   2.503752707           2.9 x 10^-3    1.3 x 10^-6
0.8   2.865540928           1.9 x 10^-3    7.6 x 10^-7
0.9   3.269603111           6.1 x 10^-4    5.1 x 10^-7
1.0   3.718281828           0.0000         0.0000

Table 6: Different h-values for all x (Example 2).

x     0.0        0.1          0.2       0.3       0.4       0.5       0.6      0.7      0.8      0.9      1.0
h     -0.00873   -1 x 10^-9   -0.7286   -0.3212   -0.1318   -0.0445   -61.05   -11.84   -4.035   -1.653   -1.0
We conclude that we have achieved a good approximation to the numerical solution of the equation by using only the first two terms of the P-HAM series discussed above. It is evident that the overall errors can be made smaller by adding new terms to the series.
5. Discussion and Conclusion.
In this work, the new homotopy analysis method (P-HAM) has been successfully developed and used to solve both linear and nonlinear boundary value problems. The P-HAM does not require us to calculate the unknown constants, which are usually derivatives at the boundary. All numerical approximations by the P-HAM are compared with the results of several other methods, such as the homotopy perturbation method, the Adomian decomposition method and the homotopy analysis method. From the illustrative examples, it may be concluded that this technique is very powerful and efficient in finding analytical solutions for a large class of differential equations.
References
[1] A. H. Nayfeh (1981). Introduction to Perturbation Techniques, John Wiley and Sons, New York.
[2] A. H. Nayfeh (1985). Problems in Perturbation, John Wiley and Sons, New York.
[3] E. J. Doedel (1979). Finite difference collocation methods for nonlinear two-point boundary value problems, SIAM Journal on Numerical Analysis, Vol. 16, No. 2, 173-185.
[4] G. Adomian (1994). Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, Boston and London.
[5] G. Ebadi & S. Rashedi (2010). The extended Adomian decomposition method for fourth-order boundary value problems, Acta Universitatis Apulensis, No. 22, 65-78.
[6] H. Jafari, C. Chun, S. Seifi & M. Saeidy (2009). Analytical solution for nonlinear gas dynamic equation by homotopy analysis method, Intern. J. of Applications and Applied Mathematics, Vol. 4, No. 1, 149-154.
[7] J. H. He (2006). Some asymptotic methods for strongly nonlinear equations, Intern. J. of Modern Physics B, Vol. 20, No. 10, 1140-1199.
[8] M. M. Chawla and C. P. Katti (1979). Finite difference methods for two-point boundary value problems involving higher order differential equations, BIT, Vol. 19, No. 1, 27-33.
[9] S. J. Liao (1995). An approximate solution technique not depending on small parameters: a special example, Intern. J. of Non-Linear Mechanics, Vol. 30, No. 3, 371-380.
[10] S. J. Liao (2008). Notes on the homotopy analysis method: some definitions and theorems, Communications in Nonlinear Science and Numerical Simulation, doi:10.1016/j.cnsns.
[11] S. Momani, M. Aslam Noor (2007). Numerical comparison of methods for solving fourth-order boundary value problems, Mathematical Problems in Engineering, 1-15, Article ID 98602.
[12] T. F. Ma & J. Da Silva (2004). Iterative solution for a beam equation with nonlinear boundary conditions of third order, Applied Mathematics and Computation, Vol. 159, 11-18.