The document discusses methods for solving dynamic stochastic general equilibrium (DSGE) models. It outlines perturbation and projection methods for approximating the solution to DSGE models. Perturbation methods use Taylor series approximations around a steady state to derive linear approximations of the model. Projection methods find parametric functions that best satisfy the model equations. The document also provides an example of applying the implicit function theorem to derive a Taylor series approximation of a policy rule for a neoclassical growth model.
The document discusses projection methods for solving functional equations. Projection methods work by specifying a basis of functions and "projecting" the functional equation against that basis to find the parameters. This allows approximating different objects like decision rules or value functions. The document focuses on spectral methods that use global basis functions and covers various basis options like monomials, trigonometric series, Jacobi polynomials and Chebyshev polynomials. It also discusses how to generalize the basis to multidimensional problems, including using tensor products and Smolyak's algorithm to reduce the number of basis elements.
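To make the projection step concrete, here is a minimal Python sketch (not from the slides) that fits Chebyshev coefficients to a stand-in target function at Chebyshev nodes; the function `f` and the order `n` are arbitrary illustrative choices.

```python
# A minimal sketch of a spectral projection with a Chebyshev basis: choose
# coefficients so a degree-n expansion matches a target function at Chebyshev nodes.
# The target f is a placeholder for an unknown decision rule or value function.
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(-x) / (1.0 + x**2)   # stand-in for the unknown function
n = 10                                     # highest basis order

nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev nodes on [-1, 1]
coeffs = C.chebfit(nodes, f(nodes), deg=n)  # projection step: solve for basis coefficients

x = np.linspace(-1, 1, 201)
approx = C.chebval(x, coeffs)
print("max abs error on [-1, 1]:", np.max(np.abs(approx - f(x))))
```

In a multidimensional problem the same idea applies with a tensor-product (or Smolyak-sparse) basis in place of the one-dimensional Chebyshev family.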
The document introduces perturbation methods as a way to solve functional equations that describe economic problems. It presents a basic real business cycle model as an example problem that can be solved using perturbation methods. Specifically, it:
1) Defines the real business cycle model as a functional equation system that is difficult to solve directly.
2) Proposes using perturbation methods by introducing a small perturbation parameter (the standard deviation of technology shocks) and solving the problem when this parameter equals zero.
3) Expands the decision rules as Taylor series in the state variables and the perturbation parameter to build a local approximation around the deterministic steady state. This leads to a system of equations that can be solved order-by-order for the coefficients of the approximation; a minimal first-order example is sketched below.
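A minimal sketch of the first-order step, assuming log utility, full depreciation, and made-up parameter values; in this special case the policy rule is known in closed form (k' = alpha*beta*k^alpha), so the perturbation coefficient can be checked against g'(k*) = alpha.

```python
# First-order perturbation of the deterministic neoclassical growth model.
# Differentiating the Euler residual R(g(g(k)), g(k), k) = 0 at the steady state
# gives a quadratic f1*gk**2 + f2*gk + f3 = 0 in the policy slope gk = g'(k*);
# the stable root (|gk| < 1) is the first-order Taylor coefficient.
import sympy as sp

alpha, beta = 0.3, 0.95
k0, k1, k2 = sp.symbols("k0 k1 k2", positive=True)

# Euler-equation residual with log utility and full depreciation:
# 1/c_t = beta*alpha*k_{t+1}^(alpha-1)/c_{t+1},  c_t = k_t^alpha - k_{t+1}.
c0 = k0**alpha - k1
c1 = k1**alpha - k2
R = 1 / c0 - beta * alpha * k1**(alpha - 1) / c1

kss = (alpha * beta) ** (1 / (1 - alpha))          # deterministic steady state
ss = {k0: kss, k1: kss, k2: kss}
f1 = sp.diff(R, k2).subs(ss)                       # derivative w.r.t. k_{t+2}
f2 = sp.diff(R, k1).subs(ss)                       # derivative w.r.t. k_{t+1}
f3 = sp.diff(R, k0).subs(ss)                       # derivative w.r.t. k_t

gk = sp.Symbol("gk")
roots = sp.solve(f1 * gk**2 + f2 * gk + f3, gk)
stable = [r for r in roots if abs(sp.N(r)) < 1]
print("stable root:", sp.N(stable[0]), "| closed-form g'(k*) = alpha =", alpha)
```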
This document discusses various machine learning techniques including:
1. Tree pruning first grows a large tree and then removes branches that do not improve the objective function; growing fully before pruning avoids the pitfalls of stopping the tree too early.
2. Boosting fits many weak learners sequentially to build an additive model that approximates the regression function, combining simple models into a powerful ensemble (see the sketch after this list).
3. Unsupervised learning techniques like principal component analysis and clustering are used to find patterns in data without an outcome variable. These include reducing dimensions and partitioning data into subgroups.
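As referenced in the boosting item above, here is a minimal sketch, assuming squared-error loss and depth-1 regression trees ("stumps") as the weak learners; the data are simulated.

```python
# Gradient boosting for regression with squared-error loss: each round fits a
# stump to the current residuals and adds a shrunken copy to the ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(500)

learning_rate, n_rounds = 0.1, 200
pred = np.full_like(y, y.mean())          # start from a constant model
stumps = []
for _ in range(n_rounds):
    residual = y - pred                    # negative gradient of squared error
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    pred += learning_rate * stump.predict(X)
    stumps.append(stump)

def boosted_predict(Xnew):
    out = np.full(len(Xnew), y.mean())
    for s in stumps:
        out += learning_rate * s.predict(Xnew)
    return out

print("training MSE:", np.mean((boosted_predict(X) - y) ** 2))
```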
The document discusses three examples of nonlinear and non-Gaussian DSGE models. The first example features Epstein-Zin preferences to allow for a separation between risk aversion and the intertemporal elasticity of substitution. The second example models volatility shocks using time-varying variances. The third example aims to distinguish between the effects of stochastic volatility ("fortune") versus parameter drifting ("virtue") in explaining time-varying volatility in macroeconomic variables. The document outlines the motivation, structure, and solution methods for these three nonlinear DSGE models.
This document discusses heterogeneous agent models without aggregate uncertainty. It introduces a model with a continuum of agents who face idiosyncratic income fluctuations but no aggregate shocks. There is a unique stationary equilibrium with constant interest rates and wages. The document discusses the recursive competitive equilibrium, existence and uniqueness of the stationary equilibrium, transition functions, computation methods, and some qualitative results from calibrating the model.
This document discusses filtering and likelihood inference. It begins by introducing filtering problems in economics, such as evaluating DSGE models. It then presents the state space representation approach, which models the transition and measurement equations with stochastic shocks. The goal of filtering is to compute the conditional densities of states given observed data over time using tools like the Chapman-Kolmogorov equation and Bayes' theorem. Filtering provides a recursive way to make predictions and updates estimates as new data arrives.
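For the linear-Gaussian special case, the prediction and update steps reduce to the Kalman filter. A minimal sketch on a made-up AR(1) state with noisy observations (all parameters are illustrative):

```python
# Kalman filter for a scalar linear-Gaussian state space:
#   transition:   x_t = A x_{t-1} + w_t,  w ~ N(0, Q)
#   measurement:  y_t = H x_t + v_t,      v ~ N(0, R)
import numpy as np

rng = np.random.default_rng(1)
A, Q, H, R = 0.9, 0.1, 1.0, 0.5

T = 200
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = A * x[t - 1] + np.sqrt(Q) * rng.standard_normal()
    y[t] = H * x[t] + np.sqrt(R) * rng.standard_normal()

m, P = 0.0, 1.0                      # prior mean and variance of the state
filtered = np.zeros(T)
for t in range(T):
    # prediction step (Chapman-Kolmogorov): p(x_t | y_{1:t-1})
    m_pred, P_pred = A * m, A * P * A + Q
    # update step (Bayes' theorem): p(x_t | y_{1:t})
    K = P_pred * H / (H * P_pred * H + R)       # Kalman gain
    m = m_pred + K * (y[t] - H * m_pred)
    P = (1 - K * H) * P_pred
    filtered[t] = m

print("RMSE of filtered mean vs true state:", np.sqrt(np.mean((filtered - x) ** 2)))
```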
The document discusses Approximate Bayesian Computation (ABC). ABC allows inference for statistical models where the likelihood function is not available in closed form. ABC works by simulating data under different parameter values and comparing simulated to observed data. ABC has been used for model choice by comparing evidence for different models. Consistency of ABC for model choice depends on the criterion used and asymptotic identifiability of the parameters.
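A minimal sketch of the rejection flavour of ABC, assuming a toy Gaussian-mean model, the sample mean as the summary statistic, and an arbitrary tolerance:

```python
# Rejection ABC: draw parameters from the prior, simulate data, keep draws whose
# summary statistic is within a tolerance of the observed summary.
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(loc=1.5, scale=1.0, size=100)    # "observed" data
s_obs = obs.mean()                                 # summary statistic

def simulate(theta, n=100):
    return rng.normal(loc=theta, scale=1.0, size=n)

n_draws, tol = 50_000, 0.05
theta_prior = rng.uniform(-5, 5, size=n_draws)     # prior draws
kept = [th for th in theta_prior
        if abs(simulate(th).mean() - s_obs) < tol]  # accept if summaries match

print(f"accepted {len(kept)} draws; ABC posterior mean ~ {np.mean(kept):.3f}")
```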
Random Matrix Theory and Machine Learning - Part 4 (Fabian Pedregosa)
Deep learning models with millions or billions of parameters should overfit according to classical theory, but they do not. The emerging theory of double descent seeks to explain why larger neural networks can generalize well. Random matrix theory provides a tractable framework to model double descent through random feature models, where the number of random features controls model capacity. In the high-dimensional limit, the test error of random feature regression exhibits a double descent shape that can be computed analytically.
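A minimal sketch of the random-feature setup, with toy data and a tanh feature map; sweeping the number of random features p past the sample size n typically shows the spike-and-recover shape of double descent (all settings are illustrative):

```python
# Random-feature regression: minimum-norm least squares on p random features;
# test error usually peaks near p = n and falls again as p grows.
import numpy as np

rng = np.random.default_rng(3)
n, d, n_test = 100, 20, 1000
w_true = rng.standard_normal(d) / np.sqrt(d)
X, X_test = rng.standard_normal((n, d)), rng.standard_normal((n_test, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)
y_test = X_test @ w_true + 0.1 * rng.standard_normal(n_test)

def test_error(p):
    W = rng.standard_normal((d, p)) / np.sqrt(d)      # random feature directions
    phi, phi_test = np.tanh(X @ W), np.tanh(X_test @ W)
    beta, *_ = np.linalg.lstsq(phi, y, rcond=None)    # minimum-norm least squares
    return np.mean((phi_test @ beta - y_test) ** 2)

for p in [10, 50, 90, 100, 110, 200, 500, 2000]:       # capacity sweep around p = n
    print(p, round(test_error(p), 3))
```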
Random Matrix Theory and Machine Learning - Part 3 (Fabian Pedregosa)
ICML 2021 tutorial on random matrix theory and machine learning.
Part 3 covers: 1. Motivation: Average-case versus worst-case in high dimensions 2. Algorithm halting times (runtimes) 3. Outlook
Random Matrix Theory and Machine Learning - Part 1 (Fabian Pedregosa)
This document provides an introduction to random matrix theory and its applications in machine learning. It discusses several classical random matrix ensembles like the Gaussian Orthogonal Ensemble (GOE) and Wishart ensemble. These ensembles are used to model phenomena in fields like number theory, physics, and machine learning. Specifically, the GOE is used to model Hamiltonians of heavy nuclei, while the Wishart ensemble relates to the Hessian of least squares problems. The tutorial will cover applications of random matrix theory to analyzing loss landscapes, numerical algorithms, and the generalization properties of machine learning models.
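A minimal sketch of sampling the two ensembles mentioned above and inspecting their spectra; sizes and normalizations are illustrative (with this scaling the GOE spectrum should fill roughly [-2, 2]):

```python
# Sample a GOE matrix (symmetrized Gaussian matrix, semicircle law) and a Wishart
# matrix (sample covariance X X^T / p, Marchenko-Pastur law) and look at their spectra.
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# Gaussian Orthogonal Ensemble: symmetric matrix with Gaussian entries.
A = rng.standard_normal((n, n))
goe = (A + A.T) / np.sqrt(2 * n)
goe_eigs = np.linalg.eigvalsh(goe)

# Wishart ensemble: sample covariance of Gaussian data (think Hessian of least squares).
p = 2 * n
X = rng.standard_normal((n, p))
wishart = X @ X.T / p
wishart_eigs = np.linalg.eigvalsh(wishart)

print("GOE spectrum range (about [-2, 2]):", goe_eigs.min(), goe_eigs.max())
print("Wishart spectrum range:", wishart_eigs.min(), wishart_eigs.max())
```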
Application of the Monte-Carlo Method to Nonlinear Stochastic Optimization wi... (SSA KPI)
This document describes a method for solving nonlinear stochastic optimization problems with linear constraints using Monte Carlo estimators. The key aspects are:
1) An ε-feasible solution approach is used to avoid "jamming" or "zigzagging" when dealing with linear constraints.
2) The optimality of solutions is tested statistically using the asymptotic normality of Monte Carlo estimators.
3) The Monte Carlo sample size is adjusted iteratively based on the gradient estimate to decrease computational trials while maintaining solution accuracy.
4) Under certain conditions, the method is proven to converge almost surely to a stationary point of the optimization problem.
5) As an example, the method is applied to portfolio optimization with
This document discusses various importance sampling methods for approximating marginal likelihoods, including regular importance sampling, bridge sampling, and harmonic means. It compares these methods on a probit model example using data on diabetes in Pima Indian women. Regular importance sampling uses the MLE distribution as an importance function. Bridge sampling introduces a pseudo-posterior to handle models with different parameter dimensions. Harmonic means directly uses the posterior sample but requires a proposal distribution with lighter tails than the posterior.
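A minimal sketch of regular importance sampling for a marginal likelihood, assuming a toy Gaussian-mean model (not the probit/Pima example) and a normal importance function centred at the MLE:

```python
# Importance-sampling estimate of a marginal likelihood: weight prior x likelihood
# by the importance density, average the weights, and use the log-sum-exp trick.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
y = rng.normal(loc=0.8, scale=1.0, size=50)          # data with known unit variance

def log_prior(theta):                                 # theta ~ N(0, 10^2)
    return stats.norm.logpdf(theta, 0.0, 10.0)

def log_lik(theta):
    return stats.norm.logpdf(y, theta, 1.0).sum()

# Importance function: normal approximation around the MLE (the sample mean).
mu_hat, sd_hat = y.mean(), 1.0 / np.sqrt(len(y))
draws = rng.normal(mu_hat, sd_hat, size=5_000)

log_w = np.array([log_lik(t) + log_prior(t) for t in draws]) \
        - stats.norm.logpdf(draws, mu_hat, sd_hat)
log_ml = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print("importance-sampling estimate of the log marginal likelihood:", log_ml)
```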
This chapter summary discusses discrete probability distributions. It distinguishes between discrete and continuous random variables and distributions. It describes how to determine the mean and variance of discrete distributions. It introduces some common discrete distributions like the binomial and Poisson distributions. For the binomial distribution, it explains how to calculate the probability of a given number of successes in a given number of trials. For the Poisson distribution, it provides the probability formula and explains that it models counts of independent events occurring at a constant average rate over an interval.
Tensor Decomposition and its Applications (Keisuke OTAKI)
This document discusses tensor factorizations and decompositions and their applications in data mining. It introduces tensors as multi-dimensional arrays and covers 2nd order tensors (matrices) and 3rd order tensors. It describes how tensor decompositions like the Tucker model and CANDECOMP/PARAFAC (CP) model can be used to decompose tensors into core elements to interpret data. It also discusses singular value decomposition (SVD) as a way to decompose matrices and reduce dimensions while approximating the original matrix.
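A minimal sketch of the matrix (2nd-order tensor) case: a truncated SVD giving the best rank-r approximation of a simulated, nearly low-rank matrix:

```python
# Truncated SVD: keep the top-r singular triplets to get the best rank-r
# approximation (in Frobenius norm) of a data matrix.
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 40)) \
    + 0.05 * rng.standard_normal((100, 40))        # nearly rank-5 matrix plus noise

U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 5
M_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]        # rank-r reconstruction

rel_err = np.linalg.norm(M - M_r) / np.linalg.norm(M)
print("relative reconstruction error with rank", r, ":", rel_err)
```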
This document contains lecture notes on exponential growth and decay from a Calculus I class at New York University. It begins with announcements about an upcoming review session, office hours, and midterm exam. It then outlines the topics to be covered, including the differential equation y=ky, modeling population growth, radioactive decay including carbon-14 dating, Newton's law of cooling, and continuously compounded interest. Examples are provided of solving various differential equations representing exponential growth or decay. The document explains that many real-world situations exhibit exponential behavior due to proportional growth rates.
The first report of the Machine Learning Seminar organized by the Computational Linguistics Laboratory at Kazan Federal University. See http://cll.niimm.ksu.ru/cms/lang/en_US/main/seminars/mlseminar
The document is a lecture on inverse trigonometric functions from a Calculus I class at New York University. It defines inverse trig functions like arcsin, arccos, and arctan and discusses their domains, ranges, and relationships to the original trig functions. It also provides examples of evaluating inverse trig functions at specific values.
Numerical solution of boundary value problems by piecewise analysis method (Alexander Decker)
This document presents a numerical method called Piecewise-Homotopy Analysis Method (P-HAM) for solving fourth-order boundary value problems. P-HAM is based on the Homotopy Analysis Method (HAM) but uses multiple auxiliary parameters, with each parameter applied over a sub-range of the domain for improved accuracy. The document outlines the basic steps of P-HAM, including constructing the zero-order deformation equation and deriving the governing equations. It then applies P-HAM to solve two example problems and compares the results to other numerical methods.
This document describes a clustering procedure and nonparametric mixture estimation. It introduces a mixture density model where the goal is to efficiently estimate the mixture weights (αi) and component densities (fi). A two-stage clustering algorithm is proposed: 1) perform clustering on covariates (X) to estimate labels (Ik), and 2) estimate component densities (fi) using kernel density estimation within each cluster. The performance of this approach depends on the clustering method's misclassification error. A toy example with two components having disjoint support densities for X is provided to illustrate the model.
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R... (Chiheb Ben Hammouda)
The document describes a multilevel hybrid split-step implicit tau-leap method for simulating stochastic reaction networks. It begins with background on modeling biochemical reaction networks stochastically. It then discusses challenges with existing simulation methods like the chemical master equation and stochastic simulation algorithm. The document introduces the split-step implicit tau-leap method as an improvement over explicit tau-leap for stiff systems. It proposes a multilevel Monte Carlo estimator using this method to efficiently estimate expectations of observables with near-optimal computational work.
Numerical smoothing and hierarchical approximations for efficient option pric... (Chiheb Ben Hammouda)
1. The document presents a numerical smoothing technique to improve the efficiency of option pricing and density estimation when analytic smoothing is not possible.
2. The technique involves numerically determining discontinuities in the integrand and computing the integral only over the smooth regions. It also uses hierarchical representations and Brownian bridges to reduce the effective dimension of the problem.
3. The numerical smoothing approach outperforms Monte Carlo methods for high dimensional cases and improves the complexity of multilevel Monte Carlo from O(TOL^-2.5) to O(TOL^-2 log(TOL)^2).
STUDIES ON INTUITIONISTIC FUZZY INFORMATION MEASURES (Surender Singh)
This document discusses studies on measures of intuitionistic fuzzy information. It begins with introductions and definitions related to fuzzy sets, intuitionistic fuzzy sets, and measures of fuzzy entropy. It then discusses special t-norm operators and proposes a measure of intuitionistic fuzzy entropy based on these t-norms. The measure is defined using a function of the membership, non-membership, and hesitancy degrees of an intuitionistic fuzzy set. Several desirable properties of such a measure are outlined, including sharpness, maximality, resolution, symmetry, and valuation. The document provides mathematical foundations and definitions to propose and analyze a measure of intuitionistic fuzzy entropy.
Lesson 14: Derivatives of Logarithmic and Exponential Functions (Matthew Leingang)
The document is a lecture on derivatives of exponential and logarithmic functions. It begins with announcements about homework and an upcoming midterm. It then provides objectives and an outline for sections on exponential and logarithmic functions. The body of the document defines exponential functions, establishes conventions for exponents of all types, discusses properties of exponential functions, and graphs various exponential functions. It focuses on setting up the necessary foundations before discussing derivatives of these functions.
This document contains slides from a lecture on linear regression models given by Dr. Frank Wood. The slides:
- Review properties of multivariate Gaussian distributions and sums of squares that are important for understanding Cochran's theorem.
- Explain that Cochran's theorem describes the distributions of partitioned sums of squares of normally distributed random variables, which is important for traditional linear regression analysis.
- Provide an outline of the lecture, which will prove Cochran's theorem by first establishing some prerequisites around quadratic forms of normal random variables and then proving a supporting lemma.
This document contains notes from a Calculus I class at New York University. It discusses related rates problems, which involve taking derivatives of equations relating changing quantities to determine rates of change. The document provides examples of related rates problems involving an oil slick, two people walking towards and away from each other, and electrical resistors. It also outlines strategies for solving related rates problems, such as drawing diagrams, introducing notation, relating quantities with equations, and using the chain rule to solve for unknown rates.
1) The document outlines the nonlinear equilibrium conditions of a simple New Keynesian model without capital. It discusses formulating the model's nonlinear equations to study optimal monetary policy and higher-order solutions.
2) It presents the key components of the model, including household and firm behavior assumptions. Households maximize utility from consumption and labor. Firms set prices according to Calvo pricing and maximize profits.
3) The document derives the nonlinear equilibrium conditions that characterize household and firm optimization, including the household's intertemporal FOC and the intermediate firm's price-setting problem. It expresses the model's equilibrium objects like marginal costs and the price index.
The document discusses a model of financial frictions that arise from asymmetric information between borrowers and lenders. It first presents a simple model where entrepreneurs with private information about project returns borrow from banks, and the optimal contract balances risk for the bank. The document then explores integrating this costly state verification model into a dynamic stochastic general equilibrium framework to analyze how financial shocks may influence business cycles.
This document outlines exercises using Dynare to analyze a simple New Keynesian model. The exercises explore the rationale for the Taylor principle, potential conflicts between monetary policy channels, the sensitivity of inflation and output to shock persistence, cases when the Taylor rule does not adjust interest rates enough, and instances when news shocks cause the Taylor rule to move rates in unintended directions.
1) This document describes an optimal monetary policy model with 7 endogenous variables and 5 equilibrium conditions, leaving two degrees of freedom.
2) The model maximizes social welfare as the sum of period utilities from consumption and labor, subject to the equilibrium conditions.
3) The first order optimality conditions result in a system of 7 equations that can be solved using log-linearization methods around the non-stochastic steady state, similarly to previous examples.
The document discusses various applications of dimension reduction techniques to extract low-dimensional representations from high-dimensional data for purposes of prediction, descriptive analysis, and input into subsequent causal analysis. It provides examples of such applications using Google search data, genetic data, medical claims data, credit scores, online purchases, and congressional roll call votes. It also discusses issues around text as data, including bag-of-words representations and the use of automated and manual steps in text analysis.
Econometrics of High-Dimensional Sparse Models (NBER)
The document discusses high-dimensional sparse econometric models where the number of predictors (p) is much larger than the sample size (n). It outlines an approach for estimating regression functions using penalization methods like the LASSO. Specifically, it discusses:
1. Using the LASSO estimator to minimize squared errors while penalizing the l1-norm of coefficients, inducing sparsity.
2. Choosing the optimal penalty level as a function of the error variance and sample size. Variants like the square-root LASSO provide a tuning-free approach.
3. Examples showing how sparse approximations can better capture patterns in population data than traditional low-dimensional approximations.
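A minimal sketch of a LASSO fit on simulated sparse data with p > n; note that it picks the penalty by cross-validation rather than the plug-in rule described above:

```python
# LASSO on a sparse high-dimensional design: l1-penalized least squares selects
# a small set of regressors even when p greatly exceeds n.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n, p, s = 100, 500, 5                               # n observations, p regressors, s nonzero
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = [3, -2, 1.5, 1, -1]
y = X @ beta + rng.standard_normal(n)

lasso = LassoCV(cv=5).fit(X, y)                     # ||y - Xb||^2/(2n) + alpha*||b||_1
selected = np.flatnonzero(lasso.coef_)
print("chosen penalty:", lasso.alpha_)
print("number of selected coefficients:", len(selected), "first few:", selected[:10])
```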
High-Dimensional Methods: Examples for Inference on Structural Effects (NBER)
This document describes a study that uses high-dimensional methods to estimate the effect of 401(k) eligibility on measures of accumulated assets. It begins by outlining the baseline model and notes areas for improvement, such as controlling for income. It then discusses using regularization like LASSO for variable selection in high-dimensional settings. The document explores more flexible specifications by generating many interaction and polynomial terms but notes the need for dimension reduction. It describes using LASSO to select important variables from a large set. The results select a parsimonious set of variables and estimate similar 401(k) effects as the baseline.
Big Data analysis involves building predictive models from high-dimensional data using techniques like variable selection, cross-validation, and regularization to avoid overfitting. The document discusses an example analyzing web browsing data to predict online spending, highlighting challenges with large numbers of variables. It also covers summarizing high-dimensional data through dimension reduction and model building for prediction versus causal inference.
The document discusses practical computing issues that arise when working with large datasets. It begins by noting that many statistical analyses can be done on a single laptop. It then discusses storing very large datasets, which may require terabytes of storage. The document outlines some basic computing concepts for working with big data, including software engineering practices, databases, and distributed computing.
AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 5.
More info at http://summerschool.ssa.org.ua
Nonlinear Stochastic Programming by the Monte-Carlo method (SSA KPI)
AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 4.
More info at http://summerschool.ssa.org.ua
In this work I studied characteristic polynomials associated with the energy graph of the nonlinear Schrödinger equation on a torus. The discussion is essentially algebraic and combinatorial in nature.
Randomness conductors are a general framework that unifies various combinatorial objects like expanders, extractors, condensers, and universal hash functions. They can transform a probability distribution X with a certain amount of "entropy" into another distribution X' with a specified amount of entropy. The document discusses how expanders, extractors, and other objects are special cases of randomness conductors. It also describes how zigzag graph products can be used to construct explicit constant-degree randomness conductors and discusses some open problems in further studying and constructing these objects.
The document outlines a model for analyzing transaction costs in portfolio choice. It presents explicit formulas for trading boundaries, certainty equivalent rates, liquidity premiums, and trading volumes in terms of model parameters like the spread. Graphs show how these quantities vary with factors like risk aversion. The results are obtained by solving a free boundary problem using a shadow price approach and smooth pasting conditions at the boundaries. Asymptotics of the solutions are also derived in terms of the spread approaching zero.
The document provides an overview of convex optimization problems, including linear programming (LP), quadratic programming (QP), quadratic constraint quadratic programming (QCQP), second-order cone programming (SOCP), and geometric programming. It discusses how these problems can be transformed into equivalent convex optimization problems to help solve them. Local optima are guaranteed to be global optima for convex problems. Optimality criteria are presented for problems with differentiable objectives.
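A minimal sketch of the simplest member of these classes, a small LP solved with scipy; the objective and constraints are arbitrary:

```python
# Linear program: minimize c^T x subject to A_ub x <= b_ub and x >= 0.
# Because the problem is convex, the local optimum found here is also global.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])                 # maximize x1 + 2*x2 by minimizing the negative
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal x:", res.x, "optimal value:", -res.fun)
```

QP, QCQP, SOCP, and geometric programs follow the same pattern with richer objective and constraint classes.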
Stochastic Approximation and Simulated Annealing (SSA KPI)
AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 8.
More info at http://summerschool.ssa.org.ua
The document discusses quiescent steady state (DC) analysis using the Newton-Raphson method. It begins by introducing DC analysis and defining the goal as solving the system's differential algebraic equations (DAEs) under the assumption of no time variation. It then describes the Newton-Raphson method as an iterative numerical technique for solving nonlinear systems of equations. The method computes the Jacobian matrix at each iteration to determine the update to the state vector that will converge to a solution.
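A minimal sketch of the Newton-Raphson iteration on a small nonlinear system (a made-up algebraic system rather than a circuit's DAEs):

```python
# Newton-Raphson for F(x) = 0: at each iteration solve J(x) dx = -F(x) and update x.
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def J(x):                                   # Jacobian of F
    return np.array([[2 * x[0],     2 * x[1]],
                     [np.exp(x[0]), 1.0     ]])

x = np.array([-2.0, 1.0])                   # initial guess
for it in range(50):
    f = F(x)
    if np.linalg.norm(f) < 1e-12:
        break
    x = x + np.linalg.solve(J(x), -f)       # Newton step
print("iterations:", it, "solution:", x, "residual norm:", np.linalg.norm(F(x)))
```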
1. The document discusses maximum likelihood estimation and Bayesian parameter estimation for machine learning problems involving parametric densities like the Gaussian.
2. Maximum likelihood estimation finds the parameter values that maximize the probability of obtaining the observed training data. For Gaussian distributions with unknown mean and variance, MLE returns the sample mean and variance.
3. Bayesian parameter estimation treats the parameters as random variables and uses prior distributions and observed data to obtain posterior distributions over the parameters. This allows incorporation of prior knowledge with the training data.
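A minimal sketch contrasting the two approaches for a Gaussian mean with known variance: the MLE is the sample mean, while a normal prior yields a normal posterior. All numbers are illustrative.

```python
# Maximum likelihood versus conjugate Bayesian updating for a Gaussian mean
# with known variance sigma^2.
import numpy as np

rng = np.random.default_rng(8)
sigma = 2.0
data = rng.normal(loc=5.0, scale=sigma, size=30)

# Maximum likelihood: the sample mean (and the sample variance if sigma were unknown).
mle_mean = data.mean()

# Bayesian update with prior mu ~ N(mu0, tau0^2); the posterior is also normal.
mu0, tau0 = 0.0, 10.0
n = len(data)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

print("MLE:", mle_mean)
print("posterior mean:", post_mean, "posterior sd:", np.sqrt(post_var))
```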
1) Stochastic processes are sequences of random variables indexed by time that evolve randomly over time. The value at each time Xt may depend on previous values.
2) Stochastic processes are characterized by their probability distributions and moments like mean, variance, covariance over time. Stationary processes have these moments unchanged over time.
3) Autocovariance and autocorrelation functions describe the covariance and correlation between values at different times and are important tools for analyzing stationary processes.
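A minimal sketch computing sample autocovariances and autocorrelations for a simulated stationary AR(1), for which the theoretical autocorrelation at lag k is phi**k:

```python
# Sample autocovariance and autocorrelation of a simulated AR(1) process
# x_t = phi * x_{t-1} + eps_t.
import numpy as np

rng = np.random.default_rng(9)
phi, T = 0.7, 5000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def autocov(x, k):
    xc = x - x.mean()
    return np.mean(xc[:len(x) - k] * xc[k:])

for k in range(5):
    print(f"lag {k}: autocov = {autocov(x, k):.3f}, "
          f"autocorr = {autocov(x, k) / autocov(x, 0):.3f}, theory = {phi**k:.3f}")
```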
The document discusses the Lagrangian multiplier method for solving constrained maximization problems. It describes setting up the Lagrangian expression using a constraint function and Lagrangian multiplier, then taking the first-order conditions to solve for the optimal values and the multiplier. The multiplier provides the marginal value or "shadow price" of relaxing the constraint. It also discusses the related dual problem and envelope theorem. An example of maximizing area with a fixed fence length is used to illustrate the method.
We develop a new method to optimize portfolios of options in a market where European calls and puts are available with many exercise prices for each of several potentially correlated underlying assets. We identify the combination of asset-specific option payoffs that maximizes the Sharpe ratio of the overall portfolio: such payoffs are the unique solution to a system of integral equations, which reduce to a linear matrix equation under suitable representations of the underlying probabilities. Even when implied volatilities are all higher than historical volatilities, it can be optimal to sell options on some assets while buying options on others, as hedging demand outweighs demand for asset-specific returns.
The document provides an introduction to the gamma function Γ(x). Some key points:
1) The gamma function was introduced by Euler to generalize the factorial to non-integer values. It is defined by definite integrals and satisfies the functional equation Γ(x+1)=xΓ(x).
2) The gamma function can be defined for both positive and negative real values, except for negative integers where it has simple poles. It is related to important constants like Euler's constant.
3) The gamma function satisfies important formulas like the duplication formula, multiplication formula, and complement/reflection formula. Stirling's formula approximates the gamma function for large values of its argument.
This document discusses several classical methods for unconstrained continuous optimization, including gradient descent, Newton's method, the Gauss-Newton method, and the Levenberg-Marquardt algorithm. It explains how each method works by choosing a search direction to iteratively minimize an objective function. Newton's method and the Gauss-Newton method have faster convergence than gradient descent but require computing Hessians. The Levenberg-Marquardt algorithm interpolates between gradient descent and Gauss-Newton steps to improve convergence.
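A minimal sketch of the Levenberg-Marquardt idea for a made-up exponential-fit problem: damp the Gauss-Newton normal equations with lambda*I and adapt the damping depending on whether a step reduces the residual.

```python
# Levenberg-Marquardt for nonlinear least squares: interpolate between
# Gauss-Newton (small lambda) and gradient-descent-like steps (large lambda).
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(0, 4, 60)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)   # data from a*exp(b*t)

def residual(p):                  # r(p) = model - data
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

p, lam = np.array([1.0, -0.5]), 1e-2
for _ in range(100):
    r, Jm = residual(p), jacobian(p)
    step = np.linalg.solve(Jm.T @ Jm + lam * np.eye(2), -Jm.T @ r)   # damped normal equations
    if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5    # accept: behave more like Gauss-Newton
    else:
        lam *= 2.0                       # reject: behave more like gradient descent
print("estimated (a, b):", p)
```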
This document provides an overview of optimization techniques used in economics. It introduces maximizing a function of one variable, such as profit for a firm. The key concepts covered are:
- Derivatives and partial derivatives allow finding the optimal value of a variable that maximizes an objective function.
- The first order condition requires the derivative at the optimal point to be zero. Second order conditions ensure this is a maximum.
- Functions of multiple variables use partial derivatives to optimize while holding other variables constant.
- Elasticities measure the proportional response of one variable to changes in another. They are important in economic models.
Nonlinear Stochastic Optimization by the Monte-Carlo Method (SSA KPI)
This document describes a method for solving stochastic optimization problems using Monte Carlo simulation. It introduces Monte Carlo estimators for the objective function, its gradient, and the covariance matrix that can be computed using a random sample. It then presents an iterative stochastic gradient descent procedure where the sample size is adjusted at each iteration inversely proportional to the square of the gradient estimate. Two theorems prove that this approach ensures convergence to the optimal solution and provides accuracy bounds on the estimate of the distance to the optimal point. The method offers a way to efficiently solve stochastic optimization problems using adaptive sample sizes.
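A minimal sketch in the spirit of this procedure, assuming a toy quadratic objective E[0.5*||x - xi||^2]; the sample-size rule used here (grow roughly in inverse proportion to the squared gradient-norm estimate) is an illustrative stand-in, not the paper's exact schedule.

```python
# Stochastic gradient descent with an adaptively growing Monte Carlo sample:
# the gradient of E[f(x, xi)] is estimated by a sample average, and the sample
# size increases as the gradient estimate shrinks near the optimum.
import numpy as np

rng = np.random.default_rng(11)

def grad_sample(x, n):
    # f(x, xi) = 0.5 * ||x - xi||^2 with xi ~ N(mu, I); the true minimizer is mu.
    mu = np.array([1.0, -2.0])
    xi = mu + rng.standard_normal((n, 2))
    return np.mean(x - xi, axis=0)            # Monte Carlo estimate of the gradient

x, step, n = np.zeros(2), 0.5, 50
for it in range(200):
    g = grad_sample(x, n)
    x = x - step * g
    # grow the sample roughly inversely to the squared gradient norm (capped)
    n = int(min(50_000, max(n, 10.0 / max(np.dot(g, g), 1e-8))))
print("estimate:", x, "final sample size:", n)
```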
This document discusses likelihood methods for continuous-time models in finance. It describes approximating the transition density function pX of a continuous-time process through a series of transformations to get closer to a normal distribution. This allows representing pX as a series expansion involving Hermite polynomials. Computing the expansion coefficients allows obtaining an explicit closed-form approximation to pX. Maximizing the approximate likelihood results in an estimator that converges to the true MLE as the number of terms increases.
FISCAL STIMULUS IN ECONOMIC UNIONS: WHAT ROLE FOR STATES (NBER)
1) State deficits can boost job growth in the deficit state but also in neighboring states, showing significant spillover effects. Coordinated fiscal policies across states are more cost-effective than individual state policies.
2) Federal aid to states, when coordinated, can effectively stimulate the overall economy. Targeted aid linked to services for lower income households is more effective than untargeted aid.
3) The economic stimulus of the American Recovery and Reinvestment Act could have been 30% more effective if it relied more on targeted aid and less on untargeted aid. Coordinated fiscal policies that account for spillovers across economic regions are optimal for stimulus programs.
Business in the United States: Who Owns it and How Much Tax They Pay (NBER)
This document analyzes business ownership and tax payments in the United States using administrative tax data from 2011. It finds:
1. Pass-through business income, such as from partnerships and S-corporations, is highly concentrated.
2. The average federal income tax rate on pass-through business income is 19%.
3. 30% of income earned by partnerships cannot be uniquely traced to an identifiable, ultimate owner.
Redistribution through Minimum Wage Regulation: An Analysis of Program Linkag... (NBER)
This document analyzes the program linkages and budgetary spillovers of minimum wage regulation using data from recent federal minimum wage increases. It finds that wages increased for some low-skilled workers but employment declined significantly. While safety net programs provided some income replacement, earnings and tax revenues decreased substantially. Overall, the analysis suggests minimum wage increases reallocated income from employers and taxpayers to low-wage workers, with program and tax revenue spillovers of approximately $1-2 billion annually.
The Distributional Effects of U.S. Clean Energy Tax Credits (NBER)
This document summarizes a study examining the distributional effects of US clean energy tax credits from 2006-2012. It finds that higher-income households claimed a disproportionate share of the $18 billion in credits. Specifically, the study analyzes tax return data to see who claimed credits for investments like home weatherization, solar panels, hybrid vehicles, and electric vehicles. It aims to provide insights into how the inequitable distribution may inform future program design and the debate around subsidies versus carbon taxes.
An Experimental Evaluation of Strategies to Increase Property Tax Compliance:... (NBER)
This document summarizes a study that tested different strategies for increasing property tax compliance in Philadelphia. The researchers worked with the city's Department of Revenue to randomly assign taxpayers with overdue property taxes to receive one of four letters: a standard letter, or a standard letter plus an additional sentence appealing to civic duty, public services benefits, or potential home loss. They found the civic duty appeal significantly increased tax payments, especially for those with lower debts. Appealing to public services benefits also showed some effect on higher debt taxpayers. The researchers conclude strategically targeting messages could further improve compliance.
This document discusses recommendation systems and topic modeling for documents using machine learning techniques. It begins by introducing recommendation systems and different types of recommendation literature, including item similarity, collaborative filtering, and hierarchical models. It then discusses bringing in user choice data and different collaborative filtering approaches like k-nearest neighbor prediction and matrix factorization. The document also covers topic modeling, including latent Dirichlet allocation, and how topic models can be combined with user choice models. It concludes by discussing challenges in causal inference when using machine learning.
The document discusses using machine learning methods to estimate heterogeneous causal effects. It proposes an approach of using regression trees on a transformed outcome variable to estimate individual treatment effects. However, this approach is critiqued as it can introduce noise. An improved approach is presented that uses the sample average treatment effect within each leaf as the estimator, and uses the variance of predictions for model fitting criteria and a matching estimator for out-of-sample evaluation. The approach separates the tasks of model selection and treatment effect estimation to enable valid statistical inference on estimated effects in subgroups.
This document summarizes a discussion between Susan Athey and Guido Imbens on the relationship between machine learning and causal inference. It notes that while machine learning excels at prediction problems using large datasets, it has weaknesses when it comes to causal questions. Econometrics and statistics literature focuses more on formal theories of causality. The document proposes combining the strengths of both fields by developing machine learning methods that can estimate causal effects, accounting for issues like endogeneity and treatment effect heterogeneity. It outlines some open problems and directions for future research at the intersection of these fields.
This document summarizes key points from a lecture on diffusion, identification, and network formation. It discusses how diffusion of products can be modeled, including information passing between neighbors. Estimation techniques are described to model information diffusion on actual networks by simulating propagation over time. The challenges of identification when networks are endogenous are also covered. Forming models of network formation that account for link dependencies is an important area of current research.
This document provides an overview of social and economic networks. It discusses why networks are important to study, as interactions are shaped by relationships. Some examples of networks are presented, such as marriage networks, friendship networks in high schools, military alliances, and interbank payment networks. The document then discusses how to represent networks mathematically and introduces concepts like degree, paths, average path length, and degree distributions. It also covers homophily, or the tendency for similar people to connect, and shows examples of homophily along attributes. Finally, it introduces the idea of centrality and influence within a network, discussing measures like degree centrality and eigenvector centrality.
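A minimal sketch of two of the centrality notions mentioned above on a small made-up undirected network: degree centrality and eigenvector centrality (the leading eigenvector of the adjacency matrix):

```python
# Degree and eigenvector centrality for a small undirected network.
import numpy as np

# adjacency matrix of a 5-node network (node 0 is a hub)
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

degree = A.sum(axis=1)                        # degree centrality (unnormalized)

eigvals, eigvecs = np.linalg.eigh(A)          # symmetric matrix, so eigh is appropriate
v = np.abs(eigvecs[:, np.argmax(eigvals)])    # leading eigenvector (Perron-Frobenius sign fix)
eigen_centrality = v / v.sum()

print("degree centrality:", degree)
print("eigenvector centrality:", np.round(eigen_centrality, 3))
```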
Daron Acemoglu presents a document on networks, games over networks, and peer effects. The document discusses how networks can be used to model externalities and peer effects. It presents a model of a game over networks where players' payoffs are determined by their own actions, the actions of their network neighbors, and potential strategic interactions. The best responses in this game are characterized. Under certain conditions, such as the game being a potential game, the game will have a unique Nash equilibrium where each player's action is determined by their position in the network. The document discusses applications of this type of network game model.
The document discusses how economic shocks propagate through networks of production and inputs. It begins by presenting a simple model of an economy consisting of sectors that use each other's outputs as inputs. Shocks to individual sectors can spread to other sectors through this production network. While diversification across many sectors could cause microeconomic shocks to "wash out", the structure of the network influences how shocks aggregate. Asymmetric networks with some sectors having outsized importance can lead to greater aggregate volatility than more regular networks where all sectors are equally important. Empirical analysis of input-output data supports the theory by finding significant downstream effects of sectoral shocks.
The NBER Working Paper Series at 20,000 - Joshua Gans (NBER)
This document discusses publication lags in economics research, with working papers appearing years before peer-reviewed published work. It questions whether publication means anything given the large number of working papers now available. It also considers options for the National Bureau of Economic Research's web repository, such as providing open access to working papers along with links to related materials, peer reviews, and published versions of the papers.
The NBER Working Paper Series at 20,000 - Claudia Goldin (NBER)
This document analyzes trends in the NBER Working Paper series from 1978 to 2013. It finds that the number of working papers published annually has increased dramatically over time, from around 100 in the late 1970s to over 1,200 by 2013. The number of NBER research programs has also expanded significantly, from 7 originally to over 20 currently. Individual working papers now tend to involve more programs and more authors than in the past as well. The working paper series has become less specialized and more collaborative over four decades of growth and evolution.
The NBER Working Paper Series at 20,000 - James Poterba (NBER)
This document summarizes the origin and evolution of the NBER Working Paper series from its beginning in 1972 to the present. It started as an outlet for NBER research and has grown tremendously over time. Some key points:
- The first working paper was published in June 1973 and there were only 3 papers in the first month.
- Growth accelerated after Martin Feldstein became NBER President in 1977, with over 200 papers published in 1981.
- There are now over 20,000 working papers published and about 5.5 million downloads per year from around the world.
- The most popular papers focus on topics like financial crises, economic growth, and corporate governance.
The NBER Working Paper Series at 20,000 - Scott SternNBER
The NBER Working Paper series recently reached 20,000 papers published and is recognized as one of the leading economics working paper series in the world. According to 2014 Google Scholar Metrics, the NBER Working Paper series ranked 18th out of thousands of journals by its H-5 index, which measures the productivity and impact of published work. The high ranking of the NBER Working Paper series demonstrates its important role in disseminating new economic research and ideas worldwide.
The NBER Working Paper Series at 20,000 - Glenn EllisonNBER
This document summarizes trends in the publication process and the role of working papers. It finds that publication times at economics journals have increased significantly over the past 30 years. Acceptance rates at top journals have also declined. These changes mean that published papers cannot address current issues or reflect the latest state of knowledge as quickly. The document also finds that working papers, such as those from the NBER, play an increasingly important role, as economists can disseminate their work more quickly through working paper series than through the traditional publication process. NBER working papers account for a large share of papers eventually published in top journals and those NBER papers go on to be well-cited.
- The document summarizes a lecture on using micro data with characteristics-based choice models. It discusses two key advantages of micro data: 1) It provides information on how observed individual characteristics interact with product characteristics. 2) It includes data on individuals who did not purchase products as well as second choices, giving insight into unobserved product characteristics.
- The model specifies utility as depending on observed and unobserved individual characteristics as well as product characteristics. Micro data on first choices matches individual characteristics to chosen products, while second choice data helps account for unobserved characteristics by holding individual conditions constant.
2. Overall Outline
• Perturbation and Projection Methods for DSGE Models: an Overview
• Simple New Keynesian model
  – Formulation and log-linear solution.
  – Ramsey-optimal policy.
  – Using Dynare to solve the model by log-linearization: Taylor principle, implications of working capital, news shocks, monetary policy with the long rate.
• Financial Frictions as in BGG
  – Risk shocks and the CKM critique of intertemporal shocks.
  – Dynare exercise.
• Ramsey Optimal Policy, Time Consistency, Timeless Perspective.
4. Outline
• A Simple Example to Illustrate the basic ideas.
  – Functional form characterization of the model solution.
  – Use of projections and perturbations.
• Neoclassical model.
  – Projection methods.
  – Perturbation methods.
• Make sense of the proposition: 'to a first order approximation, can replace equilibrium conditions with linear expansion about nonstochastic steady state and solve the resulting system using certainty equivalence.'
5. Simple Example
• Suppose that x is some exogenous variable and that the following equation implicitly defines y:
  h(x, y) = 0, for all x ∈ X.
• Let the solution be defined by the 'policy rule', g:
  y = g(x).
• The 'error function' satisfies
  R(x; g) ≡ h(x, g(x)) = 0
  for all x ∈ X.
6. The Need to Approximate
• Finding the policy rule, g, is a big problem outside special cases:
  – 'An infinite number of unknowns (i.e., one value of g for each possible x) in an infinite number of equations (i.e., one equation for each possible x).'
• Two approaches: projection and perturbation.
7. Projection
• Find a parametric function, ĝ(x; θ), where θ is a vector of parameters chosen so that it imitates the property of the exact solution, R(x; g) = 0 for all x ∈ X, as well as possible.
• Choose values for θ so that
  R̂(x; θ) ≡ h(x, ĝ(x; θ))
  is close to zero for x ∈ X.
• The method is defined by how 'close to zero' is defined and by the parametric function, ĝ(x; θ), that is used.
8. Projection, continued
• Spectral and finite element approximations.
  – Spectral functions: functions, ĝ(x; θ), in which each parameter in θ influences ĝ(x; θ) for all x ∈ X. Example:
    ĝ(x; θ) = Σ_{i=0}^{n−1} θ_i H_i(x), where
    H_i(x) = x^i ~ ordinary polynomial (not computationally efficient), or
    H_i(x) = T_i(φ(x)),
    T_i(z): [−1, 1] → [−1, 1] the i-th order Chebyshev polynomial, φ: X → [−1, 1].
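As a concrete illustration of the spectral case, here is a minimal sketch (in Python) of evaluating ĝ(x; θ) = Σ_i θ_i T_i(φ(x)) with a Chebyshev basis; the domain bounds, coefficient values, and function names are illustrative assumptions, not taken from the slides.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def phi(x, x_lo, x_hi):
    """Map x in X = [x_lo, x_hi] linearly into [-1, 1]."""
    return 2.0 * (x - x_lo) / (x_hi - x_lo) - 1.0

def g_hat(x, theta, x_lo=0.0, x_hi=2.0):
    """Spectral approximation g_hat(x; theta) = sum_i theta_i * T_i(phi(x))."""
    return chebval(phi(x, x_lo, x_hi), theta)

# Example: a 5-parameter approximation evaluated on a grid of x values.
theta = np.array([0.5, 0.2, -0.1, 0.03, 0.01])   # illustrative coefficients
x_grid = np.linspace(0.0, 2.0, 7)
print(g_hat(x_grid, theta))
```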
9. Projection, continued
  – Finite element approximations: functions, ĝ(x; θ), in which each parameter in θ influences ĝ(x; θ) over only a subinterval of x ∈ X.
[Figure: a piecewise-linear finite element approximation of ĝ(x; θ), with the domain X partitioned into subintervals and parameters θ_1, ..., θ_7 each governing the function on one subinterval.]
10. Projection, continued
• 'Close to zero': collocation and Galerkin.
• Collocation: for n values of x: x_1, x_2, ..., x_n ∈ X, choose the n elements of θ = (θ_1, ..., θ_n) so that
  R̂(x_i; θ) ≡ h(x_i, ĝ(x_i; θ)) = 0, i = 1, ..., n.
  – How you choose the grid of x's matters…
• Galerkin: for m > n values of x: x_1, x_2, ..., x_m ∈ X, choose the n elements of θ = (θ_1, ..., θ_n) so that
  Σ_{j=1}^{m} w_ij h(x_j, ĝ(x_j; θ)) = 0, i = 1, ..., n.
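A minimal collocation sketch, under stated assumptions: the implicit relation h(x, y) = y³ + y − x, the domain X = [0, 2], and the size of the Chebyshev basis are illustrative choices, not from the slides. The n collocation conditions R̂(x_i; θ) = 0 are stacked and handed to a nonlinear root finder.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval, chebpts1
from scipy.optimize import root

# Illustrative implicit relation h(x, y) = 0; the exact solution y = g(x)
# solves y**3 + y - x = 0 (monotone in y, so g is well defined).
def h(x, y):
    return y**3 + y - x

x_lo, x_hi, n = 0.0, 2.0, 8          # domain X and number of basis functions

def phi(x):
    return 2.0 * (x - x_lo) / (x_hi - x_lo) - 1.0

def g_hat(x, theta):
    return chebval(phi(x), theta)     # sum_i theta_i * T_i(phi(x))

# Collocation grid: n Chebyshev points mapped into X (how the grid is chosen matters).
z = chebpts1(n)
x_col = x_lo + 0.5 * (z + 1.0) * (x_hi - x_lo)

def residuals(theta):
    # R_hat(x_i; theta) = h(x_i, g_hat(x_i; theta)) at each collocation point.
    return h(x_col, g_hat(x_col, theta))

sol = root(residuals, np.zeros(n))    # n equations in the n unknowns theta
theta = sol.x
x_fine = np.linspace(x_lo, x_hi, 200)
print("converged:", sol.success)
print("max residual on a fine grid:",
      np.abs(h(x_fine, g_hat(x_fine, theta))).max())
```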
11. Perturbation
• Projection uses the 'global' behavior of the functional equation to approximate the solution.
  – Problem: requires finding zeros of non-linear equations. Iterative methods for doing this are a pain.
  – Advantage: can easily adapt to situations in which the policy rule is not continuous or is non-differentiable (e.g., an occasionally binding zero lower bound on the interest rate).
• The perturbation method uses local properties of the functional equation and the Implicit Function/Taylor's theorem to approximate the solution.
  – Advantage: can implement it using non-iterative methods.
  – Possible disadvantages:
    • May require enormously high derivatives to achieve a decent global approximation.
    • Does not work when there are important non-differentiabilities (e.g., an occasionally binding zero lower bound on the interest rate).
12. Perturbation, cnt'd
• Suppose there is a point, x* ∈ X, where we know the value taken on by the function, g, that we wish to approximate:
  g(x*) = g*, some x*.
• Use the implicit function theorem to approximate g in a neighborhood of x*.
• Note:
  R(x; g) = 0 for all x ∈ X
  ⟹ R^(j)(x; g) ≡ (d^j/dx^j) R(x; g) = 0 for all j, all x ∈ X.
13. Perturbation, cnt'd
• Differentiate R with respect to x and evaluate the result at x = x*:
  R^(1)(x*) = (d/dx) h(x, g(x))|_{x=x*} = h_1(x*, g*) + h_2(x*, g*) g′(x*) = 0
  ⟹ g′(x*) = − h_1(x*, g*) / h_2(x*, g*).
• Do it again!
  R^(2)(x*) = (d²/dx²) h(x, g(x))|_{x=x*}
            = h_11(x*, g*) + 2 h_12(x*, g*) g′(x*) + h_22(x*, g*) [g′(x*)]² + h_2(x*, g*) g″(x*) = 0
  ⟹ Solve this linearly for g″(x*).
14. Perturbation, cnt'd
• The preceding calculations deliver (assuming enough differentiability, appropriate invertibility, and a high tolerance for painful notation!), recursively:
  g′(x*), g″(x*), ..., g^(n)(x*).
• Then, we have the following Taylor's series approximation:
  g(x) ≈ ĝ(x)
  ĝ(x) = g* + g′(x*)(x − x*) + (1/2) g″(x*)(x − x*)² + ... + (1/n!) g^(n)(x*)(x − x*)^n.
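A minimal sketch of this recursion, using an illustrative implicit relation h(x, y) = xy − 1 (so the exact policy rule is g(x) = 1/x and the Taylor coefficients can be checked by hand); the choice of h and the function names are assumptions for the example, not from the slides.

```python
import numpy as np

# Illustrative implicit relation and its partial derivatives.
def h(x, y):    return x * y - 1.0
def h1(x, y):   return y            # dh/dx
def h2(x, y):   return x            # dh/dy
def h11(x, y):  return 0.0
def h12(x, y):  return 1.0
def h22(x, y):  return 0.0

# Expansion point: x* and the known solution value g* with h(x*, g*) = 0.
x_star = 2.0
g_star = 1.0 / x_star

# First-order coefficient: g'(x*) = -h1 / h2.
g1 = -h1(x_star, g_star) / h2(x_star, g_star)

# Second-order coefficient: solve h11 + 2 h12 g' + h22 g'^2 + h2 g'' = 0 for g''.
g2 = -(h11(x_star, g_star) + 2.0 * h12(x_star, g_star) * g1
       + h22(x_star, g_star) * g1**2) / h2(x_star, g_star)

def g_hat(x):
    """Second-order Taylor approximation of g around x*."""
    dx = x - x_star
    return g_star + g1 * dx + 0.5 * g2 * dx**2

x = 2.3
print("approx:", g_hat(x), "  exact:", 1.0 / x)   # close for x near x* = 2
```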
16. Example of Implicit Function Theorem
  h(x, y) = (1/2)(x² + y²) − 8 = 0
  g(x) ≃ g* − (x*/g*)(x − x*)
[Figure: the circle h(x, y) = 0, of radius 4 in the (x, y) plane (axes from −4 to 4), together with the tangent-line approximation to y = g(x) at the point (x*, g*).]
  h_1(x*, g*) = x*;  h_2(x*, g*) = g* had better not be zero!
  g′(x*) = − h_1(x*, g*) / h_2(x*, g*) = − x*/g*.
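A quick numerical check of this example as reconstructed above: the implicit-function-theorem slope −x*/g* is compared with a finite-difference derivative of the explicit upper branch y = √(16 − x²). The expansion point is an arbitrary illustrative choice.

```python
import numpy as np

def h(x, y):
    return 0.5 * (x**2 + y**2) - 8.0      # the circle x^2 + y^2 = 16

x_star = 1.5
g_star = np.sqrt(16.0 - x_star**2)        # upper branch through (x*, g*)

slope_ift = -x_star / g_star              # g'(x*) = -h1/h2 = -x*/g*

eps = 1e-6                                # finite-difference check of the slope
slope_fd = (np.sqrt(16.0 - (x_star + eps)**2)
            - np.sqrt(16.0 - (x_star - eps)**2)) / (2 * eps)

print(h(x_star, g_star))                  # ~0: (x*, g*) lies on the circle
print(slope_ift, slope_fd)                # the two slopes agree
```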
17. Neoclassical Growth Model
• Objective:
  E_0 Σ_{t=0}^∞ β^t u(c_t),  u(c_t) = (c_t^{1−γ} − 1) / (1 − γ).
• Constraints:
  c_t + exp(k_{t+1}) ≤ f(k_t, a_t), t = 0, 1, 2, ....
  a_t = ρ a_{t−1} + ε_t.
  f(k_t, a_t) = exp(k_t)^α exp(a_t) + (1 − δ) exp(k_t).
18. Efficiency Condition
  E_t { u′(c_t) − β u′(c_{t+1}) f_K(k_{t+1}, a_{t+1}) } = 0,
  where c_t = f(k_t, a_t) − exp(k_{t+1}), c_{t+1} = f(k_{t+1}, a_{t+1}) − exp(k_{t+2}),
  and f_K(k_{t+1}, a_{t+1}) is the period t+1 marginal product of capital.
• Here, k_t, a_t are given numbers, ε_{t+1} ~ iid with mean zero and variance V_e, and k_{t+1} is the time t choice variable.
• Convenient to suppose the model is the limit, as σ → 1, of a sequence of models indexed by σ:
  ε_{t+1} ~ (0, σ² V_e), 0 ≤ σ ≤ 1.
19. Solution
• A policy rule,
  k_{t+1} = g(k_t, a_t, σ).
• With the property:
  R(k_t, a_t, σ; g) ≡ E_t { u′( f(k_t, a_t) − exp(g(k_t, a_t, σ)) )
    − β u′( f(g(k_t, a_t, σ), ρ a_t + ε_{t+1}) − exp(g(g(k_t, a_t, σ), ρ a_t + ε_{t+1}, σ)) )
    × f_K(g(k_t, a_t, σ), ρ a_t + ε_{t+1}) } = 0,
  where g(k_t, a_t, σ) stands in for k_{t+1}, ρ a_t + ε_{t+1} for a_{t+1}, and the two u′ arguments are c_t and c_{t+1},
• for all a_t, k_t and 0 ≤ σ ≤ 1.
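A sketch of evaluating this error function numerically for a candidate policy rule, with the conditional expectation over ε_{t+1} replaced by Gauss–Hermite quadrature under an assumed normal distribution. The parameter values (including the discount factor β) and the linear trial rule are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# Illustrative parameter values (alpha, beta, delta, gamma, rho, Ve are assumptions).
alpha, beta, delta, gamma, rho, Ve = 0.36, 0.99, 0.02, 2.0, 0.95, 0.01**2

u_prime = lambda c: c**(-gamma)
f   = lambda k, a: np.exp(k)**alpha * np.exp(a) + (1 - delta) * np.exp(k)
f_K = lambda k, a: alpha * np.exp((alpha - 1) * k + a) + 1 - delta

def R(k, a, sigma, g, n_quad=11):
    """Euler-equation residual R(k, a, sigma; g) for a candidate rule g(k, a, sigma)."""
    # Gauss-Hermite nodes/weights for E[.], assuming eps ~ N(0, sigma^2 * Ve).
    x, w = np.polynomial.hermite_e.hermegauss(n_quad)
    eps = sigma * np.sqrt(Ve) * x
    w = w / w.sum()

    kp  = g(k, a, sigma)                      # k_{t+1}
    c   = f(k, a) - np.exp(kp)                # c_t
    ap  = rho * a + eps                       # a_{t+1}, one value per quadrature node
    kpp = g(kp, ap, sigma)                    # k_{t+2}
    cp  = f(kp, ap) - np.exp(kpp)             # c_{t+1}
    return u_prime(c) - beta * np.sum(w * u_prime(cp) * f_K(kp, ap))

# Example: residual of an arbitrary linear trial rule near an arbitrary state.
g_trial = lambda k, a, sigma: 0.95 * k + 0.05 * a + 0.1
print(R(3.9, 0.0, 1.0, g_trial))
```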
20. Projection Methods
• Let
  ĝ(k_t, a_t, σ; θ)
  – be a function with finitely many parameters (could be either spectral or finite element, as before).
• Choose the parameters, θ, to make
  R(k_t, a_t, σ; ĝ)
  – as close to zero as possible, over a range of values of the state.
  – Use Galerkin or collocation.
21. Occasionally Binding Constraints
• Suppose we add the non-negativity constraint on investment:
  exp(g(k_t, a_t, σ)) − (1 − δ) exp(k_t) ≥ 0.
• Express the problem in Lagrangian form; the optimum is characterized in terms of equality conditions with a multiplier and a complementary slackness condition associated with the constraint.
• Conceptually straightforward to apply the preceding method. For details, see Christiano-Fisher, 'Algorithms for Solving Dynamic Models with Occasionally Binding Constraints', 2000, Journal of Economic Dynamics and Control.
  – That paper describes alternative strategies, based on parameterizing the expectation function, that may be easier when constraints are occasionally binding.
22. Perturbation Approach
• Straightforward application of the perturbation approach, as in the simple example, requires knowing the value taken on by the policy rule at a point.
• The overwhelming majority of models used in macro do have this property.
  – In these models, one can compute the non-stochastic steady state without any knowledge of the policy rule, g.
  – The non-stochastic steady state is the value k* such that
    k* = g(k*, 0, 0),
    where the second argument is a = 0 (the non-stochastic steady state of the technology shock) and the third is σ = 0 (no uncertainty),
  – and
    k* = (1 / (1 − α)) log( αβ / (1 − (1 − δ)β) ).
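A quick check of this formula against the steady-state Euler equation β f_K(k*, 0) = 1, under an assumed value of the discount factor β (the slides do not report β; the value below is illustrative):

```python
import numpy as np

alpha, beta, delta = 0.36, 0.99, 0.02          # beta is an assumed value

k_star = np.log(alpha * beta / (1 - (1 - delta) * beta)) / (1 - alpha)

f_K = alpha * np.exp((alpha - 1) * k_star) + 1 - delta   # marginal product of capital
print(k_star, beta * f_K)                      # k* ~ 3.88; beta * f_K = 1 at the steady state
```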
23. Perturbation
• Error function:
  R(k_t, a_t, σ; g) ≡ E_t { u′( f(k_t, a_t) − exp(g(k_t, a_t, σ)) )
    − β u′( f(g(k_t, a_t, σ), ρ a_t + ε_{t+1}) − exp(g(g(k_t, a_t, σ), ρ a_t + ε_{t+1}, σ)) )
    × f_K(g(k_t, a_t, σ), ρ a_t + ε_{t+1}) } = 0
  – for all values of k_t, a_t, σ.
• So, all order derivatives of R with respect to its arguments are zero (assuming they exist!).
24. Four (Easy to Show) Results About Perturbations
• Taylor series expansion of the policy rule:
  g(k_t, a_t, σ) ≃ k + g_k (k_t − k) + g_a a_t + g_σ σ   (linear component of policy rule)
    + (1/2)[ g_kk (k_t − k)² + g_aa a_t² + g_σσ σ² ]
    + g_ka (k_t − k) a_t + g_kσ (k_t − k) σ + g_aσ a_t σ + ...   (second and higher order terms)
  – g_σ = 0: to a first order approximation, 'certainty equivalence'.
  – All terms are found by solving linear equations, except the coefficient on the past endogenous variable, g_k, which requires solving for eigenvalues.
  – To a second order approximation the slope terms are certainty equivalent: g_kσ = g_aσ = 0.
  – Quadratic and higher order terms are computed recursively.
25. First Order Perturbation
• Working out the following derivatives and evaluating at k_t = k*, a_t = 0, σ = 0:
  R_k(k_t, a_t, σ; g) = R_a(k_t, a_t, σ; g) = R_σ(k_t, a_t, σ; g) = 0.
• Implies, in the linear approximation:
  R_k = u″ (f_k − e^g g_k) − β u′ f_Kk g_k − β u″ (f_k g_k − e^g g_k²) f_K = 0
    (the g_k² term is the 'problematic term': it makes this equation quadratic in g_k)
  R_a = u″ (f_a − e^g g_a) − β u′ (f_Kk g_a + f_Ka ρ) − β u″ (f_k g_a + f_a ρ − e^g (g_k g_a + g_a ρ)) f_K = 0
  R_σ = −{ u″ e^g + β u″ (f_k − e^g g_k − e^g) f_K + β u′ f_Kk } g_σ = 0
    (source of certainty equivalence: the term in braces is generically nonzero, so g_σ = 0).
26. Technical notes for following slide
• Start from R_k = 0 and divide by u″:
  u″ (f_k − e^g g_k) − β u′ f_Kk g_k − β u″ (f_k g_k − e^g g_k²) f_K = 0
  (f_k − e^g g_k) − β (u′/u″) f_Kk g_k − β (f_k g_k − e^g g_k²) f_K = 0.
• Collect terms in powers of g_k:
  f_k − [ e^g + β (u′/u″) f_Kk + β f_k f_K ] g_k + β e^g f_K g_k² = 0.
• Divide by β e^g f_K and use the steady-state relations f_k = e^g f_K and β f_K = 1:
  f_k/(β e^g f_K) − [ 1/(β f_K) + (u′ f_Kk)/(u″ e^g f_K) + f_k/e^g ] g_k + g_k² = 0
  1/β − [ 1 + 1/β + (u′ f_Kk)/(u″ e^g f_K) ] g_k + g_k² = 0.
• Simplify this further using:
  f_K = α K^{α−1} exp(a) + 1 − δ = α exp((α − 1)k + a) + 1 − δ,  K ≡ exp(k)
  f_k = α exp(αk + a) + (1 − δ) exp(k) = f_K e^g
  f_Kk = α(α − 1) exp((α − 1)k + a)
  f_KK = α(α − 1) K^{α−2} exp(a) = α(α − 1) exp((α − 2)k + a) = f_Kk e^{−g}
• to obtain the polynomial on the next slide.
27. First Order, cont'd
• Rewriting the R_k = 0 term using f_KK = f_Kk e^{−g}:
  1/β − [ 1 + 1/β + (u′ f_KK)/(u″ f_K) ] g_k + g_k² = 0.
• There are two solutions, 0 < g_k < 1 and g_k > 1.
  – Theory (see Stokey-Lucas) tells us to pick the smaller one.
  – In general, there could be more than one eigenvalue less than unity: multiple solutions.
• Conditional on the solution for g_k, g_a is solved for linearly using the R_a = 0 equation.
• These results all generalize to the multidimensional case.
28. Numerical Example
• Parameters taken from Prescott (1986), with two values of the curvature parameter:
  γ = 2, 20;  α = 0.36;  δ = 0.02;  ρ = 0.95;  V_e = 0.01².
• Second order approximation (coefficients reported for γ = 2 and γ = 20, respectively; a_t = ρ a_{t−1} + ε_t):
  ĝ(k_t, a_{t−1}, ε_t, σ) = k* + g_k (k_t − k*) + g_a a_t + g_σ σ
    + (1/2)[ g_kk (k_t − k*)² + g_aa a_t² + g_σσ σ² ]
    + g_ka (k_t − k*) a_t + g_kσ (k_t − k*) σ + g_aσ a_t σ,
  with k* = 3.88; g_k = 0.98, 0.996; g_a = 0.06, 0.07; g_σ = 0;
  g_kk = 0.014, 0.00017; g_aa = 0.067, 0.079; g_σσ = 0.000024, 0.00068;
  g_ka = −0.035, −0.028; g_kσ = g_aσ = 0.
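A sketch that reproduces the first-order coefficient g_k for this calibration by solving the quadratic from the previous two slides; the discount factor β is not reported on the slide, so the value used here is an assumption.

```python
import numpy as np

alpha, beta, delta, rho = 0.36, 0.99, 0.02, 0.95   # beta is assumed

k_star = np.log(alpha * beta / (1 - (1 - delta) * beta)) / (1 - alpha)
K = np.exp(k_star)
c_star = K**alpha - delta * K                      # steady-state consumption

for gamma in (2.0, 20.0):                          # the two curvature values
    # Quadratic g_k**2 - [1 + 1/beta + (u'/u'') f_KK / f_K] g_k + 1/beta = 0,
    # with u'/u'' = -c*/gamma, f_K = 1/beta, f_KK = alpha (alpha - 1) K**(alpha - 2).
    f_KK = alpha * (alpha - 1.0) * K**(alpha - 2.0)
    b = -(1.0 + 1.0 / beta + (-c_star / gamma) * f_KK * beta)
    roots = np.roots([1.0, b, 1.0 / beta])
    g_k = roots[np.abs(roots) < 1.0][0]            # pick the stable root (theory: the smaller one)
    print(f"gamma = {gamma:>4}: k* = {k_star:.2f}, g_k = {g_k:.3f}")
```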
29. Conclusion
• For modest US-sized fluctuations and for aggregate quantities, it is reasonable to work with first order perturbations.
• First order perturbation: linearize (or log-linearize) the equilibrium conditions around the non-stochastic steady state and solve the resulting system.
  – This approach assumes 'certainty equivalence'. That is OK, as a first order approximation.
30. Solution by Linearization
• (Log) linearized equilibrium conditions:
  E_t [ α_0 z_{t+1} + α_1 z_t + α_2 z_{t−1} + β_0 s_{t+1} + β_1 s_t ] = 0,
  where z_t is the list of endogenous variables determined at t and s_t is the vector of exogenous shocks.
• Posit a linear solution:
  z_t = A z_{t−1} + B s_t,  s_t − P s_{t−1} − ε_t = 0.
• To satisfy the equilibrium conditions, A and B must satisfy:
  α_0 A² + α_1 A + α_2 I = 0,  F ≡ (β_0 + α_0 B) P + β_1 + (α_0 A + α_1) B = 0.
• If there is exactly one A with eigenvalues less than unity in absolute value, that's the solution. Otherwise, there are multiple solutions.
• Conditional on A, solve the linear system for B.
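A minimal sketch of this solution method: the matrix quadratic in A is solved through the generalized eigenvalue problem of its companion pencil (keeping the stable eigenvalues), and B is then obtained from the linear system implied by F = 0 after vectorization. The example matrices, the function name, and the assumption that exactly n stable eigenvalues exist are all illustrative.

```python
import numpy as np
from scipy.linalg import eig, solve

def solve_linear_model(a0, a1, a2, b0, b1, P):
    """Solve a0 A^2 + a1 A + a2 = 0 for the stable A, then F = 0 for B."""
    n = a0.shape[0]
    # Companion pencil: eigenpairs (lam, [lam*s; s]) satisfy (a0 lam^2 + a1 lam + a2) s = 0.
    lhs = np.block([[-a1, -a2], [np.eye(n), np.zeros((n, n))]])
    rhs = np.block([[a0, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
    lam, vecs = eig(lhs, rhs)
    stable = np.abs(lam) < 1.0                      # assume exactly n stable eigenvalues
    S = vecs[n:, stable][:, :n]                     # bottom blocks of the stable eigenvectors
    A = np.real(S @ np.diag(lam[stable][:n]) @ np.linalg.inv(S))
    # F = (b0 + a0 B) P + b1 + (a0 A + a1) B = 0  ->  linear in vec(B).
    m = P.shape[0]
    lhs_B = np.kron(P.T, a0) + np.kron(np.eye(m), a0 @ A + a1)
    B = solve(lhs_B, -(b0 @ P + b1).flatten(order="F")).reshape((n, m), order="F")
    return A, B

# Tiny illustrative system (1 endogenous variable, 1 shock).
a0, a1, a2 = np.array([[0.5]]), np.array([[-1.6]]), np.array([[0.6]])
b0, b1, P = np.array([[0.0]]), np.array([[-0.2]]), np.array([[0.9]])
A, B = solve_linear_model(a0, a1, a2, b0, b1, P)
print(A, B)   # A contains the stable root of 0.5 x^2 - 1.6 x + 0.6 = 0
```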