- The document describes an empirical methods project that solves a heterogeneous agent macroeconomic model numerically following the steps outlined in a 1998 paper by Per Krusell and Anthony Smith.
- The model includes idiosyncratic and aggregate risk and assumes agents have bounded rationality in predicting the distribution of capital.
- The algorithm involves computing transition probabilities, choosing parameters for the capital distribution law of motion, and iterating on the value function until policy functions converge.
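The value-function-iteration step described above can be sketched on a toy problem. The following is a hedged illustration, not the paper's actual code: a deterministic growth model with log utility and full depreciation, chosen because its true policy function k' = αβk^α is known in closed form and can be used to check that the iterated policy has converged. All parameter values are illustrative.

```python
# Value function iteration on a deterministic growth model (log utility,
# full depreciation), a toy stand-in for the inner loop of the
# Krusell-Smith procedure. Parameters are illustrative.
import math

alpha, beta = 0.3, 0.9
grid = [0.02 + i * (0.38 / 99) for i in range(100)]   # capital grid

V = [0.0] * len(grid)
policy = [0] * len(grid)
for _ in range(200):                                   # iterate on the value function
    newV = []
    for i, k in enumerate(grid):
        y = k ** alpha
        best, best_j = -1e18, 0
        for j, kp in enumerate(grid):
            c = y - kp
            if c <= 0:
                break                                  # grid is increasing; nothing feasible beyond here
            val = math.log(c) + beta * V[j]
            if val > best:
                best, best_j = val, j
        newV.append(best)
        policy[i] = best_j
    V = newV

# compare the converged policy with the closed-form solution at one grid point
k = grid[50]
print(grid[policy[50]], alpha * beta * k ** alpha)
```

The two printed values should agree up to the grid spacing, which is the usual convergence check before moving to the outer loop over the law-of-motion coefficients.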
Statistical Analysis and Model Validation of Gompertz Model on different Real... (Editor Jacotech)
This document summarizes statistical analysis and model validation of the Gompertz model on different real data sets for reliability modeling. It presents the maximum likelihood estimation of parameters for the Gompertz model using the Newton-Raphson method. Goodness of fit tests including the Kolmogorov-Smirnov test and quantile-quantile plot are used to validate the Gompertz model on six different real data sets and determine which data sets provide the best fit for parameter estimation of the Gompertz model.
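The Newton-Raphson step mentioned above can be sketched in a reduced one-parameter form. The following is a hedged illustration with made-up data: the Gompertz hazard is taken as h(t) = a·exp(b·t), the shape b is assumed known, and the scale a is found by Newton-Raphson on the score equation, whose root a* = nb / Σ(e^{bt_i} − 1) is available in closed form here and serves only as a check.

```python
# Newton-Raphson for the MLE of the Gompertz scale parameter a, with the
# shape b held fixed so the answer can be verified against the closed form.
# Failure-time data and parameter values are illustrative.
import math

t = [0.3, 0.8, 1.1, 1.9, 2.4, 0.6, 1.5]   # illustrative failure times
b = 0.5                                    # shape, assumed known here
n = len(t)
S = sum(math.exp(b * ti) - 1 for ti in t)

def score(a):          # d/da of the Gompertz log-likelihood
    return n / a - S / b

def dscore(a):         # second derivative w.r.t. a
    return -n / a ** 2

a = 0.5                # initial guess
for _ in range(25):    # Newton-Raphson iterations
    a = a - score(a) / dscore(a)

print(a, n * b / S)    # the two should agree
```

In the two-parameter problem the same update is applied with the gradient and Hessian of the full log-likelihood, but the structure of the iteration is identical.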
We approach the screening problem, i.e. detecting which inputs of a computer model significantly impact the output, from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each subset of the $p$ inputs affects the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems, normally seen as unrelated, have challenging connections, since the priors proposed in the literature are specifically designed to have posterior modes on the boundary of the parameter space, precluding the application of approximate integration techniques based on, e.g., Laplace approximations. We explore several ways of circumventing this difficulty, comparing the different methodologies on synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
- The document discusses representation of stochastic processes in real and spectral domains and Monte Carlo sampling.
- Stochastic processes can be represented in the real (time or space) domain using autocorrelation and variogram functions, and in the spectral domain using power spectral density functions.
- Monte Carlo sampling uses techniques to generate random numbers from a probability density function for random sampling.
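The last point, generating random numbers from a given probability density, can be sketched with the standard inverse-transform technique. This is a generic illustration (the exponential distribution and the rate value are chosen for convenience, not taken from the document): uniforms are pushed through the inverse CDF t = −ln(1 − u)/λ.

```python
# Inverse-transform Monte Carlo sampling: exponential draws with rate lam
# from uniform random numbers. Parameter values are illustrative.
import math
import random

random.seed(0)
lam = 2.0
samples = [-math.log(1.0 - random.random()) / lam for _ in range(20000)]
print(sum(samples) / len(samples))   # should be close to 1/lam = 0.5
```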
Determination of Optimal Product Mix for Profit Maximization using Linear Pro... (IJERA Editor)
This paper demonstrates the use of linear programming methods to determine the optimal product mix for profit maximization. Several papers have been written demonstrating the use of linear programming to find the optimal product mix in various organizations. This paper aims to show the generic approach to be taken to find the optimal product mix.
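The product-mix idea can be sketched on a tiny two-product example with made-up profits and resource limits. A real application would hand the same constraints to an LP solver (simplex); here integer quantities are simply brute-forced for clarity.

```python
# Tiny product-mix example: maximize profit 3x + 5y subject to three
# resource constraints. Coefficients are illustrative, not from the paper.
best = (-1, 0, 0)
for x in range(0, 11):          # units of product A
    for y in range(0, 11):      # units of product B
        # resource constraints: machine hours, labour, raw material
        if x <= 4 and 2 * y <= 12 and 3 * x + 2 * y <= 18:
            profit = 3 * x + 5 * y
            if profit > best[0]:
                best = (profit, x, y)
print(best)   # best profit and the mix achieving it: (36, 2, 6)
```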
1) The document discusses generating random numbers with specified distributions for use in simulations and finance modeling.
2) It describes how linear congruential generators are commonly used to generate uniformly distributed random numbers by calculating values modulo a large integer.
3) Quality requirements for random number generators include having a long period before repeating, passing statistical tests for the desired distribution, and being uniformly distributed in multi-dimensional spaces without clustering along hyperplanes.
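A linear congruential generator of the kind described in point 2 fits in a few lines. The constants below are the well-known Numerical Recipes values; as point 3 notes, production code should prefer a vetted modern generator, since LCGs cluster along hyperplanes in high dimensions.

```python
# Linear congruential generator x_{n+1} = (A*x_n + C) mod M with the
# Numerical Recipes constants; for illustration only.
M = 2 ** 32
A = 1664525
C = 1013904223

def lcg(seed, n):
    """Return n pseudo-random integers in [0, M)."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x + C) % M
        out.append(x)
    return out

draws = lcg(42, 3)
uniforms = [d / M for d in draws]   # rescale to [0, 1)
print(draws[0])                      # 1083814273 for seed 42
```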
This document discusses stochastic models for site characterization. It describes several continuous models for generating random fields including the multivariate normal method, LU decomposition method, and turning bands method. The multivariate normal method models a random vector as having a multivariate normal distribution defined by a mean vector and covariance matrix. The LU decomposition method generates a random field with a given covariance structure by decomposing the covariance matrix into lower and upper triangular matrices. It provides numerical examples of applying the LU decomposition method to generate correlated random variables at two points.
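The two-point LU decomposition example described above can be sketched directly: for the covariance matrix [[1, ρ], [ρ, 1]] the lower-triangular (Cholesky) factor is known in closed form, and multiplying a vector of independent standard normals by it produces the target correlation. The value of ρ is illustrative.

```python
# LU (Cholesky) generation of two correlated standard normal variables.
import math
import random

rho = 0.8
L = [[1.0, 0.0],
     [rho, math.sqrt(1.0 - rho ** 2)]]   # lower-triangular factor

# check that L @ L^T reproduces the covariance matrix
cov = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(cov)

random.seed(1)
pairs = []
for _ in range(20000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((L[0][0] * z1, L[1][0] * z1 + L[1][1] * z2))

n = len(pairs)
r = sum(a * b for a, b in pairs) / n      # sample correlation (unit variances)
print(r)                                   # close to rho
```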
This document summarizes the theoretical foundations of a general equilibrium model using a 2x2 Heckscher-Ohlin-Samuelson (HOS) model. It presents the model, which assumes two goods are produced using two factors of production, and explores the implications on prices and outputs from changes in factors or goods. Key results include the Stolper-Samuelson theorem, which states that an increase in the price of a good raises the reward to its intensive factor of production, and the Rybczynski theorem, relating changes in factors to changes in outputs. Equations are provided and concepts like elasticity of substitution and determinants are introduced to analyze comparative static effects in the model.
Stochastic differential equations (SDEs) describe systems with random components. Common methods to solve SDEs include spectral and perturbation methods. The spectral method represents variables and parameters as mean values plus fluctuations. Taking the expected value of the SDE yields equations for the mean and fluctuations that can be solved. The perturbation method expresses variables and parameters as power series expansions. Introducing these into the SDE allows analytical or numerical solution. SDEs are used to model systems with uncertain parameters like groundwater flow with random hydraulic conductivity.
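The "take the expected value" step can be checked numerically on the simplest linear SDE. This sketch uses Euler-Maruyama simulation, an assumption on my part (the text discusses spectral and perturbation methods, not this scheme): for dX = −aX dt + σ dW, taking expectations gives d⟨X⟩/dt = −a⟨X⟩, so the Monte Carlo average of the paths should follow ⟨X(t)⟩ = X₀e^{−at}. All parameters are illustrative.

```python
# Monte Carlo check that the mean of dX = -a*X dt + sigma*dW satisfies
# the deterministic mean equation d<X>/dt = -a<X>.
import math
import random

random.seed(3)
a, sigma, X0 = 1.0, 0.5, 1.0
T, dt = 1.0, 0.01
steps = int(T / dt)
npaths = 2000

total = 0.0
for _ in range(npaths):
    x = X0
    for _ in range(steps):
        x += -a * x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    total += x

mc_mean = total / npaths
analytic = X0 * math.exp(-a * T)
print(mc_mean, analytic)   # the two should be close
```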
1. The document discusses the estimation problem in geostatistics, which is determining the value of a quantity Z0 at an unmeasured point (x0, y0) based on measurements at nearby points.
2. It describes kriging as the best linear unbiased estimator that takes into account the spatial structure and correlation between points to estimate values across a field. The kriging system minimizes the variance of errors in estimates.
3. A simple kriging example is shown using a computer program to generate data, perform kriging, and display the kriged estimates and associated error variances across the field.
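The kriging system in point 2 can be sketched for two data points, where it is a 2×2 linear system C·w = c0 solvable by Cramer's rule. The exponential covariance, the correlation length, and all coordinates and data values below are made up for illustration; this is not the document's program.

```python
# Two-point simple kriging with an exponential covariance C(h) = exp(-h/Lc).
import math

Lc = 10.0                                  # correlation length (assumed)
def cov(h):
    return math.exp(-h / Lc)

pts = [(0.0, 0.0), (8.0, 0.0)]             # data locations
z = [1.2, 0.4]                             # data values
mean = 0.8                                 # known mean (simple kriging)
x0 = (4.0, 0.0)                            # estimation point

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

C = [[cov(dist(p, q)) for q in pts] for p in pts]
c0 = [cov(dist(p, x0)) for p in pts]

# solve the 2x2 kriging system C w = c0 by Cramer's rule
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
w = [(c0[0] * C[1][1] - c0[1] * C[0][1]) / det,
     (c0[1] * C[0][0] - c0[0] * C[1][0]) / det]

est = mean + sum(wi * (zi - mean) for wi, zi in zip(w, z))
var = cov(0.0) - sum(wi * ci for wi, ci in zip(w, c0))   # kriging variance
print(w, est, var)
```

Because the estimation point sits midway between the two symmetric data points, the two weights come out equal, and the kriging variance is strictly between 0 and the prior variance.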
Estimation of mean and its function using asymmetric loss function (ijscmcj)
This paper suggests an improved estimator for the mean using the Linex loss function and shows that the improved estimator dominates the Searls (1964) estimator under Linex loss. Sufficient statistics can be used to find the uniformly minimum risk unbiased estimators. An improved estimator for µ² (which uses the coefficient of variation) is suggested under Linex loss; the mathematical expression for an improved estimator of the fourth power of the mean is also obtained, and an improved estimator for the common mean in the negative exponential distribution is proposed under Linex loss. Pandey and Malik (1994) considered the estimator T′ = w₁x̄ + w₂ȳ + w₃x̄²/ȳ for the common mean with the restriction w₁ + w₂ + w₃ = 1. Here the same estimator is considered for w₁ + w₂ + w₃ ≠ 1 and its properties are studied under Linex loss. The displaced exponential distribution is also considered under Linex loss and an improved estimator is suggested.
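The Linex (linear-exponential) loss used throughout this abstract is L(d) = b(e^{ad} − ad − 1), where d is the estimation error; for a > 0 it penalizes overestimation more heavily than underestimation, which is what makes it asymmetric. A minimal sketch with illustrative a and b:

```python
# Linex loss L(d) = b*(exp(a*d) - a*d - 1); asymmetric for a != 0.
import math

def linex(d, a=1.0, b=1.0):
    return b * (math.exp(a * d) - a * d - 1.0)

print(linex(1.0), linex(-1.0))   # the two values differ: asymmetry
print(linex(0.0))                # zero loss at zero error
```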
This document provides input file specifications for several stochastic modeling programs: MVG.exe simulates multi-Gaussian random fields, NNG.exe simulates non-Gaussian fields using normal score transformation, TBG.exe simulates truncated Gaussian fields, GEOMARKOV.exe simulates fields with a Markov chain geometric model, and MRKOVTB.exe simulates fields by combining a Markov chain model with truncated Gaussian simulation. The document lists the parameters and their order required in each program's input data file.
This document summarizes key aspects of variational autoencoders (VAEs):
VAE is a generative model that learns a latent representation of data. It approximates the intractable posterior using an encoder network and maximizes a variational lower bound. Semi-supervised VAE models can incorporate unlabeled data by learning shared representations. VAEs have been extended for recurrent sequences, convolutional structures, disentangled representations, and multi-modal data. Importance weighted autoencoders provide a tighter evidence lower bound than standard VAEs.
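One concrete piece of the variational lower bound mentioned above has a closed form worth showing: for a Gaussian encoder q(z|x) = N(µ, σ²) and a standard-normal prior, the KL term of the ELBO is 0.5(µ² + σ² − ln σ² − 1) per latent dimension. A sketch of just that term, not a full VAE:

```python
# Closed-form KL term of the Gaussian-VAE evidence lower bound.
import math

def kl_gauss_std_normal(mu, sigma):
    """KL( N(mu, sigma^2) || N(0, 1) ) for one latent dimension."""
    return 0.5 * (mu ** 2 + sigma ** 2 - math.log(sigma ** 2) - 1.0)

print(kl_gauss_std_normal(0.0, 1.0))   # matching the prior gives KL = 0
print(kl_gauss_std_normal(1.0, 0.5))   # any mismatch gives KL > 0
```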
X02 Supervised learning problem linear regression multiple features (Marco Moldenhauer)
This document discusses supervised learning problems and linear regression with multiple features. It defines key terms like training data, input and output variables, and feature scaling. The training data is represented as a matrix with m examples, each containing the input feature values and corresponding output. Feature scaling standardizes the range of independent features to help algorithms work properly and speed up gradient descent convergence.
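The feature-scaling step credited above with speeding up gradient descent is simple standardization: each feature column is shifted to zero mean and scaled to unit standard deviation. A minimal sketch with a made-up feature column:

```python
# Standardize one feature column to zero mean and unit standard deviation.
def standardize(column):
    n = len(column)
    mean = sum(column) / n
    var = sum((v - mean) ** 2 for v in column) / n
    std = var ** 0.5
    return [(v - mean) / std for v in column]

sizes = [2100.0, 1600.0, 2400.0, 1400.0, 3000.0]   # illustrative values
scaled = standardize(sizes)
print(scaled)
```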
1) The document presents regression results from Chapter 7 of the textbook "Basic Econometrics" by Gujarati and Porter. It discusses multiple regression analysis and the problem of estimation.
2) Various regression models are estimated using different variables and datasets. The results, standard errors, and other regression outputs like R-squared are reported for each model.
3) Key concepts discussed include omitted variable bias, partial regression coefficients, elasticities, and the consequences of model misspecification.
This summary analyzes the free vibration of laminated composite and sandwich plates using the Euler-Lagrange equation based on first order shear deformation theory. The document presents analytical formulations and solutions for the natural frequency of simply supported composite and sandwich plates. The results are compared to previous literature. The theoretical model accounts for transverse shear deformation, transverse normal strain/stress, and nonlinear variation of in-plane displacement through the thickness, modeling warping more accurately without shear correction coefficients.
Batch arrival retrial queuing system with state dependent admission and berno... (eSAT Journals)
Abstract
A single server batch arrival retrial queue with server vacation under a Bernoulli schedule is considered. Arrivals are controlled according to the state of the server. The necessary and sufficient condition for the system to be stable is derived. Explicit formulae for the stationary distributions and performance measures of the system in steady state are obtained. Numerical examples are presented to illustrate the influence of the parameters on several performance characteristics.
Keywords: Retrial queue, batch arrival, state dependent admission control, Bernoulli vacation.
Regularization and variable selection via elastic net (KyusonLim)
The document summarizes the Elastic Net regularization method for variable selection in datasets with more predictors than observations (p > n). It describes how the Elastic Net overcomes limitations of LASSO and Ridge Regression by performing automatic variable selection, continuous shrinkage, and selecting groups of correlated predictors. The Naive Elastic Net formulation is presented, along with how it relates to LASSO and Ridge penalties. Computational details of the Elastic Net, including the LARS-EN algorithm and simulations, are discussed.
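The naive elastic net formulation mentioned above combines a residual sum of squares with both penalties: RSS + λ₂‖β‖² + λ₁‖β‖₁. A sketch of the objective with made-up data and penalty weights (a real fit would minimize this over β, e.g. with LARS-EN):

```python
# Naive elastic net objective: RSS + lam2*||beta||_2^2 + lam1*||beta||_1.
def elastic_net_objective(beta, X, y, lam1, lam2):
    rss = sum((yi - sum(b * xi for b, xi in zip(beta, row))) ** 2
              for row, yi in zip(X, y))
    l1 = sum(abs(b) for b in beta)
    l2 = sum(b * b for b in beta)
    return rss + lam2 * l2 + lam1 * l1

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # illustrative design matrix
y = [1.0, 2.0, 3.0]
print(elastic_net_objective([1.0, 2.0], X, y, lam1=0.1, lam2=0.1))
```

Setting λ₁ = 0 recovers ridge regression and λ₂ = 0 recovers the LASSO, which is how the two limiting cases in the summary arise.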
The document discusses Bayesian networks and causal discovery methods. It provides definitions and examples of key concepts in Bayesian networks including directed acyclic graphs (DAGs), Markov blankets, and the Markov condition. It also describes different approaches to learning Bayesian network structures, including constraint-based methods such as the PC algorithm and score-based methods like greedy hill climbing. Causal discovery from data aims to infer causal relationships between variables using techniques like conditional independence tests on Bayesian networks.
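The Markov blanket concept mentioned above has a direct definition worth showing: a node's parents, its children, and its children's other parents. A sketch on a made-up toy DAG stored as an edge list:

```python
# Markov blanket of a node in a DAG: parents, children, and co-parents.
edges = [("A", "C"), ("B", "C"), ("C", "D"), ("E", "D")]   # toy DAG

def markov_blanket(node, edges):
    parents = {u for u, v in edges if v == node}
    children = {v for u, v in edges if u == node}
    spouses = {u for u, v in edges if v in children and u != node}
    return parents | children | spouses

print(markov_blanket("C", edges))   # parents A,B; child D; co-parent E
```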
Super-twisting sliding mode based nonlinear control for planar dual arm robots (journalBEEI)
This document describes a super-twisting sliding mode controller developed for a planar dual arm robot. The controller is designed to improve tracking ability and reduce chattering compared to a basic sliding mode controller. Mathematical models are developed to describe the kinematics and dynamics of the dual arm robot. A super-twisting algorithm is then applied within a sliding mode control framework to stabilize the robot and drive it to follow a desired trajectory. Simulations show the super-twisting controller has better tracking performance and less chattering than a conventional sliding mode controller.
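The super-twisting algorithm itself can be illustrated on a scalar system; this is a hedged sketch, not the paper's dual-arm controller. For sliding-variable dynamics s′ = u + d(t) with a bounded disturbance, the control u = −k₁√|s|·sign(s) + v, v′ = −k₂·sign(s) drives s to zero without the high-frequency switching of plain sliding mode. Gains, disturbance, and step size below are illustrative.

```python
# Scalar super-twisting sliding mode simulation (explicit Euler).
import math

def sign(x):
    return (x > 0) - (x < 0)

k1, k2 = 1.5, 1.1
dt, T = 0.001, 10.0
s, v, t = 1.0, 0.0, 0.0
while t < T:
    d = 0.5 * math.sin(t)                         # bounded matched disturbance
    u = -k1 * math.sqrt(abs(s)) * sign(s) + v     # continuous switching term
    s += (u + d) * dt
    v += -k2 * sign(s) * dt                       # integral (twisting) term
    t += dt
print(abs(s))   # residual sliding variable after 10 s: small
```

The integral term v ends up tracking the disturbance, which is the mechanism behind the chattering reduction the simulations report.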
MIXTURES OF TRAINED REGRESSION CURVES MODELS FOR HANDWRITTEN ARABIC CHARACTER R... (ijaia)
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use 2nd-order polynomials to model the shapes within a training set. To estimate the regression models, we extract the coefficients that describe the variations for each shape class; a least-squares method is used to estimate these models. We then train the coefficients using the Expectation-Maximization (EM) algorithm. Recognition is carried out by finding the least-error landmark displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
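The least-squares step for one 2nd-order model can be sketched directly: fit y = c₀ + c₁x + c₂x² by solving the 3×3 normal equations. The sample points are made up and lie exactly on y = 2 + 3x + x², so the fit should recover those coefficients; this is an illustration of the technique, not the paper's landmark data.

```python
# Least-squares fit of a 2nd-order polynomial via the normal equations,
# solved with Gaussian elimination (partial pivoting).
def solve(A, b):
    """Solve A x = b for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 3.0 * x + x ** 2 for x in xs]          # exact quadratic data

# normal equations (X^T X) c = X^T y for the basis [1, x, x^2]
X = [[1.0, x, x ** 2] for x in xs]
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(xs))) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(3)]
coeffs = solve(XtX, Xty)
print(coeffs)   # recovers [2, 3, 1] up to rounding
```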
My favorite place at school is the benches where I can sit comfortably and talk with my friends Lucia, Paula, Teresa, and Lourdes. They are very nice to me and are the same age. I also enjoy the playground on sunny days.
This document provides information on additional benefits and support services available to Masters students at Aston University, including a professional internship program, mentoring program, and dedicated career support. The internship program offers flexible part-time work experience to fit around studies. The mentoring program provides transition support and is matched based on research interests. Career support includes 1-on-1 appointments, workshops, and events to improve employability.
This document describes the importance of preserving the environment and natural resources. It explains that the environment includes biotic and abiotic factors that form the biosphere and sustain life. It also notes that pollution and the irresponsible use of resources by humans are damaging the environment. Finally, it emphasizes the need for sustainable development, resource conservation, and pollution reduction to protect the environment.
This document summarizes the theoretical foundations of a general equilibrium model using a 2x2 Heckscher-Ohlin-Samuelson (HOS) model. It presents the model, which assumes two goods are produced using two factors of production, and explores the implications on prices and outputs from changes in factors or goods. Key results include the Stolper-Samuelson theorem, which states that an increase in the price of a good raises the reward to its intensive factor of production, and the Rybczynski theorem, relating changes in factors to changes in outputs. Equations are provided and concepts like elasticity of substitution and determinants are introduced to analyze comparative static effects in the model.
Stochastic differential equations (SDEs) describe systems with random components. Common methods to solve SDEs include spectral and perturbation methods. The spectral method represents variables and parameters as mean values plus fluctuations. Taking the expected value of the SDE yields equations for the mean and fluctuations that can be solved. The perturbation method expresses variables and parameters as power series expansions. Introducing these into the SDE allows analytical or numerical solution. SDEs are used to model systems with uncertain parameters like groundwater flow with random hydraulic conductivity.
1. The document discusses the estimation problem in geostatistics, which is determining the value of a quantity Zo at an unmeasured point (xo,yo) based on measurements at nearby points.
2. It describes kriging as the best linear unbiased estimator that takes into account the spatial structure and correlation between points to estimate values across a field. The kriging system minimizes the variance of errors in estimates.
3. A simple kriging example is shown using a computer program to generate data, perform kriging, and display the kriged estimates and associated error variances across the field.
Estimation of mean and its function using asymmetric loss function ijscmcj
In this paper suggested an improve estimator for mean using Linex loss function and shows that the
improved estimator dominates the Searls (1964) estimator underLinex loss function. The sufficient statistics
can be used to find the uniformly minimum risk unbiased estimators. In this paper an improve estimation
forµ
2
is suggested (which uses coefficient of variation) under Linex loss function. The mathematical
expression of improve estimator of fourth power of mean is also obtained and an improve estimator for
common mean in negative exponential distribution is also proposed under Linex loss function.Pandey and
Malik (1994) considered the estimator T w x w y w3x y
2
2
2
1
′ = 1 + + for common mean with the
restriction . 1 w1 + w2 + w3 = Here considered the above estimator for 1 w1 + w2 + w3 ≠ and studied its
property under Linex loss function. In this paper alsoconsidered the displaced exponential distribution
under Linex loss function and suggested an improve estimator.
Estimation of mean and its function using asymmetric loss functionijscmcj
In this paper suggested an improve estimator for mean using Linex loss function and shows that the improved estimator dominates the Searls (1964) estimator underLinex loss function. The sufficient statistics can be used to find the uniformly minimum risk unbiased estimators. In this paper an improve estimation forµ 2 is suggested (which uses coefficient of variation) under Linex loss function. The mathematical expression of improve estimator of fourth power of mean is also obtained and an improve estimator for common mean in negative exponential distribution is also proposed under Linex loss function.Pandey and Malik (1994) considered the estimator T w x w y w3x y
2
2 2 1′ = 1 + + for common mean with the restriction . 1 w1 + w2 + w3 = Here considered the above estimator for 1 w1 + w2 + w3 ≠ and studied its property under Linex loss function. In this paper alsoconsidered the displaced exponential distribution under Linex loss function and suggested an improve estimator.
This document provides input file specifications for several stochastic modeling programs: MVG.exe simulates multi-Gaussian random fields, NNG.exe simulates non-Gaussian fields using normal score transformation, TBG.exe simulates truncated Gaussian fields, GEOMARKOV.exe simulates fields with a Markov chain geometric model, and MRKOVTB.exe simulates fields by combining a Markov chain model with truncated Gaussian simulation. The document lists the parameters and their order required in each program's input data file.
This document summarizes key aspects of variational autoencoders (VAEs):
VAE is a generative model that learns a latent representation of data. It approximates the intractable posterior using an encoder network and maximizes a variational lower bound. Semi-supervised VAE models can incorporate unlabeled data by learning shared representations. VAEs have been extended for recurrent sequences, convolutional structures, disentangled representations, and multi-modal data. Importance weighted autoencoders provide a tighter evidence lower bound than standard VAEs.
X02 Supervised learning problem linear regression multiple featuresMarco Moldenhauer
This document discusses supervised learning problems and linear regression with multiple features. It defines key terms like training data, input and output variables, and feature scaling. The training data is represented as a matrix with m examples, each containing the input feature values and corresponding output. Feature scaling standardizes the range of independent features to help algorithms work properly and speed up gradient descent convergence.
1) The document presents regression results from Chapter 7 of the textbook "Basic Econometrics" by Gujarati and Porter. It discusses multiple regression analysis and the problem of estimation.
2) Various regression models are estimated using different variables and datasets. The results, standard errors, and other regression outputs like R-squared are reported for each model.
3) Key concepts discussed include omitted variable bias, partial regression coefficients, elasticities, and the consequences of model misspecification.
This summary analyzes the free vibration of laminated composite and sandwich plates using the Euler-Lagrange equation based on first order shear deformation theory. The document presents analytical formulations and solutions for the natural frequency of simply supported composite and sandwich plates. The results are compared to previous literature. The theoretical model accounts for transverse shear deformation, transverse normal strain/stress, and nonlinear variation of in-plane displacement through the thickness, modeling warping more accurately without shear correction coefficients.
Batch arrival retrial queuing system with state dependent admission and berno...eSAT Journals
Abstract
A single server batch arrival retrial queue with server vacation under Bernoulli schedule is considered. Arrivals are controlled
according to the state of the server. The necessary and sufficient condition for the system to be stable is derived. Explicit formulae for
the stationary distributions and performance measures of the system in steady state are obtained. Numerical examples are presented
to illustrate the influence of the parameters on several performance characteristics.
Keywords: Retrial queue, batch arrival, state dependent admission control, Bernoulli vacation.
Regularization and variable selection via elastic netKyusonLim
The document summarizes the Elastic Net regularization method for variable selection in datasets with more predictors than observations (p > n). It describes how the Elastic Net overcomes limitations of LASSO and Ridge Regression by performing automatic variable selection, continuous shrinkage, and selecting groups of correlated predictors. The Naive Elastic Net formulation is presented, along with how it relates to LASSO and Ridge penalties. Computational details of the Elastic Net, including the LARS-EN algorithm and simulations, are discussed.
The document discusses Bayesian networks and causal discovery methods. It provides definitions and examples of key concepts in Bayesian networks including directed acyclic graphs (DAGs), Markov blankets, and the Markov condition. It also describes different approaches to learning Bayesian network structures, including constraint-based methods such as the PC algorithm and score-based methods like greedy hill climbing. Causal discovery from data aims to infer causal relationships between variables using techniques like conditional independence tests on Bayesian networks.
Super-twisting sliding mode based nonlinear control for planar dual arm robotsjournalBEEI
This document describes a super-twisting sliding mode controller developed for a planar dual arm robot. The controller is designed to improve tracking ability and reduce chattering compared to a basic sliding mode controller. Mathematical models are developed to describe the kinematics and dynamics of the dual arm robot. A super-twisting algorithm is then applied within a sliding mode control framework to stabilize the robot and drive it to follow a desired trajectory. Simulations show the super-twisting controller has better tracking performance and less chattering than a conventional sliding mode controller.
MIXTURES OF TRAINED REGRESSION CURVESMODELS FOR HANDRITTEN ARABIC CHARACTER R...ijaia
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping uniformly distributed landmarks. The underlying models utilize 2nd order of polynomials to model shapes within a training set. To estimate the regression models, we need to extract the required coefficients which describe the variations for a set of shape class. Hence, a least square method is used to estimate such modes. We proceed then, by training these coefficients using the apparatus Expectation Maximization algorithm. Recognition is carried out by finding the least error landmarks displacement with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
My favorite place at school is the benches where I can sit comfortably and talk with my friends Lucia, Paula, Teresa, and Lourdes. They are very nice to me and are the same age. I also enjoy the playground on sunny days.
This document provides information on additional benefits and support services available to Masters students at Aston University, including a professional internship program, mentoring program, and dedicated career support. The internship program offers flexible part-time work experience to fit around studies. The mentoring program provides transition support and is matched based on research interests. Career support includes 1-on-1 appointments, workshops, and events to improve employability.
This document describes the importance of preserving the environment and natural resources. It explains that the environment includes biotic and abiotic factors that form the biosphere and sustain life. It also notes that pollution and the irresponsible use of resources by humans are damaging the environment. Finally, it emphasizes the need for sustainable development, resource conservation, and pollution reduction in order to protect the environment.
Wesco International reported net income of $70 million for the third quarter of 2007 and $177.8 million for the first nine months of the year. Their cash flow provided by operations was $79 million for the third quarter and $207.4 million year-to-date, which was calculated by reconciling net income with changes in accounts receivable, inventory, accounts payable, depreciation/amortization, and other items. Wesco considers free cash flow, defined as cash flow from operations less capital expenditures, to be a measure of excess funds available and it was $74.5 million for the third quarter and $196.2 million for the first nine months.
UNESCO estimates that of the 6,000 languages spoken today, more than half will be extinct by the start of the next century, adding that "with the disappearance of unwritten and undocumented languages, humanity will lose not only a cultural wealth, but also important ancestral knowledge embedded, in particular, in indigenous languages." These languages require urgent intervention. In many remote locations, only a handful of speakers remain. There is also a growing movement in which communities are recognizing the value of maintaining their native language despite internal and external pressures. Online media and web 2.0 tools hold immense possibilities for the inclusion of indigenous people in the online conversation and in democratic processes that start with the simple exercise of a person's right to express themselves using the tools available to them. These tools have significant potential for cultural preservation and identity formation among young indigenous people.
The document describes the activities carried out by a teacher during a 1.5-hour Spanish class. The first activity dealt with causes and consequences, which the students completed quickly thanks to the dynamics of the activity. The second activity was also successful, since the students worked quickly. The last activity was assigning homework on causes and consequences in the students' daily lives.
Semester program for the PAI (Islamic Religious Education) subject, class IX/2, SMP 1 Tirto, 2015/2016 school year. It covers 13 competency standards, including recitation of the Qur'an surah Al-Insyiroh, hadith on cleanliness, faith in qadha and qadar, takabur (arrogant) behavior, congregational and individual prayer, and the history of Islamic traditions in Nusantara. The implementation period runs from January through June.
Successful team collaboration requires pre-planned conflict resolution strategies, utilizing each member's strengths and learning styles, developing communication skills, and establishing motivational strategies. The document outlines key aspects of effective teams such as setting goals, defining roles, and providing constructive feedback. With these elements in place, the team's completed project will be focused and persuasive in satisfying their overall vision.
The document presents the work portfolio of architect Carlos H. Jaramillo, who works in architecture, urbanism, and planning in Medellín, Colombia. The portfolio includes projects such as the Casa Vereda Pantanillo in Envigado, and contains architect Jaramillo's contact information and links to his website and SlideShare presentations.
This document describes the implementation of an Extended Kalman Filter (EKF) to estimate the state (position and heading angle) of a bicycle model. The EKF was able to provide reasonably accurate estimates of position over time based on position measurements and steering/velocity inputs, but struggled to accurately estimate the heading angle due to a lack of direct measurements. Histograms of the final state errors across many test cases showed normally distributed position errors and a uniformly distributed random heading angle error. While the EKF provided an approximation, a more advanced filter may have yielded better heading angle estimates.
This document summarizes an electrical engineering student's final project report on using an Unscented Kalman Filter (UKF) to estimate the state of a balancing robot. The UKF was able to accurately track states and uncertainties in simulation, but had difficulty estimating the true robot length from experimental datasets, likely due to insufficient oscillation of the robot. While the UKF and Extended KF agreed on the estimated length, more accurate methods may be needed such as Monte Carlo simulation with more samples.
SLAM of Multi-Robot System Considering Its Network Topology (toukaigi)
This document proposes a new solution to the multi-robot simultaneous localization and mapping (SLAM) problem that takes into account the network topology between robots. Previous multi-robot SLAM research has expanded one-robot SLAM algorithms without considering how the relationship between robots changes over time. The proposed approach models the network structure and derives the mathematical formulation for estimating the multi-robot SLAM. It presents motion and observation update equations in an information filter framework that can be implemented in a decentralized way on individual robots. Future work will focus on specific challenges in multi-robot SLAM like map merging.
SAMPLE QUESTION: Exercise 1 Consider the function f(x,C).docx (anhlodge)
SAMPLE QUESTION:
Exercise 1: Consider the function
f(x,C) = sin(Cx)/(Cx)
(a) Create a vector x with 100 elements from -3*pi to 3*pi. Write f as an inline or anonymous function
and generate the vectors y1 = f(x,C1), y2 = f(x,C2) and y3 = f(x,C3), where C1 = 1, C2 = 2 and
C3 = 3. Make sure you suppress the output of the x and y vectors. Plot the function f (for the three
C's above), label the axes, give the plot a title, and include a legend to identify the curves. Add a
grid to the plot.
(b) Without using inline or anonymous functions, write a function+function structure m-file that does
the same job as part (a).
SAMPLE LAB WRITEUP:
MAT 275 MATLAB LAB 1 NAME: __________________________
LAB DAY and TIME:______________
Instructor: _______________________
Exercise 1
(a)
x = linspace(-3*pi,3*pi); % generating x vector - default number of
% points for linspace is 100
f = @(x,C) sin(C*x)./(C*x) % C is a scalar, so "C*x" needs no ".*"
C1 = 1, C2 = 2, C3 = 3 % using commas to separate commands
y1 = f(x,C1); y2 = f(x,C2); y3 = f(x,C3); % suppressing the y's
plot(x,y1,'b.-', x,y2,'ro-', x,y3,'ks-') % using different markers for
% black-and-white plots
xlabel('x'), ylabel('y') % labeling the axes
title('f(x,C) = sin(Cx)/(Cx)') % adding a title
legend('C = 1','C = 2','C = 3') % adding a legend
grid on
Command window output:
f =
@(x,C)sin(C*x)./(C*x)
C1 =
1
C2 =
2
C3 =
3
(b)
M-file of structure function+function
function ex1
x = linspace(-3*pi,3*pi); % generating x vector - default number of
% points for linspace is 100
C1 = 1, C2 = 2, C3 = 3 % using commas to separate commands
y1 = f(x,C1); y2 = f(x,C2); y3 = f(x,C3); % function f is defined below
plot(x,y1,'b.-', x,y2,'ro-', x,y3,'ks-') % using different markers for
% black-and-white plots
xlabel('x'), ylabel('y') % labeling the axes
title('f(x,C) = sin(Cx)/(Cx)') % adding a title
legend('C = 1','C = 2','C = 3') % adding a legend
grid on
end
function y = f(x,C)
y = sin(C*x)./(C*x);
end
Command window output:
C1 =
1
C2 =
2
C3 =
3
More instructions for the lab write-up:
1) You are not obligated to use the 'diary' function; it was presented only for your convenience. You
should copy and paste your code, plots, and results into some sort of "Word"-type editor that
will allow you to import graphs and such. Make sure you always include the commands that generate
what is being asked for, and include the outputs (from the command window and plots), unless the pr.
This document provides examples that demonstrate solving systems of equations, continuous-time state-space models, and using strings and scripts in Scilab. Example 2-1 shows how to solve a system of 3 equations with 3 unknowns by writing the equations in matrix form and using the backslash operator. Example 2-2 demonstrates using mesh currents to solve a circuit problem since Kirchhoff's law leads to a non-square matrix. Example 2-3 defines a state-space model and simulates the output and state responses. Example 2-4 is a script that converts a time in seconds to hours, minutes and seconds using strings, input, floor, and modulo functions.
The document discusses dynamic modeling of robot manipulators using the Euler-Lagrange approach. It introduces dynamic models and the direct and inverse problems. The Euler-Lagrange approach is then explained in detail. It involves computing the kinetic and potential energies of each link based on the link masses, centers of mass, moments of inertia, and joint velocities and positions. This allows deriving the dynamic equations of motion for the manipulator.
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un... (Alexander Litvinenko)
1) The document describes a method called Multilevel Monte Carlo (MLMC) to efficiently compute electromagnetic fields scattered from dielectric objects of uncertain shapes. MLMC balances statistical errors from random sampling and numerical errors from geometry discretization to reduce computational time.
2) A surface integral equation solver is used to model scattering from dielectric objects. Random geometries are generated by perturbing surfaces with random fields defined by spherical harmonics.
3) MLMC is shown to estimate scattering cross sections accurately while requiring fewer overall computations compared to traditional Monte Carlo methods. This is achieved by optimally allocating samples across discretization levels.
This document discusses deep neural networks and computational graphs. It begins by explaining key concepts like derivatives, partial derivatives, optimization, training sets, and activation functions. It then provides examples of applying the chain rule in deep learning, including forward and back propagation in a neural network. Specifically, it demonstrates forward propagation through a simple network and calculating the gradient using backpropagation and the chain rule. Finally, it works through an example applying these concepts to a neural network using sigmoid activation functions.
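The forward/backward computations described in this summary can be made concrete with a one-neuron example; the weight, bias, and input values below are arbitrary illustrations, not values from the document.

```python
import math

# One-neuron "network" y = sigmoid(w*x + b); the numbers are arbitrary.
def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

w, b, x = 0.5, 0.1, 2.0

# forward pass
z = w * x + b          # pre-activation
y = sigmoid(z)         # activation

# backward pass via the chain rule:
# dy/dw = dy/dz * dz/dw = sigmoid'(z) * x, with sigmoid'(z) = y*(1-y)
dy_dw = y * (1.0 - y) * x
dy_db = y * (1.0 - y)
```

A finite-difference check of dy_dw against (f(w+h) - f(w-h)) / (2h) confirms that the analytic gradient matches the numerical one.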
Fuzzy clustering algorithms cannot achieve a good clustering effect when the sample characteristics are not obvious, and they require the number of clusters to be specified in advance. For this reason, this paper proposes an adaptive fuzzy kernel clustering algorithm. The algorithm first uses an adaptive clustering-number function to calculate the optimal number of clusters, then maps the input-space samples to a high-dimensional feature space using a Gaussian kernel and clusters them in that feature space. Matlab simulation results confirmed that the algorithm's performance is greatly improved over classical clustering algorithms, with faster convergence and more accurate clustering results.
This document presents a comparative study of two genetic algorithm-based task allocation models in distributed computing systems. It aims to minimize turnaround time, where the previous model aimed to maximize reliability. The models are implemented on two example cases, with the minimum turnaround time model finding an allocation with a turnaround of 14 units and slightly lower reliability than the maximum reliability model's allocation of 20 units. In conclusion, minimizing turnaround time leads to slightly reduced reliability compared to maximizing reliability.
The document summarizes the k-means clustering algorithm. It describes how k-means aims to group data into k clusters by minimizing the distance between data points and their assigned cluster centroid. The algorithm works by iteratively assigning points to the closest centroid and moving each centroid to the mean of its assigned points until convergence. While k-means converges, finding the global minimum is not guaranteed as it can get stuck in local optima, so it is best to run it multiple times.
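The assign-then-update loop described in this summary can be sketched in a few lines of Python (a generic illustration; the function name and data are made up):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: assign points to the nearest centroid, then move each
    centroid to the mean of its assigned points, until nothing changes."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assignment step: distance from every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: move centroids to the mean of their assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

As the summary notes, a run like this converges but only to a local optimum, which is why restarts from different initial centroids are recommended.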
The document compares several tools for solving the Markowitz portfolio selection problem, including AMPL, Julia, Python, and C. It provides code implementations of the problem in each language/tool and compares the results. For a sample data set of 20 stocks, the solutions found were similar but not identical, with Python and C through Ipopt finding the lowest objective risk value of 0.002874.
The document describes a two-way analysis of variance (ANOVA) used to analyze data from an experiment with two factors. The experiment examined the effect of three types of paint and three steel-alloy compositions on the corrosion resistance of metal panels. A two-way ANOVA was conducted to determine if differences existed due to the paint type or steel-alloy composition. Calculations were shown to obtain the sum of squares for each factor and residual, from which F-ratios were derived. The results indicated a significant difference due to steel-alloy composition but not paint type.
Kakuro: Solving the Constraint Satisfaction Problem (Varad Meru)
This work was done as a part of the project for the course CS 271: Introduction to Artificial Intelligence (http://www.ics.uci.edu/~kkask/Fall-2014%20CS271/index.html), taught in Fall 2014.
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATION (orajjournal)
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimizing a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been used successfully for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization in stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the result of its application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are too skewed.
This document summarizes the derivation of the EM algorithm for parameter estimation in a mixed normal model. It begins by presenting the log-likelihood function and derives update equations for the mean (μk) and covariance (Σk) parameters of each normal component. An experimental design is then described to statistically analyze the performance of the EM algorithm under different conditions. The results show that the EM estimates are most accurate when the normal components have distinct means and covariances, and when more training data is available. Interactions between factors are also examined.
This document discusses using fuzzy clustering to group real estate properties. It presents a case study clustering 46 real estate listings into 3 groups based on price, area, and region attributes. The fuzzy c-means clustering algorithm in MATLAB is used to assign membership levels and cluster centroids. The results identify 3 clusters - one for mid-priced properties in good regions and average areas, one for high-priced properties in excellent regions and large areas, and one for low-priced properties in poor regions and small areas. Graphs and tables show the clustered properties and centroids.
Empirical methods project (Econ 597)
Oleksii Khvastunov

Abstract
My empirical methods project is based on the Per Krusell and Anthony Smith paper "Income and Wealth Heterogeneity in the Macroeconomy" (JPE, 1998). The paper presents a simple benchmark incomplete-markets heterogeneous-agent model with two types of risk: idiosyncratic (unemployment) and aggregate (productivity). In my project I solve the benchmark model following the steps presented in the paper.
Contents
1 Introduction
2 Model
3 Algorithm
4 My results
5 Appendix
  5.1 Short description of the functions
  5.2 Main function
  5.3 Function that finds value functions for given aggregate capital law of motion parameters
  5.4 Function that performs value function iteration step
  5.5 Simulation routine
  5.6 Auxiliary functions
1 Introduction
A large number of dynamic general equilibrium macroeconomic models rely on the assumption of a representative agent. In some cases this can be a reasonable assumption, as in the case of complete markets. However, models that aim for strong microfoundations require agent heterogeneity. This setup makes models much more complicated, and most of them cannot be solved analytically.
Numerical solution of such models is not an easy task either. Rational agents in this type of model make decisions based on the entire distribution of individual state variables. In the general case this object is infinite-dimensional, which makes the problem extremely complicated. The paper by Per Krusell and Anthony Smith (henceforth KS) presents a methodology for numerically approximating equilibrium in heterogeneous-agent models.
The key assumption used to calculate the approximate equilibrium is the following: agents have a limited ability to predict the evolution of the distribution of individual state variables. The paper shows that this bound does not significantly restrict agents. As a result, the law of motion that agents perceive can be described using a finite-dimensional vector of distribution parameters (moments can be used for this purpose).
The rest of the report is organized as follows: in the Model section I briefly explain the model and the intuition behind the methodology, and give the model parameters that are used. In the Algorithm section I explain in detail the steps for equilibrium computation and how I implemented them. In the Results section I present and discuss my results.
2 Model
The benchmark model consists of a continuum (measure one) of ex-ante identical consumers who maximize expected discounted utility subject to a budget constraint in every period:

    max E_0 \sum_{t=0}^{\infty} \beta^t U(c_t)    (1)

subject to

    c + k' - (1 - \delta) k = y    (2)

where y is the individual's income, which consists of two parts: the return on the capital the individual owns, and the wage the individual receives if he is employed. U(c) is a CRRA function. The markets for capital and labor are competitive, so the individual receives a return and a wage equal to the marginal products.
The production function is Cobb-Douglas:

    F(\bar{k}, \bar{l}) = z \bar{k}^{\alpha} \bar{l}^{1-\alpha}    (3)

where z is the aggregate productivity shock, \bar{k} is aggregate capital in the economy, and \bar{l} is the total amount of labor supplied.
There are two types of uncertainty in the model: aggregate (the productivity shock z) and idiosyncratic (the individual employment shock \epsilon). Both shocks follow a first-order Markov process with two possible states: z_g and z_b for the aggregate productivity shock, and \epsilon = 0 and \epsilon = 1 for the idiosyncratic shock (\epsilon = 0 means the agent is unemployed). The parameters of the model pin down the average durations of booms and recessions in the economy. The authors match these durations to the ones observed in the data (8 quarters for both boom and recession). As a result, the transition matrix for the aggregate shock is:

            Bad     Good
    Bad     0.875   0.125
    Good    0.125   0.875
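The 0.875 entries follow directly from the duration matching: a regime that persists with probability p each period lasts 1/(1-p) periods on average, so an 8-quarter average duration implies p = 1 - 1/8 = 0.875. A minimal Python sketch of this arithmetic (illustrative only, not the project's MATLAB code):

```python
import numpy as np

# Match an average regime duration of 8 quarters, as in the text:
# the expected duration of a regime with staying probability p is 1/(1-p).
duration = 8
p_stay = 1 - 1 / duration                 # = 0.875

# resulting symmetric 2x2 aggregate transition matrix (rows: bad, good)
P_agg = np.array([[p_stay, 1 - p_stay],
                  [1 - p_stay, p_stay]])
```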
The shocks are dependent in the following sense: the aggregate shock affects the individual shock, but not vice versa. The intuition is that the aggregate shock determines the state of the economy, in particular employment (unemployment in a boom, u_g, and in a recession, u_b, are parameters of the model), which in turn affects the evolution of the individual shock. The authors impose additional conditions on the dependence between the aggregate and individual shocks in order to identify the joint transition matrix. I wrote the function "emprpr" to find this matrix by solving a system of linear equations. As a result, the joint transition matrix has the following form:

            Bad0    Bad1    Good0   Good1
    Bad0    0.5250  0.3500  0.0312  0.0938
    Bad1    0.0389  0.8361  0.0021  0.1229
    Good0   0.0937  0.0313  0.2917  0.5833
    Good1   0.0091  0.1159  0.0243  0.8507
The elements of this matrix satisfy the system of linear equations mentioned above. Four of these equations ensure that the unemployment shares remain consistent with the model parameters u_g and u_b.
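These consistency conditions can be checked numerically on the matrix above. The following Python sketch (not the "emprpr" routine itself; it assumes the state ordering Bad0, Bad1, Good0, Good1 used in the table) verifies that rows sum to one, that summing out the idiosyncratic state recovers the 2x2 aggregate matrix, and that a bad-to-bad transition preserves unemployment at u_b = 0.1:

```python
import numpy as np

# joint transition matrix from the text; rows/cols: Bad0, Bad1, Good0, Good1
P = np.array([[0.5250, 0.3500, 0.0312, 0.0938],
              [0.0389, 0.8361, 0.0021, 0.1229],
              [0.0937, 0.0313, 0.2917, 0.5833],
              [0.0091, 0.1159, 0.0243, 0.8507]])

rows = P.sum(axis=1)           # each row should sum to 1
to_bad = P[:, 0] + P[:, 1]     # P(z'=bad | z, eps): 0.875 from bad, 0.125 from good

# unemployment consistency: conditional on a bad-to-bad transition,
# next-period unemployment should again equal u_b = 0.1
u_b = 0.1
u_next = (u_b * P[0, 0] + (1 - u_b) * P[1, 0]) / 0.875
```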
The individual's Bellman equation has the following form:

    v(k, \epsilon; \Gamma, z) = \max_{c, k'} \{ U(c) + \beta E[v(k', \epsilon'; \Gamma', z') \mid z, \epsilon] \}    (4)

subject to

    c + k' = r(\bar{k}, \bar{l}, z) k + w(\bar{k}, \bar{l}, z) \tilde{l} \epsilon + (1 - \delta) k,
    \Gamma' = H(\Gamma, z, z'),
    k' \ge 0

where k is the individual's capital in the current period, \epsilon is his current employment status, \Gamma is the current distribution of capital, and z is the current aggregate shock.
Notice that when an agent is employed he supplies an exogenously given amount of labor \tilde{l} and receives a wage. Otherwise the agent does not receive a wage, and the return on capital is the only source of his income.
The key assumption that allows the equilibrium to be computed numerically is that agents are boundedly rational in their perception of the law of motion. Therefore, current and future prices are assumed to depend only on the current moments of the capital distribution.
As a result, the agent's problem becomes:

    v(k, \epsilon; \bar{k}, z) = \max_{c, k'} \{ U(c) + \beta E[v(k', \epsilon'; \bar{k}', z') \mid z, \epsilon] \}    (5)

subject to

    c + k' = r(\bar{k}, \bar{l}, z) k + w(\bar{k}, \bar{l}, z) \tilde{l} \epsilon + (1 - \delta) k,
    \log(\bar{k}') = a_0 + a_1 \log(\bar{k})  if z = z_g,
    \log(\bar{k}') = b_0 + b_1 \log(\bar{k})  if z = z_b,
    k' \ge 0

where w(\bar{k}, \bar{l}, z) = (1 - \alpha) z (\bar{k}/\bar{l})^{\alpha} and r(\bar{k}, \bar{l}, z) = \alpha z (\bar{k}/\bar{l})^{\alpha - 1}.
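The price formulas above can be sanity-checked with Euler's theorem: under constant returns to scale, factor payments exhaust output. The Python sketch below uses the calibrated \alpha and z_g; the value \bar{k} = 11.5 is an arbitrary illustration, and \bar{l} = \tilde{l}(1 - u_g) assumes each employed agent supplies \tilde{l} units of labor:

```python
# Prices implied by the Cobb-Douglas technology (formulas from the text).
# kbar = 11.5 is an arbitrary illustrative value; lbar = l_tilde*(1 - u_g)
# assumes the employed share each supplies l_tilde units of labor.
alpha, z_g = 0.36, 1.01
l_tilde, u_g = 0.3271, 0.04
kbar = 11.5
lbar = l_tilde * (1 - u_g)

w = (1 - alpha) * z_g * (kbar / lbar) ** alpha       # wage
r = alpha * z_g * (kbar / lbar) ** (alpha - 1)       # return on capital
output = z_g * kbar ** alpha * lbar ** (1 - alpha)
# constant returns to scale: r*kbar + w*lbar should equal output
```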
For the model simulations I used the following parameters:

    Parameter   Value    Meaning
    \beta       0.99     Discount factor
    \delta      0.025    Capital depreciation
    \sigma      1        Utility risk-aversion parameter
    \alpha      0.36     Capital share in the production function
    z_g         1.01     Productivity in the good state
    z_b         0.99     Productivity in the bad state
    u_g         0.04     Unemployment in the good state
    u_b         0.1      Unemployment in the bad state
    \tilde{l}   0.3271   Individual labor supply
I want to point out that for the draft version I used the value \tilde{l} = 1.11, which is not the one used in the paper. The reason was that the value of this parameter is not given in the published version of the paper, so I looked at other papers based on KS, where \tilde{l} is normalized such that labor supply in the bad state equals one (\tilde{l} = 1/(1 - u_b) = 1/0.9 \approx 1.11). Having obtained results far from the ones in the paper, I consulted the working paper version of KS, where footnote 9 on page 10 gives the value of the exogenous labor supply. For the final version of the Empirical Methods paper I therefore use \tilde{l} = 0.3271, the value employed in KS.
3 Algorithm
In this section I discuss the equilibrium computation algorithm and its implementation.
Step 1: Compute transition probabilities. There are 4 possible states for an individual in each period: (z, \epsilon) \in {(b, 0), (b, 1), (g, 0), (g, 1)}, and the paper contains 16 linear equations that determine these probabilities. As mentioned in the Model section, the evolution of (z, \epsilon) follows a first-order Markov process.
The function "emprpr" contains these 16 equations, and applying fsolve to it yields the transition probabilities. The equations reflect the following facts: individual employment cannot affect the aggregate state of the economy, unemployment in each aggregate state must be consistent with the model parameters u_b and u_g, etc.
Step 2: Choose parameters (a_0, a_1, b_0, b_1) for the law of motion of aggregate capital. Choose grids for k and \bar{k}; KS suggest using 70-130 points in the k dimension, with more points close to 0, and 4-6 points in the \bar{k} dimension. Then, starting from a zero initial value function, iterate as follows: perform a value function iteration step at the points of the grid, interpolate the obtained result, and repeat these two procedures until the policy functions on the grid are close enough for two consecutive iterations. To be more precise, the value function iteration step in this setup is the following (denote f \equiv r(\bar{k}, \bar{l}, z) k + w(\bar{k}, \bar{l}, z) \tilde{l} \epsilon + (1 - \delta) k):

    v_{t+1}(k, \epsilon; \bar{k}, z) = \max_{k' \in [0, f]} \{ \log(f - k') + \beta \sum_{z', \epsilon'} \pi(z', \epsilon' \mid z, \epsilon) v_t(k', \epsilon'; \bar{k}', z') \}    (6)

Notice that v_t(k', \epsilon'; \bar{k}', z') is defined for all k' \in [0, f], so when we look for v_{t+1}(k, \epsilon; \bar{k}, z) at the points of the grid we solve a continuous optimization problem. Having obtained v_{t+1}(k, \epsilon; \bar{k}, z) at the grid points, we interpolate this function so that it can be used in the next iteration. We start with v_0(k, \epsilon; \bar{k}, z) = 0 for all states. This procedure allows me to find the policy functions. In fact I need to find four policy functions, because there are four combinations of aggregate and idiosyncratic shocks.
The function "emprrhs" computes the next interpolated value function, taking the interpolated value functions from the previous iteration as inputs. I did not maximize the value function at each grid point separately. Instead I maximize the sum of the value functions, which produces the same result because the value functions at different grid points have different arguments. To be more precise, denote m_{ij}(k') \equiv \log(f_{ij} - k') + \beta \sum_{z', \epsilon'} \pi(z', \epsilon' \mid z, \epsilon) v_t(k', \epsilon'; \bar{k}_i', z') (the function that has to be optimized to obtain the value function at the grid point (k_j, \bar{k}_i), where f_{ij} is the income f evaluated at that point). Then one can find k'_{ij} \in \operatorname{argmax} \{ m_{ij}(k') \} either by optimizing each m_{ij}(k') separately, or by summing them up and maximizing jointly. These two procedures lead to the same result because all the optimization problems are independent; however, joint optimization reduces computation time (because I can supply a gradient to the optimization routine). For interpolation purposes I used not a spline routine but pchip (Piecewise Cubic Hermite Interpolating Polynomials), because it produces functions without wiggles even for a relatively small number of grid points (which means the value functions will be increasing and concave).
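As a stripped-down illustration of the iteration in (6), the following Python sketch collapses the problem to a deterministic one-state growth model (no shocks, no \bar{k} grid, made-up parameter values) and replaces the continuous optimization with grid search:

```python
import numpy as np

beta, alpha, delta = 0.99, 0.36, 0.025   # illustrative, not the KS calibration
grid = np.linspace(0.1, 35.0, 200)       # grid for individual capital k

# income f(k) = k^alpha + (1-delta)*k in the one-agent analog; sigma = 1,
# so utility is log consumption, as in equation (6)
f = grid ** alpha + (1.0 - delta) * grid

# utility of every feasible (k, k') pair; infeasible choices get -inf
c = f[:, None] - grid[None, :]
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

v = np.zeros_like(grid)                  # start from v0 = 0, as in the text
for it in range(5000):
    v_new = np.max(util + beta * v[None, :], axis=1)
    if np.max(np.abs(v_new - v)) < 1e-6:
        break
    v = v_new

policy = grid[np.argmax(util + beta * v_new[None, :], axis=1)]  # k'(k)
```

The full project replaces the grid search over k' by continuous optimization with pchip-interpolated value functions, and carries four value functions (one per (z, \epsilon) combination) instead of one.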
Step 3: Given the policy functions, simulate 5000 agents for 11000 periods. This procedure includes simulating the aggregate and idiosyncratic shocks according to their conditional distributions. The policy functions give us each individual's capital in every period, so we can calculate actual aggregate capital. Then we regress the log of actual aggregate capital on the log of the previous period's actual aggregate capital, separately in good and bad times. This procedure gives us new parameters (a_0, a_1, b_0, b_1) of the aggregate law of motion, and we go back to Step 2. We repeat Steps 2 and 3 until R^2 for the regression of the log of actual capital is high enough, or until the parameters (a_0, a_1, b_0, b_1) stop changing much, meaning

    |a_0^i - a_0^{i+1}| + |a_1^i - a_1^{i+1}| + |b_0^i - b_0^{i+1}| + |b_1^i - b_1^{i+1}| < eps.

The function "emprsim3" performs the simulation given the policy functions and returns the implied sequences of aggregate capital and shocks. The difference from the function "emprsim" that I used in the draft version is the following: I managed to vectorize the computations in the agents' dimension, which reduced the time required for the simulations from 3 hours to 1 minute (these numbers reflect computational time on the "hammer" machine).
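The regression step can be illustrated in isolation with a Python sketch (not the "emprsim3" routine): simulate the aggregate state, generate aggregate capital from a known log-linear law of motion with a small amount of artificial noise, and recover the coefficients by regime-by-regime least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 11_000
P = np.array([[0.875, 0.125],            # aggregate transition matrix,
              [0.125, 0.875]])           # state 0 = bad, 1 = good

z = np.zeros(T, dtype=int)               # simulate the aggregate state
for t in range(1, T):
    z[t] = rng.random() < P[z[t - 1], 1]

# generate capital from a known log-linear law of motion plus tiny noise
a0, a1, b0, b1 = 0.095, 0.962, 0.085, 0.965
logk = np.empty(T)
logk[0] = np.log(11.0)
for t in range(1, T):
    c0, c1 = (a0, a1) if z[t - 1] == 1 else (b0, b1)
    logk[t] = c0 + c1 * logk[t - 1] + 1e-4 * rng.standard_normal()

burn = 1000                              # discard the first 1000 periods
x, y, s = logk[burn:-1], logk[burn + 1:], z[burn:-1]
a1_hat, a0_hat = np.polyfit(x[s == 1], y[s == 1], 1)   # good-times regression
b1_hat, b0_hat = np.polyfit(x[s == 0], y[s == 0], 1)   # bad-times regression
```

In the actual algorithm the series of aggregate capital comes from the agents' simulated decisions rather than from an assumed law of motion, and the recovered coefficients feed back into Step 2.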
4 My results
For the results I started with the following: I took the true values of the parameters of the law of motion and tried to reproduce Figures 1 and 2 from the paper:

    \log(\bar{k}') = 0.095 + 0.962 \log(\bar{k})  if z = z_g
    \log(\bar{k}') = 0.085 + 0.965 \log(\bar{k})  if z = z_b

In order to solve the agent's Bellman equation I used the following grids:

    k = [0.00001, 0.00004, 0.00008, 0.0001, 0.001:0.013:0.04, 0.05:0.15:1, 1:1:35];  (50 points)
    \bar{k} = [7.7 : 4 : 19.7];  (4 points)

I use fewer grid points in both dimensions than in the draft version because the Piecewise Cubic Hermite Interpolating Polynomials that I use allow me to do so. These polynomials do not produce wiggly functions, which permits fewer grid points without a substantial loss of interpolating precision. Fewer grid points reduce the time of one iteration of the value function iterations, so within the same time period I can perform more iterations and solve for the policy functions on the grid more precisely. The stopping criterion was that the average absolute change of the policy functions between two consecutive iterations (across all grid points for all value functions) should be less than 0.00005; in the draft version it was 0.005. I use the average rather than the maximum absolute change, but this does not make a significant difference here because convergence is more or less uniform across the grid points. I obtained the following figure, which contains the policy function for an agent in the good state.

Analog of Figure 2: Tomorrow's Individual Capital as a function of Today's Individual Capital
Then I simulated an economy with 5000 individuals (who follow the obtained policy functions) for 11000 periods. I started in the good aggregate state with all individuals holding 10 units of capital. I discarded the first 1000 periods and tried to reproduce Figure 1 from the paper.

Analog of Figure 1: Tomorrow's Aggregate Capital as a function of Today's Aggregate Capital

As we can see, this figure is slightly different from the one in the paper (see Figure 1 from the paper below). When I run the regression of next-period aggregate capital on current-period aggregate capital, I get the following results:

In good times:
    \log(\bar{k}') = 0.0993 + 0.9601 \log(\bar{k});   R^2 = 0.999941,  \hat{\sigma} = 0.0186%
In bad times:
    \log(\bar{k}') = 0.0877 + 0.9624 \log(\bar{k});   R^2 = 0.999851,  \hat{\sigma} = 0.0291%
Figure 1 from the paper: Tomorrow's Aggregate Capital as a function of Today's Aggregate Capital

As we can see, the obtained coefficients are close to, but not exactly, the ones we started from. In fact the sum of the absolute coefficient changes is of magnitude 0.01, which cannot be considered very precise.
Then I ran the entire procedure, starting with the following guesses for the aggregate capital law of motion:

    \log(\bar{k}') = 0 + \log(\bar{k})  if z = z_g
    \log(\bar{k}') = 0 + \log(\bar{k})  if z = z_b

The results of the first four iterations are presented in the table below. The smallest change in coefficients is observed in the step from the third to the fourth iteration. I did not include more iterations in the table because the procedure diverges. It does not converge because at some point the iterations produce a range of aggregate capital that falls outside the range where I have grid points for aggregate capital.
12. 5 Appendix
5.1 Short description of the functions
Section 5.2:
emprmainbchm - main function that performs iterations for the aggregate cap-
ital law of motion coefficients.
Section 5.3:
emprmain1 - function that finds value functions for given aggregate capital law
of motion coefficients.
Section 5.4:
emprrhs - function that performs value iteration step for given aggregate capital
law of motion coefficients.
Section 5.5:
emprsim3 - function that simulates time series for the aggregate capital given
agents’ policy functions
Section 5.6:
emprcon2 - function that constructs linear constraints for the optimization
required for value function iterations. The constraint ensures that the value
function is increasing in individual capital. It was needed when I used spline
interpolation; for Hermite interpolation this constraint does not play a role.
emprder - function that computes the coefficients of the derivative of a
piecewise polynomial; it is used to supply a gradient to the optimization
routine in the value iteration step.
emprderu - function that returns the derivative of the utility function
emprf00, emprf01, emprf10, emprf11 - functions that are optimized in order
to perform value function iteration step.
emprfun - function that computes next-period aggregate capital at today's
corresponding grid points for aggregate capital.
emprpr - function that returns a vector of zeros when all conditions on the
transition probabilities are satisfied. Applying fsolve to this function
yields the 4x4 transition probability matrix.
emprr - function that computes returns on capital.
empru - function that returns value of the utility function.
emprvalev - function that computes the value function at all grid points.
emprw - function that computes wage.
5.3 Function that finds value functions for given aggregate
capital law of motion parameters
tic
global bbeta delta sig alpha zg zb ug ub prag durub durug mkk mkkb trpr kk ...
kkb a0 a1 b0 b1 in00 in01 in10 in11 plcov fnorm mc00 mc01 mc10 mc11 limp ...
con ltild mouse;
% prag(i,j) - probabilities of transition for the aggregate state from i to
% j where 1 - bad, 2 - good
% mkk - matrix that contains grid points for k, each row is a grid for k,
% number of rows is equal to the number of grid points for kbar, all rows
% are the same
% mkkb - matrix that contains grid points for kbar, each column is a grid
% for kbar, number of columns is equal to the number of grid points for k,
% all columns are the same
% trpr - transition probabilities of moving between different combinations of
% aggregate and idiosyncratic shocks
% the order of rows and columns is the following
% 1 - b0
% 2 - b1
% 3 - g0
% 4 - g1
% kk - grid for k
% kkb - grid for kbar
% a0, a1, b0, b1 - coefficients for the aggregate capital law of motion
% in00, in01, in10, in11 - initial values for the policy function (they are
% needed for the maximization routine). The results from the previous step of
% the value function iteration are used here.
% plcov - (policy convergence) this value contains the distance between
% policy functions for consecutive iterations.
% If it is small then the value function iterations converged
plcov=2;
% average duration of bad and good times
durb=8;
durg=8;
prag=[1-1/durb, 1/durb; 1/durg, 1-1/durg];
% average duration of unemployment in good and bad times
durub=2.5;
17. durug=1.5;
ub=0.1;
ug=0.04;
zb=0.99;
zg=1.01;
alpha=0.36;
% see workpaper version
ltild=0.3271;
sig=1;
bbeta=0.99;
delta=0.025;
kk=[0.00001,0.00004,0.00008,0.0001,0.001:0.013:0.04,0.05:0.15:1,1:1:35];
kkb=[7.7:4:19.7];
% mkk(i,j) - amount of capital if kbar corresponds to the i-th point on the grid
% and kk corresponds to the j-th point on the grid
mkk=repmat(kk,size(kkb,2),1);
mkkb=repmat(kkb',1,size(kk,2));
in00=zeros(size(mkk));
in01=zeros(size(mkk));
in10=zeros(size(mkk));
in11=zeros(size(mkk));
% calculation of transition probabilities
f=@(p) emprpr(p);
[trpr,fval,exitflag,output] = fsolve(f,ones(4,4));
mc00=emprr(0).*mkk+(1-delta).*mkk;
% 1 - aggregate state is good
% 0 - person is unemployed
mc10=emprr(1).*mkk+(1-delta).*mkk;
mc01=emprr(0).*mkk+emprw(0)*ltild+(1-delta).*mkk;
mc11=emprr(1).*mkk+emprw(1)*ltild+(1-delta).*mkk;
%Zero initial value function
y00=zeros(size(mkk));
v00=pchip(kk,y00);
y01=zeros(size(mkk));
v01=pchip(kk,y01);
y10=zeros(size(mkk));
v10=pchip(kk,y10);
5.4 Function that performs a value function iteration step
function [res00,res10,res01,res11]=emprrhs(v00,v10,v01,v11)
% function that computes rhs for the value function iteration
% v00 - array that contains: v00(i) - function of kprime given the i-th point on
% the grid for kbar and shocks 0-unemployed and 0-bad times
global kk kkb mkk delta trpr bbeta in00 in01 in10 in11 plcov fnorm
mc00 mc01 mc10 mc11 vp00 vp01 vp10 vp11 limp con mouse;
kkpg=emprfun(1);
kkpb=emprfun(0);
help00=(ppval(v00,kk));
help10=(ppval(v10,kk));
help01=(ppval(v01,kk));
help11=(ppval(v11,kk));
h00=pchip(kkb,help00');
h10=pchip(kkb,help10');
h01=pchip(kkb,help01');
h11=pchip(kkb,help11');
he00=(ppval(h00,kkpb));
he10=(ppval(h10,kkpg));
he01=(ppval(h01,kkpb));
he11=(ppval(h11,kkpg));
% functions of kprime for every kbarprime that correspond to kbar
vp00=pchip(kk,he00');
vp10=pchip(kk,he10');
vp01=pchip(kk,he01');
vp11=pchip(kk,he11');
% derivatives of these functions
vp00d=fnder(vp00,1);
vp00d.coefs=emprder(vp00.coefs);
vp01d=fnder(vp01,1);
vp01d.coefs=emprder(vp01.coefs);
vp10d=fnder(vp10,1);
vp10d.coefs=emprder(vp10.coefs);
vp11d=fnder(vp11,1);
vp11d.coefs=emprder(vp11.coefs);
nvar=4*size(mkk,1)*size(mkk,2);
epss=1e-8;
5.5 Simulation routine
function [res,res2]=emprsim3()
% function that simulates nn consumers for tt time periods; the first
% burn periods are discarded because aggregate capital there is affected
% by the initial wealth distribution
global ub ug kkb in00 in01 in10 in11 trpr prag kk;
nn=5000;
tt=11000;
burn=1000;
% suppose initially everybody has the same level of capital ink and
% aggregate state is good
ink=10;
kap0=ink*ones(nn,1);
kap=zeros(nn,1);
agst=zeros(1,tt);
empl=(rand(nn,1)>ug);
agkap=zeros(1,tt);
agkap(1)=ink;
kap=kap0;
kap2=zeros(nn,1);
empl2=zeros(nn,1);
agst(1,1)=1;
% first index is aggregate state, second - employment
ii00=pchip(kkb,in00');
ii01=pchip(kkb,in01');
ii10=pchip(kkb,in10');
ii11=pchip(kkb,in11');
help=1; % good initial state
for i=2:1:tt
i % display simulation progress
he00=(ppval(ii00,agkap(i-1)));
he10=(ppval(ii10,agkap(i-1)));
he01=(ppval(ii01,agkap(i-1)));
he11=(ppval(ii11,agkap(i-1)));
i00=pchip(kk,he00');
i10=pchip(kk,he10');
i01=pchip(kk,he01');
i11=pchip(kk,he11');
help2=rand(1,1);
if (help==0)
if (help2<prag(1,1))
help3=0;
else
help3=1;
end;
end;
if (help==1)
if (help2<prag(2,1))
help3=0;
else
help3=1;
end;
end;
agst(1,i)=help3;
help4=rand(nn,1);
if ((help==0) && (help3==0))
thresh=(1-empl)*trpr(1,1)/prag(1,1)+empl*trpr(2,1)/prag(1,1);
empl2=(help4>thresh);
hh=sum(ismember(empl2,1)); % number of employed guys
% adjustment of employment in order to have exact unemployment
% rates
if (hh>nn*(1-ub))
hhh=find(ismember(empl2,1));
empl2(hhh(1:(hh-nn*(1-ub)),1),1)=0;
end;
if (hh<nn*(1-ub))
hhh=find(ismember(empl2,0));
empl2(hhh(1:(nn*(1-ub)-hh),1),1)=1;
end;
kap2(find(ismember(empl,1)))=ppval(i01,kap(find(ismember(empl,1))));
kap2(find(ismember(empl,0)))=ppval(i00,kap(find(ismember(empl,0))));
end;
if ((help==1) && (help3==0))
thresh=(1-empl)*trpr(3,1)/prag(2,1)+empl*trpr(4,1)/prag(2,1);
empl2=(help4>thresh);
hh=sum(ismember(empl2,1)); % number of employed guys
% adjustment of employment in order to have exact unemployment
% rates
if (hh>nn*(1-ub))
hhh=find(ismember(empl2,1));
empl2(hhh(1:(hh-nn*(1-ub)),1),1)=0;
end;
if (hh<nn*(1-ub))
hhh=find(ismember(empl2,0));
empl2(hhh(1:(nn*(1-ub)-hh),1),1)=1;
end;
kap2(find(ismember(empl,1)))=ppval(i11,kap(find(ismember(empl,1))));
kap2(find(ismember(empl,0)))=ppval(i10,kap(find(ismember(empl,0))));
end;
if ((help==0) && (help3==1))
thresh=(1-empl)*trpr(1,3)/prag(1,2)+empl*trpr(2,3)/prag(1,2);
empl2=(help4>thresh);
hh=sum(ismember(empl2,1)); % number of employed guys
% adjustment of employment in order to have exact unemployment
% rates
if (hh>nn*(1-ug))
hhh=find(ismember(empl2,1));
empl2(hhh(1:(hh-nn*(1-ug)),1),1)=0;
end;
if (hh<nn*(1-ug))
hhh=find(ismember(empl2,0));
empl2(hhh(1:(nn*(1-ug)-hh),1),1)=1;
end;
kap2(find(ismember(empl,1)))=ppval(i01,kap(find(ismember(empl,1))));
kap2(find(ismember(empl,0)))=ppval(i00,kap(find(ismember(empl,0))));
end;
if ((help==1) && (help3==1))
thresh=(1-empl)*trpr(3,3)/prag(2,2)+empl*trpr(4,3)/prag(2,2);
empl2=(help4>thresh);
hh=sum(ismember(empl2,1)); % number of employed guys
% adjustment of employment in order to have exact unemployment
% rates
if (hh>nn*(1-ug))
hhh=find(ismember(empl2,1));
empl2(hhh(1:(hh-nn*(1-ug)),1),1)=0;
end;
if (hh<nn*(1-ug))
hhh=find(ismember(empl2,0));
empl2(hhh(1:(nn*(1-ug)-hh),1),1)=1;
end;
kap2(find(ismember(empl,1)))=ppval(i11,kap(find(ismember(empl,1))));
kap2(find(ismember(empl,0)))=ppval(i10,kap(find(ismember(empl,0))));
end;
help=help3;
agkap(1,i)=sum(kap2)/nn;
agkap(1,i) % display current aggregate capital
kap=kap2;
empl=empl2;
end;
res=agkap(1,burn+1:end);
res2=agst(1,burn+1:end);
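The employment-adjustment step repeated in the four branches above can be factored into one helper. Here is a Python sketch (hypothetical name, not part of the original MATLAB code) of the "flip just enough agents so the cross-section hits the target unemployment rate exactly" logic:

```python
import numpy as np

def enforce_unemployment_rate(empl, u_target):
    """empl: 0/1 array of drawn employment statuses. Returns an adjusted
    copy with exactly round(n*(1-u_target)) employed agents, mirroring the
    adjustment blocks in emprsim3 (u_target is ub or ug, depending on
    tomorrow's aggregate state)."""
    empl = empl.copy()
    n = empl.size
    target = int(round(n * (1 - u_target)))
    employed = np.flatnonzero(empl == 1)
    unemployed = np.flatnonzero(empl == 0)
    if employed.size > target:      # too many employed: lay some off
        empl[employed[:employed.size - target]] = 0
    elif employed.size < target:    # too few employed: hire some
        empl[unemployed[:target - employed.size]] = 1
    return empl
```

Flipping the first agents found introduces a slight ordering bias; flipping a random subset would be the cleaner choice, at the cost of an extra draw.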
5.6 Auxiliary functions
function res=emprcon2()
global mkk;
h1=ones(size(mkk,1)*(size(mkk,2)-1),1);
h2=eye(size(mkk,1)*size(mkk,2))+diag(-h1,size(mkk,1));
res=h2(1:end-size(mkk,1),:);
*****************************************************************************
function res=emprder(coef)
% coef - each row contains coefficients of piecewise polynomials at some
% interval, for instance [1,0,2] means x^2+2
col=size(coef,2);
row=size(coef,1);
help=repmat(col-1:-1:1,row,1);
res=coef(:,1:end-1).*help;
*****************************************************************************
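The coefficient manipulation in emprder above can be sketched in Python (hypothetical name; rows hold descending-power coefficients exactly as in the comment, so [1, 0, 2] means x^2 + 2):

```python
import numpy as np

def piecewise_poly_derivative(coef):
    """Differentiate each row of polynomial coefficients: drop the constant
    term and scale the rest by its power. Illustrative analogue of emprder."""
    coef = np.asarray(coef, dtype=float)
    powers = np.arange(coef.shape[1] - 1, 0, -1)  # [d, d-1, ..., 1]
    return coef[:, :-1] * powers
```

This vectorizes the single-polynomial rule d/dx(a x^d + ...) = d*a x^(d-1) + ... across all pieces at once.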
function res=emprderu(c)
global sig;
if (sig==1)
res=1./c;
else
res=(c.^(-sig));
end;
*****************************************************************************
function [res,res2]=emprf00(kp)
global mc00 trpr vp00 vp01 vp10 vp11 bbeta limp vp00d vp01d vp10d vp11d;
% res - value of the function
%kp=reshape(kp,size(mc00,1),size(mc00,2));
res= -1*(sum(sum(limp*empru(mc00-kp)+bbeta*(trpr(1,1)*emprvalev(vp00,kp)+ ...
+trpr(1,2)*emprvalev(vp01,kp)+trpr(1,3)*emprvalev(vp10,kp)+ ...
+trpr(1,4)*emprvalev(vp11,kp)))));
% res2 - derivative (since the function has a matrix argument, the derivative is a matrix as well)
res2= -1*(-1*limp*emprderu(mc00-kp)+bbeta*(trpr(1,1)*emprvalev(vp00d,kp)+ ...
+trpr(1,2)*emprvalev(vp01d,kp)+trpr(1,3)*emprvalev(vp10d,kp)+...
+trpr(1,4)*emprvalev(vp11d,kp)));
function [res,res2]=emprf01(kp)
global mc01 trpr vp00 vp01 vp10 vp11 bbeta limp vp00d vp01d vp10d vp11d;
res= -1*(sum(sum(limp*empru(mc01-kp)+bbeta*(trpr(2,1)*emprvalev(vp00,kp)+ ...
+trpr(2,2)*emprvalev(vp01,kp)+trpr(2,3)*emprvalev(vp10,kp)+...
+trpr(2,4)*emprvalev(vp11,kp)))));
res2=-1*(-1*limp*emprderu(mc01-kp)+bbeta*(trpr(2,1)*emprvalev(vp00d,kp)+ ...
+trpr(2,2)*emprvalev(vp01d,kp)+trpr(2,3)*emprvalev(vp10d,kp)+...
+trpr(2,4)*emprvalev(vp11d,kp)));
*****************************************************************************
function [res,res2]=emprf10(kp)
global mc10 trpr vp00 vp01 vp10 vp11 bbeta limp vp00d vp01d vp10d vp11d;
res=-1*(sum(sum(limp*empru(mc10-kp)+bbeta*(trpr(3,1)*emprvalev(vp00,kp)+ ...
+trpr(3,2)*emprvalev(vp01,kp)+trpr(3,3)*emprvalev(vp10,kp)+...
+trpr(3,4)*emprvalev(vp11,kp)))));
res2=-1*(-1*limp*emprderu(mc10-kp)+bbeta*(trpr(3,1)*emprvalev(vp00d,kp)+ ...
+trpr(3,2)*emprvalev(vp01d,kp)+trpr(3,3)*emprvalev(vp10d,kp)+...
+trpr(3,4)*emprvalev(vp11d,kp)));
*****************************************************************************
function [res,res2]=emprf11(kp)
global mc11 trpr vp00 vp01 vp10 vp11 bbeta limp vp00d vp01d vp10d vp11d;
res= -1*(sum(sum(limp*empru(mc11-kp)+bbeta*(trpr(4,1)*emprvalev(vp00,kp)+ ...
+trpr(4,2)*emprvalev(vp01,kp)+trpr(4,3)*emprvalev(vp10,kp)+...
+trpr(4,4)*emprvalev(vp11,kp)))));
res2= -1*(-1*limp*emprderu(mc11-kp)+bbeta*(trpr(4,1)*emprvalev(vp00d,kp)+ ...
+trpr(4,2)*emprvalev(vp01d,kp)+trpr(4,3)*emprvalev(vp10d,kp)+...
+trpr(4,4)*emprvalev(vp11d,kp)));
function res=emprfun(flag)
% function that computes next period aggregate capital for today's
% corresponding grid points for aggregate capital
% kkb - grid for aggregate capital
% flag=0 - bad state
% flag=1 - good state
global a0 a1 b0 b1 kkb;
if (flag==0)
res=exp(b0+b1*log(kkb));
elseif (flag==1)
res=exp(a0+a1*log(kkb));
end;
*****************************************************************************
function res=emprpr(pprob)
% function that returns a vector of zeros when all conditions on the transition
% probabilities are satisfied
% pprob is 4x4
% pprob(i,j) is probability of transition from state i to j
% where i,j belongs to 1 - b0, 2 - b1, 3 - g0, 4 - g1
global ug ub prag durub durug;
res=zeros(16,1);
res(1,1)=1-1/durub-pprob(1,1)/prag(1,1);
res(2,1)=1-1/durug-pprob(3,3)/prag(2,2);
res(3,1)=pprob(1,1)+pprob(1,2)-prag(1,1);
res(4,1)=pprob(2,1)+pprob(2,2)-prag(1,1);
res(5,1)=pprob(1,3)+pprob(1,4)-prag(1,2);
res(6,1)=pprob(2,3)+pprob(2,4)-prag(1,2);
res(7,1)=pprob(3,1)+pprob(3,2)-prag(2,1);
res(8,1)=pprob(4,1)+pprob(4,2)-prag(2,1);
res(9,1)=pprob(3,3)+pprob(3,4)-prag(2,2);
res(10,1)=pprob(4,3)+pprob(4,4)-prag(2,2);
res(11,1)=pprob(3,1)*prag(1,1)-1.25*pprob(1,1)*prag(2,1);
res(12,1)=pprob(1,3)*prag(2,2)-0.75*pprob(3,3)*prag(1,2);
res(13,1)=ub*pprob(1,1)+(1-ub)*pprob(2,1)-ub*prag(1,1);
res(14,1)=ub*pprob(1,3)+(1-ub)*pprob(2,3)-ug*prag(1,2);
res(15,1)=ug*pprob(3,1)+(1-ug)*pprob(4,1)-ub*prag(2,1);
res(16,1)=ug*pprob(3,3)+(1-ug)*pprob(4,3)-ug*prag(2,2);
function res=emprr(flag)
% function computes return on capital
% flag=0 - bad aggregate state
% flag=1 - good aggregate state
global ub ug alpha zb zg mkkb ltild;
if (flag==0)
res=alpha*zb*(mkkb.^(alpha-1))/((ltild*(1-ub))^(alpha-1));
elseif (flag==1)
res=alpha*zg*(mkkb.^(alpha-1))/((ltild*(1-ug))^(alpha-1));
end;
*****************************************************************************
function res=empru(c)
global sig;
if (sig==1)
res=log(c);
else
res=(c.^(1-sig))/(1-sig);
end;
*****************************************************************************
function res=emprvalev(cs,kpr)
% function that computes values for the value function
% res(i,j) = cs(i)(kpr(i,j))
help=ppval(cs,kpr);
help2=repmat(eye(size(kpr,1),size(kpr,1)),[1,1,size(kpr,2)]);
help3=help.*help2;
help4=sum(help3);
res=squeeze(help4);
*****************************************************************************
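What emprvalev above computes is a row-wise evaluation: the i-th interpolant is evaluated only at the i-th row of points, instead of the full cross product that ppval would otherwise produce (which is why the MATLAB code multiplies by identity slices and sums). A Python sketch (hypothetical name):

```python
import numpy as np

def rowwise_eval(interps, kpr):
    """interps: one callable per row of the grid; kpr: matrix of evaluation
    points. Returns res with res[i, j] = interps[i](kpr[i, j]), the diagonal
    that emprvalev extracts. Illustrative analogue, not the MATLAB code."""
    return np.array([f(row) for f, row in zip(interps, kpr)])
```

Evaluating row by row avoids materializing the full interpolant-by-point array, which the MATLAB version builds and then collapses with the identity mask.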
function res=emprw(flag)
% function computes wages
% flag=0 - bad aggregate state
% flag=1 - good aggregate state
global ub ug alpha zb zg mkkb ltild;
if (flag==0)
res=(1-alpha)*zb*(mkkb.^(alpha))/((ltild*(1-ub))^(alpha));
elseif (flag==1)
res=(1-alpha)*zg*(mkkb.^(alpha))/((ltild*(1-ug))^(alpha));
end;