This document discusses two methods for measuring consumer welfare using demand models: Hausman (1996) and the discrete choice model. Hausman estimates demand for cereal and values the introduction of Apple Cinnamon Cheerios at $78.1 million annually under perfect competition and $66.8 million under imperfect competition. The discrete choice model measures welfare as the inclusive value from a choice set and can value new products by simulating choices with and without them. It is more flexible but still relies on accurate demand estimation.
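For reference, in the standard logit case with a constant marginal utility of income, the inclusive-value welfare calculation that the discrete choice approach relies on takes the familiar log-sum form below; this is a textbook expression added for clarity, not a formula quoted from the document (α is the price coefficient and V_ij the deterministic utility consumer i gets from product j):

```latex
\mathbb{E}[CS_i] \;=\; \frac{1}{\alpha}\,\ln\!\Big(\sum_{j \in \mathcal{J}} e^{V_{ij}}\Big) + C,
\qquad
\Delta \mathbb{E}[CS_i] \;=\; \frac{1}{\alpha}\Bigg[\ln\!\Big(\sum_{j \in \mathcal{J}\cup\{\mathrm{new}\}} e^{V_{ij}}\Big) - \ln\!\Big(\sum_{j \in \mathcal{J}} e^{V_{ij}}\Big)\Bigg].
```

Valuing a new product then amounts to computing this log-sum with and without the product in the choice set and averaging the change across consumers.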
Estimation of Static Discrete Choice Models Using Market Level Data (NBER)
This document discusses methods for estimating static discrete choice models using market-level data rather than individual consumer data. It covers several key topics:
1) The types of market-level and consumer-level data that can be used. Market-level data is easier to obtain but poses challenges for identification and estimation.
2) A common linear random coefficients logit model framework. It includes observed and unobserved product characteristics as well as observed and unobserved consumer heterogeneity (a minimal share-simulation sketch in this spirit follows this summary).
3) The key challenges of estimating heterogeneity parameters without consumer-level data. It also discusses how to deal with potential endogeneity of unobserved product characteristics.
4) The two-step estimation approach when consumer-level data is available.
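To make the random coefficients logit of item 2 concrete, here is a minimal, hypothetical share-simulation sketch with an outside good; the names (delta for mean utilities, sigma for taste dispersion) and the numbers are illustrative and not taken from the document:

```python
import numpy as np

def simulate_shares(delta, x, sigma, n_draws=500, seed=0):
    """Monte Carlo market shares for a random-coefficients logit with an outside good.

    delta : (J,) mean utilities (the part common to all consumers)
    x     : (J, K) product characteristics that carry random coefficients
    sigma : (K,) standard deviations of the random coefficients
    """
    rng = np.random.default_rng(seed)
    nu = rng.standard_normal((n_draws, len(sigma)))          # consumer taste draws
    mu = nu @ (x * sigma).T                                   # (n_draws, J) taste-driven utility
    expu = np.exp(delta[None, :] + mu)
    probs = expu / (1.0 + expu.sum(axis=1, keepdims=True))    # outside good has utility 0
    return probs.mean(axis=0)                                 # integrate over consumers

# illustrative call with made-up numbers
delta = np.array([1.0, 0.5, -0.2])
x = np.array([[1.0, 2.0], [1.0, 1.0], [1.0, 0.5]])
shares = simulate_shares(delta, x, sigma=np.array([0.5, 0.3]))
```

In a market-data application these simulated shares would be matched to observed shares in order to recover the heterogeneity parameters.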
AN IMPROVED DECISION SUPPORT SYSTEM BASED ON THE BDM (BIT DECISION MAKING) METHOD (ijmpict)
Based on the BDM (Bit Decision Making) method, the present work makes two contributions: first, it illustrates the use of the SOP (Sum of Products) technique to systematize the derivation of the correlation function used in the mathematical modelling of sub-systems; second, it extends the method to handle a finite, discrete (rather than binary) set of possible subjective qualifications of suppliers on any criterion.
This document summarizes a research paper that develops a measure of dynamic efficiency within a dynamic discrete choice structural inventory model. The model allows a firm to decide each period whether to order (ait = 1) or not order (ait = 0) individual products. Using data from a Portuguese firm, the model is estimated using a two-stage approach. In the first stage, transition probabilities of state variables are estimated nonparametrically. In the second stage, parameters are estimated using Bayesian methods. A counterfactual experiment indicates the firm's actual decisions diverged from optimal decisions in at least 12.71% of cases.
The document compares oversampling versus small area estimation methods for estimating poverty indicators at a small area level, using data from the EU-SILC survey and the census for Tuscany, Italy. It estimates headcount ratios and poverty gaps for the province of Pisa by gender of the head of household, using direct estimates from standard and oversampled EU-SILC data as well as M-quantile small area estimates. The small area estimates show higher values and lower standard errors.
The document introduces perturbation methods as a way to solve functional equations that describe economic problems. It presents a basic real business cycle model as an example problem that can be solved using perturbation methods. Specifically, it:
1) Defines the real business cycle model as a functional equation system that is difficult to solve directly.
2) Proposes using perturbation methods by introducing a small perturbation parameter (the standard deviation of technology shocks) and solving the problem when this parameter equals zero.
3) Expands the decision rules as Taylor series in terms of the state variables and perturbation parameter to build a local approximation around the deterministic steady state. This leads to a system of equations that can be solved order-by-order for the unknown coefficients (a generic second-order expansion is sketched below).
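For instance, writing the capital decision rule of the model as k' = g(k, z; σ), the second-order expansion around the deterministic steady state (k̄, 0; 0) has the generic form below; this is a standard textbook illustration of the order-by-order structure, not an equation reproduced from the document:

```latex
k_{t+1} \;\approx\; \bar k + g_k (k_t - \bar k) + g_z z_t + g_\sigma \sigma
+ \tfrac{1}{2}\big[\, g_{kk}(k_t - \bar k)^2 + 2 g_{kz}(k_t - \bar k) z_t + g_{zz} z_t^2 \,\big]
+ g_{k\sigma}(k_t - \bar k)\sigma + g_{z\sigma} z_t \sigma + \tfrac{1}{2} g_{\sigma\sigma}\sigma^2 .
```

The first-order coefficients are found first (with g_σ = 0 in standard settings, i.e. certainty equivalence), and the quadratic coefficients then follow from the next order of the expansion.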
Quarterly economic accounts: methodological advances and prospects for innovation
Seminar
Rome, 21 April 2016
Istat, Aula Magna
Via Cesare Balbo, 14
The document presents a dynamic discrete choice model of demand for insecticide treated nets (ITNs) that accounts for time inconsistent preferences and unobserved heterogeneity. The model has three periods where agents make ITN purchase and retreatment decisions. Agents are either time consistent, "naive" time inconsistent, or "sophisticated" time inconsistent. The model is identified in two steps - first when types are directly observed using survey responses, and second when types are unobserved. Identification exploits variation from elicited beliefs about malaria risk. The model can point identify time preference parameters and utility functions up to a normalization.
- The document analyzes forecasting volatility for the MSCI Emerging Markets Index using a Stochastic Volatility model solved with Kalman Filtering. It derives the Stochastic Differential Equations for the model and puts them into State Space form solved with a Kalman Filter (a minimal filtering sketch follows these notes).
- Descriptive statistics on the daily returns of the MSCI Emerging Markets Index ETF from 2011-2016 show a mean close to 0, standard deviation of 0.01428, negative skewness, and kurtosis close to a normal distribution. The model will be evaluated against a GARCH model.
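As a rough illustration of the filtering step (not the document's code), a scalar Kalman filter applied to demeaned log squared returns, treating log-volatility as an AR(1) state, might look as follows; the data and parameter values are placeholders:

```python
import numpy as np

def kalman_filter(y, phi, q, r, h0=0.0, p0=1.0):
    """Scalar Kalman filter for h_t = phi*h_{t-1} + w_t, y_t = h_t + v_t,
    with Var(w_t) = q and Var(v_t) = r; returns the filtered states."""
    h, p = h0, p0
    filtered = []
    for obs in y:
        h_pred, p_pred = phi * h, phi * phi * p + q     # predict
        k = p_pred / (p_pred + r)                       # Kalman gain
        h = h_pred + k * (obs - h_pred)                 # update mean
        p = (1.0 - k) * p_pred                          # update variance
        filtered.append(h)
    return np.array(filtered)

# hypothetical returns standing in for the ETF series
rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_normal(500)
y = np.log(returns**2 + 1e-12)
y -= y.mean()
h_filt = kalman_filter(y, phi=0.95, q=0.02, r=np.pi**2 / 2)   # pi^2/2 ~ Var(log chi^2_1)
```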
This document discusses heterogeneous agent models without aggregate uncertainty. It introduces a model with a continuum of agents who face idiosyncratic income fluctuations but no aggregate shocks. There is a unique stationary equilibrium with constant interest rates and wages. The document discusses the recursive competitive equilibrium, existence and uniqueness of the stationary equilibrium, transition functions, computation methods, and some qualitative results from calibrating the model.
The document discusses three examples of nonlinear and non-Gaussian DSGE models. The first example features Epstein-Zin preferences to allow for a separation between risk aversion and the intertemporal elasticity of substitution. The second example models volatility shocks using time-varying variances. The third example aims to distinguish between the effects of stochastic volatility ("fortune") versus parameter drifting ("virtue") in explaining time-varying volatility in macroeconomic variables. The document outlines the motivation, structure, and solution methods for these three nonlinear DSGE models.
The document discusses projection methods for solving functional equations. Projection methods work by specifying a basis of functions and "projecting" the functional equation against that basis to find the parameters. This allows approximating different objects like decision rules or value functions. The document focuses on spectral methods that use global basis functions and covers various basis options like monomials, trigonometric series, Jacobi polynomials and Chebyshev polynomials. It also discusses how to generalize the basis to multidimensional problems, including using tensor products and Smolyak's algorithm to reduce the number of basis elements.
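As a toy illustration of the spectral idea (not taken from the document), the sketch below fits a degree-5 Chebyshev expansion to a known function by collocation at Chebyshev nodes; in an actual projection method the coefficients would instead be chosen to make the residual of the functional equation zero at those nodes:

```python
import numpy as np

# approximate f(x) = log(1 + x) on [0, 2] with a degree-5 Chebyshev expansion
a, b, degree = 0.0, 2.0, 5
k = np.arange(degree + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))   # Chebyshev nodes on [-1, 1]
x = 0.5 * (b - a) * (nodes + 1) + a                        # map nodes into [a, b]
basis = np.polynomial.chebyshev.chebvander(nodes, degree)  # T_0 ... T_5 at the nodes
coeffs = np.linalg.solve(basis, np.log(1 + x))             # collocation conditions

# evaluate the approximation on a fine grid and check the maximum error
xg = np.linspace(a, b, 201)
zg = 2 * (xg - a) / (b - a) - 1
approx = np.polynomial.chebyshev.chebvander(zg, degree) @ coeffs
max_err = np.abs(approx - np.log(1 + xg)).max()
```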
The document discusses pricing interest rate derivatives using the one factor Hull-White short rate model. It begins with an introduction to short rate models and the Hull-White model specifically. It describes how the Hull-White model can be calibrated to market prices by relating its parameter θ to the market term structure. The document then discusses implementing the Hull-White model using trinomial trees and pricing constant maturity swaps.
This document discusses filtering and likelihood inference. It begins by introducing filtering problems in economics, such as evaluating DSGE models. It then presents the state space representation approach, which models the transition and measurement equations with stochastic shocks. The goal of filtering is to compute the conditional densities of states given observed data over time using tools like the Chapman-Kolmogorov equation and Bayes' theorem. Filtering provides a recursive way to make predictions and updates estimates as new data arrives.
A statistical model which links field experiments with a simulator usually embeds a discrepancy function. The discrepancy function models the systematic gap between the simulator and the real system. Analyzing the discrepancy should help to understand to what extent the simulator is reliable. In particular, determining that some variables are active or inert in the discrepancy function is of major interest since it indicates which variables are correctly modeled or not by the simulator. Therefore, this could give some leads to improve the simulator and help to determine if extrapolation is safe or not with respect to a specific input. The discrepancy function is modeled as a Gaussian process which is parametrized as in Linkletter et al. (2006). This parametrization provides a simple distinction between active and inert variables. The variable selection is performed through a model selection where the models in competition differ on the prior distribution considered for the parameter associated with the variables in the Gaussian process. We resort to computations of Bayes factors, using bridge sampling, to perform the model selection. Contrasted synthetic examples are considered to support the proposed technique.
Co-authors: Rui Paulo, Anabel Forte
The document discusses methods for solving dynamic stochastic general equilibrium (DSGE) models. It outlines perturbation and projection methods for approximating the solution to DSGE models. Perturbation methods use Taylor series approximations around a steady state to derive linear approximations of the model. Projection methods find parametric functions that best satisfy the model equations. The document also provides an example of applying the implicit function theorem to derive a Taylor series approximation of a policy rule for a neoclassical growth model.
The document discusses C4.5 algorithm for building univariate decision trees and methods for building multivariate decision trees. C4.5 uses entropy, gain, and pruning to build trees that classify instances based on one attribute per node. Multivariate trees can classify using linear combinations of attributes at nodes to better handle correlated attributes. Methods like absolute error correction and thermal perceptron are presented for training linear machines to construct multivariate trees. Examples of trees generated by both approaches are shown.
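The entropy and information-gain computations behind C4.5's split choices can be sketched as follows (a generic illustration with made-up toy data, not code from the document):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(labels, attribute_values):
    """Entropy reduction from splitting `labels` on one categorical attribute."""
    n = len(labels)
    remainder = 0.0
    for v in set(attribute_values):
        subset = [y for y, a in zip(labels, attribute_values) if a == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# toy example: how much does "outlook" help classify play / no-play?
play = ["yes", "yes", "no", "no", "yes", "no"]
outlook = ["sunny", "overcast", "sunny", "rain", "overcast", "rain"]
gain = information_gain(play, outlook)
```

C4.5 itself refines this with the gain ratio and thresholds for continuous attributes, but its split criterion is built from exactly these quantities.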
This document discusses various machine learning techniques including:
1. Tree pruning involves first growing a large tree and then pruning branches that do not improve the objective function. This prevents early stopping.
2. Boosting uses multiple weak learners sequentially to get an additive model that approximates the regression function. It combines many simple models to create a powerful ensemble model (see the boosting sketch after this list).
3. Unsupervised learning techniques like principal component analysis and clustering are used to find patterns in data without an outcome variable. These include reducing dimensions and partitioning data into subgroups.
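As an illustration of point 2, a gradient-boosting ensemble of shallow trees can be fit with scikit-learn as below; the dataset and hyperparameters are arbitrary placeholders rather than anything from the document:

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# toy regression problem; each weak learner is a depth-2 tree added sequentially
X, y = make_friedman1(n_samples=1000, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

booster = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                    max_depth=2, random_state=0)
booster.fit(X_train, y_train)
test_r2 = booster.score(X_test, y_test)   # many simple models combined into one ensemble
```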
This document describes an uncertain volatility model for pricing equity option trading strategies when the volatilities are uncertain. It uses the Black-Scholes Barenblatt equation developed by Avellaneda et al. to derive price bounds. The model is implemented in C++ using recombining trinomial trees to discretize the asset prices over time and space. The code computes the upper and lower price bounds by solving the Black-Scholes Barenblatt PDE using numerical techniques, with the volatility set based on the sign of the option gamma.
Estimating Financial Frictions under Learning (GRAPE)
The paper studies the implications of initial beliefs and the associated confidence under adaptive learning. We first illustrate how prior beliefs determine learning dynamics and the evolution of endogenous variables in a small DSGE model with credit-constrained agents, in which rational expectations are replaced by constant-gain adaptive learning. We then examine how discretionary experimenting with new macroeconomic policies is affected by the expectations that agents have in relation to these policies. More specifically, we show that a newly introduced macro-prudential policy that aims at making leverage counter-cyclical can lead to a substantial increase in fluctuations under learning, when the economy is hit by financial shocks, if beliefs reflect imperfect information about the policy experiment.
This document discusses the Dickey-Fuller test for unit roots, which tests whether a time series is stationary or nonstationary. It presents the three regression equations used in the Dickey-Fuller test and the corresponding critical values. It then provides steps for performing the Dickey-Fuller test in EViews, including specifying the test type, level/difference of the series, regression model, lag length, and interpreting the test results by comparing the test statistic to the critical values.
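For comparison with the EViews workflow described above, the augmented Dickey-Fuller test can also be run in a few lines with statsmodels; this is a generic usage sketch on simulated data, not part of the document:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# a pure random walk has a unit root, so the test should fail to reject the null
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(300))

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
# reject the unit-root null only if `stat` is more negative than the values in `crit`
```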
Approximate Bayesian computation (ABC) is a computational technique for Bayesian inference when the likelihood function is intractable or impossible to compute directly. ABC approximates the likelihood by simulating data under different parameter values and comparing simulated and observed data using summary statistics. ABC produces a parameter sample without evaluating the full likelihood function, thus allowing Bayesian inference when likelihoods are unavailable or difficult to compute.
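A minimal rejection-ABC sketch for a toy model in which only the mean of a normal distribution is unknown; the function names, prior, and tolerance are illustrative assumptions, not details from the document:

```python
import numpy as np

def abc_rejection(observed, prior_sampler, simulator, summary, tol, n_sims=20000, seed=0):
    """Keep parameter draws whose simulated summary statistic is close to the observed one."""
    rng = np.random.default_rng(seed)
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_sims):
        theta = prior_sampler(rng)
        if abs(summary(simulator(theta, rng)) - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

# toy example: infer the mean of a normal distribution with known sd = 1
data = np.random.default_rng(1).normal(2.0, 1.0, size=100)
posterior_draws = abc_rejection(
    observed=data,
    prior_sampler=lambda rng: rng.uniform(-5, 5),
    simulator=lambda theta, rng: rng.normal(theta, 1.0, size=100),
    summary=np.mean,
    tol=0.1,
)
```

The accepted draws approximate the posterior; shrinking the tolerance or choosing more informative summary statistics tightens the approximation at the cost of more rejected simulations.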
The dangers of policy experiments: Initial beliefs under adaptive learning (GRAPE)
The paper studies the implications of initial beliefs and the associated confidence on the system's dynamics under adaptive learning. We first illustrate how prior beliefs determine learning dynamics and the evolution of endogenous variables in a small DSGE model with credit-constrained agents, in which rational expectations are replaced by constant-gain adaptive learning. We then examine how discretionary experimenting with new macroeconomic policies is affected by the expectations that agents have in relation to these policies. More specifically, we show that a newly introduced macro-prudential policy that aims at making leverage counter-cyclical can lead to a substantial increase in fluctuations under learning, when the economy is hit by financial shocks, if beliefs reflect imperfect information about the policy experiment. This is in stark contrast to the effects of such a policy under rational expectations.
This document summarizes the Bayesian Additive Regression Trees (BART) model and the Monotone BART (MBART) extension. BART approximates an unknown function using an ensemble of regression trees with a regularization prior. It connects to ideas in Bayesian nonparametrics, dynamic random basis elements, and gradient boosting. The document outlines the BART MCMC algorithm and how it can provide automatic uncertainty quantification and variable selection. It then introduces MBART, which constrains trees to be monotonic, and describes an MCMC algorithm for fitting MBART models. Examples illustrate BART and MBART fits to simulated monotonic and non-monotonic functions.
Value Based Decision Control: Preferences Portfolio Allocation, Winer and Col... (IOSRjournaljce)
The paper presents an innovative approach to the mathematical modeling of complex "human-dynamical process" systems. The approach is based on measurement theory and utility theory and permits the inclusion of human preferences in the objective function. The objective utility function is constructed by a recurrent stochastic procedure that implements machine learning based on the human preferences. The approach is demonstrated by two case studies: portfolio allocation with a Wiener process and portfolio allocation for a financial process with colored noise. The presented formulations could serve as a foundation for developing decision support tools for the design of management/control. This value-oriented modeling leads to preferences-based decision support in a machine learning environment and value-based design of control/management.
Statement of stochastic programming problems (SSA KPI)
AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 1.
More info at http://summerschool.ssa.org.ua
- The document summarizes a lecture on using micro data with characteristics-based choice models. It discusses two key advantages of micro data: 1) It provides information on how observed individual characteristics interact with product characteristics. 2) It includes data on individuals who did not purchase products as well as second choices, giving insight into unobserved product characteristics.
- The model specifies utility as depending on observed and unobserved individual characteristics as well as product characteristics. Micro data on first choices matches individual characteristics to chosen products, while second choice data helps account for unobserved characteristics by holding individual conditions constant.
This document discusses sources of identification from market level data and solutions to precision problems when estimating demand models. It focuses on adding assumptions like a pricing equation to bring more information from existing data. The pricing equation assumes Nash equilibrium in prices and is estimated jointly with the demand equation. This adds degrees of freedom compared to estimating demand alone. The pricing equation provides information on price elasticities and markups that help identify demand parameters. The document also discusses using micro data and adding cost function assumptions to the model.
1. The document summarizes commonly used instrumental variables in structural demand estimation, including characteristics-based instruments proposed by BLP and cost-based instruments.
2. It discusses applications that use these IVs, including BLP's 1995 study of the automobile market and Nevo's 2001 analysis of the ready-to-eat cereal industry.
3. The document raises issues with some commonly used IVs and outlines challenges in structural demand estimation, such as the need to model supply side behavior.
This document provides an introduction to dynamic demand modeling for storable and durable goods. It discusses how consumer stockpiling behavior and sales can bias static demand estimates. A simple demand model is then presented that accounts for demand anticipation effects using two consumer types, storers and non-storers. The model assumes perfect foresight and a fixed storage period, and derives different purchasing patterns across the four states defined by current and past price periods.
This document summarizes a lecture on using moment inequalities to estimate preference parameters from discrete choice models. It discusses how this approach differs from standard empirical models by working directly with the inequalities that define optimal behavior. The approach assumes the utility from the actual choice should be larger than the utility from a considered but discarded counterfactual choice. Parameter values that satisfy these inequalities on average are accepted. The document then provides a single agent example to illustrate this approach.
This document summarizes a lecture on analyzing demand systems for differentiated products. It discusses:
1) Demand systems provide information to analyze firm incentives and responses to policy changes. They are important for welfare analysis and constructing price indices.
2) Demand models can consider representative or heterogeneous agents, and model demand in product or characteristic space. Heterogeneous agent models in characteristic space are preferred as they allow combining different data sources.
3) Demand estimation requires simulating aggregate demand from individual demands, which provides unbiased estimates that can be made precise with large simulations.
The document discusses using machine learning methods to estimate heterogeneous causal effects. It proposes an approach of using regression trees on a transformed outcome variable to estimate individual treatment effects. However, this approach is critiqued as it can introduce noise. An improved approach is presented that uses the sample average treatment effect within each leaf as the estimator, and uses the variance of predictions for model fitting criteria and a matching estimator for out-of-sample evaluation. The approach separates the tasks of model selection and treatment effect estimation to enable valid statistical inference on estimated effects in subgroups.
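The transformed-outcome idea described above, which the document critiques as noisy, can be sketched for a randomized treatment with known propensity p as follows; the simulated data and scikit-learn usage are illustrative, not the document's own procedure:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.uniform(-1, 1, size=(n, 2))
W = rng.binomial(1, 0.5, size=n)              # randomized treatment, propensity p = 0.5
tau = 1.0 + 2.0 * (X[:, 0] > 0)               # true heterogeneous treatment effect
Y = X[:, 1] + tau * W + rng.standard_normal(n)

p = 0.5
Y_star = Y * (W - p) / (p * (1 - p))          # transformed outcome: E[Y* | X] = tau(X)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=200)
tree.fit(X, Y_star)
tau_hat = tree.predict(X)                     # leaf means serve as effect estimates
```

The improved approach in the document instead builds its splitting criterion from within-leaf sample average treatment effects and separates model selection from estimation, which is what restores valid inference.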
Big Data analysis involves building predictive models from high-dimensional data using techniques like variable selection, cross-validation, and regularization to avoid overfitting. The document discusses an example analyzing web browsing data to predict online spending, highlighting challenges with large numbers of variables. It also covers summarizing high-dimensional data through dimension reduction and model building for prediction versus causal inference.
This document discusses recommendation systems and topic modeling for documents using machine learning techniques. It begins by introducing recommendation systems and different types of recommendation literature, including item similarity, collaborative filtering, and hierarchical models. It then discusses bringing in user choice data and different collaborative filtering approaches like k-nearest neighbor prediction and matrix factorization. The document also covers topic modeling, including latent Dirichlet allocation, and how topic models can be combined with user choice models. It concludes by discussing challenges in causal inference when using machine learning.
The document discusses practical computing issues that arise when working with large datasets. It begins by noting that many statistical analyses can be done on a single laptop. It then discusses storing very large datasets, which may require terabytes of storage. The document outlines some basic computing concepts for working with big data, including software engineering practices, databases, and distributed computing.
The document discusses various applications of dimension reduction techniques to extract low-dimensional representations from high-dimensional data for purposes of prediction, descriptive analysis, and input into subsequent causal analysis. It provides examples of such applications using Google search data, genetic data, medical claims data, credit scores, online purchases, and congressional roll call votes. It also discusses issues around text as data, including bag-of-words representations and the use of automated and manual steps in text analysis.
Econometrics of High-Dimensional Sparse Models (NBER)
The document discusses high-dimensional sparse econometric models where the number of predictors (p) is much larger than the sample size (n). It outlines an approach for estimating regression functions using penalization methods like the LASSO. Specifically, it discusses:
1. Using the LASSO estimator to minimize squared errors while penalizing the l1-norm of coefficients, inducing sparsity (a small simulation sketch follows this list).
2. Choosing the optimal penalty level as a function of the error variance and sample size. Variants like the square-root LASSO provide a tuning-free approach.
3. Examples showing how sparse approximations can better capture patterns in population data than traditional low-dimensional approximations.
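A small scikit-learn sketch of point 1, using a simulated sparse design with p > n; the penalty level alpha below is an arbitrary placeholder rather than the data-driven choice discussed in point 2:

```python
import numpy as np
from sklearn.linear_model import Lasso

# only the first 5 of 200 coefficients are nonzero, and n = 100 < p = 200
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -0.5]
y = X @ beta + rng.standard_normal(n)

lasso = Lasso(alpha=0.1, max_iter=10000)   # alpha plays the role of the penalty level
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_)     # variables the l1 penalty keeps in the model
```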
High-Dimensional Methods: Examples for Inference on Structural Effects (NBER)
This document describes a study that uses high-dimensional methods to estimate the effect of 401(k) eligibility on measures of accumulated assets. It begins by outlining the baseline model and notes areas for improvement, such as controlling for income. It then discusses using regularization like LASSO for variable selection in high-dimensional settings. The document explores more flexible specifications by generating many interaction and polynomial terms but notes the need for dimension reduction. It describes using LASSO to select important variables from a large set. The results select a parsimonious set of variables and estimate similar 401(k) effects as the baseline.
The document discusses how economic shocks propagate through networks of production and inputs. It begins by presenting a simple model of an economy consisting of sectors that use each other's outputs as inputs. Shocks to individual sectors can spread to other sectors through this production network. While diversification across many sectors could cause microeconomic shocks to "wash out", the structure of the network influences how shocks aggregate. Asymmetric networks with some sectors having outsized importance can lead to greater aggregate volatility than more regular networks where all sectors are equally important. Empirical analysis of input-output data supports the theory by finding significant downstream effects of sectoral shocks.
This document summarizes key points from a lecture on diffusion, identification, and network formation. It discusses how diffusion of products can be modeled, including information passing between neighbors. Estimation techniques are described to model information diffusion on actual networks by simulating propagation over time. The challenges of identification when networks are endogenous are also covered. Forming models of network formation that account for link dependencies is an important area of current research.
Daron Acemoglu presents a document on networks, games over networks, and peer effects. The document discusses how networks can be used to model externalities and peer effects. It presents a model of a game over networks where players' payoffs are determined by their own actions, the actions of their network neighbors, and potential strategic interactions. The best responses in this game are characterized. Under certain conditions, such as the game being a potential game, the game will have a unique Nash equilibrium where each player's action is determined by their position in the network. The document discusses applications of this type of network game model.
This document provides an overview of social and economic networks. It discusses why networks are important to study, as interactions are shaped by relationships. Some examples of networks are presented, such as marriage networks, friendship networks in high schools, military alliances, and interbank payment networks. The document then discusses how to represent networks mathematically and introduces concepts like degree, paths, average path length, and degree distributions. It also covers homophily, or the tendency for similar people to connect, and shows examples of homophily along attributes. Finally, it introduces the idea of centrality and influence within a network, discussing measures like degree centrality and eigenvector centrality.
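The degree and eigenvector centrality measures mentioned at the end can be computed directly with networkx on a standard example graph; this snippet is an illustration added here, not material from the document:

```python
import networkx as nx

G = nx.karate_club_graph()            # classic 34-node friendship network
deg = nx.degree_centrality(G)         # share of possible ties each node has
eig = nx.eigenvector_centrality(G)    # extra weight for ties to well-connected nodes

top_by_degree = max(deg, key=deg.get)
top_by_eigenvector = max(eig, key=eig.get)
```

The two rankings often agree at the top but can diverge for nodes whose neighbors are themselves peripheral, which is exactly the distinction eigenvector centrality is meant to capture.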
This document outlines exercises using Dynare to analyze a simple New Keynesian model. The exercises explore the rationale for the Taylor principle, potential conflicts between monetary policy channels, the sensitivity of inflation and output to shock persistence, cases when the Taylor rule does not adjust interest rates enough, and instances when news shocks cause the Taylor rule to move rates in unintended directions.
The document discusses a model of financial frictions that arise from asymmetric information between borrowers and lenders. It first presents a simple model where entrepreneurs with private information about project returns borrow from banks, and the optimal contract balances risk for the bank. The document then explores integrating this costly state verification model into a dynamic stochastic general equilibrium framework to analyze how financial shocks may influence business cycles.
The document presents a new analytical framework for modeling the classical theory of market price gravitation towards normal prices. It proposes that market prices react to the difference between effectual demand and quantity supplied, rather than the rate of change of that difference as in previous "cross-dual" models. This approach avoids instability issues found in cross-dual models. The paper explores this approach through several models that describe how market prices can converge asymptotically to normal prices over time through adjustments in supply.
The document presents results from a model examining how the financial sector's ability to perform liquidity transformation affects aggregate responses to fiscal and monetary policies.
Result 1 shows that aggregate responses depend on the financial sector through a liquid asset supply function, characterized by own-price and cross-price elasticities. Result 2 shows policies affect output through goods, liquid asset, and modified Keynesian cross channels. Result 3 finds that cross-price elasticities are quantitatively important, with output responses being three times larger with inelastic versus elastic supply.
This document discusses three approaches to consumer behavior analysis: cardinal, ordinal, and revealed preference. It provides an illustrative example using the cardinal approach to demonstrate how a consumer allocates income between two goods to maximize total utility, given prices and budget constraints. It also shows how demand would change if price or income changed. Next, it covers the ordinal and indifference curve analysis, defining key concepts. Finally, it provides a numerical example to illustrate demand curves, welfare measurement, and relationships between compensating variation and price indices.
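As a generic worked example of the cardinal approach (my own numbers, not the document's), a Cobb-Douglas consumer equates marginal utility per dollar across the two goods:

```latex
\max_{x,y}\; U(x,y)=x^{1/2}y^{1/2}
\quad\text{s.t.}\quad p_x x + p_y y = m,\qquad p_x=2,\; p_y=1,\; m=12;
\qquad
\frac{MU_x}{p_x}=\frac{MU_y}{p_y}
\;\Rightarrow\; y = 2x
\;\Rightarrow\; x^{*}=3,\; y^{*}=6 .
```

Raising p_x lowers x* along the resulting demand curve, which is the kind of comparative-statics exercise the document's numerical example works through.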
The document presents a New Keynesian model of a small open economy. The model includes Calvo staggered prices, perfect international financial markets, long-run PPP, and identical preferences across countries. It derives a two-equation system for inflation and the output gap and characterizes monetary policy. It then calibrates the model under technology and cost-push shocks for closed, slightly open, and moderately open economies.
Demand Dynamics Under Consumer Regret: An Empirical Analysis (Meisam Hejazi Nia)
Many studies have shown that consumers are strategic and forward looking, and conclude that for finite-season products firms should adopt an everyday-low-price strategy to optimize revenue. Other studies are proponents of a limited-availability policy, emphasizing the effect of consumer regret. To the best of our knowledge, the effect of consumer regret on consumers' decisions has not yet been tested empirically. In this study we structurally model and analyze the effect of consumer regret on consumer choice behavior and on firms' revenue. Our model considers two types of consumer regret: high-price regret and stock-out regret. At the beginning of the first period, consumers decide in which of the two periods of the selling season to purchase. A consumer experiences high-price regret when she purchases the product in the first period but the product is still available in the second period at a lower price. Conversely, if she decides to purchase in the second period and the product is then unavailable, she experiences stock-out regret. Consumers are uncertain about the product's availability in the second period, so they form an expectation about it. Our structural model incorporates five aspects of consumers' decisions: changing prices, limited availability, consumption utility, ownership utility, and regret. We allow consumers to be heterogeneous in their levels of price sensitivity and regret. Our data set includes aggregate sales and inventory data. Counterfactual analysis will show how firms can influence consumer regret through signaling in order to optimize revenue.
The document summarizes Miguel Robles' presentation on tools to measure the impact of changes in international food prices on household welfare. It discusses an analytical framework for estimating compensating variation to measure welfare impacts. Empirical estimates are provided for Bangladesh, Pakistan, and Vietnam using household survey data and defining commodity groups. Scenarios analyze observed food price changes between 2006-2008 and a hypothetical 10% price increase. Results show mostly negative welfare impacts, with urban areas and poorer households hurt more. Losses represent a large share of consumption for poorer households.
The document presents a dynamic model of wholesale electricity markets using a state-based game theoretic framework. The model represents (1) the generation companies as players who dynamically adjust production based on real-time pricing signals, (2) the consumption companies as players who adjust demand similarly, and (3) the system operator as coordinating the market to a generalized Nash equilibrium based on the power flows. The dynamics are modeled as a gradient-based adjustment process that can be analyzed for stability properties to ensure proper market operation. Simulation results are discussed but not shown to validate the dynamic modeling approach.
Problem Set 2, ECO105 Industrial Organization and Firm Strategy (wkyra78)
Problem Set 2
ECO105 Industrial Organization and Firm Strategy
Professor Michael Noel
University of California San Diego
------
1. A single-product monopolist sells output to two geographically separated markets. Arbitrage is not possible. The inverse demand curve in market 1 is given by p1 = 8 - q1 and in market 2 it is p2 = 4 - q2. (Note: if you get stuck in one part, move on to the next.)
a. The monopolist currently produces the output at a single plant (called plant "A") with total cost function C(qA) = 0.5qA^2. Hence qA = q1 + q2 is the total output of the firm (which we can also call Q). Find the quantities sold in each market, q1 and q2, and total output Q. Find the profits of the firm. (Hint: this is the basic third degree price discrimination problem. Make sure to substitute qA out of the problem using the constraint, so that the problem is all in terms of the two "free" variables, q1 and q2.) A symbolic check of part a's first-order conditions is sketched after part c.
b. (From an old exam.) The monopolist is considering adding a second plant, plant "B". The total cost function for plant "B" is C(qB) = qB + 0.25qB^2. Hence, if the monopolist purchases the new plant, her total production is qA + qB = Q, where qA is produced in plant A and qB is produced in plant B. She then sells Q = q1 + q2 in output, where q1 is sold in market 1 and q2 is sold in market 2. Find the output sold in each market, q1 and q2, the amount of good produced at each plant, qA and qB, and the total output, Q. Find the profits of the firm in this case. How much is the firm willing to pay to purchase plant B? (Hint: Do NOT assume that a single plant is matched solely with a single market! You only know that the total produced equals the total sold, so q1 + q2 = qA + qB (= Q). There are now four quantities to find, q1, q2, qA, qB, but only three of them are "free variables", i.e. can be independently chosen. So, for example, if you know q1, q2 and qA, that necessarily determines what qB must be. Set up your profit function initially in terms of all four quantities and then use the constraint q1 + q2 = qA + qB to substitute out one of the four. Which three quantities you keep in the equation is up to you. Then solve for three first order conditions.)
c. (Challenging.) Assume a new law prohibits price discrimination and requires the monopolist to charge the same price in both markets. Repeat part a. under this assumption, i.e. only plant A. Has welfare increased, decreased or remained unchanged relative to part a.? Warning: check to see if both markets are being served in equilibrium! (You will need to "add up" the two demand curves - add the q's, not the p's! A diagram will help. Watch out for the kinks; it may be that only one group will be served and it may be that both will be served. At the end it may help to plot your numerical answer on your graph and make sure it makes sense. For example, if you find both groups are served, you should be on the segment of the ...)
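Here is the symbolic check of part a promised above, offered as a sketch of one way to verify the first-order conditions rather than as the assigned solution method; the sympy variable names are mine:

```python
import sympy as sp

q1, q2 = sp.symbols("q1 q2", nonnegative=True)
Q = q1 + q2                                            # all output comes from plant A
profit = (8 - q1) * q1 + (4 - q2) * q2 - sp.Rational(1, 2) * Q**2

foc = [sp.diff(profit, q1), sp.diff(profit, q2)]       # MR in each market equals MC
sol = sp.solve(foc, [q1, q2], dict=True)[0]
total_output = sol[q1] + sol[q2]
max_profit = profit.subs(sol)
# sol -> {q1: 5/2, q2: 1/2}, so Q = 3 and profit = 11 under these demands and costs
```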
1. The document discusses incentive separability in mechanism design problems and its implications for classic results in optimal taxation theory.
2. It introduces a framework to study incentive separability, which is when perturbing a set of decisions along agents' indifference curves preserves incentive constraints.
3. The main result is that the optimal mechanism allows unrestricted choice over incentive-separable decisions given prices and budgets, generalizing theorems by Atkinson-Stiglitz and Diamond-Mirrlees.
Should Commodity Investors Follow Commodities' Prices? (guasoni)
Most institutional investors gain access to commodities through diversified index funds, even though mean-reverting prices and low correlation among commodities returns indicate that two-fund separation does not hold for commodities. In contrast to demand for stocks and bonds, we find that, on average, demand for commodities is largely insensitive to risk aversion, with intertemporal hedging demand playing a major role for more risk averse investors. Comparing the optimal strategies of investors who observe only the index to those of investors who observe all commodities, we find that information on commodity prices leads to significant welfare gains, even if trading is confined to the index only.
Tools to Measure Impacts over Households of Changes in International Prices
Presented by Miguel Robles at the AGRODEP Workshop on Analytical Tools for Food Prices
and Price Volatility
June 6-7, 2011 • Dakar, Senegal
For more information on the workshop or to see the latest version of this presentation visit: http://www.agrodep.org/first-annual-workshop
FISCAL STIMULUS IN ECONOMIC UNIONS: WHAT ROLE FOR STATES (NBER)
1) State deficits can boost job growth in the deficit state but also in neighboring states, showing significant spillover effects. Coordinated fiscal policies across states are more cost-effective than individual state policies.
2) Federal aid to states, when coordinated, can effectively stimulate the overall economy. Targeted aid linked to services for lower income households is more effective than untargeted aid.
3) The economic stimulus of the American Recovery and Reinvestment Act could have been 30% more effective if it relied more on targeted aid and less on untargeted aid. Coordinated fiscal policies that account for spillovers across economic regions are optimal for stimulus programs.
Business in the United States: Who Owns It and How Much Tax They Pay (NBER)
This document analyzes business ownership and tax payments in the United States using administrative tax data from 2011. It finds:
1. Pass-through business income, such as from partnerships and S-corporations, is highly concentrated.
2. The average federal income tax rate on pass-through business income is 19%.
3. 30% of income earned by partnerships cannot be uniquely traced to an identifiable, ultimate owner.
Redistribution through Minimum Wage Regulation: An Analysis of Program Linkag... (NBER)
This document analyzes the program linkages and budgetary spillovers of minimum wage regulation using data from recent federal minimum wage increases. It finds that wages increased for some low-skilled workers but employment declined significantly. While safety net programs provided some income replacement, earnings and tax revenues decreased substantially. Overall, the analysis suggests minimum wage increases reallocated income from employers and taxpayers to low-wage workers, with program and tax revenue spillovers of approximately $1-2 billion annually.
The Distributional Effects of U.S. Clean Energy Tax Credits (NBER)
This document summarizes a study examining the distributional effects of US clean energy tax credits from 2006-2012. It finds that higher-income households claimed a disproportionate share of the $18 billion in credits. Specifically, the study analyzes tax return data to see who claimed credits for investments like home weatherization, solar panels, hybrid vehicles, and electric vehicles. It aims to provide insights into how the inequitable distribution may inform future program design and the debate around subsidies versus carbon taxes.
An Experimental Evaluation of Strategies to Increase Property Tax Compliance:...NBER
This document summarizes a study that tested different strategies for increasing property tax compliance in Philadelphia. The researchers worked with the city's Department of Revenue to randomly assign taxpayers with overdue property taxes to receive one of four letters: a standard letter, or a standard letter plus an additional sentence appealing to civic duty, public services benefits, or potential home loss. They found the civic duty appeal significantly increased tax payments, especially for those with lower debts. Appealing to public services benefits also showed some effect on higher debt taxpayers. The researchers conclude strategically targeting messages could further improve compliance.
This document summarizes a discussion between Susan Athey and Guido Imbens on the relationship between machine learning and causal inference. It notes that while machine learning excels at prediction problems using large datasets, it has weaknesses when it comes to causal questions. Econometrics and statistics literature focuses more on formal theories of causality. The document proposes combining the strengths of both fields by developing machine learning methods that can estimate causal effects, accounting for issues like endogeneity and treatment effect heterogeneity. It outlines some open problems and directions for future research at the intersection of these fields.
The NBER Working Paper Series at 20,000 - Joshua GansNBER
This document discusses publication lags in economics research, with working papers appearing years before peer-reviewed published work. It questions whether publication means anything given the large number of working papers now available. It also considers options for the National Bureau of Economic Research's web repository, such as providing open access to working papers along with links to related materials, peer reviews, and published versions of the papers.
The NBER Working Paper Series at 20,000 - Claudia GoldinNBER
This document analyzes trends in the NBER Working Paper series from 1978 to 2013. It finds that the number of working papers published annually has increased dramatically over time, from around 100 in the late 1970s to over 1,200 by 2013. The number of NBER research programs has also expanded significantly, from 7 originally to over 20 currently. Individual working papers now tend to involve more programs and more authors than in the past as well. The working paper series has become less specialized and more collaborative over four decades of growth and evolution.
The NBER Working Paper Series at 20,000 - James PoterbaNBER
This document summarizes the origin and evolution of the NBER Working Paper series from its beginning in 1972 to the present. It started as an outlet for NBER research and has grown tremendously over time. Some key points:
- The first working paper was published in June 1973 and there were only 3 papers in the first month.
- Growth accelerated after Martin Feldstein became NBER President in 1977, with over 200 papers published in 1981.
- There are now over 20,000 working papers published and about 5.5 million downloads per year from around the world.
- The most popular papers focus on topics like financial crises, economic growth, and corporate governance.
The NBER Working Paper Series at 20,000 - Scott SternNBER
The NBER Working Paper series recently reached 20,000 papers published and is recognized as one of the leading economics working paper series in the world. According to 2014 Google Scholar Metrics, the NBER Working Paper series ranked 18th out of thousands of journals by its H-5 index, which measures the productivity and impact of published work. The high ranking of the NBER Working Paper series demonstrates its important role in disseminating new economic research and ideas worldwide.
The NBER Working Paper Series at 20,000 - Glenn EllisonNBER
This document summarizes trends in the publication process and the role of working papers. It finds that publication times at economics journals have increased significantly over the past 30 years. Acceptance rates at top journals have also declined. These changes mean that published papers cannot address current issues or reflect the latest state of knowledge as quickly. The document also finds that working papers, such as those from the NBER, play an increasingly important role, as economists can disseminate their work more quickly through working paper series than through the traditional publication process. NBER working papers account for a large share of papers eventually published in top journals and those NBER papers go on to be well-cited.
1. Introduction Hausman (96) Consumer Welfare using the DC Model
Measurement of Consumer Welfare
NBER Methods Lectures
Aviv Nevo
Northwestern University and NBER
July 2012
2. Introduction Hausman (96) Consumer Welfare using the DC Model
Introduction
A common use of empirical demand models is to compute consumer welfare
We will focus on welfare gains from the introduction of new goods
The methods can be used more broadly:
other events: e.g., mergers, regulation
CPI
In this lecture we will cover
Hausman (96): valuation of new goods using demand in product space
consumer welfare in DC models
3. Introduction Hausman (96) Consumer Welfare using the DC Model
Hausman, “Valuation of New Goods Under Perfect and Imperfect Competition” (NBER Volume, 1996)
Suggests a method to compute the value of new goods under perfect and imperfect competition
Looks at the value of a new brand of cereal – Apple Cinnamon Cheerios
Basic idea:
Estimate demand
Compute the “virtual price” – the price that sets demand for the new good to zero
Use the virtual price to compute a welfare measure (essentially integrate under the demand curve)
Under imperfect competition, need to compute the effect of the new good on the prices of other products. This is done by simulating the new equilibrium
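To make the mechanics above concrete, here is a minimal numerical sketch of the virtual-price idea using a hypothetical linear demand curve with made-up numbers. Hausman works instead with the estimated AIDS system and an expenditure-function-based welfare measure, so this only illustrates the "integrate under the demand curve" step.

```python
# Minimal sketch of the "virtual price" idea, NOT Hausman's estimated cereal
# demand system.  Assume a hypothetical linear demand q(p) = a - b*p for the
# new good, holding the prices of other goods fixed.
a, b = 10.0, 2.0          # hypothetical demand intercept and slope
p_obs = 3.0               # observed post-introduction price

p_virtual = a / b         # price at which demand for the new good is zero
q_obs = a - b * p_obs     # observed quantity at p_obs

# Welfare gain ~ area under the demand curve between p_obs and the virtual price
cs_gain = 0.5 * (p_virtual - p_obs) * q_obs
print(f"virtual price = {p_virtual:.2f}, consumer surplus gain = {cs_gain:.2f}")
```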
4. Introduction Hausman (96) Consumer Welfare using the DC Model
Data
Monthly (weekly) scanner data for RTE cereal in 7 cities over 137 weeks
Note the frequency of the data; also, there are no advertising data.
5. Introduction Hausman (96) Consumer Welfare using the DC Model
Multi-level Demand Model
Lowest level (demand for brand within segment): AIDS

s_jt = α_j + β_j ln(y_gt / π_gt) + ∑_{k=1}^{J_g} γ_jk ln(p_kt) + ε_jt

where
s_jt    dollar sales share of product j out of total segment expenditure
y_gt    overall per capita segment expenditure
π_gt    segment-level price index
p_kt    price of product k in market t

π_gt (the segment price index) is either the Stone logarithmic price index

π_gt = ∑_{k=1}^{J_g} s_kt ln(p_kt)

or

π_gt = α_0 + ∑_{k=1}^{J_g} α_k ln(p_k) + (1/2) ∑_{j=1}^{J_g} ∑_{k=1}^{J_g} γ_kj ln(p_k) ln(p_j).
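A small sketch of the bottom-level objects above, with hypothetical shares, prices, and coefficients (none of these are Hausman's data or estimates): it computes the Stone log price index for one segment and the fitted within-segment share from the AIDS equation.

```python
import numpy as np

# Hypothetical within-segment data for one market (not Hausman's scanner data)
s = np.array([0.40, 0.35, 0.25])   # dollar sales shares of the J_g brands
p = np.array([3.1, 2.8, 3.5])      # brand prices

# Stone logarithmic price index for the segment (in logs)
log_pi_stone = np.sum(s * np.log(p))

# Fitted within-segment share of brand j from the AIDS equation, given
# hypothetical coefficients alpha_j, beta_j, and gamma_j1..gamma_jJg
alpha_j, beta_j = 0.2, 0.05
gamma_j = np.array([-0.3, 0.2, 0.1])
y_g = 12.0                         # per capita segment expenditure
s_j_fit = alpha_j + beta_j * (np.log(y_g) - log_pi_stone) + gamma_j @ np.log(p)
print(log_pi_stone, s_j_fit)
```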
6. Introduction Hausman (96) Consumer Welfare using the DC Model
Multi-level Demand Model
Middle level (demand for segments)

ln(q_gt) = α_g + β_g ln(Y_Rt) + ∑_{k=1}^{G} δ_k ln(π_kt) + ε_gt

where
q_gt    quantity sold of products in segment g in market t
Y_Rt    total category (e.g., cereal) expenditure
π_kt    segment price indices
7. Introduction Hausman (96) Consumer Welfare using the DC Model
Multi-level Demand Model
Top level (demand for cereal)

ln(Q_t) = β_0 + β_1 ln(I_t) + β_2 ln(π_t) + Z_t δ + ε_t

where
Q_t    overall consumption of the category in market t
I_t    real income
π_t    price index for the category
Z_t    demand shifters
8. Introduction Hausman (96) Consumer Welfare using the DC Model
Estimation
Estimation is done from the bottom level up.
IVs for the bottom and middle levels: prices in other cities.
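The instrumenting idea (prices of the same brand in other cities, which share cost shocks but are assumed independent of city-specific demand shocks) can be sketched as plain two-stage least squares on simulated placeholder data, not the actual scanner data.

```python
import numpy as np

# Sketch of 2SLS using the price of the same brand in another city as the
# instrument for the own-city price (simulated placeholder data).
rng = np.random.default_rng(0)
n = 500
cost = rng.normal(size=n)                                        # common cost shock
xi = rng.normal(size=n)                                          # city-specific demand shock
log_p = 1.0 + 0.8 * cost + 0.5 * xi + 0.1 * rng.normal(size=n)   # endogenous own-city price
log_p_other = 1.0 + 0.8 * cost + 0.1 * rng.normal(size=n)        # instrument: other-city price
log_q = 2.0 - 1.5 * log_p + xi + 0.1 * rng.normal(size=n)        # true elasticity is -1.5

X = np.column_stack([np.ones(n), log_p])
Z = np.column_stack([np.ones(n), log_p_other])

X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]                 # first stage fitted values
beta_2sls = np.linalg.lstsq(X_hat, log_q, rcond=None)[0]         # second stage
beta_ols = np.linalg.lstsq(X, log_q, rcond=None)[0]
print("OLS elasticity:", beta_ols[1], " 2SLS elasticity:", beta_2sls[1])
```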
9. Introduction Hausman (96) Consumer Welfare using the DC Model
Table 5.6: overall elasticities for family segment
10. Introduction Hausman (96) Consumer Welfare using the DC Model
Welfare
Value of AC Cheerios:
Under perfect competition, approx. $78.1 million per year for the US
Under imperfect competition, need to simulate the world without AC Cheerios
assumes Nash-Bertrand pricing
ignores effects on competition
finds approx. $66.8 million per year
Extrapolates to an overall CPI bias of 20%-25%.
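To illustrate what "simulating the new equilibrium" involves, here is a stylized Nash-Bertrand sketch for single-product firms facing linear demand with constant marginal costs. All parameters are hypothetical; the actual exercise re-solves the estimated multi-level cereal demand system with and without the new brand and compares welfare at the two sets of prices.

```python
import numpy as np

# Stylized Nash-Bertrand fixed point: single-product firms, linear demand
# q = a + B @ p, constant marginal costs c (all numbers hypothetical).
a = np.array([8.0, 7.0, 6.0])
B = np.array([[-2.0,  0.5,  0.4],
              [ 0.5, -1.8,  0.3],
              [ 0.4,  0.3, -1.6]])
c = np.array([1.0, 1.2, 0.9])

p = c.copy()
for _ in range(1000):
    p_new = p.copy()
    for j in range(len(a)):
        # demand firm j faces as a function of its own price, given rivals' prices
        intercept_j = a[j] + B[j] @ p - B[j, j] * p[j]
        # best response from the FOC of (p_j - c_j) * (intercept_j + B_jj * p_j)
        p_new[j] = 0.5 * (c[j] - intercept_j / B[j, j])
    if np.max(np.abs(p_new - p)) < 1e-10:
        break
    p = p_new
print("equilibrium prices:", p_new, "quantities:", a + B @ p_new)

# Dropping one product and re-solving gives the counterfactual prices needed
# to value the new good under imperfect competition.
```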
11. Introduction Hausman (96) Consumer Welfare using the DC Model
Comments
Most economists find these numbers too high
are they really?
Questions about the analysis:
IVs (advertising)
computation of the Nash equilibrium (has a small effect)
12. Introduction Hausman (96) Consumer Welfare using the DC Model
Consumer Welfare Using the Discrete Choice Model
Assume the indirect utility is given by

u_ijt = x_jt β_i + α_i p_jt + ξ_jt + ε_ijt,    ε_ijt i.i.d. extreme value

The inclusive value (or social surplus) from a subset A ⊆ {1, 2, ..., J} of alternatives is

ω_iAt = ln ( ∑_{j∈A} exp(x_jt β_i + α_i p_jt + ξ_jt) )

This is the expected utility from A prior to observing (ε_i0t, ..., ε_iJt), knowing that the choice will maximize utility once the shocks are observed.
Note:
if there is no heterogeneity (β_i = β, α_i = α), the inclusive value captures average utility in the population
with heterogeneity, we need to integrate over it
if utility is linear in price, convert to dollars by dividing by α_i
with income effects, conversion to dollars is done by simulation
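A minimal simulation sketch of this calculation, with hypothetical characteristics, prices, and parameters: the dollar-valued gain from a new product is the change in the inclusive value divided by the price coefficient, averaged over simulated consumers. In the sketch the price coefficient is written with an explicit minus sign (α_i > 0), a common convention; nothing else about the specification comes from the lecture.

```python
import numpy as np

# Sketch of the inclusive-value welfare calculation for a random-coefficients
# logit; all characteristics, prices, and parameters are hypothetical.
rng = np.random.default_rng(1)
ns = 5000                                    # simulated consumers
x = np.array([1.0, 1.5, 2.0, 1.2])           # product characteristic (last entry = new good)
p = np.array([2.0, 2.5, 3.0, 2.2])           # prices
xi = np.array([0.1, -0.2, 0.3, 0.0])         # unobserved product quality

beta_i = 1.0 + 0.5 * rng.normal(size=ns)     # random taste for x
alpha_i = np.full(ns, 1.5)                   # price coefficient (enters with a minus here)

def inclusive_value(keep):
    # log-sum over the products in `keep`, plus the outside good normalized to 0
    v = np.outer(beta_i, x[keep]) - np.outer(alpha_i, p[keep]) + xi[keep]
    return np.log(1.0 + np.exp(v).sum(axis=1))

# Dollar-valued gain from the new good = change in the log-sum divided by alpha_i
gain = (inclusive_value(slice(None)) - inclusive_value(slice(0, 3))) / alpha_i
print("mean welfare gain per consumer ($):", gain.mean())
```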
13. Introduction Hausman (96) Consumer Welfare using the DC Model
Applications
Trajtenberg (JPE, 1989) estimates a (nested) Logit model and uses it to measure the benefits from the introduction of CT scanners
does not control for endogeneity (pre-BLP), so gets a positive price coefficient
needs to do a "hedonic" correction in order to do welfare analysis
Petrin (JPE, 2003) uses the BLP data to repeat the Trajtenberg exercise for the introduction of minivans
adds micro moments to the BLP estimates
predictions of the model with micro moments are more plausible
attributes this to "micro data appear to free the model from a heavy dependence on the idiosyncratic logit 'taste' error"
14. Introduction Hausman (96) Consumer Welfare using the DC Model
Table 5: RC estimates
15. Introduction Hausman (96) Consumer Welfare using the DC Model
Table 8: welfare estimates
16. Introduction Hausman (96) Consumer Welfare using the DC Model
Discussion
The micro moments clearly improve the estimates and help pin down the non-linear parameters
What is driving the change in welfare?
One option:
welfare is an order statistic
by adding another option we increase the number of draws
hence we (mechanically) increase welfare
as we increase the variance of the RCs we put less and less weight on this effect
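A small Monte Carlo illustrating the order-statistic point (all numbers illustrative): cloning an existing option adds one more extreme value draw and mechanically raises the log-sum, but as the random-coefficient variance grows the systematic part of utility dominates and this mechanical gain becomes a smaller share of total welfare.

```python
import numpy as np

# Mechanical "extra draw" effect under logit, and how random-coefficient
# variance changes its relative weight.  All numbers are illustrative.
rng = np.random.default_rng(2)
ns = 100_000
for sigma in (0.0, 1.0, 3.0):
    v = sigma * rng.normal(size=(ns, 2))                   # utilities of two distinct goods
    base = np.log(np.exp(v).sum(axis=1))                   # inclusive value with 2 goods
    cloned = np.log(np.exp(np.column_stack([v, v[:, 1]])).sum(axis=1))  # add a clone of good 2
    gain = cloned - base
    print(f"sigma={sigma}: mean gain from the clone = {gain.mean():.3f}, "
          f"share of baseline welfare = {gain.mean() / base.mean():.3f}")
```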
17. Introduction Hausman (96) Consumer Welfare using the DC Model
A different take
The analysis has 2 steps:
1. Simulate the world without/with minivans (depending on the starting point)
2. Summarize the simulated/observed prices and quantities into a welfare measure
Both steps require a model
If we observe pre- and post-introduction data we might avoid step 1
but that does not isolate the effect of the introduction
The Logit model fails (miserably) in the first step, but can deal with the second
just to be clear: heterogeneity is important
NOT advocating for the Logit model
just trying to be clear about where it fails
18. Introduction Hausman (96) Consumer Welfare using the DC Model
Red-bus/Blue-bus problem, Debreu (1960)
Originally used to show the IIA problem of Logit
Worst-case scenario for Logit
Consumers choose between driving a car to work or taking the (red) bus
working at home is not an option
the decision of whether to work does not depend on transportation
Half the consumers choose the car and half choose the red bus
Artificially introduce a new option: a blue bus
consumers are color blind
no price or service changes
In reality, half the consumers choose the car and the rest split between the two colors of buses
Consumer welfare has not changed
19. Introduction Hausman (96) Consumer Welfare using the DC Model
Example (cont)
Suppose we want to use the Logit model to analyze the consumer welfare generated by the introduction of the blue bus

u_ijt = ξ_jt + ε_ijt

option   | observed share (t=0) | ξ_j0 | predicted share (t=1) | ξ_j1 | observed share (t=1) | ξ_j1
car      | 0.5                  |      |                       |      |                      |
red bus  | 0.5                  |      |                       |      |                      |
blue bus | –                    |      |                       |      |                      |
welfare  |                      |      |                       |      |                      |
20. Introduction Hausman (96) Consumer Welfare using the DC Model
Example (cont)

u_ijt = ξ_jt + ε_ijt

option   | observed share (t=0) | ξ_j0 | predicted share (t=1) | ξ_j1 | observed share (t=1) | ξ_j1
car      | 0.5                  | 0    |                       |      |                      |
red bus  | 0.5                  | 0    |                       |      |                      |
blue bus | –                    | –    |                       |      |                      |
welfare  | ln(2)                |      |                       |      |                      |

normalizing ξ_car,0 = 0, therefore ξ_bus,0 = 0
21. Introduction Hausman (96) Consumer Welfare using the DC Model
Example (cont)

u_ijt = ξ_jt + ε_ijt

option   | observed share (t=0) | ξ_j0 | predicted share (t=1) | ξ_j1 | observed share (t=1) | ξ_j1
car      | 0.5                  | 0    | 0.33                  | 0    |                      |
red bus  | 0.5                  | 0    | 0.33                  | 0    |                      |
blue bus | –                    | –    | 0.33                  | 0    |                      |
welfare  | ln(2)                |      | ln(3)                 |      |                      |

If nothing changed, one might be tempted to hold ξ_jt fixed.
This is the usual result: with predicted shares, Logit shows welfare gains
22. Introduction Hausman (96) Consumer Welfare using the DC Model
Example (cont)

u_ijt = ξ_jt + ε_ijt

option   | observed share (t=0) | ξ_j0 | predicted share (t=1) | ξ_j1 | observed share (t=1) | ξ_j1
car      | 0.5                  | 0    | 0.33                  | 0    | 0.5                  |
red bus  | 0.5                  | 0    | 0.33                  | 0    | 0.25                 |
blue bus | –                    | –    | 0.33                  | 0    | 0.25                 |
welfare  | ln(2)                |      | ln(3)                 |      |                      |

Suppose we observed the actual shares
23. Introduction Hausman (96) Consumer Welfare using the DC Model
Example (cont)

u_ijt = ξ_jt + ε_ijt

option   | observed share (t=0) | ξ_j0 | predicted share (t=1) | ξ_j1 | observed share (t=1) | ξ_j1
car      | 0.5                  | 0    | 0.33                  | 0    | 0.5                  | 0
red bus  | 0.5                  | 0    | 0.33                  | 0    | 0.25                 | ln(0.5)
blue bus | –                    | –    | 0.33                  | 0    | 0.25                 | ln(0.5)
welfare  | ln(2)                |      | ln(3)                 |      | ln(2)                |

To rationalize the observed shares we need to let ξ_jt vary
What exactly did we mean when we introduced the blue bus?
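The numbers in the table can be reproduced in a few lines. This sketch uses the normalization ξ_car = 0 in each period and measures welfare as the log-sum over the three options.

```python
import numpy as np

# Red-bus/blue-bus welfare under logit, reproducing the table's numbers.
def welfare(xi):
    # log-sum (inclusive value) over the available options
    return np.log(np.sum(np.exp(xi)))

def xi_from_shares(shares):
    # Rationalize shares with logit mean utilities, normalizing the first (car) to 0
    return np.log(np.array(shares)) - np.log(shares[0])

xi_t0 = xi_from_shares([0.5, 0.5])            # car, red bus -> [0, 0]
xi_pred = np.append(xi_t0, xi_t0[1])          # hold xi fixed and add the blue-bus clone
xi_obs = xi_from_shares([0.5, 0.25, 0.25])    # rationalize observed t=1 shares

print(welfare(xi_t0))    # ln(2): welfare at t=0
print(welfare(xi_pred))  # ln(3): spurious "predicted" welfare gain
print(welfare(xi_obs))   # ln(2): no welfare change once xi adjusts to the data
```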
24. Introduction Hausman (96) Consumer Welfare using the DC Model
Generalizing from the example
In the example, the Logit model fails in the first step
This holds more generally:
with Logit, expected utility is ln(1/s_0t)
since s_0t did not change in the observed data, the Logit model predicted no welfare gain
Monte Carlo results in Berry and Pakes (2007) give a similar answer
they find that the pure characteristics model matters for the estimated elasticities (and mean utilities) but not for the welfare numbers
they conclude: "the fact that the contraction fits the shares exactly means that the extra gain from the logit errors is offset by lower δ’s, and this roughly counteracts the problems generated for welfare measurement by the model with tastes for products."
25. Introduction Hausman (96) Consumer Welfare using the DC Model
Generalizing from the example
With more heterogeneity, Logit will also get the second step wrong
The difference with the RC model:

ln(1/s_{0,t}) − ln(1/s_{0,t−1}) = ln(s_{0,t−1}/s_{0,t}) = ln( ∫ s_{i,0,t−1} dP_τ(τ) / ∫ s_{i,0,t} dP_τ(τ) )

and

∫ [ ln(1/s_{i,0,t}) − ln(1/s_{i,0,t−1}) ] dP_τ(τ) = ∫ ln( s_{i,0,t−1} / s_{i,0,t} ) dP_τ(τ)

the difference depends on the change in the heterogeneity in the probability of choosing the outside option, s_{i,0,t}
the difference can be positive or negative
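A tiny simulation contrasting the two measures above. The individual outside-choice probabilities are drawn directly rather than derived from a demand system, so this only illustrates that the aggregate (Logit) measure and the integrated (RC) measure generally differ once s_{i,0,t} is heterogeneous.

```python
import numpy as np

# Aggregate (Logit) vs. integrated (RC) welfare-change measures in terms of
# outside-good shares; the individual probabilities are hypothetical draws.
rng = np.random.default_rng(3)
ns = 100_000
s0_before = np.clip(rng.beta(2.0, 5.0, size=ns), 1e-4, 1 - 1e-4)   # s_{i,0,t-1}
shift = rng.normal(loc=-0.3, scale=0.8, size=ns)                   # heterogeneous change
s0_after = np.clip(s0_before * np.exp(shift), 1e-4, 1 - 1e-4)      # s_{i,0,t}

logit_measure = np.log(s0_before.mean() / s0_after.mean())   # uses aggregate shares
rc_measure = np.mean(np.log(s0_before / s0_after))           # integrates over consumers
print("Logit measure:", logit_measure, " RC measure:", rc_measure)
```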
26. Introduction Hausman (96) Consumer Welfare using the DC Model
Final comments
The key in the above example is that ξ_jt was allowed to change to fit the data.
This works when we see data pre- and post-introduction (the data tell us how we should change ξ_jt)
What if we do not have data for the counterfactual?
have a model of how ξ_jt is determined
make an assumption about how ξ_jt changes
bound the effects
Nevo (ReStat, 2003) uses the latter approach to compute price indexes based on estimated demand systems