1. The document summarizes commonly used instrumental variables in structural demand estimation, including characteristics-based instruments proposed by BLP and cost-based instruments.
2. It discusses applications that use these IVs, including BLP's 1995 study of the automobile market and Nevo's 2001 analysis of the ready-to-eat cereal industry.
3. The document raises issues with some commonly used IVs and outlines challenges in structural demand estimation, such as the need to model supply side behavior.
1. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Applications and Choice of IVs
NBER Methods Lectures
Aviv Nevo
Northwestern University and NBER
July 2012
2. Introduction
In the previous lecture we discussed the estimation of the discrete choice (DC) model using market-level data.
The estimation was based on the moment condition E(ξ_jt | z_jt) = 0.
In this lecture we will
discuss commonly used IVs
survey several applications
3. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
The role of IVs
IVs play dual role
generate moment conditions to identify θ 2
deal with the correlation of prices and error
Simple example (Nested Logit model)
sjt sjt
ln( ) = xjt β + αpjt + ρ ln( ) + ξ jt
s0t sGt
even if price exogenous, "within market share" is endogenous
Price endogeneity can be handled in other ways (e.g., panel
data)
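To make this concrete, here is a minimal 2SLS sketch (not from the lecture) for the nested-logit inversion above: both price and the log within-group share are treated as endogenous and instrumented. The matrix layout and the instruments named in the comments are assumptions for illustration.

```python
import numpy as np

def two_sls(y, X, Z):
    """Basic 2SLS: project the regressors X onto the space spanned by the
    instruments Z, then regress y on the projected regressors."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection matrix onto col(Z)
    Xhat = Pz @ X                               # first-stage fitted values
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)

# Assumed layout (one row per product-market):
#   y = ln(s_jt) - ln(s_0t)
#   X = [x_jt, p_jt, ln(s_jt / s_Gt)]           # price and within share endogenous
#   Z = [x_jt, cost shifters, characteristics of rival products]
```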
4. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Commonly used IVs: competition in characteristics space
Assume that E(ξ_jt | x_t) = 0: observed characteristics are mean
independent of the unobserved characteristics
BLP propose using
own characteristics
sum of characteristics of other products produced by the firm
sum of characteristics of competitors' products (construction sketched below)
Power: proximity in characteristics space to other products
→ markup → price
Validity: x_jt are assumed to be set before ξ_jt is known
Not hard to come up with stories that make these invalid
Most commonly used
do not require data we do not already have
Often (mistakenly) called "BLP Instruments"
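As a concrete illustration of how these sums are typically built, the sketch below assumes a long product-by-market table with (hypothetical) columns market, firm, and the characteristic names in char_cols; it is not BLP's code.

```python
import pandas as pd

def blp_instruments(df, char_cols):
    """For each product, sum each characteristic over (i) the other products
    of the same firm and (ii) the products of rival firms, within a market."""
    out = df.copy()
    for c in char_cols:
        mkt_total  = df.groupby("market")[c].transform("sum")
        firm_total = df.groupby(["market", "firm"])[c].transform("sum")
        out[f"iv_own_firm_{c}"]   = firm_total - df[c]      # other products, same firm
        out[f"iv_rival_firm_{c}"] = mkt_total - firm_total  # products of other firms
    return out
```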
5. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Commonly used IVs: cost based
Cost data are rarely directly observed
BLP (1995, 1999) use characteristics that enter cost (but not
demand)
Villas-Boas (2007) uses prices of inputs interacted with
product dummy variables (to generate variation by product)
Hausman (1996) and Nevo (2001) rely on indirect measures
of cost
use prices of the product in other markets
validity: after controlling for common effects, the unobserved
characteristics are assumed independent across markets
power: prices will be correlated across markets due to common
marginal cost shocks
easy to come up with examples where IVs are not valid (e.g.,
national promotions)
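A small sketch of the Hausman-style instrument, assuming scanner-type data with (hypothetical) columns brand, quarter, city, and price: the instrument for a brand-city-quarter is the leave-one-out average price of the same brand in the other cities in the same quarter.

```python
import pandas as pd

def hausman_iv(df):
    """Average price of the same brand, same period, in the *other* cities."""
    grp = df.groupby(["brand", "quarter"])["price"]
    total = grp.transform("sum")
    n = grp.transform("count")
    return (total - df["price"]) / (n - 1)      # leave-one-out mean across cities

# df["price_other_cities"] = hausman_iv(df)
```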
6. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Commonly used IVs: dynamic panel
Ideas from the dynamic panel data literature (Arellano and
Bond, 1991, Blundell and Bond, 1998) have been used to
motivate the use of lagged characteristics as instruments.
Proposed in a footnote in BLP
For example, Sweeting (2011) assumes ξ_jt = ρ ξ_jt−1 + η_jt,
where E(η_jt | x_t−1) = 0. Then
E(ξ_jt − ρ ξ_jt−1 | x_t−1) = 0
is a valid moment condition
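A sketch of the corresponding sample moments, assuming the demand unobservables ξ_jt have already been recovered from the inversion at a candidate parameter value and stored in a column 'xi'; the column names and the choice of lagged instruments are illustrative.

```python
import numpy as np
import pandas as pd

def ar1_moments(df, rho, inst_cols):
    """Sample analogue of E[(xi_jt - rho * xi_jt-1) * x_jt-1] = 0."""
    d = df.sort_values(["product", "period"]).copy()
    d["xi_lag"] = d.groupby("product")["xi"].shift(1)
    for c in inst_cols:
        d[f"{c}_lag"] = d.groupby("product")[c].shift(1)
    d = d.dropna(subset=["xi_lag"])
    resid = (d["xi"] - rho * d["xi_lag"]).to_numpy()
    Z = d[[f"{c}_lag" for c in inst_cols]].to_numpy()
    return Z.T @ resid / len(d)                 # one moment per lagged instrument
```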
7. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Berry, Levinsohn, Pakes “Automobile Prices in Market
Equilibrium” (EMA, 95) – BLP
Points to take away:
1. The effect of IVs
2. Logit versus RC Logit
8. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Data
20 years of annual US national data, 1971-90 (T=20): 2217
model-years
Quantity data by name plate (excluding fleet sales)
Prices – list prices
Characteristics from Automotive News Market Data Book
Price and characteristics correspond to the base model
Note: little/no use of segment and origin information
9. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Demand Model
The indirect utility is
u_ijt = x_jt β_i + α ln(y_i − p_jt) + ξ_jt + ε_ijt
Note: income enters differently than before.
β_i^k = β^k + σ^k v_ik,   v_ik ~ N(0, 1)
The outside option has utility
u_i0t = α ln(y_i) + ξ_0t + σ_0 v_i0 + ε_i0t
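A minimal Monte Carlo simulator for the shares implied by this utility (it ignores σ₀ on the outside option and BLP's importance sampling, and crudely guards against draws with y_i < p_j); shapes and names are assumptions for the sketch.

```python
import numpy as np

def simulate_shares(x, p, xi, beta, sigma, alpha, y_draws, v_draws):
    """Simulated shares for u_ij = x_j*beta_i + alpha*ln(y_i - p_j) + xi_j,
    with beta_i^k = beta^k + sigma^k v_ik and outside utility alpha*ln(y_i).
    Shapes: x (J,K); p, xi (J,); y_draws (R,); v_draws (R,K)."""
    beta_i = beta + v_draws * sigma                                  # (R, K) taste draws
    income_term = np.log(np.maximum(y_draws[:, None] - p, 1e-12))   # guard for y < p
    util = beta_i @ x.T + alpha * income_term + xi                   # (R, J)
    u0 = alpha * np.log(y_draws)[:, None]                            # outside option
    expu = np.exp(util - u0)                                         # normalize by outside
    probs = expu / (1.0 + expu.sum(axis=1, keepdims=True))
    return probs.mean(axis=0)                                        # (J,) simulated shares
```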
10. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Estimation
Basically estimate as we discussed before.
add supply-side moments (changes the last step of the algorithm)
help pin down demand parameters
adds cost-side IVs
Instrumental variables: assume E(ξ_jt | x_t) = 0, and use
(i) own characteristics
(ii) sum of characteristics of other products produced by the firm
(iii) sum of characteristics of products produced by other firms
Cost side: E(ξ_jt | w_t) = 0
Efficiency:
(i) importance sampling for the simulation of market shares
(ii) discussion of optimal instruments
(iii) parametric distribution for income (log-normal)
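Schematically, adding the supply side means stacking demand-side and cost-side moments in a single GMM objective; the sketch below is generic and not BLP's implementation.

```python
import numpy as np

def gmm_objective(xi, omega, Zd, Zs, W):
    """Stacked GMM objective: demand unobservables xi interacted with demand
    instruments Zd, cost-side residuals omega with cost instruments Zs."""
    g_d = Zd.T @ xi / len(xi)          # demand-side sample moments
    g_s = Zs.T @ omega / len(omega)    # supply-side sample moments
    g = np.concatenate([g_d, g_s])
    return g @ W @ g                   # quadratic form in the weighting matrix W
```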
11. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Table 3: effect of IVs (in Logit)
16. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Summary
Powerful method with potential for many applications
Clearly show:
effect of IVs
RC logit versus logit
Common complaints:
instruments
supply side: static, not tested, driving the results
demand side dynamics
17. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Goldberg “Product Differentiation and Oligopoly in
International Markets: The Case of the Automobile
Industry” (EMA, 95)
I will focus on the demand model and not the application
Points to take away
endogeneity with household-level data
Nested Logit versus RC Logit
18. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Demand Model
Nested Logit nests determined by buy/not buy, new/used,
country of origin (foreign vs domestic) and segment
This model can be viewed as using segment and country of
origin as (dummy) characteristics, and assuming a particular
distribution on their coefficients.
19. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Data
Household-level survey from the Consumer Expenditure
Survey:
20,571 households between 1983-87
6,172 (30%) bought a car
1,992 (33%) new car
1,394 (70%) domestic and 598 foreign
Prices (and characteristics) are obtained from Automotive
News Market Data Book
20. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Estimation
The model is estimated by ML
The likelihood is partitioned and estimated recursively:
at the lowest level, the choice of model conditional on origin,
segment and newness is estimated; based on the estimated parameters, an
“inclusive value” is computed and used to estimate the choice
of origin conditional on segment and newness, and so on (a sketch follows below)
Does not deal with endogeneity. Origin and segment fixed
effects are included, but these do not fully account for
unobserved brand characteristics
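A sketch of the inclusive value that links the levels of the tree (the estimation itself proceeds nest by nest with standard logit likelihoods); the function and the step list are illustrative, not Goldberg's code.

```python
import numpy as np

def inclusive_value(v_nest):
    """Log-sum of the deterministic utilities of the alternatives in a nest;
    it enters the next level up as an extra regressor."""
    return np.log(np.exp(v_nest).sum())

# Sequential estimation, bottom-up (illustrative ordering):
# 1. logit for model choice within each (origin, segment, new/used) nest
# 2. compute each nest's inclusive value from the fitted utilities
# 3. logit for origin choice conditional on segment, including the inclusive value
# 4. continue up to the new/used and buy/not-buy decisions
```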
21. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Table II: price elasticities by class
24. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Nevo, "Measuring Market Power in the Ready-to-eat
Cereal Industry" (EMA, 2001)
Points to take away:
1. an industry where characteristics are less obvious
2. effects of various IVs
3. testing the model of competition
4. comparison to alternative demand models (later)
25. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
The RTE cereal industry
Characterized by:
high concentration (C3 ≈ 75%, C6 ≈ 90%)
high price-cost margins (≈ 45%)
large advertising-to-sales ratios (≈ 13%)
numerous brand introductions (67 new brands by the top 6 firms in
the 1980s)
These facts have been used to claim that the industry is a perfect example of
collusive pricing
26. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Questions
Is pricing in the industry collusive?
What portion of the markups in the industry is due to:
Product differentiation?
Multi-product firms?
Potential price collusion?
27. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Strategy
Estimate brand level demand
Compute PCM predicted by different industry
structures/models of conduct:
Single-product firms
Current ownership (multi-product firms)
Fully collusive pricing (joint ownership)
Compare predicted PCM to observed PCM
28. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Supply
The profits of firm f are
Π_f = Σ_{j ∈ F_f} (p_j − mc_j) q_j(p) − C_f
The first-order conditions are
s_j(p) + Σ_{r ∈ F_f} (p_r − mc_r) ∂s_r(p)/∂p_j = 0
Define S_jr = ∂s_r/∂p_j for j, r = 1, ..., J, and
Ω_jr = S_jr if ∃ f such that {r, j} ⊆ F_f, and Ω_jr = 0 otherwise
Then s(p) + Ω(p − mc) = 0, so (p − mc) = −Ω⁻¹ s(p)
Therefore, by (1) assuming a model of conduct and (2) using
estimates of the demand substitution patterns, we can compute
price-cost margins under different “ownership” structures (a computational sketch follows below)
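A minimal sketch of that computation: given simulated shares, the matrix of share derivatives, and a vector of owner identifiers, the implied margins are p − mc = −Ω⁻¹ s(p). Array names are assumptions.

```python
import numpy as np

def margins(shares, dsdp, owner):
    """Price-cost margins implied by static Nash pricing.
    dsdp[j, r] = ds_r/dp_j; Omega keeps an entry only if j and r share an owner."""
    same_owner = np.equal.outer(owner, owner)
    omega = np.where(same_owner, dsdp, 0.0)
    return np.linalg.solve(omega, -shares)      # p - mc

# Ownership structures from the strategy slide:
#   single-product firms:  owner = np.arange(J)
#   current ownership:     owner = observed firm ids
#   joint ownership:       owner = np.zeros(J)
```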
29. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Demand
Utility, as before
u_ijt = x_jt β_i + α_i p_jt + ξ_jt + ε_ijt
Allow for brand dummy variables (to capture the part of ξ_jt
that does not vary by market)
captures characteristics that do not vary over markets
30. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Data
IRI Infoscan scanner data
market shares – defined by converting volume to servings
prices – pre-coupon real transaction per serving price
25 brands (top 25 in last quarter), in 67 cities (number
increases over time) over 20 quarters (1988-1992); 1124
markets, 27,862 observations
LNA advertising data
Characteristics from cereal boxes
Demographics from March CPS
Cost instruments from Monthly CPS
Market size – one serving per consumer per day
31. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Estimation
Follows the method we discussed before
Uses only demand side moments
Explores various IVs:
characteristics of competitors; problematic for this sample
with brand FE
prices in other cities
proxies for city-level costs: density, earnings in the retail sector, and
transportation costs
Brand fixed effects
control for unobserved quality (instead of instrumenting for it)
identify taste coefficients by minimum distance (sketched below)
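The minimum-distance step can be sketched as a GLS regression of the estimated brand-dummy coefficients on brand characteristics, weighted by the inverse covariance matrix of those estimates; names below are illustrative.

```python
import numpy as np

def minimum_distance(d_hat, V_d, X):
    """GLS / minimum-distance projection of brand dummies d_hat (J,) on brand
    characteristics X (J, K), with V_d the covariance of the dummy estimates."""
    Vinv = np.linalg.inv(V_d)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ d_hat)   # taste coefficients
    resid = d_hat - X @ beta        # market-invariant unobserved quality
    return beta, resid
```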
36. Commonly Used IVs BLP 95 Goldberg 95 Nevo 2001
Comments/Issues
Is choice discrete?
Ignores the retailer – uses retailer prices to study
manufacturer competition
retail margins go into marginal cost
marginal costs do not vary with quantity, therefore this
restricts the retailer's pricing behavior
which direction will this bias the finding? Most likely towards
finding collusion where there is none (the retailer's behavior
might take into account effects across products)
Sofia Villas-Boas (2007) extends the model
Much of the price variation at the store level comes from
"sales". How does this impact the estimation?
the data are quite aggregated: quarter-brand-city
"sales" generate incentives for consumers to stockpile
Follow-up work by Hendel and Nevo looked at this