This document summarizes a lecture on analyzing demand systems for differentiated products. It discusses:
1) Demand systems provide information to analyze firm incentives and responses to policy changes. They are important for welfare analysis and constructing price indices.
2) Demand models can consider representative or heterogeneous agents, and model demand in product or characteristic space. Heterogeneous agent models in characteristic space are preferred as they allow combining different data sources.
3) Aggregate demand can be simulated from individual demands; the simulation estimator is unbiased and can be made arbitrarily precise by increasing the number of simulation draws.
Lecture 1: NBERMetrics.
Ariel Pakes
July 17, 2012
We will be concerned with the analysis of demand systems for differentiated products and some of its implications (e.g., for the analysis of markups and profits, for the construction of price indices, and for the interpretation of hedonic regression functions).
Why Are We Concerned With Demand Systems?
• The demand system provides much of the information required to analyze the incentives facing firms in a given marketplace. These incentives play a (if not the) major role in the determination of the profits to be earned from either
• different pricing decisions
• or alternative investments (in the development of new products, or in advertising).

The response of prices and of product development to policy or environmental changes are typically the first two questions when analyzing either the historical or the likely future impacts of such changes. Advertising is a large part of firm investment and there is often a question of its social value and the extent to which we should regulate it. The answer to this question requires an answer to how advertising affects utility.
Note. There are other components determining the profitability of pricing and product development decisions, but typically none that we can get as good a handle on as the demand system. For example, an analysis of the static response to a change in the environment would also require a cost function and an equilibrium assumption. Cost data are typically proprietary, and we have made much less progress in analyzing the nature of competition underlying the equilibrium assumptions than we have in analyzing demand systems (indeed it would be easy to argue that we should be devoting our research efforts more towards understanding that aspect of the problem).
• Welfare analysis. As we shall see, modern demand system analysis starts with individual-specific utility functions and choices and then explicitly aggregates up to the market level when the aggregate demand system is needed for pricing or product placement decisions. It therefore provides an explicit framework for the analysis of the distribution of changes in utility that result from policy or environmental changes. These are used in several different ways.
• Analyzing the consumer surplus gains from product introductions. These gains are a large part of the rationale for subsidizing private activity (e.g. R&D activity) and one might want to measure them (we come back to an example of this from Petrin's work, JPE 2002, but it was preceded by a simpler analysis of Trajtenberg, JPE 1989).
• Analyzing the impact of regulatory policies. These include pricing policies and agencies' decisions to either foster or limit the marketing of goods and services in regulated markets (regulatory delay, auctioning off the spectrum, ...). Many regulatory decisions are either motivated by non-market factors (e.g. we have decided to ensure access to particular services for all members of the community), or are politically sensitive (partly because either the regulators or those who appointed them are elected). As a result, to either evaluate them or consider their impacts we need estimates of the distributional implications.
• The construction of "Ideal Price Indices". These should be the basis of the CPI, PPI, .... Used for indexation of entitlement programs, tax brackets, government salaries, private contracts, and measurement of aggregates.
Frameworks for the Analysis

There is really a two-way classification of differentiated product demand systems. There are
• representative agent (demand is derived from a single utility function), and
• heterogeneous agent
models, and within each of these we could do the analysis in either
• product space, or
• characteristics space.
Multi-Product Systems: Product Space.

Demand systems in product space have a longer history than those in characteristics space, and partly because they were developed long ago, when they are applied to analyze market-level data they tend to use a representative agent framework. This is because the distributional assumptions needed to obtain a closed-form expression for aggregate demand from a heterogeneous agent demand system are extremely limiting, and computers were not powerful enough to sum up over a large number of individuals every time a new parameter vector had to be evaluated in the estimation algorithm. This situation has changed with the introduction of simulation estimators.
Digression: Aggregation and Simulation in Estimation. Used for prediction by McFadden, Reid, Talvitie, and Johnson (1979, BART Project Report), and in estimation by Pakes, 1986.

Though we often do not have information on which consumer purchased what good, we do have the distribution of consumer characteristics in the market of interest (e.g. the CPS). Let z denote consumer characteristics, F(z) be their distribution, p_j be the price of the good analyzed, and p_{-j} be the prices of competing goods.

We have a model for individual utility given (z, p_j, p_{-j}) which depends on a parameter vector to be estimated (θ) and generates that individual's demand for good j, say q(z, p_j, p_{-j}; θ). Then aggregate demand is

$$D(p_j, p_{-j}; F, \theta) = \int q(z, p_j, p_{-j}; \theta)\, dF(z).$$

This may be hard to calculate exactly, but it is easy enough to simulate an estimate of it. Take ns random draws of z from F(·), say (z_1, ..., z_{ns}), and define your estimate of D(p_j, p_{-j}; F, θ) to be

$$\hat{D}(p_j, p_{-j}; F, \theta) = \frac{1}{ns}\sum_{i=1}^{ns} q(z_i, p_j, p_{-j}; \theta).$$

If E(·) provides the expectation and Var(·) generates the variance from the simulation process, then

$$E[\hat{D}(p_j, p_{-j}; F, \theta)] = D(p_j, p_{-j}; F, \theta),$$

and

$$Var[\hat{D}(p_j, p_{-j}; F, \theta)] = ns^{-1}\, Var(q(z_i, p_j, p_{-j}; \theta)).$$

So our estimate is unbiased and can be made as precise as we like by increasing ns.
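As a minimal illustration of this simulation estimator (my sketch, not code from the lecture): the individual demand function q and the lognormal stand-in for F(z) below are both invented, but the mechanics — draw z, evaluate q, average — are exactly the estimator above, and the printed estimates stabilize as ns grows.

```python
import numpy as np

def q(z, p_j, p_minus_j, theta):
    # Hypothetical individual demand: lower-income consumers (small z)
    # are more price sensitive; theta scales the price sensitivity.
    price_sensitivity = theta / z
    return np.maximum(0.0, 10.0 - price_sensitivity * p_j + 0.1 * p_minus_j.mean())

rng = np.random.default_rng(0)
theta, p_j, p_minus_j = 2.0, 5.0, np.array([4.0, 6.0, 7.0])

# Simulate D(p_j, p_-j; F, theta) by averaging q over ns draws of z ~ F;
# F is taken to be lognormal purely for illustration.
for ns in [100, 10_000, 1_000_000]:
    z = rng.lognormal(mean=1.0, sigma=0.5, size=ns)
    D_hat = q(z, p_j, p_minus_j, theta).mean()
    print(f"ns = {ns:>9,d}  D_hat = {D_hat:.4f}")
```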
Product space and heterogeneous agents. We will focus on heterogeneous agent models in characteristics space for the reasons discussed below. However, at least in principle, demand systems in characteristics space are approximations to product space demand systems, and the approximations can be problematic.[1] On the other hand, when it is feasible to use a product space model there are good reasons to use heterogeneous agent versions of it. This is because with heterogeneous agents the researcher
• can combine data from different markets and/or time periods in a sensible way, and in so doing add the information content of auxiliary data sets,
• and can get some idea of the entire distribution of responses.

The first point is important. Say we are interested in the impact of price on demand. Micro studies typically show that the price coefficient depends in an important way on income (or some more inclusive measure of wealth); lower income people care about price more. Consequently, if the income distribution varies across the markets studied, we should expect the price coefficients to vary also, and if we did not allow for that it is not clear what our estimate applies to.

[1] I say in principle, because it may be impossible to add products every time a change in a product's characteristics occurs. In these cases researchers often aggregate products with slightly different characteristics into one product, and allow for new products when characteristics change a lot. So in effect they are using a "hybrid" model. It is also possible to explicitly introduce utility as a function of both products and characteristics; for a recent example see Dubois, Griffith, and Nevo (2012, Working Paper).
Further, even if we could choose markets with similar income distributions, we typically would not want to. Markets with similar characteristics should generate similar prices, so a data set with similar income distributions would have little price variance to use in estimation.

In fact, as the table below indicates, there is quite a bit of variance in income distributions across local markets in the U.S. The table gives means and standard deviations of the fraction of the population in different income groups across U.S. counties. The standard deviations are quite large, especially in the higher income groups to which most goods are directed.
Table I: Cross County Differences in Household Income*

Income Group   Fraction of U.S. Population   Distribution of Fraction Over Counties
(thousands)    in Income Group               Mean      Std. Dev.
0-20           0.226                         0.289     0.104
20-35          0.194                         0.225     0.035
35-50          0.164                         0.174     0.028
50-75          0.193                         0.175     0.045
75-100         0.101                         0.072     0.033
100-125        0.052                         0.030     0.020
125-150        0.025                         0.013     0.011
150-200        0.022                         0.010     0.010
200+           0.024                         0.012     0.010

* From Pakes (2004, RIO), "Common Sense and Simplicity in Empirical Industrial Organization".
Multi-product Systems in Product Space: Modelling Issues.

We need a model for the single agent, and the remaining issues with product space are embodied in that. They are
• The "too many parameter problem". J goods implies on the order of J^2 parameters (even for linear or log-linear systems), and having on the order of 50 or more goods is not unusual.
• We cannot analyze demand for new goods prior to product introduction.
The Too Many Parameter Problem.

In order to make progress we have to aggregate over commodities. Two possibilities that do not help us are
• Hicks composite commodity theorem: $p_j^t = \theta^t p_j^1$. Then demand is only a function of $\theta$ and $p^1$, but we cannot study substitution patterns, just expansion along a path.
• Dixit-Stiglitz (or CES) preferences. Usually something like $(\sum_i x_i^\rho)^{1/\rho}$. This rules out differential substitution between goods, so it rules out any analysis of competition between goods. When we use it to analyze welfare, it is even more restrictive than using simple logits (since they have an idiosyncratic component), and you will see when we move to welfare that the simple logit has a lot of problems.
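To see the "no differential substitution" point concretely, here is the standard CES demand derivation (added for illustration; the σ and s_k notation is mine, not the lecture's). Maximizing $(\sum_i x_i^\rho)^{1/\rho}$ subject to $\sum_i p_i x_i = y$ gives, with $\sigma \equiv 1/(1-\rho)$,

$$x_i = \frac{y\, p_i^{-\sigma}}{\sum_j p_j^{\,1-\sigma}}, \qquad \frac{\partial \ln x_i}{\partial \ln p_k} = (\sigma - 1)\, s_k \quad \text{for all } k \neq i,$$

where $s_k = p_k x_k / y$ is good k's budget share. The cross-price elasticity with respect to any competitor depends only on that competitor's budget share, never on how "close" the two goods are.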
Gorman's Polar Forms.

The basic idea of multilevel budgeting is
• First allocate expenditures to groups.
• Then allocate expenditures within the group.

If we use a (log) linear system and there are J goods in K groups, we get on the order of $J^2/K + K^2$ parameters vs. $J^2 + J$ (modulo other restrictions on the utility surface). Still large.
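For concreteness (an illustrative count, not in the original notes): with J = 50 goods split evenly into K = 5 groups,

$$\frac{J^2}{K} + K^2 = \frac{2500}{5} + 25 = 525 \qquad \text{vs.} \qquad J^2 + J = 2550,$$

a roughly five-fold reduction, yet still far more parameters than typical market-level data can support.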
Moreover, multilevel budgeting can only be consistent if the utility function has two peculiar properties:
• We need to be able to order the bundles of commodities within each group independent of the levels of purchases of the goods outside the group, or weak separability:

$$u(q_1, \dots, q_J) = u[v_1(\mathbf{q}_1), \dots, v_K(\mathbf{q}_K)],$$

where $(q_1, \dots, q_J) = (\mathbf{q}_1, \dots, \mathbf{q}_K)$ partitions the goods into groups. This means that for i and j in different groups

$$\frac{\partial q_i}{\partial p_j} \propto \frac{\partial q_i}{\partial x} \times \frac{\partial q_j}{\partial x}.$$

(This implies that we determine all cross price elasticities between goods in different groups by a number of parameters equal to the number of groups.)
• To be able to do the upper level allocation problem there must be an aggregate quantity and price index for each group which can be calculated without knowing the choices within the groups.
These two requirements lead to Gorman's (1968) polar forms (these are if and only if conditions):
• Weak separability and constant budget shares within a group (i.e. $\eta_{q,y} = 1$). (Weak separability is as above and it implies that we can do the lower level allocation; constant budget shares allow one to derive the price index for the group without knowing the "income" allocated to the group.)
• Strong separability (additive), and budget shares that need not go through the origin.

These two restrictions are (approximately) implemented in the Almost Ideal Demand System; see Deaton and Muellbauer (1980).
Grouping Procedures More Generally.

If you actually consider the intuitive basis for Gorman's conditions for a given grouping, they often seem a priori unreasonable. More generally, grouping is a movement to characteristics models where we group on characteristics. There are problems with it, as we have to put products in mutually disjoint sets (which can mean lots of sets), and we have to be extremely careful when we "group" according to a continuous variable. For example, if you group on price and there is a demand side residual correlated with price (a statement which we think is almost always correct; see the next lecture) you will induce a selection problem that is hard to handle. These problems disappear when you move to characteristics space (unless you group there also, as in the Nested Logit, where they reappear). Empirically, when grouping procedures are used, the groups chosen often depend on the topics of interest (which typically makes different results on the same industry non-comparable).
An Intro to Characteristic Space.

Basic Idea. Products are just bundles of characteristics, and each individual's demand is just a function of the characteristics of the product (not all of which need be observed). This changes the "space" in which we analyze demand, with the following consequences.
• The new space typically has a much smaller number of observable dimensions (the number of characteristics) over which we have preferences. This circumvents the "too many parameter" problem. E.g., if we have k dimensions, and the distribution of preferences over those dimensions were, say, normal, then we would have k means and k(k + 1)/2 covariances to estimate. If there were no unobserved characteristics, these k(1 + (k + 1)/2) parameters would suffice to analyze own and cross price elasticities for all J goods, no matter how large J is.
• Now if we have the demand system estimated, and we know the proposed characteristics of a new good, we can, again at least in principle, analyze what the demand for the new good would be; at least conditional on the characteristics of competing goods when the new good is introduced.
Historical Note. This basic idea first appeared in the demand literature in Lancaster (1966, Journal of Political Economy), and in empirical work in McFadden (1973, "Conditional logit analysis of qualitative choice behavior," in P. Zarembka (ed.), Frontiers in Econometrics). There was also an extensive related literature on product placement in I.O. where some authors worked with vertical (e.g. Shaked and Sutton) and some with horizontal (e.g. Salop) characteristics. For more on this see Anderson, De Palma and Thisse (and the reading list has all these cites).

Modern. Transfer all this into an empirically usable theory of market demands and provide the necessary estimation strategies. Greatly facilitated by modern computers.
Product space problems in characteristic space analysis. The two problems that appear in product space re-appear in characteristic space, but in a modified form that, at least to some extent, can be dealt with.
• If there are lots of characteristics, the too many parameter problem re-appears, and now we need data on all these characteristics as well as price and quantity. Typically this is more of a problem for consumer goods than producer goods (which are often quite well defined by a small number of characteristics). The "solution" to this problem will be to introduce unobserved characteristics, and a way of dealing with them. This will require a solution to an "endogeneity" problem similar to the one you have seen in standard demand and supply analysis, as the unobserved characteristic is expected to be correlated with price.
• We might get a reasonable prediction for the demand for a new good whose characteristics are similar to (or are in the span of) goods already marketed. However, there is no information in the data on the demand for a characteristic bundle that is in a totally new part of the characteristic space. E.g., the closest thing to a laptop that had been available before the laptop was a desktop. The desktop had better screen resolution, more memory, and was faster. Moreover, laptops' initial prices were higher than desktops'; so if we would have relied on demand and prices for desktops to predict demand for laptops, we would have predicted zero demand.

There are other problems in characteristic space of varying importance that we will get to, many of which are the subject of current research. Indeed, just as in any other part of empirical economics, these systems should be viewed as approximations to more complex processes. One chooses the best approximation to the problem at hand.
Utility in Characteristic Space.

We will start with the problem of a consumer choosing at most one of a finite set of goods. The model
• Defines products as bundles of characteristics.
• Assumes preferences are defined on these characteristics.
• Each consumer chooses the bundle that maximizes its utility. Consumers have different relative preferences (usually just marginal preferences) for different characteristics ⇒ different choices by different consumers.
• Aggregate demand: sum over individual demands. So it depends on the entire distribution of consumer preferences.
Formalities.

Utility of individual i for product j is given by

$$U_{i,j} = U(x_j, p_j, \nu_i; \theta), \quad \text{for } j = 0, 1, 2, \dots, J.$$

Note. Typically j = 0 is the "outside" good. This is a "fictional" good which accounts for people who do not buy any of the "inside" goods (j = 1, ..., J). Sometimes it is modelled as the utility from income not spent on the inside goods. It is "outside" in the sense that the model assumes that its price does not respond to factors affecting the prices of the inside goods. Were we to consider a model without it
• we could not use the model to study aggregate demand,
• we would have to make a correction for studying a selected (rather than a random) sample of consumers (those who bought the good in question),
• it would be hard to analyze new goods, as presumably they would pull in some of the buyers who had purchased the outside good.
ν_i represents individual-specific preference differences, x_j is a vector of product characteristics, and p_j is the price (this is just another product characteristic, but one we will need to keep track of separately because it adjusts to equilibrium considerations). θ parameterizes the impact of preferences and product characteristics on utility. The product characteristics do not vary over consumers.

The subset of ν that leads to the choice of good j is

$$A_j(\theta) = \{\nu : U_{i,j} > U_{i,k}, \ \forall k\} \quad \text{(assumes no ties)},$$

and, if f(ν) is the distribution of preferences in the population of interest, the model's predictions for market shares are given by

$$s_j(x, p; \theta) = \int_{\nu \in A_j(\theta)} f(\nu)\, d\nu,$$

where (x, p) without subscripts refer to the vector consisting of the characteristics of all J products (so the function must differ by j). Total demand is then $M s_j(x, p; \theta)$, where M is the number of households.
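The share integral rarely has a closed form, but the sets A_j(θ) are easy to evaluate draw by draw. A minimal sketch (my illustration; the linear utility, characteristics, prices, and normal taste draws standing in for f(ν) are all invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# J = 3 inside goods, each with one observed characteristic and a price;
# good 0 is the outside good with utility normalized to zero.
x = np.array([1.0, 2.0, 3.0])       # hypothetical characteristics
p = np.array([1.0, 2.5, 4.0])       # hypothetical prices

ns = 200_000                        # simulation draws from f(nu)
nu = rng.normal(loc=1.0, scale=0.5, size=ns)

# U_{i,j} = nu_i * x_j - p_j for inside goods; U_{i,0} = 0.
U = nu[:, None] * x[None, :] - p[None, :]
U = np.column_stack([np.zeros(ns), U])

# Each draw falls in A_j(theta) for the j that maximizes its utility;
# simulated shares are the fractions of draws choosing each good.
choice = U.argmax(axis=1)
shares = np.bincount(choice, minlength=4) / ns
print(dict(enumerate(np.round(shares, 4))))   # 0 = outside good
```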
Details.

Our utility functions are "cardinal"; that is, the choices of each individual are invariant to affine transformations:
1. multiplication of utility by a person-specific positive constant, and
2. addition to utility of any person-specific number.

This implies that we have two normalizations to make. In models where utility is (an individual-specific) linear function of product characteristics, the normalizations we usually make are
• normalize the value of the outside good to zero (equivalent to subtracting U_{i,0} from all U_{i,j});
• normalize the coefficient of one of the variables to one (in the logit case this is generally the i.i.d. error term; see below).

The interpretation of the estimation results has to take these normalizations into account; e.g., when we estimate utility for choice j we are measuring the difference between its utility and that for choice zero, and so on.
Simple Examples.

• Pure Horizontal. In the Hotelling story the product characteristic is the product's location and the person characteristic is the consumer's location. With quadratic transportation costs we have

$$U_{i,j} = u + (y_i - p_j) + \theta(\delta_j - \nu_i)^2.$$

Versions of this model have been used extensively by theorists to gain intuition for product placement problems (see the Tirole readings). There are only two product characteristics in this model (price and product location), and the coefficient of one (here price) is normalized to one. Every consumer values a given location differently. When consumers disagree on the relative values of a characteristic we usually call the characteristic "horizontal". Here there is only one such characteristic (product location).
• Pure Vertical. Mussa-Rosen, Gabszewicz-Thisse, Shaked-Sutton, Bresnahan. Two characteristics: quality and price, and this time the normalization is on the quality coefficient (or, equivalently, the inverse of the price coefficient). The distinguishing feature here is that all consumers agree on the ranking of the characteristic (quality) of the different products. This defines a vertical characteristic.

$$U_{i,j} = u - \nu_i p_j + \delta_j, \quad \text{with } \nu_i > 0.$$

This is probably the most frequently used other model for analyzing product placement (see also Tirole and the references therein). When all consumers agree on the ranking of a characteristic across products (like quality here) we call it a "vertical" characteristic. Price is usually considered a vertical characteristic.

• "Logit". Utility is mean quality plus an idiosyncratic taste for the product:

$$U_{i,j} = \delta_j + \epsilon_{i,j}.$$

You have heard of the logit because it has tractable analytic properties. In particular, if ε is distributed i.i.d. (over both products and individuals) with $F(\epsilon) = \exp[-\exp(-\epsilon)]$, then there is a closed, analytic form for both
– the probability of the maximum choice conditioned on the vector of δ's, and
– the expected utility conditional on δ.

The closed form for the probability of choice eases estimation, and the closed form for expected utility eases the subsequent analysis of welfare. Further, this is the only known distributional assumption that has closed-form expressions for these two magnitudes. For this reason it was used extensively in empirical work (especially before computers got good). Note that implicit in the distributional assumption is an assumption on the distribution of tastes, an assumption called the independence of irrelevant alternatives (or IIA) property. This is not very attractive for reasons we review below.
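For reference, the two closed forms are standard results (stated here for completeness, not derived in these notes):

$$\Pr(i \text{ chooses } j \mid \delta) = \frac{\exp(\delta_j)}{\sum_q \exp(\delta_q)}, \qquad E\Big[\max_j\,(\delta_j + \epsilon_{i,j})\Big] = \ln \sum_j \exp(\delta_j) + \gamma,$$

where γ ≈ 0.5772 is Euler's constant; the log-sum term is what makes welfare calculations easy.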
The Need to Go to the Generalizations.

It is useful to see what restrictions the simple forms of these models imply, as they are the reasons we do not use them in empirical work.

The vertical model. $U_{i,j} = \delta_j - \nu_i p_j$, where δ_j is the quality of the good, and ν_i differs across individuals.
First, this specification implies that prices of goods must be ordered by their qualities; if one good has less quality than another but a higher price, no one would ever buy the lower quality good. So we order the products by their quality, with product J being the highest quality good. We can also order individuals by their ν values; individuals with lower ν values will buy higher priced, and hence higher quality, goods.

Now consider how demand responds to an increase in the price of good j. There will be some ν values that induce consumers who would have been at good j to move to good j+1, the consumers who were almost indifferent between these two goods. Similarly, some of the ν values would induce consumers who were at good j to move to j−1. However, no consumer who chooses any other good would change their choice; if the other good was the optimal choice before, it will certainly be optimal now.

So we have the implication that
• Cross price elasticities are only non-zero with neighboring goods, with neighbors defined by prices, and
• Own price elasticities only depend on the density of the distribution of consumers about the two indifference points.

Now consider a particular market, say the auto market. If we line cars up by price we might find that a Ford Explorer (which is an SUV) is the "neighbor" of a Mini Cooper S, with the Mini's neighbor being a GM SUV. The model would say that if the Explorer's price goes up, none of the consumers who substitute away from the Explorer would substitute to the GM SUV, but a lot would substitute to the Cooper S. Similarly, the density of consumers often has little to do with what we think determines price elasticities (and hence, in a Nash pricing model, markups). For example, we often think of symmetric or nearly symmetric densities, in which case low and high priced goods might have the same densities about their indifference points, but we never expect them to have the same elasticities or markups.

These problems will occur whenever the vertical model is estimated; that is, if these properties are not generated by the estimates, you have a computer programming error.
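A quick numerical check of the neighboring-goods property (my illustration; the qualities, prices, and lognormal taste distribution are all made up, chosen so every good has positive share):

```python
import numpy as np

rng = np.random.default_rng(2)

delta = np.array([1.0, 1.8, 2.4, 2.8])  # hypothetical qualities, ascending
p     = np.array([0.5, 1.0, 1.6, 2.4])  # prices, ordered by quality
nu = rng.lognormal(size=500_000)        # nu_i > 0

def shares(prices):
    # U_{i,j} = delta_j - nu_i * p_j; outside good gives utility 0.
    U = delta[None, :] - nu[:, None] * prices[None, :]
    U = np.column_stack([np.zeros(len(nu)), U])
    return np.bincount(U.argmax(axis=1), minlength=5) / len(nu)

s0 = shares(p)
p_bump = p.copy()
p_bump[2] += 0.1                        # raise the price of good 3 (index 2)
s1 = shares(p_bump)

# Only good 3 and its price-neighbors (goods 2 and 4) move; the shares of
# the outside good (entry 0) and good 1 are exactly unchanged.
print(np.round(s1 - s0, 4))
```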
The Pure Logit. I.e., $U_{i,j} = \delta_j + \epsilon_{i,j}$ with ε independent over products and agents. The IIA problem: the distribution of a consumer's preferences over products other than the product it bought does not depend on the product it bought. Intuitively this is problematic; we think that when a consumer switches out of a car, say because of a price rise, the consumer will switch to other cars with similar characteristics.

With the double exponential functional form for the distribution, it gives us an analytic form for the probabilities:

$$s_j(\theta) = \frac{\exp[\delta_j - p_j]}{1 + \sum_q \exp[\delta_q - p_q]}, \qquad s_0(\theta) = \frac{1}{1 + \sum_q \exp[\delta_q - p_q]}.$$

This implies the following.
• Cross price derivatives are $s_j s_k$. Two goods with the same shares have the same cross price elasticities with any other good, regardless of those goods' characteristics. So if both a Yugo and a high-end Mercedes increased their price by a thousand dollars, since they had about the same share[2] the model would predict that those who left the Yugo would be likely to switch to exactly the same cars that those who left the Mercedes did.
• The own price derivative is $\partial s/\partial p = -s(1 - s)$; it only depends on shares. Two goods with the same share must have the same markup in a single-product-firm "Nash in prices" equilibrium. So the model would also predict the same equilibrium markups for the Yugo and the Mercedes.

Just as in the prior case, no data will ever change these implications of the two models. If your estimates do not satisfy them, there is a programming error.

[2] The Yugo was the lowest priced car introduced in the States in the 1980s-90s and it only lasted in the market for a few years; it had a low share despite a low price because it was undesirable; the Mercedes had a low share because of a high price.
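The proportional-substitution implication is easy to verify numerically (my sketch; the δ's and prices are invented, chosen so the "Yugo-like" and "Mercedes-like" goods have equal shares):

```python
import numpy as np

def logit_shares(delta, p):
    # s_j = exp(delta_j - p_j) / (1 + sum_q exp(delta_q - p_q))
    v = np.exp(delta - p)
    return v / (1.0 + v.sum())

delta = np.array([0.5, 4.0, 1.0, 2.0])  # good 0: cheap/low quality; good 1: dear/high quality
p     = np.array([0.2, 3.7, 0.5, 1.0])  # delta - p equal for goods 0 and 1 -> equal shares

s = logit_shares(delta, p)
# Cross-price derivative of s_k w.r.t. p_j is s_j * s_k (k != j): it depends
# on the two shares only, never on how similar the goods are. Because goods
# 0 and 1 have the same share, the two printed vectors are identical.
for j in [0, 1]:
    print(f"d s_k / d p_{j} =", np.round(s[j] * np.delete(s, j), 5))
```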
Generalizations.

The generalizations below are designed to mitigate these problems, but when the generalizations are not used in a rich enough way in empirical work, one can see the implications of a lesser version of the problems in the empirical results.

• Generalization 1. ADT (1993) call this the "Ideal Type" model. Berry and Pakes (2007) call this the "Pure Characteristics" model. It combines and generalizes the vertical and the horizontal type models (consumers can agree on the ordering of some characteristics and not others):

$$U_{i,j} = f(\nu_i, p_j) + \sum_k \sum_r x_{j,k}\, \nu_{i,r}\, \theta_{k,r},$$

where again we have distinguished price from the other product characteristics, as we often want to treat price somewhat differently. A special case of this (after appropriate normalizations) is the "ideal type" model

$$U_{i,j} = f(\nu_i, p_j) + \delta_j + \sum_k \alpha_k (x_{j,k} - \nu_{i,k})^2.$$

• Generalization 2. BLP (1995):

$$U_{i,j} = f(\nu_i, p_j) + \sum_k \sum_r x_{j,k}\, \nu_{i,r}\, \theta_{k,r} + \epsilon_{i,j}.$$
BLP vs. the Pure Characteristics Model. The only difference between BLP and the pure characteristics model is the addition of the logit errors (the ε_{i,j}) to the utility of each choice. As a result, from the point of view of modelling shares, it can be shown that the pure characteristics model is a limiting case of BLP; but it is not a special case, and the difference matters.[3] In particular, it can only be approached if parameters grow without bound. This is a case computers have difficulty handling, and Monte Carlo evidence indicates that when we use the BLP specification to approximate a data set which abides by the pure characteristics model, it can do poorly.

This could be problematic, because the logit errors are there for computational convenience rather than for their likely ability to mimic real-world phenomena. Disturbances in the utility are usually thought of as emanating from either missing product or missing consumer characteristics, and either of these would generate correlation over products for a given individual. As a result, we might like a BLP-type specification that nests the pure characteristics model.

[3] It is easiest to see this if you rewrite the BLP utility equation with a different normalization. Instead of normalizing the coefficient of ε to one, let it be σ > 0. Now divide by σ. As σ → 0, the shares from BLP converge to those from the pure characteristics model.
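The limiting relationship in footnote 3 can be checked numerically. In the sketch below (my construction; all numbers are invented), shares are computed with logit errors of scale σ and compared to the hard-argmax pure characteristics shares as σ → 0:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.array([1.0, 2.0, 3.0])       # one hypothetical characteristic per good
p = np.array([1.0, 2.2, 3.8])
nu = rng.normal(1.0, 0.5, size=100_000)

# Mean utilities per consumer: U_{i,j} = nu_i * x_j - p_j; outside good = 0.
U = nu[:, None] * x[None, :] - p[None, :]
U = np.column_stack([np.zeros(len(nu)), U])

def blp_shares(U, sigma):
    # With i.i.d. logit errors of scale sigma, consumer i's choice
    # probabilities are softmax(U_i / sigma); aggregate by averaging.
    V = U / sigma
    V -= V.max(axis=1, keepdims=True)          # numerical stability
    P = np.exp(V)
    P /= P.sum(axis=1, keepdims=True)
    return P.mean(axis=0)

pure = np.bincount(U.argmax(axis=1), minlength=U.shape[1]) / len(nu)
for sigma in [1.0, 0.1, 0.01]:
    print(f"sigma = {sigma:5.2f}:", np.round(blp_shares(U, sigma), 4))
print("pure characteristics:", np.round(pure, 4))
```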
When might the logit errors cause problems? In markets with a lot of competing goods, the BLP specification does not allow cross-price elasticities to become too large. This is because the independence of the logit errors implies that there are always going to be some consumers who like each good a lot more than the other goods, and this bounds markups and cross-price elasticities. There is also the question of the influence of these errors on welfare estimates, which we will come back to in another lecture.

These are reasons to prefer the pure characteristics model. However, for reasons to be described below, it is harder (though not impossible) to compute, and so far the recently introduced computational techniques, which you will be introduced to in the subsequent lecture, have not helped much on this problem. This is the reason that you have seen more of BLP than the pure characteristics model. Despite this, its intuitive appeal and increased computer power have induced a number of papers using the pure characteristics model, often in markets where there are one or two dominant vertical characteristics (like computer chips; see Nosko, 2011). We come back to this in our discussion of computation.