Topic 4.2

  1. ECON 377/477
  2. Topic 4.2: Stochastic Frontier Analysis, Part 2
  3. Outline
     Distance functions
     Cost frontiers
     Decomposing cost efficiency
     Scale efficiency
     Panel data models
     Accounting for the production environment
     Conclusions
  4. Distance functions
     Distance functions can be used to estimate the characteristics of multiple-output production technologies in cases where we have no price information and/or it is inappropriate to assume that firms minimise costs or maximise revenues
     Examples arise when an industry is regulated
     Input distance functions tend to be used instead of output distance functions when firms have more control over inputs than outputs, and vice versa
     We consider only input distance functions
  5. Distance functions
     Assume we have access to cross-sectional data on I firms
     An input distance function defined over M outputs and N inputs takes the form

        d_i^I = d^I(x_1i, …, x_Ni, q_1i, …, q_Mi)

     where x_ni is the n-th input of firm i; q_mi is the m-th output; and d_i^I ≥ 1 is the maximum factor by which the input vector can be radially contracted without changing the output vector
  6. Distance functions
     The function d^I(.) is non-decreasing, linearly homogeneous and concave in inputs, and non-increasing and quasi-concave in outputs
     The first step in econometric estimation of an input distance function is to choose a functional form for d^I(.)
     It is convenient to choose a functional form that expresses the log-distance as a linear function of (transformations of) inputs and outputs
  7. Distance functions
     For example, if we choose the Cobb-Douglas functional form, the model becomes

        ln d_i^I = β_0 + Σ_m α_m ln q_mi + Σ_n β_n ln x_ni + v_i

     where v_i is a random variable introduced to account for errors of approximation and other sources of statistical noise
     This function is non-decreasing, linearly homogeneous and concave in inputs if β_n ≥ 0 for all n and if Σ_n β_n = 1
  8. Distance functions
     It is also quasi-concave in outputs if non-linear functions of the first- and second-order derivatives of d_i^I with respect to the outputs are non-negative
     Econometric estimation would be reasonably straightforward were it not for the fact that the dependent variable is unobserved
  9. Distance functions
     Some substitution and re-arrangement enables us to obtain a homogeneity-constrained model

        -ln x_1i = β_0 + Σ_m α_m ln q_mi + Σ_{n>1} β_n ln(x_ni/x_1i) + v_i - u_i

     where u_i = ln d_i^I is a non-negative variable associated with technical inefficiency
     Our decision to express ln d_i^I as a linear function of inputs and outputs results in a model that has the form of the stochastic production frontier
 10. Distance functions
     This model is discussed in Part 1 of this topic
     It follows that we can estimate the parameters of the model using the ML technique that is also discussed in Part 1
     A radial input-oriented measure of technical efficiency is TE_i = 1/d_i^I = exp(-u_i)
     But there are two common problems in the estimation of distance functions
 11. Distance functions
     These problems are:
     - The explanatory variables may be correlated with the composite error term
     - Estimated input distance functions often fail to satisfy the concavity and quasi-concavity properties implied by economic theory
     A solution to the first problem is to estimate the model in an instrumental variables framework
     A solution to the second problem is to impose regularity conditions by estimating the model in a Bayesian framework
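As a rough illustration of the homogeneity-constrained regression, the following sketch fits a Cobb-Douglas input distance function to synthetic data by corrected OLS (COLS) rather than the ML technique described above. All data, coefficient values and variable names are invented, and the sketch deliberately ignores the endogeneity problem noted on the previous slide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cross-section: 200 firms, 2 inputs, 1 output (illustrative only)
I = 200
x1 = rng.lognormal(1.0, 0.3, I)
x2 = rng.lognormal(1.0, 0.3, I)
v = rng.normal(0.0, 0.05, I)              # statistical noise
u_true = np.abs(rng.normal(0.0, 0.2, I))  # inefficiency, u_i = ln d_i >= 0
q = x1**0.6 * x2**0.4 * np.exp(v - u_true)

# Homogeneity-constrained Cobb-Douglas input distance function:
#   -ln x1 = b0 + a*ln q + b2*ln(x2/x1) + v - u
y = -np.log(x1)
X = np.column_stack([np.ones(I), np.log(q), np.log(x2 / x1)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# COLS: shift the fitted function so that every residual is non-positive,
# then treat the shortfall as technical inefficiency
resid = y - X @ coef
u_hat = resid.max() - resid
te = np.exp(-u_hat)   # radial input-oriented technical efficiency, 0 < TE <= 1
```

The COLS shift guarantees that the most efficient firm in the sample gets TE = 1; a full ML treatment would instead impose the half-normal distribution on u_i.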
 12. Cost frontiers
     When price data are available and it is reasonable to assume firms minimise costs, we can estimate the economic characteristics of the production technology (and predict cost efficiency) using a cost frontier
     In the case where we have cross-sectional data, the cost frontier model can be written in the general form

        c_i ≥ c(w_1i, w_2i, …, w_Ni, q_1i, q_2i, …, q_Mi)

 13. Cost frontiers
     In this equation, c_i is the observed cost of firm i, w_ni is the n-th input price and q_mi is the m-th output
     Note that c(.) is a cost function that is non-decreasing, linearly homogeneous and concave in prices
     The implication of the equation is that observed cost is greater than or equal to minimum cost
     The first step in estimating the relationship is to specify a functional form for c(.)
 14. Cost frontiers
     The Cobb-Douglas cost frontier model is

        ln c_i = β_0 + Σ_n β_n ln w_ni + Σ_m α_m ln q_mi + v_i + u_i

     where v_i is a symmetric random variable representing errors of approximation and other sources of statistical noise, and u_i is a non-negative variable representing inefficiency
     This function is non-decreasing, linearly homogeneous and concave in prices if the β_n are non-negative and satisfy the constraint Σ_n β_n = 1
 15. Cost frontiers
     A translog model is obtained in a similar way
     Both models can be written in the compact form ln c_i = x_i′β + v_i + u_i
     A measure of cost efficiency is the ratio of minimum cost to observed cost, which can easily be shown to be CE_i = exp(-u_i)
     Check CROB (pp. 267-269), where they present annotated SHAZAM output from the estimation of a half-normal translog cost frontier defined over a single output and three inputs
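The cost-frontier logic can be sketched on synthetic data, again substituting corrected OLS for the ML/SHAZAM estimation reported in CROB. Note the sign flip relative to a production frontier: inefficiency pushes observed cost above the frontier, so the smallest residual defines the frontier. All numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 200 firms, 2 input prices, 1 output (illustrative only)
I = 200
w1 = rng.lognormal(0.0, 0.2, I)
w2 = rng.lognormal(0.0, 0.2, I)
q = rng.lognormal(1.0, 0.4, I)
v = rng.normal(0.0, 0.05, I)
u_true = np.abs(rng.normal(0.0, 0.2, I))   # cost inefficiency inflates cost
c = w1**0.5 * w2**0.5 * q**0.8 * np.exp(v + u_true)

# Linear homogeneity in prices imposed by normalising by w1:
#   ln(c/w1) = b0 + b2*ln(w2/w1) + a*ln q + v + u
y = np.log(c / w1)
X = np.column_stack([np.ones(I), np.log(w2 / w1), np.log(q)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# COLS: the smallest residual defines the frontier; the excess is inefficiency
resid = y - X @ coef
u_hat = resid - resid.min()
ce = np.exp(-u_hat)   # cost efficiency = minimum cost / observed cost
```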
 16. Decomposing cost efficiency
     When we have data on input quantities or cost-shares, cost efficiency can be decomposed into technical and allocative efficiency components
     One approach involves estimating a cost frontier together with a subset of cost-share equations
     We focus on a slightly different decomposition method, estimating a production frontier together with a subset of the first-order conditions for cost minimisation
 17. Decomposing cost efficiency
     Consider a single-output Cobb-Douglas production frontier:

        ln q_i = β_0 + Σ_n β_n ln x_ni + v_i - u_i

     Minimising cost subject to this technology constraint entails writing out the Lagrangean and setting the first-order derivatives to zero
     Taking the logarithm of the ratio of the first and n-th of these first-order conditions yields

        ln(x_1i/x_ni) = ln(β_1 w_ni / β_n w_1i) + η_ni    for n = 2, …, N

     where η_ni represents allocative inefficiency
 18. Decomposing cost efficiency
     In this equation, η_ni is a random error term introduced to represent allocative inefficiency
     It is positive, negative or zero depending on whether the firm over-utilises, under-utilises or correctly utilises input 1 relative to input n
     A firm is regarded as allocatively efficient if and only if η_ni = 0 for all n
     Observe that inputs appear in ratio form
 19. Decomposing cost efficiency
     Thus, a radial expansion of the input vector (an increase in technical inefficiency) will not cause a departure from the first-order conditions
     But a change in the input mix (allocative inefficiency) clearly will
     We can estimate the N equations by ML under the (reasonable) assumptions that the v_i, u_i and η_ni are iid univariate normal, half-normal and multivariate normal random variables, respectively
 20. Decomposing cost efficiency
     That is, v_i ~ iid N(0, σ_v²), u_i ~ iid N⁺(0, σ_u²) and η_i = (η_2i, …, η_Ni)′ ~ iid N(0, Σ)
     Scale economies are measured by r = Σ_n β_n
 21. Decomposing cost efficiency
     CROB (p. 271) show that the cost function and its associated system take a form in which the composed error contains a technical inefficiency term involving u_i/r and an allocative inefficiency term A_i, where A_i is a function of the η_ni and α is a non-linear function of the β_n
 22. Decomposing cost efficiency
     The term u_i/r measures the increase in log-cost due to technical inefficiency
     The term A_i - ln r measures the increase due to allocative inefficiency
     A measure of cost efficiency is the ratio of minimum cost to observed cost:
        CE_i = CTE_i × CAE_i
     where the component CTE_i = exp(-u_i/r) is due to technical inefficiency, and the component CAE_i = exp(ln r - A_i) is due to allocative inefficiency
 23. Decomposing cost efficiency
     We can obtain point predictions for CTE_i and CAE_i by substituting predictions for u_i and η_ni into these expressions
     If the technology exhibits constant returns to scale (r = 1), then:
        CTE_i = TE_i = exp(-u_i)
        CAE_i = AE_i ≡ exp(-A_i)
     Thus CE_i = TE_i × AE_i, which is the familiar expression from Topic 2
 24. Decomposing cost efficiency
     CROB illustrate the method and present annotated SHAZAM output in Table 10.2 from the estimation of a three-input Cobb-Douglas production frontier and decomposition of cost efficiency into its two components
     For simplicity, they estimate the production frontier in a single-equation framework, although more efficient estimators could be obtained by estimating the frontier in a seemingly unrelated regression framework
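The decomposition can be checked with a small numerical example. The values of r, u and A below are invented purely for illustration (with A ≥ ln r so that the allocative component does not exceed one):

```python
import math

# Illustrative numbers only (not taken from CROB)
r, u, A = 1.25, 0.30, 0.35

cte = math.exp(-u / r)            # cost-technical efficiency component
cae = math.exp(math.log(r) - A)   # cost-allocative efficiency component
ce = cte * cae                    # CE_i = CTE_i x CAE_i

# Under constant returns to scale (r = 1) the decomposition collapses
# to the familiar CE = TE x AE expression from Topic 2
te, ae = math.exp(-u), math.exp(-A)
assert math.isclose(math.exp(-u / 1.0) * math.exp(math.log(1.0) - A), te * ae)
```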
 25. Scale efficiency
     To measure scale efficiency, we must have a measure of productivity and a method for identifying the most productive scale size (MPSS)
     In the case of a single-input production function, we can measure productivity using the average product (AP)
     The MPSS is the point at which AP(x) is maximised
     The first-order condition for a maximum can easily be rearranged to show that the MPSS is the point where the elasticity of scale is 1 and the firm experiences locally constant returns to scale
 26. Scale efficiency
     To measure scale efficiency, we set the elasticity of scale to 1 and solve for the MPSS, denoted x*
     Scale efficiency at any input level x is SE(x) = AP(x)/AP(x*)
     This procedure generalises to the multiple-input case, although a measure of productivity is a little more difficult to conceptualise
 27. Scale efficiency
     Think of the input vector x as one unit of a composite input, so that kx represents k units of that input
     A measure of productivity is the ray average product, RAP(k) = f(kx)/k, where f(.) is the production function
     Set the elasticity of scale to 1 and solve for the optimal number of units of the composite input, denoted k*
 28. Scale efficiency
     A measure of scale efficiency at input level kx is

        SE(kx) = [f(kx)/k] / [f(k*x)/k*]

     or, if k = 1, SE(x) = k* f(x)/f(k*x)
     A solution can be obtained for a translog functional form and the associated measure of scale efficiency derived
 29. Scale efficiency
     If the production frontier takes the translog form

        ln q_i = β_0 + Σ_n β_n ln x_ni + 0.5 Σ_n Σ_m β_nm ln x_ni ln x_mi + v_i - u_i

     the scale efficiency measure becomes

        SE(x) = exp[(1 - ε(x))² / (2β)]

     where β = Σ_n Σ_m β_nm
 30. Scale efficiency
     Here ε(x) is the elasticity of scale evaluated at x:

        ε(x) = Σ_n (β_n + Σ_m β_nm ln x_m)

     If the production frontier is concave in inputs, β will be less than zero and the scale efficiency measure will be less than or equal to one
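The translog calculation can be sketched numerically. The sketch assumes the scale-efficiency measure SE(x) = exp[(1 - ε(x))²/(2β)] with β = Σ_n Σ_m β_nm, which is consistent with the property that SE ≤ 1 whenever β < 0; all coefficient values and the evaluation point are invented:

```python
import numpy as np

# Illustrative (invented) translog coefficients:
#   ln q = b0 + b'lnx + 0.5 * lnx' B lnx
b = np.array([0.5, 0.4])                # first-order coefficients b_n
B = np.array([[-0.06, 0.02],
              [0.02, -0.06]])           # symmetric second-order matrix b_nm
lnx = np.log(np.array([2.0, 3.0]))      # evaluation point x = (2, 3)

# Elasticity of scale at x: sum over n of (b_n + sum over m of b_nm * ln x_m)
eps = b.sum() + (B @ lnx).sum()
beta = B.sum()                          # negative under concavity in inputs
se = np.exp((1.0 - eps) ** 2 / (2.0 * beta))   # scale efficiency, <= 1 here
```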
 31. Panel data models
     We now extend the discussion of frontier models to the case where panel data are available
     Panel data sets enable us to obtain more efficient estimators of the unknown parameters and more efficient predictors of technical efficiencies
     They often allow us to:
     - relax some of the strong distributional assumptions
     - obtain consistent predictions of TEs
     - investigate changes in technical efficiencies
 34. Panel data models
     They also enable us to investigate changes in the underlying production technology over time
     A panel data model can be written as

        ln q_it = x_it′β + v_it - u_it

     where the subscript t represents time
     If we assume the v_it and u_it are independently distributed, we can estimate the parameters of this model using the methods described in Topic 4.1
 35. Panel data models
     A problem with assuming the u_it are independently distributed is that we fail to reap any of the benefits listed above
     Moreover, for many industries the independence assumption is unrealistic: all other things being equal, we expect efficient firms to remain reasonably efficient from period to period, and we hope that inefficient firms improve their efficiency levels over time
     For these reasons, we need to impose some structure on the inefficiency effects
 36. Panel data models
     It is common to classify structures on the inefficiency effects according to whether they are time-invariant or time-varying
     One of the simplest structures we can impose is

        u_it = u_i,    i = 1, …, I; t = 1, …, T

     where u_i is treated as either a fixed parameter or a random variable
     These models are known as the fixed effects model and the random effects model, respectively
 37. Panel data models
     The fixed effects model can be estimated in a standard regression framework using dummy variables
     The estimated model can only measure efficiency relative to the most efficient firm in the sample, so our estimates may be unreliable if the number of firms is small
     The random effects model can be estimated using either least squares or ML techniques
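The dummy-variable estimation of the fixed effects model can be sketched on a synthetic panel. Normalising against the largest estimated firm intercept, so that efficiency is measured relative to the best firm in the sample as the slide describes, follows the usual fixed-effects convention; all data below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
I, T = 10, 8                              # firms and periods (synthetic panel)
alpha = rng.normal(0.0, 0.3, I)           # time-invariant firm effects
x = rng.normal(0.0, 1.0, (I, T))
y = alpha[:, None] + 0.5 * x + rng.normal(0.0, 0.1, (I, T))

# Dummy-variable regression: one intercept per firm plus a common slope
D = np.kron(np.eye(I), np.ones((T, 1)))   # firm dummies, shape (I*T, I)
X = np.column_stack([D, x.reshape(-1, 1)])
coef, *_ = np.linalg.lstsq(X, y.reshape(-1), rcond=None)
a_hat, slope = coef[:I], coef[I]

# Efficiency is measured relative to the best (largest-intercept) firm
u_hat = a_hat.max() - a_hat
te = np.exp(-u_hat)                       # the benchmark firm gets TE = 1
```

With only I = 10 firms the benchmark is itself noisily estimated, which is exactly why the slide warns that estimates may be unreliable when the number of firms is small.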
 38. Panel data models
     The ML approach involves making stronger distributional assumptions concerning the u_i
     Estimating models in a random effects framework using the ML method allows us to disentangle the effects of inefficiency and technological change
 39. Panel data models
     The likelihood function for this model is a generalisation of the likelihood function for the half-normal stochastic frontier model discussed in Topic 4.1
     Formulas for firm-specific and industry efficiencies are also generalisations of the formulas presented in Topic 4.1
     The hypothesis-testing procedures discussed in Topic 4.1 are also applicable
 40. Panel data models
     Models with time-invariant inefficiency effects can be conveniently estimated using FRONTIER and LIMDEP
     CROB illustrate this estimation in Table 10.3, which contains annotated FRONTIER output from the estimation of a truncated-normal frontier
     Note that significant differences exist between the first-order coefficient estimates reported in this table and those reported in Table 9.6, where no account is taken of the panel nature of the data
 41. Panel data models
     Two models that allow for time-varying technical inefficiency are the Kumbhakar model

        u_it = [1 + exp(αt + βt²)]⁻¹ u_i

     and the Battese and Coelli model

        u_it = exp[-η(t - T)] u_i

     where α, β and η are unknown parameters to be estimated
     The Battese and Coelli function involves only one unknown parameter, and is less flexible
 42. Panel data models
     A limitation of both functions is that they do not allow for a change in the rank ordering of firms over time
     The firm that is ranked n-th in the first time period is always ranked n-th
     That is, if u_i < u_j, then u_it < u_jt for all t
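A quick numerical check of the rank-preservation property under the Battese and Coelli specification (all parameter values invented):

```python
import numpy as np

# Battese and Coelli decay specification: u_it = exp(-eta * (t - T)) * u_i
eta, T = 0.08, 10                 # illustrative parameter values
t = np.arange(1, T + 1)
u_i, u_j = 0.4, 0.2               # terminal-period inefficiencies, u_j < u_i

scale = np.exp(-eta * (t - T))    # common multiplicative factor; equals 1 at t = T
u_it, u_jt = scale * u_i, scale * u_j
```

Because the factor exp(-η(t - T)) is common to every firm, the less efficient firm stays less efficient in every period, which is precisely the limitation the slide describes.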
 43. Panel data models
     The Kumbhakar and the Battese and Coelli models can both be estimated under the assumption that u_i has a truncated normal distribution, u_i ~ iid N⁺(μ, σ²)
     Again, the likelihood function is a generalisation of the likelihood function for the half-normal stochastic frontier model, as are the formulas for firm-specific and industry efficiencies
     Hypotheses concerning individual coefficients can be tested using a z test or an LR test; hypotheses involving more than one coefficient are usually tested using an LR test
 44. Panel data models
     Null hypotheses of special interest are:
     - H0: α = β = 0 or H0: η = 0 (time-invariant inefficiency effects)
     - H0: μ = 0 (half-normal inefficiency effects at time period T)
     CROB present annotated FRONTIER output from the estimation of a frontier in Table 10.4
     They are unable to reject either null hypothesis: that the technological change effect is zero and that η = 0
 46. Panel data models
     These hypothesis test results suggest that the model has difficulty distinguishing between output increases due to technological progress and output increases due to improvements in technical efficiency
     Several more flexible models are discussed in the efficiency literature
     Notably, Cuesta (2000) specifies a model of the form u_it = exp[-η_i(t - T)] u_i, which generalises the Battese and Coelli model and allows the temporal pattern of inefficiency effects to vary across firms
 47. Accounting for the production environment
     The ability of a manager to convert inputs into outputs is often influenced by exogenous variables that characterise the environment in which production takes place
     It is useful to distinguish between non-stochastic variables that are observable at the time key production decisions are made and unforeseen stochastic variables that can be regarded as sources of production risk (events of any type that might lead managers to seek some form of liability insurance)
 48. Accounting for the production environment
     The simplest way to account for non-stochastic environmental variables is to incorporate them directly into the non-stochastic component of the production frontier
     In the case of cross-sectional data, this leads to a model of the form

        ln q_i = x_i′β + z_i′γ + v_i - u_i

     where z_i is a vector of (transformations of) environmental variables and γ is a vector of unknown parameters
 49. Accounting for the production environment
     This model has exactly the same error structure as the conventional stochastic frontier model discussed in Topic 4.1
     Thus, all the estimators and testing procedures discussed in that part of the topic are available
     Our predictions of firm-specific technical efficiency now vary with both the traditional inputs and the environmental variables
 50. Accounting for the production environment
     The preferred method of dealing with observable environmental variables is to allow them to influence the stochastic component of the production frontier directly
     Assume v_i ~ iid N(0, σ_v²) and u_i ~ N⁺(z_i′δ, σ_u²), where δ is a vector of unknown parameters
 51. Accounting for the production environment
     The inefficiency effects in this frontier model have distributions that vary with z_i, so they are no longer identically distributed
     The likelihood function is a generalisation of the likelihood function for the conventional model, as are measures of firm-specific and industry efficiency
     The model has also been generalised to the panel data case
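A small simulation illustrates inefficiency effects whose distribution shifts with an environmental variable. The truncated-normal specification u_i ~ N⁺(z_i δ, σ_u²) and all parameter values below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_u(mu, sigma, size, rng):
    """Draw from N(mu, sigma^2) truncated below at zero (simple rejection sampler)."""
    out = np.empty(0)
    while out.size < size:
        cand = rng.normal(mu, sigma, 4 * size)
        out = np.concatenate([out, cand[cand >= 0.0]])
    return out[:size]

# Hypothetical parameters: the inefficiency mean shifts with an environmental variable z
delta, sigma_u = 0.5, 0.2
z = np.array([0.0, 0.4, 0.8])
mean_u = np.array([draw_u(delta * zi, sigma_u, 5000, rng).mean() for zi in z])
# Firms facing a harsher environment (larger z * delta) have larger expected inefficiency
```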
 52. Accounting for the production environment
     A simple way to account for production risk is to append another random variable to the frontier model to represent the combined effects of any variables that are unobserved at the time input decisions are made
     If we assume this random variable has a symmetric distribution, it is difficult to distinguish from the noise v_i
     Alternatively, if we assume it has a non-negative distribution, it is difficult to distinguish from the inefficiency effect u_i
 53. Accounting for the production environment
     This suggests that, for all intents and purposes, we can persist with the conventional stochastic frontier model, although we should recognise that the two error components now measure the effects of noise, inefficiency and risk
     But the conventional frontier model has two undesirable risk properties:
     - The signs of the marginal products (MPs) are the same as the signs of the associated marginal risks
     - The model does not permit substitutability between state-contingent outputs
 55. Accounting for the production environment
     One way to overcome the first problem is to assume the composed error term is heteroskedastic
     One way to allow for substitution between state-contingent outputs is to estimate a state-contingent stochastic frontier of the form

        ln q_i = x_i′β_j + v_i - u_i

     where β_j is a vector of unknown parameters and v_i and u_i represent noise and inefficiency, respectively (but not risk)
 56. Accounting for the production environment
     This model is identical to the conventional stochastic frontier model, except that the coefficient vector β_j is permitted to vary across risky states of nature, j = 1, …, J
     Estimation is complicated by the fact that states of nature are typically unobserved or data are sparse
     This problem can be overcome by estimating the model in a Bayesian mixtures framework, and using the estimated model to identify output shortfalls due to inefficiency and output shortfalls due to adverse conditions
 57. Conclusions
     Two other possible methods for estimating multiple-output technologies are not discussed
     First, we can use profit frontiers when input and output prices are available and it is reasonable to assume firms maximise profits
     Methods for estimating profit frontiers are similar to those available for estimating cost frontiers
     Second, we can aggregate multiple outputs into a single output measure using index number methods, and estimate the technology in a conventional single-output framework
 58. Conclusions
     The decision to estimate a distance function, cost frontier, profit frontier or single-output production frontier is one of the many decisions facing researchers who want to estimate efficiency using a parametric approach
     Researchers must also make choices concerning functional forms, error distributions, estimation methods and software
     The need to make so many choices is often seen as a disadvantage of the parametric approach
 59. Conclusions
     We have two simple pieces of advice:
     - Always make decisions on a case-by-case basis
     - Whenever possible, explore alternative models and estimation methods and (formally or informally) assess the adequacy and robustness of the results obtained
