Topic 1.4

  1. ECON 377/477
  2. Topic 1.4
     Econometric estimation of production technologies
  3. Outline
     Introduction
     Production, cost and profit functions
     Single equation estimation
     Imposing equality restrictions
     Hypothesis testing
     Systems estimation
  4. Introduction
     This section provides an overview of econometric methods for estimating economic relationships
     We focus mainly on the case where we have access to cross-sectional data (observations on several firms in a single period), but panel data (observations on several firms in several periods) are used in some assignment work
     The econometric concepts and methods we discuss carry over, often with little or no modification, to cases where we have time-series data (observations on a single firm in several periods) or panel data
  5. Introduction
     Some of the more common functional forms found in the applied economics literature include the linear, Cobb-Douglas, normalised quadratic and translog functional forms
     In the next section, we describe how the parameters of different models can be estimated using the methods of ordinary least squares (OLS), maximum likelihood (ML) and nonlinear least squares
     In the following section, we show how these methods can be adapted to ensure the parameters of our models satisfy equality constraints implied by economic theory
  6. Introduction
     In the final section, we show how to test the validity of these constraints using F-tests and likelihood-ratio (LR) tests
     At this point, students will have developed enough econometrics background to understand the basic stochastic frontier models
  7. Production, cost and profit functions
     Production, cost and profit functions express a single dependent variable as a function of one or more explanatory variables
     Mathematically, all these different functions can be written in the form:
        y = f(x1, x2, …, xN)
     where y is the dependent variable; the xn (n = 1, …, N) are explanatory variables; and f(.) is a mathematical function
     The first step in estimating the relationship between the dependent and explanatory variables is to specify the algebraic form of f(.)
  8. Common functional forms
     Different algebraic forms of f(.) give rise to different models
     Some common functional forms are listed in CROB, Table 8.1, where the βn and βmn are unknown parameters to be estimated
     When choosing between these different forms, we usually give preference to those that are:
        Flexible
        Linear in the parameters
        Regular
        Parsimonious
  9. Common functional forms
     The βmn parameters satisfy the identifying condition βmn = βnm for all n and m
     This condition is sometimes known as a symmetry condition
     Note that some functional forms are special cases of others
     For example, the Cobb-Douglas can be obtained from the translog by setting all βmn = 0
  10. Accounting for technological change
     Technological advances often cause economic relationships (especially production functions) to change over time
     If we have observations over time, we usually account for technological change by including a time trend in our model
     See the examples in CROB (p. 213) for the linear, Cobb-Douglas and translog functions
     The way we introduce a time trend into the model should reflect industry-specific knowledge of technological developments and how they affect important characteristics of economic behaviour
  11. Single equation estimation
     We can account for the combined effects of errors (known as statistical noise) by including random variables in our models
     For the time being, we focus on models that are linear in the parameters and write:
        yi = xi'β + vi,   i = 1, …, I
     where yi denotes the i-th observation on the dependent variable; xi is a K×1 vector containing the explanatory variables; β is an associated K×1 vector of unknown parameters; vi is a random error term representing statistical noise; and I denotes the number of observations in the data set
  12. Single equation estimation
     Depending on the functional form chosen and the type of economic relationship being estimated, the dependent and explanatory variables in the model may be different functions of input and output prices and quantities
     Examples are logarithms, square roots and ratios
     If we choose a translog model to explain variations in an output qi as a function of inputs x1i, x2i and x3i, then the model for yi = ln qi can be expressed in compact form as shown on the next slide
  13. Single equation estimation
     In compact form, the translog model is:
        yi = xi'β + vi,   (8.13)
     where
        xi = (1, ln x1i, ln x2i, ln x3i, 0.5(ln x1i)², 0.5(ln x2i)², 0.5(ln x3i)², ln x1i ln x2i, ln x1i ln x3i, ln x2i ln x3i)'
     and
        β = (β0, β1, β2, β3, β11, β22, β33, β12, β13, β23)'
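To make the compact form concrete, here is a minimal Python sketch that builds the translog regressor matrix for three inputs; the function name, the input arrays (q, x1, x2, x3) and the column ordering are illustrative assumptions rather than anything prescribed by CROB.

```python
import numpy as np

def translog_design(x1, x2, x3):
    """Build the translog regressor matrix x_i for three inputs.
    Columns: constant, log-inputs, 0.5*(squared logs), cross-products."""
    l1, l2, l3 = np.log(x1), np.log(x2), np.log(x3)
    return np.column_stack([
        np.ones_like(l1),                        # beta0
        l1, l2, l3,                              # beta1, beta2, beta3
        0.5 * l1**2, 0.5 * l2**2, 0.5 * l3**2,   # beta11, beta22, beta33
        l1 * l2, l1 * l3, l2 * l3,               # beta12, beta13, beta23
    ])

# Usage (hypothetical data): y = np.log(q); X = translog_design(x1, x2, x3)
```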
  14. Single equation estimation
     Our next task is to estimate the unknown parameter vector β
     The two main estimation methods are OLS and ML
     Both estimation methods are underpinned by important assumptions concerning the error terms
  15. Ordinary least squares
     The most common assumptions made in OLS concerning the errors are:
        E(vi) = 0 for all i (zero mean)
        var(vi) = σ² for all i (homoskedastic)
        cov(vi, vj) = 0 for all i ≠ j (uncorrelated)
  16. Single equation estimation
     The least squares approach to estimating β involves minimising the sum of squared deviations between the yi and their means, xi'β
     The function that expresses this sum of squares as a function of β is:
        S(β) = Σi (yi – xi'β)²
  17. Single equation estimation
     The OLS estimator has a mean of β (unbiased)
     It is also a linear function of the yi observations and can be shown to have a variance that is no larger than the variance of any other linear unbiased estimator (efficient)
     We summarise these properties by saying that the OLS estimator is the best linear unbiased estimator (BLUE) of β
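A minimal sketch of OLS estimation in Python; it assumes a dependent-variable vector y and regressor matrix X (for example, from the hypothetical translog_design sketch above) and returns the coefficient estimates with conventional standard errors.

```python
import numpy as np

def ols(y, X):
    """OLS for the linear model y = X @ beta + v."""
    I, K = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimises the sum of squared residuals S(beta)
    resid = y - X @ b
    sigma2 = resid @ resid / (I - K)            # unbiased estimator of the error variance
    cov_b = sigma2 * np.linalg.inv(X.T @ X)     # estimated covariance matrix of b
    return b, np.sqrt(np.diag(cov_b)), sigma2
```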
  18. Maximum likelihood estimation
     The concept of ML estimation is underpinned by the idea that a particular sample of observations is more likely to have been generated from some distributions than from others
     The ML estimate of an unknown parameter is defined to be the value of the parameter that maximises the probability (or likelihood) of randomly drawing a particular sample of observations
     To use the ML principle to estimate the parameters of the classical linear regression model, we first need to make an assumption concerning the distributions of the error terms
  19. Maximum likelihood estimation
     The most common assumption is that the errors are normally distributed. Combining this assumption with the three OLS assumptions, we write:
        vi ~ iid N(0, σ²)
     which says the errors are independently and identically distributed normal random variables with zero means and variances σ²
     Using the relationship between yi and vi together with well-known properties of normal random variables, we can then write:
        yi ~ N(xi'β, σ²)
  20. Maximum likelihood estimation
     The joint density function for the vector of observations is:
        L(β, σ²) = (2πσ²)^(–I/2) exp{–(1/(2σ²)) Σi (yi – xi'β)²}
     This joint probability density function (pdf) is known as the likelihood function
     It expresses the likelihood of observing the sample observations as a function of the unknown parameters β and σ²
  21. Maximum likelihood estimation
     The ML estimator of β is obtained by maximising this function with respect to β
     Equivalently, it can be obtained by maximising the logarithm of the likelihood function:
        ln L(β, σ²) = –(I/2) ln(2π) – (I/2) ln σ² – (1/(2σ²)) Σi (yi – xi'β)²
     In the special case of the classical linear regression model with normally distributed errors, the ML estimator for β is identical to the OLS estimator
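To illustrate the ML principle numerically, the sketch below minimises the negative of the log-likelihood above with scipy; for this linear model with normal errors the resulting estimate of β should agree with OLS. The parameterisation and starting values are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, y, X):
    """Negative log-likelihood of the linear model with iid normal errors.
    theta stacks beta and ln(sigma^2); the log keeps sigma^2 positive."""
    K = X.shape[1]
    beta, lnsig2 = theta[:K], theta[K]
    resid = y - X @ beta
    I = len(y)
    return 0.5 * (I * np.log(2 * np.pi) + I * lnsig2 + resid @ resid / np.exp(lnsig2))

def ml_estimate(y, X, start=None):
    """Maximise the log-likelihood; returns (beta, sigma^2, maximised lnL)."""
    if start is None:
        start = np.zeros(X.shape[1] + 1)        # crude starting values
    res = minimize(neg_loglik, start, args=(y, X), method="BFGS")
    return res.x[:X.shape[1]], np.exp(res.x[-1]), -res.fun
```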
  22. Maximum likelihood estimation
     The ML estimator has several desirable large-sample (i.e., asymptotic) properties
     It can be shown to be consistent and asymptotically normally distributed (CAN) with variances that are no larger than the variances of any other CAN estimator
     That is, the ML estimator is asymptotically efficient
     An estimator of a scalar parameter is consistent if its values approach the true parameter value and its variance shrinks as the sample size increases indefinitely
  23. Estimation of non-linear models
     The linear regression framework discussed above can be generalised to models that are nonlinear in the parameters
     Computations using a non-linear estimator can be done easily using well-known econometrics software packages
     Iterative optimisation procedures are not foolproof – sometimes they take us to a local rather than a global maximum, and sometimes they do not converge (to a maximum) at all
  24. Estimation of non-linear models
     To confirm that the procedure has converged, it is important to check that the first derivatives of the log-likelihood function (i.e., the gradients) are close to zero
     To confirm that the maximum is a global maximum, we should establish that the log-likelihood function is globally concave
     Alternatively, we can simply re-estimate the model using different sets of starting values
     If very different starting values yield the same ML estimates, it is likely that a global maximum has been reached
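One practical way to follow this advice is to repeat the optimisation from several random starting points and keep the best converged solution, checking that the gradients are close to zero. The sketch below reuses the hypothetical neg_loglik function from the earlier ML example.

```python
import numpy as np
from scipy.optimize import minimize

def multi_start_ml(y, X, n_starts=10, seed=0):
    """Re-run the ML optimisation from random starting values; keep the best converged run."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        start = rng.normal(scale=2.0, size=X.shape[1] + 1)
        res = minimize(neg_loglik, start, args=(y, X), method="BFGS")
        # accept only runs that converged with near-zero gradients
        if res.success and np.max(np.abs(res.jac)) < 1e-4:
            if best is None or res.fun < best.fun:
                best = res
    return best   # None if no run converged cleanly
```

If several very different starting points return essentially the same log-likelihood and parameter values, that is informal evidence of a global maximum, as noted above.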
  25. Imposing equality constraints
     Specialist knowledge of an industry may give rise to an analyst imposing equality constraints
     For example, knowledge of smallholder rice production may lead the analyst to believe the production technology exhibits constant returns to scale (CRS)
     In the case of the translog production function model, this property implies:
        β1 + β2 + β3 = 1
        β11 + β12 + β13 = 0
        β12 + β22 + β23 = 0
        β13 + β23 + β33 = 0
  26. Imposing equality constraints
     OLS and ML estimation methods can both be adapted to enforce these types of constraints on estimates for β
     We minimise the sum of squares function or maximise the likelihood function subject to the constraints
     Both of these constrained optimisation problems involve setting up a Lagrangean function, equating the first-order derivatives to zero and solving for β
  27. Imposing equality constraints
     Consider the problem of imposing the constraints on the parameters of the translog model
     This model can be written as:
        ln qi = β0 + β1 ln x1i + β2 ln x2i + β3 ln x3i + 0.5β11(ln x1i)² + 0.5β22(ln x2i)² + 0.5β33(ln x3i)² + β12 ln x1i ln x2i + β13 ln x1i ln x3i + β23 ln x2i ln x3i + vi
  28. Imposing equality constraints
     The constraints can also be rewritten as:
        β1 = 1 – β2 – β3
        β11 = – β12 – β13
        β22 = – β12 – β23
        β33 = – β13 – β23
     Substituting the constraints into the model yields (after some re-arrangement):
        yi* = β0 + β2 x2i* + β3 x3i* + β12 x12i* + β13 x13i* + β23 x23i* + vi
     where:
        yi* = ln(qi/x1i), x2i* = ln(x2i/x1i), x3i* = ln(x3i/x1i),
        x12i* = –0.5[ln(x1i/x2i)]², x13i* = –0.5[ln(x1i/x3i)]², x23i* = –0.5[ln(x2i/x3i)]²
  29. Imposing equality constraints
     This so-called restricted model is linear in the parameters and can be estimated by OLS
     Following estimation, estimates of β1, β11, β22 and β33 can be computed by substituting the estimates of β2, β3, β12, β13 and β23 into the right-hand sides of the constraints
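The substitution approach can be sketched in code: run OLS on the transformed variables of the restricted model and then recover the constrained parameters from the right-hand sides of the constraints. The variable definitions follow the restricted model shown above, and the ols helper is the hypothetical sketch given earlier, so this is an illustration rather than the CROB/SHAZAM implementation.

```python
import numpy as np

def restricted_crs_translog(q, x1, x2, x3):
    """Estimate the CRS-restricted translog by OLS on transformed variables."""
    l1, l2, l3 = np.log(x1), np.log(x2), np.log(x3)
    y_star = np.log(q) - l1
    X_star = np.column_stack([
        np.ones_like(l1),
        l2 - l1, l3 - l1,           # beta2, beta3
        -0.5 * (l1 - l2)**2,        # beta12
        -0.5 * (l1 - l3)**2,        # beta13
        -0.5 * (l2 - l3)**2,        # beta23
    ])
    b, se, sigma2 = ols(y_star, X_star)
    b0, b2, b3, b12, b13, b23 = b
    # recover the remaining parameters from the constraints
    b1, b11 = 1 - b2 - b3, -b12 - b13
    b22, b33 = -b12 - b23, -b13 - b23
    return dict(b0=b0, b1=b1, b2=b2, b3=b3, b11=b11, b12=b12,
                b13=b13, b22=b22, b23=b23, b33=b33)
```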
  30. Imposing equality constraints
     CROB illustrate these procedures, presenting in Table 8.4 annotated SHAZAM output from estimating the constant-returns-to-scale translog model
     Results from estimating the unrestricted version of this model are presented in Table 8.2
     We consider methods for testing the validity of the constant returns to scale assumption in the next section
  31. Hypothesis testing
     If the errors are normally distributed, or if the sample size is large, we can test hypotheses concerning a single coefficient using a t-test
     Let βk denote the k-th element of the vector β and let c be a known constant
     To test H0: βk = c against H1: βk ≠ c, we use the test statistic:
        t = (bk – c)/se(bk)
     where bk is the estimator for βk and se(bk) is the estimator for its standard error
     Under H0, this statistic has a t-distribution with I – K degrees of freedom
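As an illustration, the t-test can be computed with scipy as follows; b_k, se_k and the degrees of freedom I – K are assumed to come from an earlier estimation step such as the OLS sketch above.

```python
from scipy import stats

def t_test(b_k, se_k, c, df, alpha=0.05, alternative="two-sided"):
    """t-test of H0: beta_k = c."""
    t_stat = (b_k - c) / se_k
    if alternative == "two-sided":                       # H1: beta_k != c
        reject = abs(t_stat) > stats.t.ppf(1 - alpha / 2, df)
    elif alternative == "less":                          # H1: beta_k < c
        reject = t_stat < stats.t.ppf(alpha, df)
    else:                                                # H1: beta_k > c
        reject = t_stat > stats.t.ppf(1 - alpha, df)
    return t_stat, reject
```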
  32. Hypothesis testing
     Thus, we reject H0 if the absolute value of the test statistic is greater than the critical value, t1–α/2(I – K)
     If the alternative hypothesis is H1: βk < c, then we reject H0 at the 100α% level of significance if the t-statistic is less than tα(I – K), the critical value
     If the alternative hypothesis is H1: βk > c, we reject H0 if the t-statistic is greater than t1–α(I – K)
     Sometimes we wish to conduct a joint test of several hypotheses concerning the coefficients, such as the joint null hypothesis that a translog production function exhibits constant returns to scale
  33. Hypothesis testing
     Several procedures are available for testing hypotheses of this form
     All are underpinned by the idea that if the restrictions specified under the null hypothesis are true, then our restricted and unrestricted estimates of β should be very close
     This is because the restricted and unrestricted estimators are both consistent
     The F-statistic can be used as a measure of the difference between the sums of squared residuals obtained from the restricted and unrestricted models
  34. Hypothesis testing
     The value of the log-likelihood function evaluated at the unrestricted estimates should be close to the value of the log-likelihood function evaluated at the restricted estimates
     The likelihood ratio statistic is a measure of this closeness:
        LR = –2(lnLR – lnLU)
     which is asymptotically distributed as chi-squared with J degrees of freedom under H0, where lnLR and lnLU denote the maximised values of the restricted and unrestricted log-likelihood functions and J is the number of restrictions
  35. Hypothesis testing
     We reject H0 at the 100α% level of significance if the LR statistic exceeds the critical chi-squared value
     The F and LR tests are simple to construct but require estimation of both the restricted and unrestricted models
     Alternative testing procedures that require estimation of only one model are the Wald and Lagrange Multiplier (LM) tests
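A minimal sketch of the LR test, assuming the maximised restricted and unrestricted log-likelihoods (for example, from the ML sketch above) are already available:

```python
from scipy import stats

def lr_test(lnL_R, lnL_U, J, alpha=0.05):
    """Likelihood-ratio test of J restrictions."""
    lr_stat = -2.0 * (lnL_R - lnL_U)             # asymptotically chi-squared(J) under H0
    critical = stats.chi2.ppf(1 - alpha, J)
    p_value = stats.chi2.sf(lr_stat, J)
    return lr_stat, critical, p_value, lr_stat > critical
```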
  36. Hypothesis testing
     The Wald statistic measures how well the restrictions are satisfied when evaluated at the unrestricted estimates, so it only requires estimation of the unrestricted model
     The LM statistic looks at the first-order conditions for a maximum of the log-likelihood function when evaluated at the restricted estimates, so it only requires estimation of the restricted model
     All the tests discussed in this section are asymptotically justified (i.e., we can use them if the sample size is large)
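For completeness, here is a sketch of a Wald test of linear restrictions written as Rβ = r (the CRS restrictions can be expressed this way); it uses only the unrestricted estimates b and their estimated covariance matrix cov_b, both assumed available from an earlier estimation step.

```python
import numpy as np
from scipy import stats

def wald_test(b, cov_b, R, r, alpha=0.05):
    """Wald test of H0: R @ beta = r, using unrestricted estimates only."""
    diff = R @ b - r
    W = diff @ np.linalg.solve(R @ cov_b @ R.T, diff)   # asymptotically chi-squared(J) under H0
    J = len(r)
    return W, stats.chi2.ppf(1 - alpha, J), stats.chi2.sf(W, J)
```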
  37. Systems estimation
     We often want to estimate the parameters of several economic relationships
     Consider the single-output three-input translog cost function:
        ln ci = β0 + Σn βn ln wni + 0.5 Σn Σm βnm ln wni ln wmi + βq ln qi + Σn βnq ln wni ln qi + 0.5βqq(ln qi)² + vi
     where ci is the i-th observation on cost, the wni are input prices and qi is output
     We can obtain estimates of the parameters using OLS or ML
  38. Systems estimation
     We can also use Shephard’s Lemma to derive the cost-share equations:
        sni = ∂ln ci/∂ln wni = βn + Σm βnm ln wmi + βnq ln qi + vni,   for n = 1, 2, 3
     where sni is the n-th cost share and the vni are random errors that are likely to be correlated with each other
     The correlations between these errors provide information that can be used in the estimation process to obtain more efficient estimates
  39. Systems estimation
     In the present context, this means estimating a system of N = 3 equations comprising the cost function and, for reasons given on the next slide, only N – 1 = 2 of the cost-share equations
     Such systems of equations, where the errors in the equations are correlated, are known as seemingly unrelated regression (SUR) models
     Again, the parameters of SUR models can be estimated using OLS or ML techniques
  40. Systems estimation
     If the dependent variables in a set of SUR equations sum to the value of a variable appearing on the right-hand sides of those equations, the covariance matrix of the errors is usually singular
     In this case, the estimation breaks down unless one equation is deleted from the system
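To illustrate, here is a two-step (feasible GLS) SUR sketch in numpy/scipy. It assumes one cost-share equation has already been dropped, as discussed above, and that ys and Xs hold the dependent variables and regressor matrices of the remaining equations. Note that it treats the coefficients of each equation as distinct; imposing the cross-equation restrictions that link the cost function to its share equations would require a restricted GLS or ML estimator.

```python
import numpy as np
from scipy.linalg import block_diag

def sur_fgls(ys, Xs):
    """Two-step feasible GLS estimation of a SUR system.
    ys: list of (I,) dependent-variable arrays; Xs: list of (I, K_m) regressor matrices."""
    I = len(ys[0])
    # Step 1: equation-by-equation OLS residuals
    resids = []
    for y, X in zip(ys, Xs):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        resids.append(y - X @ b)
    E = np.column_stack(resids)
    Sigma = E.T @ E / I                                 # cross-equation error covariance
    # Step 2: GLS on the stacked system; Cov(stacked errors) = Sigma kron I
    y_stack = np.concatenate(ys)
    X_stack = block_diag(*Xs)
    Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(I))
    XtOX = X_stack.T @ Omega_inv @ X_stack
    b_sur = np.linalg.solve(XtOX, X_stack.T @ Omega_inv @ y_stack)
    return b_sur, np.linalg.inv(XtOX)                   # estimates and their covariance matrix
```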
