Fall 2006



ACCT 665 Fall 2006 Midterm Exam

Answer any two of the following (50 points each):

1. Accounting researchers assume efficient markets in the semi-strong form. What are efficient markets? Why is the semi-strong form important for accounting research? Are stock markets really efficient? What evidence do Ball & Brown (1968) and Sloan (1996) present to suggest efficient-market anomalies? Shiller (2003) suggests several reasons for inefficiency based on behavioral finance, including price-to-price feedback theory, biased self-attribution, and prospect theory. Explain what these reasons mean and how they relate to market efficiency.

Markets are efficient when information is impounded into price immediately and in an unbiased fashion. In the semi-strong form, the focus is on publicly available information. Accounting is interested in the semi-strong form because of the importance of publicly available financial information, especially earnings. Most market analysis suggests relatively efficient markets. However, in Ball & Brown abnormal returns continued in the same direction (up for "good news") considerably beyond the announcement date (post-announcement drift), contrary to market-efficiency expectations. Sloan found that the cash flow component of earnings is persistent but the accrual component is not, while stock prices "fixate" on reported earnings. Therefore, a trading strategy of going long on the lowest-accrual decile portfolio and short on the highest "beat" the market, again contrary to efficiency expectations. Shiller incorporated behavioral finance to explain anomalies. Price-to-price feedback theory suggests that speculative success increases investor demand, feeding into a speculative bubble. Individuals use biased self-attribution to credit the success of their actions to their own ability while attributing "bad events" to bad luck or other factors beyond their control. These also can facilitate speculative bubbles and other irrational behavior.
Prospect theory suggests that investors are more upset by losses than pleased by gains, resulting in various irrational behaviors; they may avoid selling losers, for example.

2. Swanson (2004) talks about q-r theory to describe the academic review process in business. Kachelmeier (2004) claims that the review process evolves toward preoccupation with r-quality. What is "q-r theory" and what are the implications of Kachelmeier's point?
Bauer (1992) discusses revolutionary shifts in theory ("paradigm shifts," also related to the "unknown unknown"). How do paradigm shifts relate to q-r theory?

Q-r theory is used to predict differences in quality norms across disciplines and over time. Q-quality is the inherent importance of and interest in the major ideas of a paper. R-quality refers to tangential issues such as robustness checks and discussion of related literature. Kachelmeier's point reinforces Swanson's: over time, reviewers shift toward r-quality (e.g., getting bogged down in minutiae) rather than considering the major contributions of papers. Bauer's revolutionary shifts in theory require high q-quality to generate "paradigm shifts," which are less likely when r-quality is stressed.

3. Management forecast precision was analyzed by Baginski & Hassell (1997), Bamber & Cheon (1998), and Choi et al. (2006). In Baginski & Hassell (1997) precision is significantly related to days, number of analysts, and size; while Bamber & Cheon (1998) find blockshare, concentration, horizon, venue, and year significant. What do these results mean? Choi et al. (2006) are interested in the effect of "bad news" on precision. What does this mean and how is it tested?

Precision is defined as a categorical variable ranging from imprecise (e.g., = 0) to a point estimate (e.g., = 3). This is the dependent variable in the articles using logit models. The sign and significance of the independent variables indicate the impact on forecast precision. The results of Baginski & Hassell indicate: (1) the negative sign for days (forecast horizon, the number of calendar days from forecast to period-end) implies greater earnings uncertainty and lower precision, (2) more analysts require higher precision (private information), and (3) larger firm size decreases precision (public information).
Bamber & Cheon's findings indicate: (1) both legal liability (block ownership, the percentage of shares held in blocks greater than 5%) and proprietary information (product-market concentration ratio) decrease precision, (2) horizon (same as B&H's days) and size also decrease precision, while (3) venue (source, 1 = press release, 3 = analysts or reporters) and year (smaller values for earlier years) increase precision. "Bad news" earnings forecasts (i.e., based on the sign of the news, typically MF < AF) are expected to be less precise since management acts strategically. However, neither paper provides a clear indication of the relative difference in precision between "good news" and "bad news" earnings (it appears to be marginally significant in Bamber & Cheon). Choi et al. demonstrate that bad news is less precise for both forecast errors and CAR analysis, using a logit model where precision is the dependent variable and "bad news" is measured as (1)
BAD_CAR (CAR < 0) and (2) BAD_UF (UF < 0). The coefficients of both are negative and significant, consistent with bad news decreasing precision.
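The variable coding described above can be illustrated with a small sketch. All firm data here are invented, and the comparison of group means is only a shortcut for intuition; the papers themselves estimate logit models with precision as the dependent variable:

```python
# Hypothetical forecast data: (unexpected earnings UF, precision category),
# where precision runs from 0 (imprecise) to 3 (point estimate).
forecasts = [
    (0.05, 3), (0.02, 3), (0.01, 2), (0.03, 2),      # good news (UF >= 0)
    (-0.01, 1), (-0.03, 1), (-0.04, 0), (-0.02, 0),  # bad news (UF < 0)
]

# BAD_UF dummy: 1 when unexpected earnings are negative.
bad = [(uf, p) for uf, p in forecasts if uf < 0]
good = [(uf, p) for uf, p in forecasts if uf >= 0]

def mean_precision(group):
    return sum(p for _, p in group) / len(group)

print(mean_precision(bad), mean_precision(good))  # prints 0.5 2.5
```

With these invented numbers, bad-news forecasts average a lower precision category than good-news forecasts, the direction of the Choi et al. result.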
ACCT 665 Fall 2005 Midterm Exam

Answer any two of the following (50 points each):

1. Some capital market studies such as Ball & Brown (1968) use the error term to determine "abnormal return." However, abnormal return can be cumulative or not; short or long windows can be used. Ball & Brown used long windows. Bernard & Thomas (1991) primarily used short windows. What are short and long windows? Sloan (1996, Table 8) included a comparison of both. Why are abnormal returns cumulative or not? What are the relative advantages and drawbacks of short and long windows?

The error term of the market model is used as an indicator of market reaction to specific news events, such as earnings announcements. Based on market efficiency, the market response should be immediate and in the expected direction (positive for "good news," which should result in positive abnormal returns). Thus, the market reaction should be captured on the announcement date. Because of "information leakage," a researcher may also include the day before the announcement date (and other days nearby). These two- and three-day testing periods are short windows. Bernard & Thomas used short windows to capture quarterly market reaction up to eight quarters after the initial earnings announcement. In much of their testing B&T also used long windows, from one quarterly announcement to the next (the results were less robust than those using short windows). Ball & Brown used monthly data and long windows, 12 months before and 6 months after the annual announcement date. Their measure (API) is cumulative to capture the earnings impact over the preceding year, culminating in the announcement month. Most testing uses cumulative abnormal returns (CARs), although the results for specific dates also may be analyzed. Sloan used size-adjusted returns (rather than the market model) and long windows in an earnings response coefficient framework.
In Table 8, Sloan compares short- and long-window results across decile portfolios measuring accrual risk. The results varied by portfolio. Thus, both short- and long-window abnormal returns can be used in a variety of circumstances, often requiring a comparison of both to determine the most appropriate measure of performance.
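The short- vs. long-window distinction can be sketched directly. The daily abnormal returns below are invented (a small drift plus a jump at the announcement); real studies estimate them from a market model or size-adjusted benchmark:

```python
# Daily abnormal returns around an announcement at day 0 (invented numbers:
# a small upward drift plus a jump at the announcement itself).
abnormal = {day: 0.001 for day in range(-120, 2)}
abnormal[-1], abnormal[0], abnormal[1] = 0.004, 0.012, 0.003

def car(window):
    """Cumulative abnormal return: sum the abnormal returns over the window."""
    return sum(abnormal[d] for d in window)

short_car = car(range(-1, 2))    # three-day short window (-1, 0, +1)
long_car = car(range(-120, 2))   # long window over the preceding months
```

The short window isolates the announcement-date reaction; the long window cumulates everything, including pre-announcement drift, which is why the two can tell different stories about the same event.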
2. Empirical testing based on the Scientific Method requires internal and external validity. What are internal and external validity? Dr. Wolfe introduced construct and statistical validity. What do these four terms mean in the context of empirical research based on archival data?

Internal validity demonstrates a cause-and-effect relationship; that is, causality is inferred. In an experiment (say, drug testing), it is demonstrated that A (a new drug) causes outcome B (a cure). There are many possible threats to internal validity, including temporal precedence (the drug must be given before the cure happens) and selection bias (random samples usually work well). External validity infers generality to the population; that is, findings can be generalized to other or broader groups. Construct validity asks the question: are you measuring what you think you're measuring? Statistical validity is correct empirical testing: are the statistical tests the right ones and the results accurately determined? These validity constructs are essential both for experiments and for empirical analysis of archival data. By definition, archival data represent results that cannot be directly manipulated. The after-the-fact analysis requires that the research design correctly control for internal validity and the other validity measures, including statistical and construct validity. Particularly important are empirical surrogates that accurately measure theoretical constructs.

3. What is market efficiency? There are several anomalies in market research that suggest the stock market is not efficient. Following Ball & Brown (1968) and Bernard & Thomas (1991), post-announcement drift is one such anomaly. What is post-announcement drift and how is it tested in Bernard & Thomas? Sloan's (1996) results (e.g., Table 6) suggest that the market can be beaten, suggesting inefficient markets. Compare the results of Bernard & Thomas with Sloan in terms of what they tell us about market efficiency.
Markets are efficient if information is impounded immediately in an unbiased fashion (i.e., in the expected direction). Markets are usually expected to be efficient in the semi-strong form, impounding all public information. Ball & Brown found post-announcement drift, as abnormal returns continued after the earnings announcement date. Efficient markets suggest that price reaction after announcements should be random (as likely to be positive as negative). Bernard & Thomas test post-announcement drift by looking at earnings up to 8 quarters after the original earnings announcement date. Standardized earnings are in the same direction for the next three quarters and then change sign four quarters out. The market response follows this earnings pattern: following the original earnings/returns for 3 quarters (based on 3-day windows),
then reversing sign in the fourth quarter. The implication is that there is information in the earnings announcements that is not impounded in stock price, an indicator of market inefficiency. Sloan considers the persistence of earnings by comparing accruals and cash flows, assuming that "naïve investors" focus on reported earnings, which follow accruals more closely than cash flows. Sloan's Table 6 uses accrual-based deciles to demonstrate that size-adjusted returns are positive and significant for the lowest portfolios and negative and significant for the higher portfolios one to three years later. This suggests that a trading strategy of buying the lowest-accrual decile portfolio and selling short the highest-decile portfolio would beat the market. This is complementary to B&T and also suggests inefficient markets. Both focused on the extreme positive and negative deciles for testing. By using two distinct but related contexts of "earnings information," the papers provide evidence that markets are inefficient in some contexts.
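The Sloan-style hedge can be sketched in miniature. The accruals and subsequent size-adjusted returns below are invented, and ten firm-years stand in for the thousands Sloan sorts into deciles:

```python
# (accrual, subsequent size-adjusted return) pairs, sorted by accrual.
firm_years = [
    (-0.9, 0.05), (-0.7, 0.04), (-0.5, 0.02), (-0.3, 0.01), (-0.1, 0.00),
    (0.1, -0.01), (0.3, -0.01), (0.5, -0.02), (0.7, -0.04), (0.9, -0.05),
]

n = len(firm_years) // 10 or 1       # decile size (one firm-year here)
lowest = firm_years[:n]              # go long the lowest-accrual decile
highest = firm_years[-n:]            # sell short the highest-accrual decile

long_ret = sum(r for _, r in lowest) / len(lowest)
short_ret = sum(r for _, r in highest) / len(highest)
hedge = long_ret - short_ret         # a positive spread "beats" the market
```

A consistently positive hedge return from publicly available accrual data is exactly the pattern that contradicts semi-strong efficiency.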
ACCT 665 Fall 2004 Midterm Exam

Answer any two of the following (50 points each):

1. Important for using the Scientific Method is the use of formal hypotheses stated in the alternative form. What are these formal hypotheses and how do they fit into the Scientific Method? Give a specific example of a formal hypothesis.

The Scientific Method includes theory construction and theory verification. Using theory construction, the underlying theory is used to construct formal hypotheses. These predict results in a specific direction, as a formal test of the theory. Theory verification (usually empirical testing) is used to test the specific hypotheses. For example, earnings "good news" (greater than expected) is hypothesized to result in positive residuals as measured by the abnormal performance index in Ball & Brown (1968).

2. Assume that the data you're using have extreme values and are not normally distributed. You could use a log transformation, delete extreme observations, rank the data and use ranks, or divide the sample in half and create a dummy variable. Explain what these terms mean, and the advantages and disadvantages of each approach.

Extreme values are common when using economic data and should be dealt with appropriately. Information on the distribution (e.g., using Proc Univariate) can be used to get a sense of the problem(s). For example, population and income numbers are often skewed to the right, and a log transformation can be used to reduce the skewness. Note that logs cannot be used for negative observations, and this procedure is likely to reduce rather than eliminate extreme values. When there are large extreme values (e.g., beyond 3 standard deviations), these can be deleted. The problem is that these observations are gone from the analysis. An alternative approach to avoid deletion (useful when the sample is small) is to use ranks and conduct statistical tests using non-parametric procedures (or modified procedures with parametrics).
The problem is that specific values related to magnitude are lost. The data can also be divided into two groups and run as a dummy variable. This is useful when the distribution is "unusual" and non-normal (e.g., bi-modal). The problem is that most information in the distribution is lost.
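The four approaches can be sketched on a toy series (the numbers are invented, with one deliberately extreme value):

```python
import math
import statistics

data = [1.0, 2.0, 3.0, 4.0, 1000.0]   # right-skewed: one extreme value

# (1) Log transformation: compresses the right tail (positive values only).
logged = [math.log(x) for x in data]

# (2) Deletion: drop observations beyond some cutoff; they leave the sample.
trimmed = [x for x in data if x <= 100.0]

# (3) Ranks: keep ordering, discard magnitude (no tie handling here).
ranks = [sorted(data).index(x) + 1 for x in data]

# (4) Dummy: split at the median; most distributional information is lost.
med = statistics.median(data)
dummy = [1 if x > med else 0 for x in data]
```

Each line illustrates the trade-off in the answer above: the log compresses but keeps all observations, deletion loses observations, ranks lose magnitude, and the dummy keeps only a two-way split.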
3. Important in event studies such as Ball & Brown (1968), Beaver (1968), or Leftwich (1981) is the expectation model (essentially predicting direction). How are expectations handled (or not handled) in each of these studies?

Event studies test the impact of the information content of the event (usually earnings) on stock prices. Usually, the market model is used and the impact is measured as abnormal returns associated with the market-model residual. Ball & Brown (1968) used changes in earnings as the event (measured by random walk and an accounting-earnings market index model). The purpose was to determine whether the current year's EPS represented "good news" (e.g., this year's earnings greater than last year's using random walk) or "bad news" (EPS less than last year's). Two portfolios were created (good news and bad news), with predictions that the abnormal performance index would be positive for good news and negative for bad news. Beaver (1968) used the market model but did not use an expectations model. Instead, he squared the residual to develop his "U" statistic, which eliminated the sign. He tested only the magnitude of the earnings events, not the direction. Leftwich (1981) used specific events associated with the issuance of APB 16 & 17 on business combinations. Prediction errors were calculated for each of 21 events. The expectation was that each of these events represented "bad news" to acquiring companies, and negative prediction errors were expected.
ACCT 665 Fall 2003 Midterm Exam

Answer any two of the following (50 points each):

1. What is the relationship of hypothesis development and theory verification? How do these concepts relate to the Scientific Method? (Be sure to define these terms.)

Hypothesis development is part of the theory construction process, the point of which is to predict specific causal relationships from an existing theoretical structure or paradigm (and usually the direction of results). Theory verification is synonymous with theory validation, the procedure of developing an empirical model and statistically testing the hypotheses with appropriate data. These are fundamental parts of the Scientific Method, the systematic, controlled empirical verification of hypotheses derived from a theoretical structure.

2. What is a martingale process? How is this process used by Ball & Brown (1968)?

A martingale process is a time series model stated as E(Xt) = φXt-1 + δ. In other words, the current value is related to last period's value plus a constant. Ball & Brown use a simplified version called random walk (where φ = 1 and δ = 0), where current EPS is expected to equal last year's EPS: E(Xt) = Xt-1. The random walk model becomes one of the expectation models: when the current year's EPS is higher than last year's EPS, that firm-year observation is higher than expected and goes to the "good news" portfolio; if lower, the firm-year observation goes to the "bad news" portfolio.

3. Huck (1974) defines Type I and Type II errors. What are these definitions? How do they compare with the use of the terms by Altman (1968)? Why is this important?

According to Huck, a Type I error is rejecting the null when it should be accepted. A Type II error is accepting the null when it should be rejected. Altman (1968) uses Type I and Type II errors for classification accuracy on specific observations.
In Altman, a Type I error is predicting a bankrupt company as non-bankrupt and a Type II error is predicting a non-bankrupt company as bankrupt. Type I errors are particularly critical, since a decision based on this error (e.g., lending money to a company that will go bankrupt) will be costly. Note that Altman's results had low Type I errors.
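Altman's classification usage of the two error types can be sketched as a simple count (the labels and model predictions below are invented):

```python
# "B" = bankrupt, "N" = non-bankrupt; predictions from a hypothetical model.
actual =    ["B", "B", "B", "B", "N", "N", "N", "N"]
predicted = ["B", "B", "B", "N", "N", "N", "B", "N"]

# Type I (Altman's usage): a bankrupt firm classified as non-bankrupt,
# the costly error for a lender.
type_1 = sum(a == "B" and p == "N" for a, p in zip(actual, predicted))

# Type II (Altman's usage): a non-bankrupt firm classified as bankrupt.
type_2 = sum(a == "N" and p == "B" for a, p in zip(actual, predicted))

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
```

Note that this is the classification usage, not Huck's hypothesis-testing usage; the same labels name different ideas in the two contexts, which is exactly why the question matters.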
4. What is ordinal data? Is ordinal data usually continuous or categorical? Are there advantages of using ordinal data rather than interval (or cardinal) data? Explain.

Ordinal data is ranked. Thus, "first" is better than "second," but the magnitude of the difference is unknown. Ordinal data is continuous and is usually considered inferior to interval data since information (magnitude) is lost. The descriptive analysis also is somewhat different; for example, central tendency is measured by the median rather than the mean. However, there are statistical advantages to ordinal data, particularly related to extreme values (or skewness). Generally non-parametric tests are used for analyzing ordinal data, usually tests considered the non-parametric equivalent of parametric tests (e.g., Spearman's rather than Pearson's).
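A minimal sketch of the Spearman-vs-Pearson point: with invented, tie-free data containing one extreme value, the rank-based (Spearman) correlation is unaffected by the outlier while Pearson's is pulled down:

```python
import math

def pearson(x, y):
    """Pearson correlation from the definitional sums."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def ranks(v):
    """Replace values with their ranks (no tie handling; data are tie-free)."""
    order = sorted(v)
    return [order.index(a) + 1 for a in v]

def spearman(x, y):
    return pearson(ranks(x), ranks(y))   # Spearman = Pearson on the ranks

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 1000]                   # monotone, but one extreme value
```

Because y increases monotonically with x, the ranks line up perfectly and Spearman's correlation is 1.0, while the outlier drags Pearson's well below 1.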
ACCT 665 Fall 2002 Midterm Exam

Answer any two of the following (50 points each):

1. Positive research in accounting requires the use of the Scientific Method plus the need for empirical testing. Particularly important are the use of theory and the development of testable hypotheses. Based on these concepts, what is accounting research? Be sure to define the above terms as part of your answer.

The Scientific Method is the systematic, controlled observation or experiment whose results lead to hypotheses, which are found valid or invalid through further work, leading to theories that are reliable because of critical skepticism. There are two major components: (1) theory construction, where formal hypotheses are constructed to test specific research paradigms, and (2) theory verification, or tests of hypotheses, usually using empirical evidence and statistical testing. Theory is the conceptual framework to explain existing phenomena and predict new ones. Hypotheses are specific predictions of the theory that can be empirically tested. Positive research in accounting follows this approach, trying to determine how the accounting world "really works" based on empirical testing of theory.

2. What are internal validity and external validity? Give an example of a research study in accounting that would be strong in both internal and external validity and explain why.

The key to internal validity is ensuring that the independent variable really produced a change in the dependent variable. This is based partly on a reasonable theory, appropriate hypotheses, and valid empirical procedures. Statistical testing provides the relationship between variables, but "causation" is difficult to determine. True experiments tend to be strong on internal validity. External validity represents the generalizability of the findings.
This can be enhanced by validation testing, further testing, and multiple projects using the same basic theory and approach, assuming the findings hold. Any number of studies can be used to defend strong internal and external validity. True experiments are easier to defend, since the researcher controls the treatments. External validity is easier to defend in a well-tested area where the results are robust across different samples, periods, and other factors.

3. Ball & Brown (1968), Beaver (1968), and Leftwich (1981) all use the market model. What is the market model and how was it used in these studies? Of particular importance is how each study used the error term of the market model for further analysis.

The market model is: rit = αi + βirmt + eit. The model attempts to explain the relationship of the stock return (dividends may be included) of an individual company relative to the entire market in some time period. "e" is the residual, the difference between actual return and expected return [rit - E(rit), where E(rit) = αi + βirmt]. "e" is associated with unsystematic risk and represents an abnormal rate of return that is either positive (higher than expected) or negative. Ball & Brown (1968) describe "e" as representing the impact of new information. B&B predict that "bad news" (lower than expected accounting income) will be associated with a negative "e", measured by the
abnormal performance index beginning 12 months before the earnings announcement (and vice versa for "good news"). B&B found that market response followed earnings expectations (e.g., "good news" portfolios associated with positive abnormal returns averaging over 5% spread out over the 12-month period). Beaver used the error term to develop his "U" statistic, where U = e²it / σ²(ei); e² is the square of the residual from the market model and σ²(ei) is the residual variance from the same model (but based on the estimation period). A value significantly greater than one indicates abnormal returns associated with earnings announcement information. Beaver found a "spike" at week 0 for both abnormal volume and his "U" statistic (a measure of price variance), representing a significant abnormal return. Both approaches suggest a market response to earnings announcements. Leftwich (1981) uses the market model to analyze 21 events associated with the passage of APB 16 & 17, related to business combinations. In Stage 1 the market model is used to measure stock market reaction around an 11-day (or longer) window for each event date. Significant reactions (based on the residuals from the market model) were noted for 9 events (8 in the expected negative direction) as measured by cumulative prediction errors. In Stage 2, the accumulated residuals, standardized as prediction errors, become the dependent variables in additional regression runs where the independent variables are associated with specific agency factors of debt and size.

4. Altman (1968) used a matched-pair design and also included validation testing. What is a matched-pair design and why was it used by Altman? What is validation testing? How and why did Altman use this technique?

A matched-pair design uses firms that have specific characteristics, each matched to firms (which can be considered a control sample) without those characteristics.
Altman matched a sample of bankrupt firms, where each bankrupt firm was matched to a non-bankrupt firm in the same industry and of about the same size. The data also were matched by year around the year the bankrupt firm declared bankruptcy. This indirectly controls for business-cycle and other economic effects. Altman wants to determine whether bankrupt and non-bankrupt firms can be classified correctly based on a specific model using financial ratios. His initial testing was successful, classifying about 95% of the firms correctly. The results were based on an empirical model using the original sample. The open question is whether the same model could "predict" the correct category in another (holdout) sample. Using a separate holdout sample is a common form of validation testing, essentially testing the predictive ability of the original model. This validation testing improves confidence in the internal validity of the model and also represents preliminary testing for generalizing the model, i.e., external validity. Altman used several holdout samples with reasonably good results.
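Holdout validation can be sketched with a toy cutoff classifier. The ratios, labels, and cutoff below are invented, and a single ratio stands in for Altman's multivariate discriminant function:

```python
# (financial ratio, status): "B" = bankrupt, "N" = non-bankrupt. Invented.
estimation = [(0.4, "B"), (0.6, "B"), (1.2, "N"), (2.0, "N")]
holdout = [(0.5, "B"), (0.7, "B"), (1.5, "N"), (1.8, "N")]

# Cutoff chosen from the estimation sample only (midpoint of the two groups).
cutoff = 1.0

def classify(ratio):
    return "N" if ratio > cutoff else "B"

def accuracy(sample):
    return sum(classify(r) == status for r, status in sample) / len(sample)

est_acc = accuracy(estimation)   # fit-sample classification accuracy
hold_acc = accuracy(holdout)     # predictive ability on firms never used to fit
```

The key design point survives the simplification: the cutoff is fixed before the holdout sample is touched, so holdout accuracy measures prediction rather than fit.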
ACCT 665 Fall 2001 Midterm Exam

Answer any two of the following (50 points each):

1. Dummy variables are used frequently in multivariate accounting research models. Lowballing, time dummies, Big 5/non-Big 5, and bankrupt/non-bankrupt are typical examples. Dummies also are handy for creating interaction terms. What is a dummy variable and how is it interpreted in an OLS regression model? Assume that you want to evaluate four levels of bond rating in an OLS model. How can dummies be used?

Dummy variables are 0/1 or "yes/no" variables used to split a sample into two components. They are quite useful for comparing obvious sub-groups and relatively easy to evaluate statistically. In an OLS model, dummies usually are used as independent variables, with the standard interpretation of direction and significance. Additional analysis could include using dummies in interaction terms (splitting the variable of interest into two categories) or a BY statement to run the two sub-samples separately. Three dummies would be required to analyze four levels of bond ratings (0/1 for three levels, with one category incorporated in the intercept).

2. Science has a long time horizon, basically matching observation with theory using some organizational structure (a "paradigm" according to Kuhn, which can change with "scientific revolutions"). According to Wilson in Consilience, theory has four qualities: parsimony, generality, consilience, and predictiveness. Pick some area (in science, accounting, etc.; it's your choice) and describe how this process has worked effectively.

The answer depends on the topic picked. It's probably easier to defend an example from science, such as the shift of perspective from Ptolemy to Copernicus to Kepler and Newton. Planetary motion fits nicely with the Wilson qualities, for example.

3. Two landmark capital market studies are Ball & Brown (1968) and Beaver (1968). Ball & Brown used the abnormal performance index (API), while Beaver used the "U" statistic. What are these and how were they used?
Using these concepts, what is the relationship of accounting information to stock price?

Both papers use the market model to examine error terms as indicators of adjustment to new information. B&B standardize and sum error terms as the API from 12 months before the earnings announcement date to six months after. Thus, the API looks at a lengthy time horizon to summarize directional effects of earnings (good news vs. bad news portfolios determined by earnings expectations). The famous API graph shows the API rising over months -11 to 0 for good news, and vice versa. Beaver standardizes errors using the "U" statistic [e² / s²]. By squaring the terms, direction becomes irrelevant and the focus is on magnitude rather than direction. Weekly returns show a "spike" at the earnings announcement date, indicating a significant market response to new information.
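The two measures can be sketched side by side. The monthly residuals and the announcement-week residual below are invented; the point is only the mechanics of each statistic:

```python
# API: compound one plus the monthly market-model residual across the window.
residuals = [0.01] * 11 + [0.02]   # months -11 .. 0 for a good-news firm
api = 1.0
for e in residuals:
    api *= 1.0 + e                 # drifts above 1.0 for good news

# Beaver's U: the squared announcement-week residual scaled by the
# estimation-period residual variance; squaring keeps magnitude only.
e_week0 = 0.05
resid_var = 0.0005
u = e_week0 ** 2 / resid_var       # values well above 1 mark a "spike"
```

The contrast in the answer shows up directly: the API is signed and cumulative (direction over a long window), while U is unsigned and dated (magnitude at week 0).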
ACCT 665 Midterm Exam Fall 1996

Answer any two questions (50 points each).

1. Managerial bonus schemes are contractual terms designed to match principal and agent incentives. Why would a company use bonuses? What are the actual and potential transaction and agency costs associated with the use of bonus plans?

Take the position of the Board of Directors as agents for the stockholders. They would use a bonus contract when perceived benefits (e.g., based on expected performance, net income, share price) exceed perceived costs (e.g., direct transaction costs and agency costs such as expected opportunistic behavior). Transaction costs: direct contract costs, including the payment of bonuses. Presumably, bonus costs rise with performance. There may be indirect transaction costs, such as mergers undertaken to increase bonuses rather than stock price. Agency costs: in attempting to align interests, new management incentives may emerge. Following Healy (1985), executives may manage earnings to maximize short- and long-term bonuses. Of particular concern is the use of discretionary accruals to change accounting income (increasing accounting income in most cases, but decreasing income when above the maximum bonus or below zero income). This opportunistic behavior would be expected to increase bonus costs but not maximize shareholder wealth.

2. The market model is: rit = αi + βirmt + eit. What is "e" in this model? How do Ball & Brown (1968) use "e" to measure the effect of "bad news" on stock price?

"e" is the residual, the difference between actual return and expected return [rit - E(rit), where E(rit) = αi + βirmt]. "e" is associated with unsystematic risk and represents an abnormal rate of return. Ball & Brown (1968) describe "e" as representing the impact of new information. B&B predict that "bad news" (lower than expected accounting income) will be associated with a negative "e", measured by the abnormal performance index beginning 12 months before the earnings announcement.
The API was 0.887 for the random walk bad news model at the end of month 0, a substantial decline in performance, as predicted.

3. In Giroux et al. (1995), YR1 (auditor change year) was negative and significant in the log-of-audit-fee OLS model, but positive and significant in the log-of-audit-hours model. YR2 was insignificant in both runs. What do these results mean? Are they consistent with audit economics theory? Explain.

Audit economics theory (especially DeAngelo 1981) predicts that an auditor change is associated with a lower audit fee in year 1; however, audit costs would be expected to be higher (e.g., a learning curve). This should be recovered in later years by increasing audit fees and reducing audit costs. This was tested in Giroux et al. (1995). YR1 was associated with significantly lower fees and higher hours (a surrogate for audit cost). YR2 was not significant, indicating no difference in year 2 from other audit-tenure periods. Thus, the empirical results were consistent with DeAngelo.
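The variable coding behind these runs can be sketched as follows. The engagement data are invented, and the coding is only the design-matrix step; the paper itself estimates OLS models over many engagements:

```python
import math

# (audit fee, audit hours, years since the auditor change). Invented numbers
# chosen to mimic the pattern: change year has a lower fee but more hours.
engagements = [
    (100_000, 950, 1),
    (104_000, 820, 2),
    (110_000, 800, 5),
]

rows = []
for fee, hours, tenure in engagements:
    yr1 = 1 if tenure == 1 else 0   # dummy: auditor-change year
    yr2 = 1 if tenure == 2 else 0   # dummy: second year of the new auditor
    rows.append({"log_fee": math.log(fee), "log_hours": math.log(hours),
                 "YR1": yr1, "YR2": yr2})
```

With this coding, a negative YR1 coefficient in the log-fee model and a positive one in the log-hours model is exactly the lowballing-plus-learning-curve pattern the answer describes.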
ACCT 665 Midterm Exam with Possible Answers Fall 1997

Answer any two (50 points each).

1. When writing a contract, a principal should consider the potential for opportunistic behavior. What does this mean? What can the principal do about this potential?

According to Williamson (1985) there are three levels of behavior: obedience, self-interest, and opportunism. Opportunistic behavior is self-interest with guile. Agents are expected to act in their self-interest, and understanding their incentives is important when writing contracts. The principal wants efficient contracts that limit transaction costs, including agency costs. Therefore, contracts should be written to (1) limit expected agency costs and (2) align agent interests with those of the principal. Employment contracts may include incentives for managers to "enhance" earnings performance, such as bonus plans or stock options. Bonus plans provide incentives for short-term profit (with limits at the maximum bonus and "big bath" potential; see Healy, 1985). Stock options would align managers' interests with shareholders', a long-term focus.

2. Assume you want to conduct an ex post analysis of audit fees and other characteristics of audit economics of small banks in the state. The state bank examiner has incredible information (virtually anything that is possible to collect). What type of research model would you construct and why? Would this model have strong internal validity? Explain. [Note: there is considerable flexibility for an answer.]

Following Simunic (1980), DeAngelo (1981a, b), and others, a fee model would be appropriate. A typical fee model would use the log of fees as the dependent variable and model this as a function of audit costs, risk or loss functions, and other factors of interest, such as lowballing, auditor tenure, industry factors, Big 6, and so on. Predictions would be made for each empirical measure based on theory.
Expected results would be a model with a high R^2, significant coefficients in the expected directions for the independent variables, and no diagnostic problems. [Other models may include audit quality (the bank examiner might examine quality) or peer review; type of contract (full fee vs. fixed price); etc. OLS regression has been the most common empirical test for audit economic models, although LOGIT, simultaneous equation models, and others are in the literature.] Audit economic models should be internally valid (assuming empirical results as suggested above and no sampling problems). A well-developed theoretical foundation exists, empirical models match theory reasonably well, and actual numbers are normally used. Various studies yield similar results, a further check on validity.

3. Both Ball and Brown (1968) and Beaver (1968) use time period 0 (zero) as central for their analysis. What does this mean in both models? What did they find?

Time period 0 is the announcement date of annual earnings (announcement month for Ball and Brown, announcement week for Beaver). Ball and Brown were interested in market response (measured by the API using accumulated standardized residuals from the market model) for the 12 months up to month 0 (plus the next 6 months). Beaver looked at 17 weeks (week 0 +/- 8 weeks). Ball and Brown found that market response followed earnings expectations (e.g., "good news" portfolios were associated with positive abnormal returns averaging over 5% spread out over the 12-month period). Beaver found a "spike" at week 0 for both abnormal volume and his "U" statistic (a measure of price variance). Both approaches suggest a market response to earnings announcements.
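The API that Ball and Brown accumulate can be sketched on simulated residuals. This assumes the standard Abnormal Performance Index form, the cross-firm average of compounded one-plus-residual products; the drift and volatility figures below are invented, not Ball and Brown's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated market-model residuals (abnormal returns): N firms x 12 months,
# with a small positive drift standing in for a "good news" portfolio
N, T = 50, 12
resid = rng.normal(0.005, 0.03, (N, T))

# Abnormal Performance Index: for each month t, the cross-firm average of
# the compounded product of (1 + residual) from month 1 through t
api = np.cumprod(1.0 + resid, axis=1).mean(axis=0)
print(np.round(api, 3))
```

For a good-news portfolio the API drifts above 1.0 over the window, which is the gradual pre-announcement build-up Ball and Brown document.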
ACCT 665 Fall 1998 Midterm Exam. Answer any two questions (50 points each).

1. a. Research is based on the Scientific Method. What is the Scientific Method?

The Scientific Method is the systematic, controlled observation or experiment whose results lead to hypotheses, which are found valid or invalid through further work, leading to theories that are reliable because of critical skepticism (Bauer, p. 19). There are two major components: (1) theory construction, where formal hypotheses are constructed to test specific research paradigms, and (2) theory verification or tests of hypotheses, usually using empirical evidence and statistical testing.

b. Edward O. Wilson in Consilience claims that medical sciences have consilience and social sciences don't. What does this mean? What is the potential solution for the social sciences?

Consilience is the linking of facts and fact-based theory across disciplines to create a core of analysis. Medical sciences have been successful in this by combining theory and analysis across scientific disciplines to treat the human condition and advance health research. On the other hand, social sciences do not have a set of articulating or cross-discipline theoretical structures. Each discipline tends to operate separately and typically ignores the hard sciences (e.g., behavior based on epigenetic rules, the joint influence of heredity and environment). The solution, according to Wilson, is to move toward consistent theory that is built on such areas as biology and psychology.

2. How is random walk used by Ball and Brown (1968)? On the other hand, random walk is not used by Beaver (1968). Why not?

A random walk is a martingale process where E(X_t) = X_(t-1). This naive time series model is used by Ball and Brown (1968) to form two portfolios for a stock return analysis. If X_t > X_(t-1) (where X_t is EPS for one year), then earnings are higher than expected and that firm-year observation is placed in the "good news" portfolio.
If X_t < X_(t-1), that observation is part of the "bad news" portfolio. It is predicted (and demonstrated) that the return as measured by the API from market model residuals is positive for the good news portfolio and vice versa. In summary, both direction and magnitude are important components of Ball and Brown's method. Beaver (1968) develops a "U" statistic from the market model, where U_it = e_it^2 / s_i^2. By squaring the residuals, all "U" observations are positive and the focus is on magnitude, not direction. Consequently, no expectation model is needed to determine direction. Instead, the prediction is that the U statistic is significantly larger when new information in the form of annual earnings is publicly announced.

3. Leftwich (1981) used a two-stage method for analyzing market reaction. How does this method work? Why is this paper an analysis of agency theory?

Leftwich (1981) uses the market model to analyze 21 events associated with the passage of APB 16 & 17, related to business combinations. Stage 1: the market model is used to measure stock market reaction around an 11-day (or longer) window for each event date. Significant reaction (based on the residuals from the market model) was noted for 9 events (8 in the expected negative direction) based on cumulative prediction errors. Stage 2: the prediction errors become the dependent variables in additional regression runs where the independent variables are associated with specific agency factors of debt and size. The PEs generally were significantly related to private debt, call provisions, and firm size. These are important agency theory factors that explain firm performance based on management incentives.
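The shared machinery in the Ball and Brown and Beaver discussions above can be sketched on simulated data: estimate the market model by OLS, keep the residuals, form Beaver's U statistic from the squared residuals, and assign firm-years to news portfolios with the random-walk earnings expectation. All numbers and series below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 60  # periods in the estimation window for one firm

# Simulated market-model data: R_it = a + b * R_mt + e_it
r_m = rng.normal(0.01, 0.04, T)
r_i = 0.002 + 1.1 * r_m + rng.normal(0.0, 0.02, T)

# Estimate the market model by OLS and keep the residuals
X = np.column_stack([np.ones(T), r_m])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, r_i, rcond=None)
resid = r_i - (a_hat + b_hat * r_m)

# Beaver's U statistic: squared residual scaled by the residual variance;
# magnitude matters, direction does not, so no expectation model is needed
s2 = resid.var(ddof=2)
U = resid**2 / s2

# Ball & Brown's random-walk expectation: E[EPS_t] = EPS_(t-1); the sign
# of the forecast error assigns each firm-year to a news portfolio
eps = np.array([1.00, 1.10, 1.05, 1.20])  # hypothetical annual EPS series
news = np.where(np.diff(eps) > 0, "good", "bad")
print(news)  # ['good' 'bad' 'good']
```

Every U value is non-negative by construction, which is exactly why Beaver can test for an information "spike" at week 0 without modeling the direction of the earnings surprise.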