1) The experiment tested whether a "momentum effect" could counteract the endowment effect by measuring changes in subjects' willingness to trade goods between experimental periods.
2) Results showed willingness to trade doubled on average between periods 1 and 2, supporting the momentum effect. Most subjects adjusted in ways consistent with momentum rotation.
3) Graphs and statistical analysis of changes in preferred bundles indicated 60-80% of subjects behaved in a manner supporting the momentum trading theory, with momentum adjustments being over 5 times larger than anti-momentum adjustments on average.
This chapter discusses point estimates and confidence intervals. A point estimate is a statistic used to estimate a population parameter, while a confidence interval provides a range of values that is likely to include the true population parameter. The width of a confidence interval depends on the sample size, population variability, and desired confidence level. Confidence intervals for a mean can be constructed using the t or z distributions depending on whether the population standard deviation is known. Confidence intervals can also be constructed for a population proportion. Sample sizes needed for estimating means and proportions are also addressed.
This document defines key concepts in hypothesis testing including the null and alternative hypotheses, the five-step hypothesis testing procedure, and types of errors. It provides examples of hypothesis tests for a population mean when the standard deviation is known and unknown, and for a population proportion. The document explains how to set up and conduct hypothesis tests, interpret results, and compute Type I and Type II errors.
Here are the key steps to construct confidence intervals in R:
1. Generate sample data from a population distribution. For example, to generate a random sample of size 30 from a normal distribution with mean 100 and standard deviation 15:
x <- rnorm(30, 100, 15)
2. Calculate the sample mean and standard deviation:
mean(x)
sd(x)
3. Determine the appropriate critical t-value based on the confidence level and degrees of freedom (n - 1). For example, for a 95% CI with 29 df, the critical value is 2.045:
qt(0.975, 29)
4. Calculate the confidence interval limits as mean ± t × (s/√n):
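A minimal sketch of this step, assuming the sample x generated in step 1 (so n = 30):
n <- length(x)
t_crit <- qt(0.975, n - 1)                      # critical t-value for 95% confidence
c(mean(x) - t_crit * sd(x) / sqrt(n),
  mean(x) + t_crit * sd(x) / sqrt(n))           # lower and upper CI limits
The same interval can be checked with t.test(x)$conf.int.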
Here are the steps to solve this problem:
1) Given: n = 10, x̄ = 0.32, s = 0.09
2) The degrees of freedom are n - 1 = 10 - 1 = 9
3) The t-value for a 95% CI with 9 df is t(0.025, 9) = 2.262 (from the t-table)
4) The CI is: x̄ ± t·s/√n = 0.32 ± 2.262 × (0.09/√10) = 0.32 ± 0.064
5) The 95% CI is 0.256 to 0.384 inches
6) 0.30 inches is within the CI, so it would be a plausible value for the true mean
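As a check, the same interval can be computed in R from the given summary statistics:
xbar <- 0.32; s <- 0.09; n <- 10
margin <- qt(0.975, n - 1) * s / sqrt(n)        # ≈ 0.064
c(xbar - margin, xbar + margin)                 # ≈ 0.256 to 0.384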
Chapter 7 – Confidence Intervals And Sample Size (Rose Jenkins)
This document discusses confidence intervals for means and proportions. It defines key terms like point estimates, interval estimates, confidence levels, and confidence intervals. It provides formulas for calculating confidence intervals for means when the population standard deviation is known or unknown, and when the sample size is greater than or less than 30. Formulas are also given for calculating confidence intervals for proportions, and for determining the minimum sample size needed for estimating means and proportions within a desired level of accuracy. Examples of applying these concepts to sample data are also included.
Estimating population values ppt @ BEC DOMS (Babasab Patil)
This document discusses confidence intervals for estimating population parameters. It covers confidence intervals for the mean when the population standard deviation is known and unknown, as well as confidence intervals for the population proportion. Key points include:
- A confidence interval provides a range of plausible values for an unknown population parameter based on a sample statistic.
- The margin of error and confidence level affect the width of a confidence interval.
- The t-distribution is used instead of the normal when the population standard deviation is unknown.
- Sample size formulas allow determining the required sample size to estimate a population parameter within a specified margin of error and confidence level.
A confidence interval provides a range of values that is likely to include an unknown population parameter, with a specified confidence level. A 95% confidence interval states that if you were to repeat the sampling process numerous times, 95% of the calculated confidence intervals would contain the true population parameter. It does not mean there is a 95% chance that the population parameter falls within the given interval. Larger sample sizes are needed to achieve smaller margins of error or higher confidence levels when estimating population parameters from sample data.
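A small R simulation sketch of this repeated-sampling interpretation (assuming a population mean of 100 and standard deviation of 15):
set.seed(1)
covered <- replicate(10000, {
  x <- rnorm(30, 100, 15)
  ci <- t.test(x, conf.level = 0.95)$conf.int
  ci[1] <= 100 && 100 <= ci[2]                  # does this interval contain the true mean?
})
mean(covered)                                   # proportion of intervals covering 100; close to 0.95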
Chapter 6 part 1 - Introduction to Inference: Estimating with Confidence (Introd…) (nszakir)
Introduction to Inference, Estimating with Confidence, Inference, Statistical Confidence, Confidence Intervals, Confidence Interval for a Population Mean, Choosing the Sample Size
This document provides an overview of probability and statistics concepts including:
- Random variables which are variables that can change from one experiment to another. Continuous random variables have probabilities defined by probability density functions while discrete random variables have probabilities defined by probability mass functions.
- Important continuous distributions like the normal, lognormal, gamma and Weibull distributions. Discrete distributions include the binomial and Poisson distributions.
- Concepts like mean, variance, and independence which are used to analyze multiple random variables. Approximations like the normal approximation are used to simplify calculations for some distributions.
- Various topics are covered in detail including probability, random variables, distributions, plots, and analyzing relationships between multiple random variables. Key concepts are…
This document discusses different types of probability distributions used in statistics. There are two main types: continuous and discrete distributions. Continuous distributions are used when variables are measured on a continuous scale, while discrete distributions are used when variables can only take certain values. Some important continuous distributions mentioned are the normal, lognormal, and exponential distributions. Important discrete distributions include the binomial, hypergeometric, and Poisson distributions. Key terms like mean, variance, and standard deviation are also defined. Examples are provided to illustrate how these probability distributions are applied in fields like quality control and reliability engineering.
The document discusses static hedging of binary options using a portfolio of vanilla options. Specifically, it examines hedging a binary call option with a strike of 100 using a short call with a strike of 90 and a long call with a strike of 110. The analysis considers uncertain volatility, inhomogeneous maturity between the options, and incorporating bid-ask spreads to maximize the value of the binary option for both long and short positions. Finite difference methods are used to numerically evaluate the option prices under different volatility assumptions and jump conditions.
This document discusses sampling variability and sampling distributions. It defines key terms like statistic, sampling distribution, and population distribution. It presents examples of how sampling distributions are impacted by sample size and population characteristics. The central limit theorem is introduced, stating that sampling distributions become normally distributed as sample size increases, even if the population is not normal. Properties of sampling distributions for the sample mean and sample proportion are provided. Examples demonstrate how to calculate probabilities using these sampling distributions.
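A small R sketch of the central limit theorem, using an exponential (skewed) population as an assumed example:
set.seed(1)
means <- replicate(5000, mean(rexp(50, rate = 1)))  # 5000 sample means, n = 50 each
hist(means)                                         # roughly normal despite the skewed population
c(mean(means), sd(means))                           # ≈ 1 and ≈ 1/sqrt(50)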
This document discusses statistical concepts such as parameters, statistics, descriptive statistics, estimation, and hypothesis testing. It provides examples of:
- Point estimates and interval estimates used to estimate population parameters from sample statistics. Point estimates provide a single value while interval estimates provide a range of values.
- Confidence intervals which specify a range of values that is expected to contain the population parameter a certain percentage of times, known as the confidence level. Common confidence levels are 90%, 95%, and 99%.
- Formulas for constructing confidence intervals for the population mean, proportion, and variance based on the sample statistic, sample size, confidence level, and whether the population standard deviation is known.
This document discusses methods for estimating population parameters from sample data, including point estimation, bias, confidence intervals, sample size determination, and hypothesis testing. Key points include defining point estimates as single values representing plausible population values based on sample data, describing how to calculate confidence intervals for population proportions and means using z-tests and t-tests, and outlining how to determine necessary sample sizes to achieve a desired level of accuracy and confidence.
This paper reports an experimental test of asymmetric Tullock contests. Both the simultaneous-move and sequential-move frameworks are considered. The introduction of asymmetries in the contest function generates experimental behavior qualitatively consistent with the theoretical predictions. However, especially in the simultaneous-move framework, average bidding levels are in excess of the risk-neutral predictions. We conjecture that the reason behind this behavior lies in subjects attaching positive utility to victory in the contest.
The document discusses various graphical methods for describing data, including bar charts, pie charts, stem-and-leaf diagrams, histograms, and cumulative relative frequency plots. It provides examples of each using sample student data on vision correction, weights, ages, and GPAs to illustrate how to construct and interpret the different graph types.
This document discusses interval estimation for proportions. It defines point estimates and interval estimates. A point estimate is a single value of a statistic used to estimate a population parameter, like the sample proportion p estimating the population proportion P. An interval estimate provides a range of values between which the population parameter is expected to lie with a certain confidence level, like a 95% confidence interval for a proportion. Two examples are provided to demonstrate how to calculate a confidence interval for a sample proportion and interpret whether it supports or contradicts a claimed population proportion.
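A minimal R sketch of a large-sample 95% interval for a proportion, using made-up counts (120 successes in 400 trials):
p_hat <- 120 / 400
se <- sqrt(p_hat * (1 - p_hat) / 400)           # standard error of the sample proportion
p_hat + c(-1, 1) * qnorm(0.975) * se            # ≈ 0.255 to 0.345
prop.test(120, 400) gives a similar interval with a continuity correction.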
The document provides an overview of various probability distributions available in software from Real Options Valuation, Inc. It lists the most commonly used distributions as Normal, Triangular, Uniform, Custom, Lognormal, and Binomial. It describes these distributions and their key parameters. The document then lists other less commonly used distributions like Arcsine, Bernoulli, Beta, Beta 3, Beta 4, Cauchy, Chi-Square, Cosine, Double Log, Erlang, and Exponential 2, briefly describing each one.
The document discusses normal and standard normal distributions. It provides examples of using a normal distribution to calculate probabilities related to bone mineral density test results. It shows how to find the probability of a z-score falling below or above certain values. It also explains how to determine the sample size needed to estimate an unknown population proportion within a given level of confidence.
Chapter 6: Normal Probability Distribution
6.5: Assessing Normality
Chapter 7 – Confidence Intervals And Sample Size (guest3720ca)
This document discusses confidence intervals for means and proportions. It defines key terms like point estimates, interval estimates, confidence levels, and confidence intervals. It provides formulas for calculating confidence intervals for means when the population standard deviation is known or unknown, and when the sample size is greater than or less than 30. Formulas are also given for calculating confidence intervals for proportions, and for determining the minimum sample size needed for estimating means and proportions within a desired level of accuracy. Examples of applying these concepts to sample data are also included.
The document provides an overview of marketing engineering and response models. It discusses linear regression models, which assume a linear relationship between dependent and independent variables. Key points include:
1) Linear regression finds coefficients that minimize error between actual and predicted dependent variable values.
2) Diagnostics include R-squared, standard error, and ANOVA tables comparing explained, residual, and total variation.
3) Models can forecast sales and profits given marketing mix changes.
4) Logit models are used when dependent variables are binary or have limited ranges, predicting choice probabilities rather than continuous preferences.
To determine the appropriate sample size for quantitative research, key factors must be considered including:
1) The desired level of precision or acceptable margin of error for results.
2) The required confidence level, typically 95%.
3) An estimate of the population variability based on available data.
Using a basic formula that incorporates these factors, the sample size can be computed to achieve the desired precision at the specified confidence level. Probability sampling methods like simple random and stratified sampling are generally most effective when a sampling frame is available.
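A minimal sketch of the computation in R, with assumed inputs (95% confidence, 0.05 margin of error, worst-case proportion 0.5):
z <- qnorm(0.975)                               # ≈ 1.96 for 95% confidence
E <- 0.05                                       # desired margin of error
ceiling(0.5 * 0.5 * (z / E)^2)                  # n = p(1-p)(z/E)^2, about 385 respondents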
This document discusses inferential statistics and confidence intervals. It introduces confidence intervals for a population mean using the t-distribution when the sample size is small (less than 30). When the population variance is known, the z-distribution can be used. It provides examples of how to calculate 95% and 99% confidence intervals for a population mean using the t-distribution and normal distribution. Formulas for the standard error and reliability coefficients are also presented.
Intro to Quant Trading Strategies (Lecture 6 of 10) (Adrian Aley)
This document provides an outline and overview of using Kalman filter methods for pairs trading strategies based on modeling the spread between two assets as a mean-reverting process. It discusses modeling the spread as an Ornstein-Uhlenbeck process, computing the expected state from observations using the Kalman filter, and how to predict state estimates and minimize posterior variance in the Kalman filter updating process. References on stochastic spread methods and the application of Kalman filters to pairs trading are also provided.
The document provides information about descriptive statistics including how they are used to summarize and organize data from samples and populations. Descriptive statistics include measures of central tendency like the mean, median, and mode as well as measures of variability like range, interquartile range, variance and standard deviation. Examples are given showing how to calculate and present these statistics including frequency distributions, histograms, bar graphs and measures of central tendency and variability.
This document provides an overview of analysis of variance (ANOVA). It lists the goals as conducting hypothesis tests to determine if variances or means of populations are equal. It describes the characteristics of the F-distribution and how it is used to test hypotheses about equal variances or means. Examples are provided to demonstrate comparing two variances, comparing means of two or more groups, and constructing confidence intervals for differences in means. The key steps of ANOVA including organizing data in an ANOVA table and making conclusions based on the F-statistic are outlined.
The document describes various statistical methods for describing and analyzing data, including measures of central tendency (mean, median), variability (range, standard deviation, interquartile range), and distribution (histograms, boxplots). It provides examples of calculating these statistics and interpreting them for real data sets. Comparisons are made between the sample mean and median, and between theoretical descriptions of data distributions (Chebyshev's Rule and the Empirical Rule) and actual data analyses.
This document lists various landmarks and structures located in ancient Rome, including portraits, villas, porticoes, basilicas, arches, temples, platforms for public speeches, and imperial forums from the time period like the Portrait of Marcus Aurelius, Livia's Villa at Prima Porta, Porticus of Gaius and Lucius, Arch of Septimius Severus, Temples of various gods and emperors, Rostra, and Forum of Augustus.
This course covers geophysics, including the structure of the Earth's layers, rock and mineral types, landforms, and geophysical methods for exploring the Earth. Students will learn about the Earth's crust, mantle, and core; igneous, sedimentary, and metamorphic rocks and their minerals; mountains, hills, valleys, and plains; and the gravity, seismic, geothermal, electrical, and magnetic methods.
This document discusses the author's journey with digital tools and online learning. It mentions the author's early personal learning network and updated network. It also discusses stepping out of one's comfort zone and exploring various digital tools like Twitter, blogs, podcasts, web apps, videos, infographics, filters, and attributing information online. The document explores concepts like digital footprint, 21st century learning, teaching 21st century learners, BYOD, Google, Google Drive, Google Blogspot, bookmarking with Diigo, presentations, inclusive classrooms, and international education resources from UNESCO.
Walk THIS Way: Simple Ideas to Increase Foot Traffic (Buzztime Business)
Restaurant and bar owners are often looking for ways to boost sales by increasing foot traffic. Here are a few simple ideas to get folks through the door in creative, unique and engaging ways.
15 Questions Every Server Should Know The Answer To (Buzztime Business)
The following is a recommended list of questions that your staff should know how to answer. Of course, your requirements may vary slightly so add or remove questions as required.
This manual provides concise guidance on planting and caring for dragon fruit. It explains that dragon fruit is a useful crop rich in health benefits. The manual covers varieties, soil, planting methods, fertilization, watering, flowering, pruning, pest and disease control, harvesting, and downstream dragon fruit products.
Agents Behavior In Market Bubbles: Herding And Information Effects (Bryce Nelson)
This document summarizes a study that examines how behavioral factors influence the formation of speculative bubbles in financial markets. The study conducted an experiment where subjects had to forecast prices of the S&P 500 index during the dotcom bubble. The results show that incentives for herding behavior increased forecast volatility and contributed to bubble inflation. However, providing information about expected market trends reduced volatility and offset the effects of herding incentives. Therefore, in the absence of clear signals about market sentiment, herding behavior is more likely to inflate bubbles.
BUS 308 Week 5 Lecture 3: A Different View – Effect Sizes.docx (curwenmichaela)
BUS 308 Week 5 Lecture 3
A Different View: Effect Sizes
Expected Outcomes
After reading this lecture, the student should be familiar with:
1. What effect size measures exist for different statistical tests.
2. How to interpret an effect size measure.
3. How to calculate an effect size measure for different tests.
Overview
While confidence intervals can give us a sense of how much variation is in our decisions, effect size measures help us understand the practical significance of our decision to reject the null hypothesis. Not all statistically significant results are of the same importance in decision making. A difference in means of 25 cents is more important with means around a dollar than with means in the millions of dollars, yet with the right sample size both groups can have this difference be statistically significant.
Effect size measures help us understand the practical importance of our decision to reject the null hypothesis.
Excel has limited functions available for us to use on effect size measures. We generally need to take the output from the other functions and generate our effect size values.
Effect Sizes
One issue many have with statistical significance is the influence of sample size on the decision to reject the null hypothesis. If the average difference in preference for a soft drink was found to be ½ of 1%, most of us would not expect this to be statistically significant. And, indeed, with typical sample sizes (even up to 100), a statistical test is unlikely to find any significant difference. However, if the sample size were much larger, for example 100,000, we would suddenly find this minuscule difference to be significant!
Statistical significance is not the same as practical significance. If, for example, our sample of 100,000 was 1% more in favor of an expensive product change, would it really be worthwhile making the change? Regardless of how large the sample was, it does not seem reasonable to base a business decision on such a small difference.
Enter the idea of effect size. The name is descriptive but at the same time not very illuminating about what this measure does. We will get to specific measures shortly, but for now, let's look at how an effect size measure can help us understand our findings. First, the name: effect size. What effect? What size? In very general terms, the effect we are monitoring is the effect that occurs when we change one of the variables. For example, is there an effect on the average compa-ratio when we change from male to female? Certainly, but not all that much, as we found no significant difference between the average male and female compa-ratios. Is there an effect when we change from male to female on the average salary? Definitely. And it is much larger than what we observed on the compa-ratio means. We found a significant difference in the average salary for males versus females of around $14,000.
The Effect Siz…
This document provides a summary of univariate analysis conducted on several variables from an Ames housing dataset and builds single decision tree models using the original and amended datasets. The univariate analysis examines relationships between variables like lot area, overall quality, year built, total square footage, and time of sale with the target variables of sale price and sale category. Strong relationships were found with overall quality and total square footage. Single decision trees were built on the original and amended datasets and their predictive performance on test data will be compared.
The document discusses one-way analysis of variance (ANOVA), which compares the means of three or more populations. It provides an example where sales data from three marketing strategies are analyzed using ANOVA. The null hypothesis is that the population means are equal, and it is rejected since the F-statistic is greater than the critical value, indicating at least one mean is significantly different. Post-hoc comparisons using the Bonferroni method find that Strategy 2 (emphasizing quality) has significantly higher sales than Strategy 1 (emphasizing convenience).
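A minimal sketch of this kind of analysis in R, using made-up sales figures for three strategies (not the document's actual data):
sales <- c(10, 12, 11, 9,  14, 16, 15, 17,  11, 12, 13, 10)
strategy <- factor(rep(c("convenience", "quality", "price"), each = 4))
summary(aov(sales ~ strategy))                  # overall F-test of equal means
pairwise.t.test(sales, strategy, p.adjust.method = "bonferroni")  # post-hoc comparisons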
Journal of Financial Economics 49 (1998) 283–306
Market efficiency, long-term returns, and behavioral finance
Eugene F. Fama
Graduate School of Business, University of Chicago, Chicago, IL 60637, USA
Received 17 March 1997; received in revised form 3 October 1997
Abstract
Market efficiency survives the challenge from the literature on long-term return anomalies. Consistent with the market efficiency hypothesis that the anomalies are chance results, apparent overreaction to information is about as common as underreaction, and post-event continuation of pre-event abnormal returns is about as frequent as post-event reversal. Most important, consistent with the market efficiency prediction that apparent anomalies can be due to methodology, most long-term return anomalies tend to disappear with reasonable changes in technique. © 1998 Elsevier Science S.A. All rights reserved.
JEL classification: G14; G12
Keywords: Market efficiency; Behavioral finance
1. Introduction
Event studies, introduced by Fama et al. (1969), produce useful evidence on how stock prices respond to information. Many studies focus on returns in a short window (a few days) around a cleanly dated event. An advantage of this approach is that because daily expected returns are close to zero, the model for expected returns does not have a big effect on inferences about abnormal returns.
The assumption in studies that focus on short return windows is that any lag in the response of prices to an event is short-lived. There is a developing literature that challenges this assumption, arguing instead that stock prices adjust slowly to information, so one must examine returns over long horizons to get a full view of market inefficiency.
If one accepts their stated conclusions, many of the recent studies on long-term returns suggest market inefficiency, specifically, long-term underreaction or overreaction to information. It is time, however, to ask whether this literature, viewed as a whole, suggests that efficiency should be discarded. My answer is a solid no, for two reasons.
First, an efficient market generates categories of events that individually suggest that prices over-react to information. But in an efficient market, apparent underreaction will be about as frequent as overreaction. If anomalies split randomly between underreaction and overreaction, they are consistent with market efficiency. We shall see that a roughly even split between apparent overreaction and underreact…
This study examined the "crowd within" effect by asking 76 participants to make two guesses at ranking sets of knowledge items. The guesses were averaged using Borda count. Results showed the average was more accurate than individual guesses, supporting the crowd within effect. Higher ability subjects performed better. While easy problems elicited better responses than hard ones, there was no interaction between difficulty and guesses. Overall, the findings provide evidence that averaging multiple opinions or guesses from the same individual improves accuracy, analogous to the wisdom of crowds.
Student's t-test is used to determine if two population means are statistically different based on random samples from those populations. It calculates a ratio of the difference between sample means to the variability within each sample. If the t-value is large enough based on the sample sizes and pre-set significance level (often 0.05), then the population means are considered statistically different. The t-test is commonly used to compare outcomes before and after an intervention or between treated and control groups.
Student's t-test is used to determine if two population means are statistically different based on random samples from those populations. It calculates a ratio of the difference between two sample means over the variability within each sample. If the t-value is large enough based on the sample sizes and pre-set significance level (often 0.05), then the population means are considered statistically different. The t-test is commonly used to compare outcomes before and after an intervention or between treated and untreated groups.
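A minimal sketch of such a comparison in R, using simulated (made-up) treated and control groups:
set.seed(1)
treated <- rnorm(20, mean = 105, sd = 10)
control <- rnorm(20, mean = 100, sd = 10)
t.test(treated, control)                        # Welch two-sample test; R's default
t.test(treated, control, var.equal = TRUE)      # classical Student's t-test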
This chapter discusses two-sample hypothesis tests for comparing means and proportions between two independent populations or between paired/dependent samples. It provides examples of hypothesis tests to compare the means of two independent samples using the z-test if populations are normal and sample sizes are large, or the t-test if populations are normal but sample sizes are small. Tests are also shown to compare proportions between two independent populations using the z-test, and to compare means between paired samples using the t-test.
Section 1 Data File Description – The fictional data represents a te….docx (bagotjesusa)
This document describes using dummy predictor variables in multiple regression analysis. It provides an example using hypothetical data on faculty salaries. Key points:
- Dummy variables allow inclusion of categorical predictors like gender or political party in regression by coding them numerically.
- For k categories, k-1 dummy variables are needed. This example uses gender (coded 0,1) and college (coded 1,2,3) as predictors.
- Regression and ANOVA provide equivalent information about differences in mean salaries for gender and across colleges. Dummy variable regression tests are equivalent to ANOVA comparisons.
- The document screens the salary data for violations of regression assumptions like normality before running analyses.
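As an illustration (a minimal sketch with made-up salary data, not the document's dataset), R builds the k-1 dummies automatically when predictors are factors:
set.seed(1)
gender <- factor(sample(c("female", "male"), 40, replace = TRUE))
college <- factor(sample(c("arts", "business", "science"), 40, replace = TRUE))
salary <- 50000 + 3000 * (gender == "male") + rnorm(40, sd = 5000)
summary(lm(salary ~ gender + college))          # one dummy for gender, two for college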
The document summarizes an investigation into pairs trading to profit from arbitrage opportunities. The author selects pairs of securities from the US and Nigerian markets, tests for cointegration using the Engle-Granger approach, and develops a pairs trading strategy based on the residual plot. Using standard deviation thresholds of 1.0 and 1.5, the strategy is applied to the selected pairs and performance is analyzed, finding profits from 109-809% for the Nigerian pairs and 45-106% for the US pairs. The author concludes pairs trading can be profitable in both advanced and developing markets but notes future research could incorporate transaction costs and more fully examine excursion effects.
This document provides an overview of one-way analysis of variance (ANOVA). It begins by explaining the basic concepts and settings for ANOVA, including comparing population means across three or more groups. It then covers the hypotheses, ideas, assumptions, and calculations involved in one-way ANOVA. These include splitting total variability into parts between and within groups, computing an F-statistic to test if population means are equal, and potentially performing multiple comparisons between pairs of groups if the F-test is significant. Worked examples are provided to illustrate key ANOVA concepts and calculations.
This study examines extreme co-movements in stock prices. Daily prices for the first 100 stocks were analyzed to calculate log returns and identify extreme jumps. A GARCH model was used to extract conditional volatility. Pseudo-observations were generated by dividing returns by volatility. A generalized extreme value distribution was fitted to exceedances above a threshold to determine tail properties. Fréchet scales were calculated and ranked. The number of joint exceedances above percentiles over time lags were counted to estimate conditional probabilities of extreme co-movements in stock decreases.
This document summarizes a study on market efficiency and long-term stock returns. It discusses two key findings:
1) Studies of long-term stock returns show about as much evidence of overreaction to information as underreaction, which is consistent with the prediction of market efficiency.
2) Most anomalies in long-term returns tend to disappear or become marginal when using different models for expected returns or statistical techniques, suggesting they can reasonably be attributed to chance rather than inefficiency.
Market efficiency survives the challenge from the literature on long-term return anomalies. Consistent with the market efficiency hypothesis that the anomalies are chance results, apparent overreaction to information is about as common as under-reaction, and post-event continuation of pre-event abnormal returns is about as frequent as post-event reversal. Most important, consistent with the market efficiency prediction that apparent anomalies can be due to methodology, most long-term return anomalies tend to disappear with reasonable changes in technique.
When are you going to submit your assignments no. 7 and 8? They are in the stats slides. Submit them to me on Monday during electromagnetism, because otherwise your class participation credit in stats will fall short.
This document provides an outline for a Probability and Statistics course. It covers topics such as introduction to statistics, tabular and graphical representation of data, measures of central tendency and variation, probability, discrete and continuous distributions, and hypothesis testing. Descriptive statistics are used to summarize and describe data, while inferential statistics allow predictions and inferences about a larger data set based on a sample. Variables can be classified based on their scale of measurement as nominal, ordinal, interval, or ratio. Graphical representations include pie charts, histograms, bar graphs, and frequency polygons. Measures of central tendency include the mean, median, and mode.
WEEK 6 – EXERCISES: Enter your answers in the spaces pr….docx (wendolynhalbert)
WEEK 6 – EXERCISES
Enter your answers in the spaces provided. Save the file using your last name as the beginning of the file name (e.g., ruf_week6_exercises) and submit via "Assignments." When appropriate, show your work. You can do the work by hand, scan/take a digital picture, and attach that file with your work.
1. A psychotherapist studied whether his clients self-disclosed more while sitting in an easy chair or lying down on a couch. All clients had previously agreed to allow the sessions to be videotaped for research purposes. The therapist randomly assigned 10 clients to each condition. The third session for each client was videotaped and an independent observer counted the clients' disclosures. The therapist reported that "clients made more disclosures when sitting in easy chairs (M = 18.20) than when lying down on a couch (M = 14.31), t(18) = 2.84, p < .05, two-tailed." Explain these results to a person who understands the t test for a single sample but knows nothing about the t test for independent means.
2. A researcher compared the adjustment of adolescents who had been raised in homes that were either very structured or unstructured. Thirty adolescents from each type of family completed an adjustment inventory. The results are reported in the table below. Explain these results to a person who understands the t test for a single sample but knows nothing about the t test for independent means.

Means on Four Adjustment Scales for Adolescents from Structured versus Unstructured Homes

Scale                  Structured Homes   Unstructured Homes   t
Social Maturity        106.82             113.94               –1.07
School Adjustment      116.31             107.22               2.03*
Identity Development   89.48              94.32                1.93*
Intimacy Development   102.25             104.33               .32

* p < .05
3. Do men with higher levels of a particular hormone show higher levels of assertiveness? Levels of this hormone were tested in 100 men. The top 10 and the bottom 10 were selected for the study. All participants took part in a laboratory simulation in which they were asked to role-play a person picking his car up from a mechanic's shop. The simulation was videotaped and later judged by independent raters on each of four types of assertive statements made by the participant. The results are shown in the table below. Explain these results to a person who fully understands the t test for a single sample but knows nothing about the t test for independent means.

Mean Number of Assertive Statements

                       Type of Assertive Statement
Group                  1       2       3       4
Men with High Levels   2.14    1.16    3.83    0.14
Men with Low Levels    1.21    1.32    2.33    0.38
t                      3.81**  0.89    2.03*   0.58

* p < .05; ** p < .01
4. A manager of a small store wanted to discourage shoplifters by putting signs around the store saying "Shoplifting is a crime!" However, he wanted to make sure this would not result in customers buying less. To test this, he displayed the signs every other W…
This study examined the effect of time pressure on solving optimal stopping problems. 71 participants completed an optimal stopping task with numbers under short (3 numbers) or long (7 numbers) conditions. Performance was analyzed across blocks of trials. Results showed that the short condition led to significantly better performance than the long condition. While no significant learning effects were found overall, subjects in the long condition showed a slight improvement in the final block, indicating possible learning with more alternatives. Thus, increasing the number of choices can hinder optimal decision making due to added complexity and pressure.
2. INTRODUCTION
In behavioral economics, the endowment effect is the hypothesis that a person's willingness to accept (WTA) compensation for a good is greater than their willingness to pay (WTP) for it once their property right to it has been established.
I theoretically study dynamic general equilibrium economies populated with reference-dependent agents. I show that under plausible axioms, these dynamic economies generate a countervailing force to the static endowment effect that I call the "momentum effect."
Over the last three decades, dozens of experiments have demonstrated that human decision making can be powerfully shaped by reference points.
3. INTRO CONT’D
If reference points track endowments, then these small trades will tend to make agents open to additional rounds of trade. Over time this iterative process must eventually generate much greater trading volumes than would be observed in a static economy.
In my experiment I answer this question using two microeconomic concepts. First, I theoretically study dynamic general equilibrium economies populated with reference-dependent agents. In these dynamic economies the static endowment effect is countered by a force I call the "momentum effect." The momentum effect can reduce or even eliminate the trade-dampening effects of the initial endowment effect over time.
4. INTRO CONT’D
Second, I run an experiment to test for the dynamic effects predicted by my theory. In the experiment, subjects completely rank 57 bundles containing various combinations of two goods. This makes it easy for subjects to trade after the first round.
The process is then repeated in a second round, where subjects get a chance to trade again. Unlike previous designs, the experimental setting is continuous and dynamic. This setting allowed me to test the hypothesis that subjects' preferences adjust in a trade-enhancing manner between periods.
5. HYPOTHESIS
The momentum effect has the potential to counteract the endowment effect; the evidence should appear in graphs of reference-dependent indifference curves. As the theory (the momentum effect) predicts, subjects' willingness to trade their endowed goods is expected to increase dramatically once they are allowed to trade.
6. MATERIALS AND METHODS
I conducted 8 sessions using a total of 60 undergraduate subjects at Baruch College between October 2011 and May 2012. In each session, subjects entered the laboratory and were given instructions that described the entire structure of the session in advance.
Because subjects inevitably suffer from boredom and fatigue, I put a great deal of effort into making it easy and quick to rank a large number of bundles, minimizing the costs of fine-tuning rankings to accurately represent preferences. The software can be seen in Figure 4. The first number (in green; see Figure 4) represents a number of lottery tickets, and the second number (in blue) represents a number of chocolates. The combination of items subjects are given at the beginning of the experiment is represented by one of these slips; call it slip A. Subjects order the slips from favorite to least favorite (1-57).
Once all 57 bundles had been ranked, a "Submit" button appeared at the bottom of the screen. Subjects could adjust the order of the bundles as much as they liked prior to submitting their final ranking.
7. MATERIALS AND METHODS CONT’D
Figure 4: Screenshot. Custom software allows users to drag and drop each bundle (left-hand side of the screen) into a rank position (right-hand side of the screen) to submit their preferences. This lets subjects build up their rankings very quickly and fine-tune them later with ease.
8. MATERIALS AND METHODS CONT’D
After the necessary swaps in the first period were completed, the experiment was repeated for period 2 with the same procedure. In each period, the subject was quietly informed which bundle had been randomly selected for her, and a trade was made if she had ranked the selected bundle better (closer to 1) than her current endowment.
After each subject had been informed of the result for the period, a new period proceeded identically to the first, with the set of available slips unchanged. If the randomly selected slip in the second round was ranked above the slip representing the subject's current bundle, the appropriate swap occurred. After all necessary swaps were complete, subjects were free to leave the experiment.
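A minimal sketch of this per-period trade rule, assuming rankings are stored as a bundle-to-rank mapping (all names and data here are illustrative, not taken from the actual experiment software):

import random

# Per-period trade rule (illustrative sketch). Rank 1 = most preferred,
# rank 57 = least preferred.
def run_period(ranking, endowment):
    """Draw a random bundle; trade if the subject ranked it better
    (a smaller rank number) than her current endowment."""
    selected = random.choice(list(ranking))
    if ranking[selected] < ranking[endowment]:
        return selected   # swap: the subject now holds the selected bundle
    return endowment      # no swap: the endowment is kept

# Toy example with three bundles of (lottery tickets, chocolates):
ranking = {(5, 10): 1, (10, 5): 2, (2, 12): 3}
endowment = (10, 5)
endowment = run_period(ranking, endowment)   # period 1
endowment = run_period(ranking, endowment)   # period 2 (subjects re-rank first in practice)
print(endowment)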
9. MATERIALS AND METHODS CONT’D
Figure 5: Empirical Strategy. Each panel (one for each subject type) plots the set of rankable bundles from the experiment as dots. Hypothetical GCES indifference curves are plotted through a second-period endowment of (5, 10). In each case, one (solid line) is generated by a reference point at the first-period endowment and the other (dotted line) by a reference point at the second-period endowment. Hollow blue dots show bundles added to the preferred-to set (bundles preferred to the second-period endowment) in the second period; solid red dots show bundles removed from this set in the second period.
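For intuition, one plausible way to parameterize such reference-dependent indifference curves (an illustrative assumption; the exact GCES specification from the theory is not reproduced here) is a CES utility whose weight depends on the reference bundle, so that moving the reference point rotates the curve:

% Illustrative reference-dependent CES utility over bundles (x_1, x_2)
% given a reference point r = (r_1, r_2); an assumption for intuition,
% not necessarily the exact GCES form used in the theory.
\[
  u(x_1, x_2 \mid r)
  = \Bigl( \alpha(r)\, x_1^{\rho} + \bigl(1 - \alpha(r)\bigr)\, x_2^{\rho} \Bigr)^{1/\rho},
  \qquad \rho < 1,
\]
% with \alpha(r) increasing in r_1. Shifting r from the first-period to
% the second-period endowment then rotates the indifference curve through
% a fixed bundle, adding bundles to the preferred-to set on one side
% (hollow blue dots) and removing them on the other (solid red dots).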
10. RESULT #1
Momentum trading vs. endowment theory: momentum trading predicts an expansion in willingness to trade, followed by a freeze after one or several trades, while endowment theory predicts that trading will not start at all.
Result 1: Willingness to trade doubles.
On average the anterior preferred-to set more than doubles between the first and second period, evincing a large aggregate expansion in willingness to trade. Some subjects have little or no opportunity to reveal adjustment in the anterior quadrant at the second-period endowment because of where that endowment is located among the bundles.
Most subjects show adjustments in the posterior quadrant, where momentum rotation predicts contraction of the preferred-to set. Figure 1 shows that red bars dominate the posterior quadrant just as blue bars dominate the anterior quadrant; red indicates contraction. The degree of adjustment is overall considerably more modest than in the anterior quadrant (fewer subjects are observed uncensored in this region) and the adjustments are somewhat smaller. Still, as Table 1 makes clear, over twice as many subjects contract as expand in the posterior quadrant.
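As a minimal sketch under stated assumptions, the preferred-to set sizes behind these comparisons could be computed from the rankings like this (names are illustrative; the actual analysis further restricts each set to the anterior or posterior quadrant around the endowment):

# Size of the preferred-to set of the second-period endowment under each
# period's ranking (rank 1 = most preferred). A positive net change is
# expansion, as momentum rotation predicts in the anterior quadrant; a
# negative one is contraction, as predicted in the posterior quadrant.
def preferred_to_set(ranking, endowment):
    """Bundles ranked strictly better than `endowment`."""
    cutoff = ranking[endowment]
    return {bundle for bundle, rank in ranking.items() if rank < cutoff}

def net_change(ranking_p1, ranking_p2, endowment_p2):
    before = preferred_to_set(ranking_p1, endowment_p2)
    after = preferred_to_set(ranking_p2, endowment_p2)
    return len(after) - len(before)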
11. RESULT #2
Result 2: Willingness to trade freezes after one or more trades, once the subject is satisfied with her bundle. On average the posterior preferred-to set shrinks between the first and second period. This is consistent with momentum rotation.
Combining the anterior and posterior quadrants yields a higher-powered test of these patterns. After the data were collected, I summed, for each subject, the changes to the preferred-to set consistent with momentum adjustment and those inconsistent with it. To avoid bias, subjects capable of adjusting only in the momentum-consistent direction, or only in the anti-momentum direction, are excluded from the test.
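A hedged sketch of this per-subject tally (the variable names and toy data are illustrative, not the experimental dataset):

# Momentum-consistent changes are anterior expansion and posterior
# contraction; anti-momentum changes are the reverse. Censored subjects
# (able to move in only one direction) are excluded before this step.
def momentum_tally(anterior_change, posterior_change):
    momentum = max(anterior_change, 0) + max(-posterior_change, 0)
    anti = max(-anterior_change, 0) + max(posterior_change, 0)
    return momentum, anti

subjects = [(6, -3), (4, -1), (-1, 2)]          # toy (anterior, posterior) changes
tallies = [momentum_tally(a, p) for a, p in subjects]
net_momentum = sum(m > a for m, a in tallies)   # subjects net-consistent with momentum
print(tallies, net_momentum)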
12. RESULT #3
Result 3: 60% to 80% of subjects behave in a manner that supports the momentum trading theory. The mean momentum adjustment is over 5 times larger than the mean anti-momentum adjustment. On net, 2/3 of subjects adjust in a manner consistent with momentum rotation. This proportion rises to nearly 80% in the subset of subjects who made strong, systematic net adjustments in their preferences between the first and second period.
My experiment was designed to measure changes within a subject rather than across subjects. Specifically, for most subjects we do not observe an incentive-compatible ranking of the first-period endowment bundle of the opposite subject type, making it impossible to credibly compare how subjects of each type rank each other's endowments. Even so, the data show that reference dependence strongly influences first-period preferences: when subjects rank the bundles, the endowed bundle is, on average, ranked 30% lower (more strongly preferred) than the non-endowed bundles. This trend is highly significant.
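A small sketch of how this endowment premium could be checked, assuming the 30% figure compares the rank of the endowed bundle with the mean rank of comparable non-endowed bundles (all numbers below are made up):

# Hypothetical first-period ranks (1 = most preferred). A premium near
# 0.3 would match the reported 30% gap; these data are invented.
endowed_ranks = [14, 18, 10, 20]        # rank of each subject's endowed bundle
comparison_ranks = [22, 25, 15, 27]     # mean rank of that subject's non-endowed bundles

premiums = [1 - e / c for e, c in zip(endowed_ranks, comparison_ranks)]
print(sum(premiums) / len(premiums))    # approximately 0.31 here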
13. RESULTS CONT’D
Evidence from the experimental graphs: The left-hand panel of Figure 1 is dominated by blue, revealing a huge expansion in the average size of the anterior preferred-to set from the first to the second period. The expansion (supporting my theory) is visible in the graphs for many subjects; by contrast, only a small minority show evidence of net contraction (against my theory), and these contractions tend to be tiny relative to the typical expansion. Table 1 summarizes, revealing that the anterior set doubles in size from 7 to 14 bundles between the first and second period. About 2/3 of subjects reveal net expansion while less than 1/4, on net, contract.
14. RESULTS CONT’D
Figure 1 (next slide) plots a vertical bar for each subject to visualize the size of that subject's preferred-to set. The gray plus red area shows the size of the set in period 1, and the gray plus blue area shows its size in period 2; thus red represents net shrinkage in the preferred-to set and blue net expansion.
Panels for the anterior and posterior quadrants are provided, and each plots only subjects who were capable of both shrinking and expanding in that quadrant. This ensures that individual-level censoring of the direction of adjustment in the plotted sample does not bias my results.
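A minimal sketch of how a Figure 1 style chart could be drawn (toy data, not the experimental dataset):

import matplotlib.pyplot as plt

# For each subject: gray = portion of the preferred-to set common to both
# periods, red = net contraction (period 1 only), blue = net expansion
# (period 2 only). Sizes below are invented for illustration.
period1 = [7, 9, 5, 12, 6]     # preferred-to set sizes, period 1
period2 = [14, 8, 11, 15, 6]   # preferred-to set sizes, period 2

common = [min(a, b) for a, b in zip(period1, period2)]
shrink = [max(a - b, 0) for a, b in zip(period1, period2)]   # red
expand = [max(b - a, 0) for a, b in zip(period1, period2)]   # blue

x = range(len(period1))
plt.bar(x, common, color="gray")
plt.bar(x, shrink, bottom=common, color="red")
plt.bar(x, expand, bottom=common, color="blue")
plt.xlabel("Subject")
plt.ylabel("Preferred-to set size")
plt.show()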
15. RESULTS CONT’D
Figure 2: Trade and Preference Rotation. A shift in reference point from A to B causes a counterclockwise rotation in the indifference curve (left panel). This rotation (by each trader) causes a new expansion in the individually rational set and an outward shift in the Pareto set, generating new trade (right panel).

Figure 1: Preferred-to sets in the first and second period in the anterior and posterior quadrants. Gray plus red regions visualize the size of the first-period preferred-to set (one bar is provided for each uncensored subject); gray plus blue regions visualize the size of the second-period preferred-to set. Red shows net contraction between periods and blue net expansion.

Table 1: The first group of columns shows mean preferred-to set sizes. The second group shows the proportion of subjects who on net expand versus contract in each quadrant. The final column calculates the mean (across subjects) ratio of the net change between periods to the size of the first-period preferred-to set.
16. DISCUSSION
My experiment joins others in the literature and extends the findings on the endowment effect, one of the most heavily studied topics in experimental economics. Most prior work goes only as far as debating the mechanism, and the role that inexperience with trading environments plays, in generating the endowment effect.
My project is based on a theoretical hypothesis rather than a methodological one. Theoretically, my work shows that the same behavioral force (reference dependence at current endowments) that generates the endowment effect at first will work against it in the long run. This countervailing force is what I call the "momentum effect."
17. DISCUSSION CONT’D
The intuition is simple: if reference points change with endowments, small amounts of trade will tend to open the way for further rounds of trade in a dynamic setting. As a result, measurements of reference dependence made in static settings will tend to overstate the long-run impact of reference dependence on trade, often to a great degree.
In my experiment, static elicitation indeed understates subjects' willingness to trade in the future. On average, subjects trade more in the second period than in the first, as predicted by the model. The data also show that the driver of the change was preference rotation, similar to what was assumed in the theory.
18. DISCUSSION CONT’D
Even though my experiment strongly supports my predictions, some of the data challenge my model. Subjects who do not trade still show momentum rotation in the second round, indicating that changes in endowment are not necessary to generate the rotation my model predicts. To interpret this finding, it is important to keep in mind that my theory was built on the assumption that reference points track current endowments.
Classical experimental results in this literature likewise depend on reference points being shaped primarily by current endowments. However, reference points that strictly track current endowments should prevent non-traders' reference points from changing, which is inconsistent with my results.
19. DISCUSSION CONT’D
In summary, my results show that long-run trading in dynamic economies is greater than trading in static economies. If I tested only willingness to trade in the first period, I would substantially underestimate the eventual trade volumes implied by period-2 rankings.
My theory, however, does not anticipate an additional effect of market dynamics suggested by my data: markets create an opportunity for growth in sophistication and encourage agents to think about the future after each move is made.
The sophistication seen in my data suggests that the market process may work even better at limiting the effect of reference dependence on trade than my theory predicts.
20. REFERENCES/ACKNOWLEDGEMENTS
1. Johannes Abeler, Armin Falk, Lorenz Goette, and David Huffman. Reference points and effort provision. The American Economic Review, 101(2):470–492, 2011.
2. Amos Tversky and Daniel Kahneman. Loss aversion in riskless choice: A reference-dependent model. Quarterly Journal of Economics, 106(4):1039–1061, 1991.
3. Jacob Sagi. Anchored preference relations. Journal of Economic Theory, 130(1):283–295, 2006.
4. Sean Crockett and Ryan Oprea. In the Long Run We All Trade: Reference Dependence in Dynamic Economies. Research paper, Baruch College, New York, 2012. 56 pp.
5. John A. List. Neoclassical theory versus prospect theory: Evidence from the marketplace. Econometrica, 72(2):615–625, 2004.
6. Yusufcan Masatlioglu and Neslihan Uler. Understanding the reference effect. Working paper, 2012.
7. Alistair Munro and Robert Sugden. On the theory of reference-dependent preferences. Journal of Economic Behavior and Organization, 50(4):407–428, 2003.
8. Charles R. Plott and Kathryn Zeiler. The willingness to pay–willingness to accept gap, the 'endowment effect,' subject misconceptions, and experimental procedures for eliciting valuations. The American Economic Review, 95(3):530–545, 2005.
9. Dirk Engelmann and Guillaume Hollard. Reconsidering the effect of market experience on the 'endowment effect.' Econometrica, 78(6):2005–2019, 2010.
10. Special thanks to my research teacher, Dr. Shapovalov, for guiding me through my research, and to Professor Sean Crockett for allowing me to conduct the project with him at Baruch College.