Yavuz Sefik
2/25/15
Week 25 Analysis Assignment
Thomas Malthus believed that “population, when unchecked, increases in a geometrical ratio,” and that “subsistence increases only in an arithmetical ratio.” What he is saying is that eventually, the population exceeds the amount of food that is available for consumption; therefore, starvation occurs among the poor, and many die.
Malthus states that by nature, we require food. That means that there must be a “strong and constantly operating check on population from the difficulty of subsistence.” This check, according to him, is the starvation of the poor. Since there is less food, fewer people can obtain enough of it to survive. “The poor consequently must live much worse, and many of them be reduced to severe distress. The number of labourers also being above the proportion of the work in the market, the price of labour must tend toward a decrease; while the price of provisions would at the same time tend to rise. The labourer therefore must work harder to earn the same as he did before.”
He then finally states that population growth “cannot be checked, without producing misery or vice,” and implies that even though the government may try to implement welfare programs to benefit the poor, in the end, the natural law will win, and the ones who cannot afford food will die. According to him, it is best to let nature run its course, so that most of the population can be happy.
Business Research Methods, Ch. 16
created by LOUIS DAILY
Last updated Feb 24, 2015, 7:24 AM
· EDA
posted by LOUIS DAILY at Feb 22, 2015, 10:52 PM
Histograms, box-plots, and stem and leaf displays can clarify
how data is distributed. These plots and the various graphs
(bar, line, etc.) are an important part of Exploratory Data
Analysis. Discuss. Questions?
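As a small illustration of the displays mentioned in the prompt, the five-number summary behind a box plot and simple histogram bin counts can be computed with nothing beyond Python's standard library (the data values here are invented purely for demonstration):

```python
import statistics

data = [12, 15, 15, 18, 21, 22, 22, 24, 29, 35, 41, 58]

# Five-number summary: the skeleton of a box plot.
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
summary = (min(data), q1, q2, q3, max(data))
print("min/Q1/median/Q3/max:", summary)

# Histogram: count how many values fall in each 10-wide bin.
bins = {}
for x in data:
    lo = (x // 10) * 10
    bins[lo] = bins.get(lo, 0) + 1
for lo in sorted(bins):
    print(f"{lo:>3}-{lo + 9:<3} {'*' * bins[lo]}")
```

Even this crude text histogram makes the right skew of the data visible at a glance, which is exactly the point of EDA graphics.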
· Re: EDA
posted by ERIK SEIDEL at Feb 24, 2015, 7:24 AM
While I've used some basic plots and graphs in my career,
primarily bar and line graphs, I'm learning through this class
just how valuable these graphs can be. I'm also learning about
the usefulness of other types of graphs that I have not really
considered before. Scatter plots are an excellent example. I
know that I've seen these types of graphs before but just
glanced over them because I didn't really understand them or
their value. However, these charts can be very useful because
they show various data points in a dataset and how they relate
to each other. By reviewing this, one can determine if there is a
positive or negative correlation between the variables on the
horizontal and vertical axes. For example, our medical
management team at our health insurance company is interested
in determining whether spending more on medical management
efforts results in overall lower medical expenses. By charting
the medical management expenses and medical expenses of
various health insurance companies on a scatter plot, this can be
determined.
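A back-of-the-envelope version of that scatter-plot check is the Pearson correlation coefficient; the per-company figures below are made up purely for illustration of a negative relationship like the one the post describes:

```python
import math

# Hypothetical per-company figures (in $ millions): medical
# management spend vs. total medical expenses.
mgmt_spend = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
med_expense = [98.0, 92.0, 90.0, 85.0, 83.0, 80.0]

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both std deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(mgmt_spend, med_expense)
print(f"r = {r:.3f}")  # strongly negative: more spend, lower expenses
```

A value near -1 is what the downward-sloping scatter plot would show visually.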
Business Research Methods, Ch. 15
· Data Collection and Protection
posted by ERIK SEIDEL at Feb 25, 2015, 5:58 AM
After defining the research problem, determining the null and
alternative hypothesis statements, assessing the appropriate
target population and sample size, etc., comes the data
collection phase for a study. There are various methods for data
collection, including data mining, surveys, etc. This step in the
process must be undertaken cautiously to ensure the integrity of
the data that is being gathered. Missing or incorrect data can
distort an analysis and provide management with useless
information. The concern is that management won't know the
data is useless and will use it as the basis for making decisions.
Protecting the data gathered is also very important. We've heard over the past couple of years about significant data breaches at various companies. These are very costly because of possible sanctions and the need to provide credit monitoring when someone's social security number or other personal information is lost. Breaches can also tarnish a company's image.
· Re: Data Collection and Protection
posted by PATRICIA MARCUS at Feb 25, 2015, 12:59 PM
Great post, Erik. I agree there are various methods for data collection. Data collection methods and data analysis procedures are essential parts of the research process. To get valid results, the data collection process should be precise, and the technique used to gather the data needs to suit the profile of the research study in order to produce correct results. Reliability means that an experiment, test, or any measuring procedure yields the same results on repeated trials; validity is the extent to which an instrument measures what it claims to measure (Carmines & Zeller, 1983). An example is data collection on STDs. It is important to protect the subjects in a study like this because personal information is involved and one's identity should be protected. The data collection methods supported the reliability and validity of the study by allowing the students to answer questionnaires anonymously, which served the main purpose of the study: to examine the effects of STDs.
Business Research Methods, Ch. 14
· sampling
posted by JUDEENE WALKER at Feb 25, 2015, 6:45 PM
According to our text, the basic idea of sampling is that by selecting some of the elements in a population we might draw conclusions about the entire population. Benefits of sampling include, but are not limited to, greater accuracy of results, lower cost, and greater speed of data collection.
Probability and non-probability sampling methods were also discussed in this chapter. Probability sampling is a sampling technique wherein the samples are gathered in a process that gives all the individuals in the population equal chances of being selected. Simple random sampling, stratified sampling, cluster sampling, and systematic random sampling are types of probability sampling methods.
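To make those methods concrete, here is a hypothetical sketch of simple random, systematic, and stratified selection over a toy population of 100 numbered elements (the population, strata, and seed are all invented for illustration):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible
population = list(range(1, 101))  # 100 numbered population elements

# Simple random sampling: every element has an equal chance.
srs = random.sample(population, 10)

# Systematic sampling: random start, then every k-th element.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: split into strata, sample each one separately.
strata = {"low": population[:50], "high": population[50:]}
stratified = [x for group in strata.values()
              for x in random.sample(group, 5)]

print(len(srs), len(systematic), len(stratified))
```

Note how stratification guarantees representation from each subgroup, which pure random sampling does not.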
· Re: sampling
posted by PATRICIA MARCUS at Feb 26, 2015, 7:51 PM
Great post, Judeene. In statistics and survey methodology, sampling is concerned with the selection of a subset of individuals from within a population to estimate characteristics of the whole population. The three main advantages of sampling are that the cost is lower, data collection is faster, and, since the data set is smaller, it is possible to ensure homogeneity and to improve the accuracy and quality of the data. A sampling process can be biased when no randomization is used in obtaining the sample, because the members of the population then do not have equal chances of being selected. The consequence is a misrepresentation of the entire population, which limits generalizations of the results of the study.
· Re: sampling
posted by LOUIS DAILY at Feb 26, 2015, 10:47 PM
Patricia,
Populations can be so large that we really have to use samples.
thanks
Lou
· The nature of sampling
posted by ARIEL SMITH at Feb 25, 2015, 1:00 PM
According to the text, the basic idea of sampling is that by
selecting some of the elements in a population, we may draw
conclusions about the entire population. A population is the
total collection of elements about which we wish to make some
inferences. What is a population element? A population element is the individual participant or object on which the measurement is taken, also known as the unit of study. A census
is a count of all the elements in a population. If 4,000 files
define the population, a census would obtain information from
every one of them. We call the listing of all population elements
from which the sample will be drawn the sample frame.
Russell, M., & Airasian, P. Classroom Assessment: Concepts
and Applications, 7th Edition. [VitalSource Bookshelf version].
Retrieved from
http://online.vitalsource.com/books/9781308263021/page/380
· Re: The nature of sampling
posted by ARIEL SMITH at Feb 25, 2015, 1:12 PM
Why Sample?
According to the text, there are several reasons for sampling, including (1) lower cost, (2) greater accuracy of results, (3) greater speed of data collection, and (4) availability of population elements.
1. Lower cost: Researchers can spend less money observing or interviewing a portion of a population rather than the entire population.
2. Accuracy: More than 90 percent of the total survey error in one study was from nonsampling sources, and only 10 percent or less was from random sampling error. The U.S. Bureau of the Census, while mandated to take a census of the population every 10 years, shows its confidence in sampling by taking sample surveys to check the accuracy of its census. Only when the population is small, accessible, and highly variable is accuracy likely to be greater with a census than a sample.
3. Speed of data collection: Sampling's speed of execution reduces the time between the recognition of a need for information and the availability of that information.
4. Availability: Sampling is the only process possible if the population is infinite. For example, the text uses an example related to vehicle safety. Safety is a compelling marketing appeal for most vehicles, yet we must have evidence to make such a claim, so we crash-test cars to test bumper strength or the efficiency of airbags in preventing injury. In testing for such evidence, we destroy the cars we test.
Russell, M., & Airasian, P. Classroom Assessment: Concepts
and Applications, 7th Edition. [VitalSource Bookshelf version].
Retrieved from
http://online.vitalsource.com/books/9781308263021/page/381
· Re: The nature of sampling
posted by ERIK SEIDEL at Feb 26, 2015, 5:27 AM
Hi Ariel,
This is a nice summary regarding the benefits of sampling. I
think there are also some challenges that go along with
sampling depending on the type of study. You may be trying to
answer a specific research question but know that only certain
people in the population would be appropriate to survey in order
to form a valid conclusion. For example, if a company were
seeking to determine the long-term health impacts of smoking,
it would first need to know which members of the population
have smoked for a certain number of years. This type of
information may be difficult to determine. Health care
organizations have an advantage because they have the ability
to ask every single patient whether or not they smoke and how
long they have smoked. People are much more willing to
provide health information to a doctor than to an anonymous
survey. This is another important consideration. If the survey
originates from the wrong source, it may receive far fewer responses.
· Re: The nature of sampling
posted by STEPHANIE RECTOR at Feb 26, 2015, 12:20 PM
Those are some very valid points, Ariel and Erik. Coming from a healthcare background, Erik, I agree that healthcare organizations definitely have an advantage asking such sensitive questions regarding one's health. Whether it's smoking, drinking, eating habits, etc., a population as a whole is more inclined to give honest feedback to their physician than to an anonymous surveyor. When I was in the sleep diagnostic field, many of our physicians administered sleep questionnaires (that we provided) to all of their patients. Whether a patient was coming in for a general physical or an eye infection, they were given these 20 questions to answer. Because sleep apnea had historically been underdiagnosed, our physicians were surprised to see their patients' results and the concluding results of their sleep studies. They placed a great deal of value on these surveys because they were able to be proactive about patients' overall health. Long-term sleep apnea leads to heart conditions, strokes, and possibly death, so early diagnosis and treatment were imperative. Those questionnaires served as a tool provided to each and every patient in their database.
· Re: The nature of sampling
posted by LOUIS DAILY at Feb 26, 2015, 6:50 PM
Ariel,
Yes, sampling is a useful time-saving tool.
thanks
Lou
· Sample vs. Census
posted by ARIEL SMITH at Feb 25, 2015, 1:16 PM
According to the text, the advantages of sampling over census studies are less compelling when the population is small and the variability within the population is high. Two conditions apply to a census: it is (1) feasible when the population is small and (2) necessary when the elements are quite different from each other. When the population is small and variable, any sample we draw may not be representative of the population from which it is drawn, and the resulting values we calculate from the sample may be incorrect as estimates of the population values. The size of a population helps determine whether a census is feasible.
Russell, M., & Airasian, P. Classroom Assessment: Concepts
and Applications, 7th Edition. [VitalSource Bookshelf version].
Retrieved from
http://online.vitalsource.com/books/9781308263021/page/381
· Re: Sample vs. Census
posted by JUDEENE WALKER at Feb 25, 2015, 7:33 PM
Census and sampling are methods of collecting data from a population. A census is a periodic collection of information from the entire population; it is also known as a complete enumeration. A census is a time-consuming affair, as it involves counting all items; most researchers are short on time and money, so the census is not a frequently used method.
A sample is a subset of units in a population that is selected to represent all units in the population of interest. Unlike a census, a sample is a partial enumeration (a count from a part of the entire population). A sample must be robust in its design and large enough to provide a good representation of the entire population of interest.
Advantages of a census over a sample:
1. A census provides a true measure of the population, reducing sampling error.
2. Increased confidence: conducting a census often results in enough respondents to allow a high degree of statistical confidence in the survey results.
When deciding between the two methods, be sure to keep in mind the goals of the present survey and other surveys along the line that will rely on the data being collected.
· Re: Sample vs. Census
posted by ARACHEAL VENTRESS at Feb 26, 2015, 1:41 PM
Judeene,
Thanks for your post. It prompted me to do a little more research on census versus sample. I found that there are other reasons why researchers will use a sample rather than a census of the population. Disadvantages of using a census are:
Cost: In terms of money, conducting a census for a large
population can be very expensive.
Time: A census generally takes longer to conduct than a sample
survey.
Response burden: Information needs to be received from every
member of the target population.
Control: A census of a large population is such a huge
undertaking that it makes it difficult to keep every single
operation under the same level of scrutiny and control.
Reference
http://www.statcan.gc.ca/
· Re: Sample vs. Census
posted by LOUIS DAILY at Feb 26, 2015, 6:50 PM
Ariel,
Yes, if you can measure the whole population, then do it! You
won't even need to do a hypothesis test.
thanks
Lou
· Good sample
posted by ARIEL SMITH at Feb 25, 2015, 1:24 PM
According to the text, the way to figure out whether you have a good sample is to ensure the sample represents the characteristics of the population. In measurement terms, the sample must be valid. Validity of a sample depends on two considerations: accuracy and precision.
1. Accuracy: Accuracy is the degree to which bias is absent from the sample. When the sample is drawn properly, the measure of behavior, attitudes, or knowledge of some sample elements will be less than the measure of those same variables drawn from the population, while for other elements it will be greater. Variations in these sample values offset each other, resulting in a sample value that is close to the population value. An accurate (unbiased) sample is one in which the underestimators offset the overestimators.
2. Precision: A second criterion of a good sample design is precision of estimate. Researchers accept that no sample will fully represent its population in all respects. The numerical descriptors that describe samples may be expected to differ from those that describe populations because of random fluctuations inherent in the sampling process. This is called sampling error (or random sampling error) and reflects the influence of chance in drawing the sample members. Sampling error is what is left after all known sources of systematic variance have been accounted for.
Russell, M., & Airasian, P. Classroom Assessment: Concepts
and Applications, 7th Edition. [VitalSource Bookshelf version].
Retrieved from
http://online.vitalsource.com/books/9781308263021/page/383
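The offsetting under- and over-estimates described above can be simulated. This toy sketch (synthetic population, arbitrary seed) draws repeated random samples and watches the sample means scatter around the population mean with essentially no systematic bias:

```python
import random
import statistics

random.seed(1)  # reproducible toy example
population = [random.gauss(100, 15) for _ in range(10_000)]
pop_mean = statistics.mean(population)

# Each sample mean misses the population mean a little (sampling
# error), but properly drawn samples miss high and low about equally.
sample_means = [statistics.mean(random.sample(population, 50))
                for _ in range(200)]
bias = statistics.mean(sample_means) - pop_mean
print(f"population mean {pop_mean:.2f}, average bias {bias:.3f}")
```

The individual misses are the "precision" part of the story; their near-zero average is the "accuracy" part.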
· Re: Good sample
posted by PATRICIA MARCUS at Feb 26, 2015, 6:31 PM
Great post, Ariel. Sampling introduces risk into the project: the risk that the data sample may not accurately portray the population. There may be inadvertent exclusions, clusters, strata, or other population attributes not understood and accounted for. There are two risk assessments to be made: (1) the "margin of error," which refers to the estimated error around the measurement, observation, or calculation of statistics within the interval of the sample data, and (2) the "confidence interval," which refers to the probability that the true population parameters are within the range of the interval.
The principal risk is that the sample misrepresents the population. If confidence is stated as 95% for some interval, then there is a 5% chance that the true population parameter lies outside the interval. The actual size of the population is therefore irrelevant, so long as it is large compared to the sample.
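Under the usual large-sample (normal-approximation) assumptions, the margin of error and 95% confidence interval described above can be computed directly; the survey numbers here are invented for illustration:

```python
import math

# Hypothetical survey: n responses with a sample mean and std dev.
n = 400
sample_mean = 72.0
sample_sd = 10.0

z = 1.96  # z-value for 95% confidence (normal approximation)
margin_of_error = z * sample_sd / math.sqrt(n)
ci = (sample_mean - margin_of_error, sample_mean + margin_of_error)
print(f"95% CI: {ci[0]:.2f} to {ci[1]:.2f}")
```

Notice that only n appears in the formula, not the population size, which is exactly the point made above.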
· Re: Good sample
posted by LOUIS DAILY at Feb 26, 2015, 6:49 PM
Ariel,
Yes, and the best way to increase your probability of having a representative sample is to sample randomly.
thanks
Lou
· Sample size
posted by ARIEL SMITH at Feb 25, 2015, 1:30 PM
According to the text, some principles that influence sample size include:
• The greater the dispersion or variance within the population, the larger the sample must be to provide estimation precision.
• The greater the desired precision of the estimate, the larger the sample must be.
• The narrower or smaller the error range, the larger the sample must be.
• The higher the confidence level in the estimate, the larger the sample must be.
• The greater the number of subgroups of interest within a sample, the greater the sample size must be, as each subgroup must meet minimum sample size requirements.
Cost considerations influence decisions about the size and type
of sample and the data collection methods. Almost all studies
have some budgetary constraint, and this may encourage a
researcher to use a nonprobability sample. Probability sample
surveys incur list costs for sample frames, callback costs, and a
variety of other costs that are not necessary when
nonprobability samples are used.
Russell, M., & Airasian, P. Classroom Assessment: Concepts
and Applications, 7th Edition. [VitalSource Bookshelf version].
Retrieved from
http://online.vitalsource.com/books/9781308263021/page/392
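Those principles show up directly in the standard sample-size formula for estimating a mean, n = (z·σ/E)². The sketch below uses made-up values of σ and E to show how a narrower error range or a higher confidence level inflates n:

```python
import math

def sample_size_for_mean(sigma, error, z=1.96):
    """Minimum n so a z-level interval for the mean is within +/- error."""
    return math.ceil((z * sigma / error) ** 2)

# Hypothetical population std dev of 15:
n_95 = sample_size_for_mean(sigma=15, error=2)            # 95% confidence
n_narrow = sample_size_for_mean(sigma=15, error=1)        # halve the error
n_99 = sample_size_for_mean(sigma=15, error=2, z=2.576)   # 99% confidence
print(n_95, n_narrow, n_99)
```

Halving the acceptable error quadruples the required sample, which is why the error range and budget have to be negotiated together.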
· Re: Sample size
posted by JUDEENE WALKER at Feb 25, 2015, 8:10 PM
Good points, Ariel.
Determining sample size is a very important issue when collecting data from the population. When samples are too large we waste time, money, and resources; on the other hand, when samples are too small they may lead to inaccurate results.
Before we can calculate the appropriate sample size we need to determine a few things about our target population. We need to determine the population size (how many people fit the demographic from which the data will be retrieved). The margin of error/confidence interval is another feature that needs to be identified before we decide on our sample size: there is no guarantee that the sample will be perfect, so we need to determine how much error we can accept.
Statistics for Business and Economics, Ch. 9
· Chi Square
posted by LOUIS DAILY at Feb 22, 2015, 10:48 PM
Chi Square is used with nominal or frequency data. Chi Square
compares the observed frequencies with the expected
frequencies. There are two kinds of Chi Square tests: a
goodness of fit test, and a test of independence.
Discuss. Questions?
· Re: Chi Square
posted by PATRICIA MARCUS at Feb 25, 2015, 12:49 PM
The chi in chi-square is the Greek letter χ, pronounced "ki" as in kite. Chi-square (χ2) procedures measure the differences between observed (O) and expected (E) frequencies of nominal variables, in which subjects are grouped in categories or cells. There are two basic types of chi-square analysis: the Goodness of Fit Test, used with a single nominal variable, and the Test of Independence, used with two nominal variables. Both types of chi-square use the same formula.
Computing the chi-square: the first step is to subtract the expected frequencies (E) from the observed (O); these differences fall under the "O-E" column, and notice that Σ(O-E)=0, just as deviations from a mean sum to zero. The second step is to square the differences; these squares are found under the "(O-E)2" column. The third step is to divide the squared differences by the expected values and sum the results, giving χ2 = Σ(O-E)2/E.
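Following those steps (subtract, square, divide by E, then sum) in code, with made-up observed and expected frequencies for three categories:

```python
observed = [50, 30, 20]   # hypothetical counts in three categories
expected = [40, 40, 20]   # counts expected under the null hypothesis

diffs = [o - e for o, e in zip(observed, expected)]    # step 1: O-E
assert sum(diffs) == 0    # the differences always sum to zero
squared = [d ** 2 for d in diffs]                      # step 2: (O-E)^2
chi_square = sum(s / e for s, e in zip(squared, expected))  # step 3
print(chi_square)  # 2.5 + 2.5 + 0.0 = 5.0
```

The resulting statistic would then be compared against a chi-square critical value for the appropriate degrees of freedom.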
· Re: Chi Square
posted by JUDEENE WALKER at Feb 25, 2015, 6:42 PM
Hi Professor, I had not heard of the chi square until I started this course; after reading your post and other resources I am clearer on the chi square and its uses. I learned that the chi square test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. In simpler terms, the chi square test is used to test whether a sample of data came from a population with a specific distribution (Snedecor & Cochran, 1989).
The goodness of fit test and the test of independence are two kinds of chi square tests. The goodness of fit test can be applied to any distribution (e.g., binomial or Poisson) for which we can calculate the cumulative distribution function. The test of independence, on the other hand, is applied when we have two categorical variables from a single population; it is often used to determine whether there is a significant association between the two variables.
The chi square test for independence should be used when:
1. The variables under study are each categorical.
2. The sampling method is simple random sampling.
· Re: Chi Square
posted by SAID SHEIK ABDI at Feb 25, 2015, 9:05 PM
The chi-squared test of independence is one of the most basic and common hypothesis tests in the statistical analysis of categorical data. Given two categorical random variables, the chi-squared test of independence determines whether or not there exists a statistical dependence between them. Formally, it is a hypothesis test whose null hypothesis is that the two variables are independent and whose alternative is that they are not.
Chi-Square Goodness of Fit Test
When an analyst attempts to fit a statistical model to observed data, he or she may wonder how well the model actually reflects the data. How "close" are the observed values to those which would be expected under the fitted model? One statistical test that addresses this issue is the chi-square goodness of fit test. This test is commonly used to test association of variables in two-way tables (see "Two-Way Tables and the Chi-Square Test"), where the assumed model of independence is evaluated against the observed data. In general, the chi-square test statistic is of the form χ2 = Σ (observed − expected)2 / expected.
http://www.ling.upenn.edu/~clight/chisquared.htm
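For the test of independence, each cell's expected frequency comes from the table margins, E = (row total × column total) / grand total. A hypothetical 2×2 sketch (the counts are invented for illustration):

```python
# Hypothetical 2x2 contingency table: rows and columns are two
# categorical variables (e.g., group vs. preference).
observed = [[30, 20],
            [10, 40]]

row_totals = [sum(row) for row in observed]          # [50, 50]
col_totals = [sum(col) for col in zip(*observed)]    # [40, 60]
grand_total = sum(row_totals)                        # 100

# Expected count per cell under independence.
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

chi_square = sum((o - e) ** 2 / e
                 for o_row, e_row in zip(observed, expected)
                 for o, e in zip(o_row, e_row))
print(expected)    # [[20.0, 30.0], [20.0, 30.0]]
print(round(chi_square, 3))
```

A large statistic relative to the chi-square critical value (here with 1 degree of freedom) would lead us to reject independence.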
Statistics for Business and Economics, Ch. 8
· Observational vs designed experiment
posted by ARACHEAL VENTRESS at Feb 25, 2015, 10:44 AM
What is the difference between an observational experiment and a designed experiment?
A designed experiment is one for which the analyst controls the
specification of the treatments and the method of assigning the
experimental units to each treatment. An observational
experiment is one for which the analyst simply observes the
treatments and the response on a sample of experimental units.
Our text went further to provide examples to help differentiate
between the two concepts: if you give one randomly selected
group of employees a training program and withhold it from
another randomly selected group to evaluate the effect of the
training on worker productivity, then you are designing an
experiment. If, on the other hand, you compare the productivity
of employees with college degrees with the productivity of
employees without college degrees, the experiment is
observational.
Reference
McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics
for Business and Economics (11th ed.). Boston, MA: Prentice
Hall
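The training-program example above hinges on random assignment, which is exactly what the analyst controls in a designed experiment. A hypothetical sketch (roster names and seed invented for illustration):

```python
import random

random.seed(7)  # fixed seed so the example is reproducible
employees = [f"emp{i:02d}" for i in range(1, 21)]  # hypothetical roster

# Designed experiment: the analyst randomly assigns units to treatments.
shuffled = employees[:]
random.shuffle(shuffled)
treatment = shuffled[:10]   # receives the training program
control = shuffled[10:]     # training withheld

print(len(treatment), len(control))
```

In the observational version, by contrast, the groups (degree vs. no degree) are simply taken as found, so any pre-existing differences between them come along for the ride.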
· Re: Observational vs designed experiment
posted by KIM DUNLAP at Feb 25, 2015, 8:29 PM
Hi Aracheal,
When I was reading our textbook regarding observational versus designed experimental research, I was thinking that because of the control aspect of experimental research, it is probably more expensive to conduct. I then did a little additional research and found that, indeed, designed experimental research is more expensive. Thinking about it in a common-sense way, it is easy to understand why: the control involved extends to the subjects, to the situation (whether the environment or the object), to ensuring consistency, and to maintaining all of these across the various samples.
I'm sure this is a contributing factor to the expense of health care supplies, whether we are researching a medication or a supply to be used on a patient. A lot of control is required in the testing of objects to be used on humans.
· Comment on Feb 26, 2015, 10:32 AM
Re: Observational vs designed experiment
posted by ARACHEAL VENTRESS at Feb 26, 2015, 10:32 AM
Last updated Feb 26, 2015, 10:32 AM
Kim, thanks for your post. From reading it, I can see why a designed experiment would be considered more expensive, given the experimenter's control involved. With budget constraints, I can see that being an issue.
According to our text, there are also advantages to a designed experiment. The text states that designed experiments are generally preferred to observational experiments: not only do we have better control of the amount and quality of the information collected, but we also avoid the biases inherent in how observational experiments select the experimental units representing each treatment.
Reference
McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics
for Business and Economics (11th ed.). Boston, MA: Prentice
Hall.
· Comment on Feb 26, 2015, 7:57 PM
Re: Observational vs designed experiment
posted by CRYSTAL RAMOS at Feb 26, 2015, 7:57 PM
Last updated Feb 26, 2015, 7:57 PM
An observational study measures the value of a response variable without attempting to influence any of the response or explanatory variables of the individuals. That is, in an observational study, the researcher observes the behavior of the individuals in the study without trying to influence the outcome of the study.
In a designed experiment, individuals are assigned to groups, the explanatory variables are intentionally changed, and the values of the response variables are recorded. If a researcher assigns the individuals in a study to a certain group, intentionally changes the value of the explanatory variable (remember the radiation given to the rats), and then records the value of the response variable for each group, the researcher is conducting a designed experiment (drug testing: placebo vs. active drug).
· ANOVA
created by LOUIS DAILY
Last updated Feb 26, 2015, 7:52 PM
· Comment on Feb 22, 2015, 10:46 PM
ANOVA
posted by LOUIS DAILY at Feb 22, 2015, 10:46 PM
Last updated Feb 22, 2015, 10:46 PM
Despite its name, Analysis of Variance is used to test means. If we have more than two groups to compare, we cannot use a t or z test; we must use ANOVA. The test statistic is F, and we compare it to the critical F from the table. If F is greater than the critical F, we reject the null hypothesis, H0: μ1 = μ2 = μ3 = .... For instance, we might have a low-dose drug group, a high-dose drug group, and a placebo group.
The F statistic is a ratio of two variances: the between-groups variance to the within-groups variance.
Discuss. Questions?
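The F ratio described above can be made concrete with a short pure-Python sketch. The group names follow the example in the post, but the scores are made up for illustration:

```python
from statistics import mean

# Hypothetical response scores for three treatment groups.
groups = {
    "placebo":   [23, 25, 22, 24, 26],
    "low dose":  [28, 27, 30, 29, 26],
    "high dose": [33, 31, 34, 32, 35],
}

k = len(groups)                           # number of treatments
n = sum(len(g) for g in groups.values())  # total observations
grand_mean = mean(x for g in groups.values() for x in g)

# Between-groups sum of squares (treatments) and within-groups (error).
sst = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
sse = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)

mst = sst / (k - 1)   # between-groups variance
mse = sse / (n - k)   # within-groups variance
F = mst / mse

print(f"F({k - 1}, {n - k}) = {F:.2f}")
# Compare F to the critical F with (k-1, n-k) degrees of freedom from a
# table; if it is larger, reject H0: mu1 = mu2 = mu3.
```

With these made-up scores the between-groups variance dwarfs the within-groups variance, so F comes out large and the null would be rejected.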
· Comment on Feb 25, 2015, 12:41 PM
Re: ANOVA
posted by PATRICIA MARCUS at Feb 25, 2015, 12:41 PM
Last updated Feb 25, 2015, 12:41 PM
Many organizations use a variety of tools for quality assurance and quality management. The challenge is to provide the best quality of service to clients in a time-effective manner, so having a range of tools in place helps an organization identify day-to-day problems and improve its decision-making. Identifying the problem is the first key step toward a sound decision. Once problems are identified, organizations can apply tests such as ANOVA, nonparametric tests, and the Kruskal-Wallis test in operations research and total quality management. These methods allow researchers to analyze the relevant data and then implement the solutions they find, helping the organization make the best possible decision.
· Comment on Feb 25, 2015, 6:13 PM
Re: ANOVA
posted by KIM DUNLAP at Feb 25, 2015, 6:13 PM
Last updated Feb 25, 2015, 6:13 PM
The ANOVA F test is equal to the square of the calculated t (if you ran the F test on a two-sample test). This indicates that the F-test and t-test are really the same; the big difference is that the F-test can be used to compare more than two treatment means, whereas the t-test applies to only two samples.
When calculating the F statistic, a value of the MST-to-MSE ratio near one indicates that the two sources of variation, between treatment means and within treatment means, are approximately equal. Values of F well in excess of 1 indicate that the variation among treatment means exceeds the variation within treatments and therefore support the alternative hypothesis.
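The F = t² relationship in the two-sample case is easy to verify numerically. A quick sketch with made-up data and a pooled-variance t test:

```python
from statistics import mean

# Two hypothetical samples.
a = [12.0, 14.0, 11.0, 13.0, 15.0]
b = [16.0, 15.0, 18.0, 17.0, 14.0]
na, nb = len(a), len(b)

# Pooled two-sample t statistic.
ssa = sum((x - mean(a)) ** 2 for x in a)
ssb = sum((x - mean(b)) ** 2 for x in b)
sp2 = (ssa + ssb) / (na + nb - 2)                  # pooled variance
t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# One-way ANOVA F statistic for the same two groups (k - 1 = 1).
grand = mean(a + b)
mst = na * (mean(a) - grand) ** 2 + nb * (mean(b) - grand) ** 2
mse = (ssa + ssb) / (na + nb - 2)                  # same quantity as sp2
F = mst / mse

print(f"t = {t:.4f}, t^2 = {t * t:.4f}, F = {F:.4f}")
assert abs(F - t * t) < 1e-9  # F equals the square of t
```

The assertion holds because MSE is exactly the pooled variance and, with two groups, MST reduces to the squared (standardized) mean difference.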
· Comment on Feb 25, 2015, 8:47 PM
Re: ANOVA
posted by SAID SHEIK ABDI at Feb 25, 2015, 8:47 PM
Last updated Feb 25, 2015, 8:47 PM
The reason for doing an ANOVA is to see if there is any
difference between groups on some variable. For example, you
might have data on student performance in non-assessed tutorial
exercises as well as their final grading. You are interested in
seeing if tutorial performance is related to final grade. ANOVA
allows you to break up the group according to the grade and
then see if performance is different across these grades.
ANOVA is available for both parametric (score) data and non-parametric (ranking/ordering) data.
The following assumptions hold when you perform an analysis of variance:
· The expected values of the errors are zero.
· The variances of all errors are equal to each other.
· The errors are independent from one another.
· The errors are normally distributed.
http://www.investopedia.com/terms/a/anova.asp
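These error assumptions can be checked informally from the residuals (each observation minus its group mean). A rough pure-Python sketch with made-up groups; formal checks would use, e.g., a normality test on the residuals and Levene's test for equal variances:

```python
from statistics import mean, pvariance

# Hypothetical treatment groups.
groups = [
    [5, 7, 6, 8, 4],
    [9, 11, 10, 12, 8],
    [14, 13, 15, 12, 16],
]

# Residual = observation minus its own group mean.
residuals = [x - mean(g) for g in groups for x in g]

# 1. Errors have expected value zero: residuals average to ~0 by construction.
print("mean residual:", round(mean(residuals), 10))

# 2. Equal error variances: the group variances should be of similar size.
variances = [pvariance(g) for g in groups]
print("group variances:", variances)
print("max/min variance ratio:", max(variances) / min(variances))
# (Independence is judged from the design itself, and normality from a
# normal probability plot or test of the residuals.)
```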
· Comment on Feb 26, 2015, 11:06 AM
Re: ANOVA
posted by ARACHEAL VENTRESS at Feb 26, 2015, 11:06 AM
Last updated Feb 26, 2015, 11:06 AM
ANOVA has long enjoyed the status of being the most used statistical technique in psychological research. The popularity of this technique can be attributed to two main reasons. First, like the t test, it deals with differences between and among sample means; however, it imposes no restriction on the number of means. Second, ANOVA can handle two or more independent variables simultaneously, and it provides information not only on the effect of each variable but also on the interacting effects of two or more variables.
Reference
Ximénez, C., & Revuelta, J. (2007). Extending the CLAST
sequential rule to one-way ANOVA under group
sampling. Behavior Research Methods, 39(1), 86-100. Retrieved from http://search.proquest.com/docview/204304311?accountid=35812
· Comment on Feb 26, 2015, 2:24 PM
Re: ANOVA
posted by STEPHANIE RECTOR at Feb 26, 2015, 2:24 PM
Last updated Feb 26, 2015, 2:24 PM
According to McClave, "Conditions Required for a Valid
ANOVA F-test: Completely Randomized Design
· 1. The samples are randomly selected in an independent
manner from the k treatment populations. (This can be
accomplished by randomly assigning the experimental units to
the treatments.)
· 2. All k sampled populations have distributions that are
approximately normal.
· 3. The k population variances are equal (i.e., σ1² = σ2² = … = σk²)" (McClave, 2011).
An example would be if the ASA (Amateur Softball Association) wanted to compare the mean distances traveled by four competing brands of softball when hit with a particular bat. These brands could be Worth, Dudley, Wilson, and Rawlings. Following the example in our material closely, ten randomly sampled balls of each brand would be hit in random sequence. This could be done with a robotic swinger, with the distance recorded for each hit. A comparison of the mean distances for each brand could then be made at α = .10, computing the test statistic and p-value to analyze the results. Although we used only four brands and ten hits in this example, ANOVA places no limit on the number of treatment means that can be compared.
Reference
McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics
for Business and Economics (11th ed.). Boston, MA: Prentice
Hall.
· Comment on Feb 26, 2015, 7:52 PM
Re: ANOVA
posted by CRYSTAL RAMOS at Feb 26, 2015, 7:52 PM
Last updated Feb 26, 2015, 7:52 PM
In the typical application of ANOVA, the null hypothesis is that
all groups are simply random samples of the same population.
For example, when studying the effect of different treatments
on similar samples of patients, the null hypothesis would be that
all treatments have the same effect (perhaps none). Rejecting
the null hypothesis would imply that different treatments result
in altered effects.
By construction, hypothesis testing limits the rate of Type I errors (false positives leading to false scientific claims) to a significance level. Experimenters also wish to limit Type II
errors (false negatives resulting in missed scientific
discoveries). The Type II error rate is a function of several
things including sample size (positively correlated with
experiment cost), significance level (when the standard of proof
is high, the chances of overlooking a discovery are also high)
and effect size (when the effect is obvious to the casual
observer, Type II error rates are low).
The terminology of ANOVA is largely from the statistical
design of experiments. The experimenter adjusts factors and
measures responses in an attempt to determine an effect. Factors
are assigned to experimental units by a combination of
randomization and blocking to ensure the validity of the results.
Blinding keeps the weighing impartial. Responses show a
variability that is partially the result of the effect and is
partially random error.
ANOVA is the synthesis of several ideas and it is used for
multiple purposes. As a consequence, it is difficult to define
concisely or precisely.
Yavuz Sefik
2/25/15
Week 25 Textbook Assignment
1. Why did the Industrial Revolution start in Britain?
Britain was already experienced in metallurgy and mining because of its access to coal and iron, which gave it a head start in the revolution. The individualistic attitude of the British population, along with the importance placed on understanding the rational workings of nature, gave people a reason to create machines. The enclosure movement, under which wealthy landlords could take common land as needed, drove a mass migration of labor from the countryside to the cities of Britain. Factories now had the labor they needed to ensure productivity, and the conditions for the Industrial Revolution were set.
2. What role did the cotton industry play in the beginnings of the Industrial Revolution?
New technology introduced into the cotton industry made cotton easier to weave and spin. Buildings were needed to house these machines, so more factories were built and more jobs were created. As newer technology for spinners was invented, technology for weavers raced to catch up, and vice versa. This competition would continue to drive industry.
3. How did the development of steam power transform industry
in Britain?
It allowed factories to be built anywhere, whereas before they had to be built near the only reliable power source of the time, water. As steam power came into wider use, industry grew faster. Railroads were built that could transport goods quickly, and steam-powered boats were constructed so that bigger payloads could be shipped farther. As a result, physically weaker people, children and women, could work these stronger, less demanding machines, and society in Britain was transformed.
4. What is Doctor Kay’s description of the mental and physical
health of a worker in a Manchester cotton mill?
The workers become reckless, since they work so hard yet are given so little. They live and work in crowded and filthy environments, and they catch diseases easily. The home is seen only as shelter, and meals are prepared and eaten hastily so that the workers can get back to earning what little money they do.
5. Kay does not seem to think the employer, the machines, or
the society, are responsible for the workers’ misery. What do
you think?
I believe that, applied correctly, the machines can be used to benefit society as a whole. The only reason the people were as miserable as they were was the lack of compassion among the employers and in the government. All the employers cared about was making as large a profit as possible, while the government allowed this so that the nation's economy could grow. If regulations were put in place, society would be different, and the workers' lives would likely improve.
Yavuz Sefik
2/25/15
Week 25 Homework Assignment
1. What role did the preceding agricultural revolution play in
the rise of industry in Britain?
Without the agricultural revolution that preceded it, the Industrial Revolution could not have occurred in Britain. As farming improved (through increased use of pesticides, farming methods such as crop rotation, and the enclosure movement), agricultural output rose and more people lived longer. As a result, more labor was available for industry, and the people forced off their farms by the enclosure movement went looking for work in the cities.
2. What roles did cotton, the steam engine and new forms of
transportation have that led to the rise of industry in Britain?
New technology introduced into the cotton industry made cotton easier to weave and spin. Buildings were needed to house these machines, so more factories were built and more jobs were created. As newer technology for spinners was invented, technology for weavers raced to catch up, and vice versa. This competition would continue to drive industry.
The steam engine allowed factories to be built anywhere, whereas before they had to be built near the only reliable power source of the time, water. As steam power came into wider use, industry grew faster. Railroads were built that could transport goods quickly, and steam-powered boats were constructed so that bigger payloads could be shipped farther. As a result, physically weaker people, children and women, could work these stronger, less demanding machines, and society in Britain was transformed.
3. What were the working and living conditions of the poor
proletariat workers in Industrial Britain?
The proletariat workers lived and worked in crowded and filthy environments, and they caught diseases easily. The home was seen only as shelter, and meals were prepared and eaten hastily so that the workers could get back to earning what little money they did. Dust and soot were everywhere and damaged the workers' lungs.