Business Research Methods, Ch. 19
Chapter 19: Cluster Analysis The thread has 1 unread message.
created by Jynx Gresser
Last updated Mar 07, 2015, 11:34 PM
1
· Comment on Mar 07, 2015, 11:34 PM
Message collapsed. Message unread Chapter 19: Cluster
Analysis
posted by Jynx Gresser at Mar 07, 2015, 11:34 PM
Last updated Mar 07, 2015, 11:34 PM
·
According to Cooper and Schindler (2011), cluster analysis is "a set of interdependence techniques for grouping similar objects or people" (p. 550). This method is often utilized in the fields of medicine, biology, and marketing (Cooper & Schindler, 2011). Within the field of marketing, one can divide customers into groups based on buying behaviors as well as age, lifestyle, and financial characteristics (Cooper & Schindler, 2011). Cluster analysis is often compared to factor analysis, but it differs in how correlations are treated: they are treated as similarity measures rather than as variables in a linear model (Cooper & Schindler, 2011). There are five basic steps in the application of cluster analysis: selection of the sample to be clustered; definition of the variables on which to measure the objects, events, or people; computation of similarities through correlations; selection of mutually exclusive clusters; and cluster comparison and validation (Cooper & Schindler, 2011). The biggest takeaway from this analysis method is that grouping similar cases together heightens awareness of links in the data across different demographic variables. A dendrogram provides a visual representation of how to categorize clusters and understand their differences. I look forward to utilizing this type of analysis when I begin my marketing classes that involve product development and how it affects buying behavior. Does anyone in class utilize this method and have some insight into how it is applied?
Reference
Cooper, D. R., & Schindler, P. S. (2011). Business Research Methods (11th ed.). New York, NY: McGraw-Hill/Irwin. Retrieved from the University of Phoenix eBook Collection database.
Jynx Gresser
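As a rough illustration of the five steps Jynx describes, here is a minimal sketch of hierarchical clustering in Python using SciPy. The customer variables, values, and choice of three clusters are invented for illustration and are not from Cooper and Schindler.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

# Hypothetical customers measured on age, annual spend, and store visits
customers = np.array([
    [25, 1200, 14],
    [27, 1350, 16],
    [52,  400,  3],
    [49,  450,  4],
    [33, 3000, 30],
    [31, 2800, 28],
])

# Standardize each variable so no single scale dominates the similarity measure
z = (customers - customers.mean(axis=0)) / customers.std(axis=0)

# Ward linkage builds the hierarchy of mutually exclusive clusters
tree = linkage(z, method="ward")

# Cut the tree into three clusters and inspect the assignments
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)

# dendrogram(tree) would draw the tree described in the post (requires matplotlib)
```

Cutting the dendrogram at different heights yields different numbers of mutually exclusive clusters, which is where the comparison and validation step comes in.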
·
SEM The thread has 3 unread messages.
created by ARACHEAL VENTRESS
Last updated Mar 07, 2015, 10:03 PM
3
· Comment on Mar 06, 2015, 1:25 PM
Message collapsed. Message unread SEM
posted by ARACHEAL VENTRESS at Mar 06, 2015, 1:25 PM
Last updated Mar 06, 2015, 1:25 PM
·
Structural equation modeling (SEM) implies a structure for the
covariances between observed variables, and accordingly it is
sometimes called covariance structure modeling. More
commonly, researchers refer to structural equation models as
LISREL (linear structural relations) models--the name of the
first and most widely cited SEM computer program.
SEM is a powerful alternative to other multivariate techniques,
which are limited to representing only a single relationship
between the dependent and independent variables. The major
advantages of SEM are (1) that multiple and interrelated
dependence relationships can be estimated simultaneously and
(2) that it can represent unobserved concepts, or latent
variables, in these relationships and account for measurement
error in the estimation process.
Reference
Cooper, D. R., & Schindler, P. S. (2011). Business Research Methods (11th ed.). New York, NY: McGraw-Hill/Irwin. Retrieved from the University of Phoenix eBook Collection database.
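As a rough illustration of advantage (2) above, the short simulation below shows how measurement error in an observed indicator biases an ordinary regression slope toward zero, which is the problem SEM's latent variables and error terms are meant to address. The construct, effect size, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True latent construct (e.g., "brand loyalty") and an outcome it drives with slope 1.0
latent = rng.normal(size=n)
outcome = 1.0 * latent + rng.normal(scale=0.5, size=n)

# We only observe a noisy indicator of the latent construct
indicator = latent + rng.normal(scale=1.0, size=n)

# Ordinary least-squares slope of outcome on the noisy indicator
slope = np.cov(indicator, outcome)[0, 1] / np.var(indicator)
print(round(slope, 2))  # roughly 0.5: measurement error biases the slope toward zero
```

With several indicators per construct, an SEM can estimate that measurement error explicitly and recover a slope much closer to the true value.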
· Comment on Mar 07, 2015, 8:36 PM
Message collapsed. Message unread SEM
posted by PATRICIA MARCUS at Mar 07, 2015, 8:36 PM
Last updated Mar 07, 2015, 8:36 PM
·
Great post, Aracheal. SEM is a flexible and extensive method for testing theory. It is a statistical modeling technique used to establish relationships among variables, and it is best developed on the basis of substantive theory. Statistical estimates of the hypothesized covariances indicate, within a margin of error, how well the model fits the data. Structural equation models subsume factor analysis, regression, and path analysis. Integrating these types of analysis is an important advancement because it makes possible the empirical specification of linkages between imperfectly measured variables and the theoretical constructs of interest. A key feature of SEM is that the observed variables are seen as representations of a small number of constructs that cannot be directly measured but only inferred from the observed, measured variables.
· Comment on Mar 07, 2015, 10:03 PM
Message collapsed. Message unread Re: SEM
posted by MaDonna Keys at Mar 07, 2015, 10:03 PM
Last updated Mar 07, 2015, 10:03 PM
·
According to StatSoft, structural equation modeling (SEM) is a very powerful analysis technique because it encompasses a number of specialized methods, each suited to particular analysis cases. There are six major applications of structural equation modeling: causal modeling, confirmatory factor analysis, second-order factor analysis, regression models, covariance structure models, and correlation structure models.
Causal modeling, or path analysis, hypothesizes causal relationships among latent and manifest variables and tests the causal models with a system of linear equations.
Confirmatory factor analysis tests specific hypotheses about the structure of the factors underlying a set of intercorrelations.
Second-order factor analysis analyzes the correlation matrix of the common factors to provide a second, higher-order set of factors.
Regression models extend ordinary linear regression by allowing regression weights to be constrained, for example to equal one another or to fixed numerical values.
Covariance structure models hypothesize that a covariance matrix has a particular form; for example, one can test whether a set of variables all have equal variances.
Correlation structure models hypothesize that a correlation matrix has a particular form; a classic example is a circumplex structure.
Retrieved from StatSoft, "Structural Equation Modeling," accessed March 8, 2015.
·
Multivariate Analysis The thread has 2 unread messages.
created by STEPHANIE RECTOR
Last updated Mar 07, 2015, 8:25 PM
2
· Comment on Mar 05, 2015, 10:26 AM
Message collapsed. Message unread Multivariate Analysis
posted by STEPHANIE RECTOR at Mar 05, 2015, 10:26 AM
Last updated Mar 05, 2015, 10:26 AM
·
Many businesses today rely on multiple independent and multiple dependent variables because of how complex consumer preferences are. According to our readings, multivariate analysis refers to "those statistical techniques which focus upon, and bring out in bold relief, the structure of simultaneous relationships among three or more phenomena" (Cooper, 2011). Dependence and interdependence are the two types of multivariate techniques, and selecting the correct one is of high importance. You would utilize a dependence technique if the criterion and predictor variables are clear in the research question. Cooper also states, "Alternatively, if the variables are interrelated without designating some as dependent and others independent, then interdependence of the variables is assumed. Factor analysis, cluster analysis, and multidimensional scaling are examples of interdependency techniques" (Cooper, 2011). Multivariate analysis can also be complicated by the desire to include physics-based analysis to calculate how variables influence hierarchical "systems of systems."
Reference:
Cooper, D. R., & Schindler, P. S. (2011). Business Research Methods (11th ed.). New York, NY: McGraw-Hill/Irwin. Retrieved from the University of Phoenix eBook Collection database.
· Comment on Mar 05, 2015, 6:34 PM
Message collapsed. Message unread Re: Multivariate Analysis
posted by LOUIS DAILY at Mar 05, 2015, 6:34 PM
Last updated Mar 05, 2015, 6:34 PM
Stephanie,
Yes, when we have more than one dependent variable, we can use multivariate techniques to analyze the design.
thanks
Lou
·
Conjoint Analysis The thread has 3 unread messages.
created by KIM DUNLAP
Last updated Mar 07, 2015, 8:57 PM
3
· Comment on Mar 06, 2015, 6:26 PM
Message collapsed. Message unread Conjoint Analysis
posted by KIM DUNLAP at Mar 06, 2015, 6:26 PM
Last updated Mar 06, 2015, 6:26 PM
·
Conjoint Analysis is one that seems to be used often in the marketing aspect of business. It is a method or tool that allows us to determine the relative importance of combinations of attributes and rank them in order of importance. In other words, on almost any product you can assign different attributes such as brand, color options, size, price, and options available. You could then assign different levels to combinations of these attributes and go out and survey the public. By doing this you can rank the importance of the different combinations to your customers and market those qualities that matter most to your customer base.
Statistical software that assists the researcher in analyzing
conjoint data is helpful.
· Comment on Mar 07, 2015, 5:20 PM
Message collapsed. Message unread Re: Conjoint Analysis
posted by JUDEENE WALKER at Mar 07, 2015, 5:20 PM
Last updated Mar 07, 2015, 5:20 PM
·
Conjoint analysis is a popular marketing research technique that marketers use to determine what features a new product should have and how it should be priced. The main steps in using conjoint analysis include determining the salient attributes for the given product from the point of view of the consumers.
In simple terms, conjoint analysis is an advanced market research technique that gets to "the root of the matter" of how people make decisions and what they really value in products and services. Using this technique, one presents people with choices and then analyzes the data.
A benefit of using conjoint analysis is the ability to evaluate product and service attributes in a way that no other method can. Conjoint analysis also provides the ability to use the results to develop market simulation models that can be used well into the future. In this ever-changing competitive market it is important to use conjoint analysis, as it allows changes to be incorporated into a simulation model showing predictions of how buyers will respond to different changes.
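A minimal sketch of the estimation step described above: rated product profiles are dummy-coded by attribute level and fit with ordinary least squares, and the coefficients are the part-worth utilities. The brands, price levels, and ratings are invented for illustration; real studies use many respondents and a designed set of profiles.

```python
import numpy as np

# Hypothetical ratings (1-10) of product profiles defined by brand and price level
# Columns: intercept, brand=B (vs. A), price=$15 (vs. $10), price=$20 (vs. $10)
profiles = np.array([
    [1, 0, 0, 0],  # brand A, $10
    [1, 0, 1, 0],  # brand A, $15
    [1, 0, 0, 1],  # brand A, $20
    [1, 1, 0, 0],  # brand B, $10
    [1, 1, 1, 0],  # brand B, $15
    [1, 1, 0, 1],  # brand B, $20
])
ratings = np.array([8.0, 6.5, 4.0, 9.0, 7.5, 5.0])

# Least-squares fit: the coefficients are the part-worth utilities of each level
partworths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
print(dict(zip(["base", "brand B", "price $15", "price $20"], partworths.round(2))))
```

The relative importance of each attribute can then be judged from the range of its part-worths.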
· Comment on Mar 07, 2015, 8:57 PM
Message collapsed. Message unread Re: Conjoint Analysis
posted by PATRICIA MARCUS at Mar 07, 2015, 8:57 PM
Last updated Mar 07, 2015, 8:57 PM
·
Great post, Judeene. From what I gather about conjoint analysis, and I am in agreement with you, it presents people with choices and then analyzes those choices. It allows businesses to uncover and quantify the hidden rules people use to make trade-offs between the different features or components of an offer. The principle behind conjoint analysis is that it starts by breaking down everything about a product or service and then tests combinations of those attributes to find out what the customer prefers. Conjoint analysis can be relatively complex because it requires an understanding of how to create and use attributes and levels. Even then, a conjoint analysis doesn't always fit. So depending on the product or service, it is possible that certain approaches are not suitable and other methods are needed.
Interpreting Statistics
Write a one-paragraph (2-3 complete sentences) summary of the
article, demonstrating that
you read and understood it. (Look at your notes on
Summarizing.)
Researchers at UCLA measured the number of same-sex couples that are married or have registered civil unions or domestic partnerships. Researchers also viewed the number of couples that are legally ending their relationships, in contrast to the divorce rate for straight couples. Based on their research findings, they drew several conclusions about the current and possible future trends of gay marriage and divorce.
STATISTIC # 1
1. Explain the statistic you selected from your article. (Look at
your class notes on
interpreting statistics and read through the Statistics PPT
again.)
About two-thirds of registered or married same-sex couples are
lesbians and only about
one-third are gay men.
2. What are the two pieces of data being compared in the
statistic?
The two pieces of data being compared are the number of
registered same-sex civil
unions or marriages among women and the number of registered
same-sex civil unions
or marriages among men.
3. Describe at least one conclusion that can be drawn from your
statistic.
One conclusion that can be drawn is that marriage is more
appealing to women than to
men. Another possible conclusion that one could draw is that
gay women are more
comfortable publicly proclaiming their sexual orientation than
gay men.
STATISTIC #2
1. Explain the statistic you chose.
About 1% of married or registered same-sex couples get
divorced each year while
approximately 2% of straight couples get divorced each year.
2. What are the two pieces of data being compared in the
statistic?
The two pieces of data being compared are the divorce rates for
same-sex couples and
the divorce rates for straight couples.
3. Describe at least one conclusion that can be drawn from your
statistic.
One conclusion that can be drawn is that partners in same-sex
marriages are happier
than partners in straight marriages.
These findings are based on the article cited below.
Hertz, Frederick. "Frederick Hertz: Divorce & Marriage Rates
for Same-Sex Couples." Breaking News and Opinion on The
Huffington Post. N.p., n.d. Web. 10 Oct. 2013.
<http://www.huffingtonpost.com/frederick-hertz/divorce-
marriage-rates-
fo_b_1085024.html>.
Sexual assault reports climb at area colleges
N.E. schools’ data tied to a greater awareness
By Matt Rocheleau
Globe Correspondent, October 06, 2014
Reports of sexual assaults on area college campuses rose
markedly last year, an increase that safety
specialists attribute primarily to heightened national awareness
of the problem, which prompts more
victims to come forward.
A survey of information from more than two dozen of the
largest New England colleges found that
reports of “forcible sex offenses” climbed by 40 percent overall
between 2012 and 2013, according to
a Globe review of data that colleges provided in annual
federally mandated reports released last
week.
Last year, there were a total of 289 reports of the offenses at the
colleges. That compares with 206 in
2012. The 2013 total was more than twice as many as reported
five years before.
Sexual assaults on and around college campuses, long
considered a vastly underreported crime, have
received increased attention in recent months from the Obama
administration, the schools, and
students. As a result, campuses have stepped up training,
support, and outreach, and the rising
number for 2013 is seen as signaling that victims are more
comfortable reporting assaults.
“It means that students are coming forward and reporting crimes
that are happening and ending that
culture of silence,” said Alison Kiss, director of the Clery
Center for Security On Campus, a nonprofit
that trains colleges to comply with the federal Clery Act.
Specialists also believe the spike in reporting may indicate that
colleges are becoming more thorough
and transparent in collecting and disclosing sexual assault data.
At all but four of the campuses in the review, the number of
sexual assault reports rose or held steady
last year.
Hampshire College said its reports of assaults increased from 13
to 20, giving the school the highest
rate of sexual assaults per 1,000 students in 2013 — 13.6 —
among colleges surveyed by the Globe.
The next highest rate was 5.5 assaults per 1,000 students, at Dartmouth College; the school reported 24 assaults in 2012,
and 35 in 2013. The average rate among schools in the survey
was 0.99 per 1,000 students in 2013.
Diana Fernandez, Hampshire’s coordinator for Title IX, a
federal law that mandates gender equality
in campus life, attributed the above-average rate at the college
to its new efforts to inform students
about how to report the crime and about the increased resources
available for victims.
“It’s continued education and really working with the
community on that education and training,”
Fernandez said.
She said Hampshire recently added online training for students
on sexual assault prevention.
Hampshire has also expanded workshops that teach students
about consent and how to intervene if
they see a friend in a vulnerable situation.
“We really look at how can we promote these conversations and
how can we work with our students
to have these conversations,” she said.
University of Connecticut saw sexual assault reports nearly
double from 13 in 2012 to 25 last year,
putting its rate for 2013 at 0.88 per 1,000 students.
“I think the numbers will continue to rise here and at other
campuses for years to come, and that’s
really important because it gives us a better picture of what’s
happening,” said UConn’s Title IX
coordinator, Elizabeth Conklin.
Meanwhile, Amherst College was one of just four schools in the
survey that saw sexual assault
reports decline. The college reported 14 sexual assaults in both
2010 and 2011, 17 offenses in 2012,
and nine in 2013, which put its rate for 2013 at 5.04 per 1,000
students.
Campus spokeswoman Caroline Hanna said in an e-mail that the
college has taken numerous steps
to better prevent and respond to sexual assault and encourages
students to report the crime. But, “as
these numbers tend to fluctuate across institutions and year over
year, it is difficult for us to
speculate about the reason for the change in last year’s
numbers,” Hanna said.
Under the Clery Act, colleges are required by Oct. 1 each year
to issue a report that includes statistics
of allegations of crimes that occurred on campus, including
dorms and other public property; at
property owned by, but separated from, the main campus; and at
fraternities and sororities. They
exclude other off-campus housing.
An estimated 88 percent of college victims do not formally
report sexual assaults, federal studies
suggest. Even so, some area schools have reported surprisingly
low numbers of assault allegations,
given their size.
Suffolk University, which enrolled 8,800 students last year, said
there were no reports of sexual
assaults in 2013. Since 2008, the downtown university reported
two cases, both in 2010.
School spokesman Greg Gatlin said the low numbers may be
because fewer than 15 percent of Suffolk
students live on campus.
The university “works closely with students, faculty, and staff
to encourage the reporting of any
incident and to make our community aware of available
resources,” he said in an e-mail.
Bunker Hill Community College, with an enrollment of about
14,000, listed one report of sexual
assault in both 2008 and 2012, and zero reports in the four years
in between.
Spokeswoman Karen Norton said the college is confident its
data is accurate. “We have a lot of
processes in place about how we gather the information and
report it,” Norton said.
The two-year school is different from more traditional colleges,
she said: There are no dormitories;
two-thirds of the students are part time; and about one-third
take at least some of their classes
online. The average age of students is 27, she said.
Without singling out specific schools, the Clery Center’s Kiss
and other experts say that low numbers
may indicate colleges are not doing enough to raise awareness
about the issue or to educate how
victims can report. It may also mean colleges are not being
honest or thorough in their reporting,
experts said.
In recent years, numerous colleges have been caught or accused
of failing to disclose individual
reports of sexual assault or for releasing inaccurate data.
Under new federal guidelines, colleges were required for the
first time in 2013 to also compile and
disclose the number of reports of dating violence, domestic
violence, and stalking on their campuses.
But two local schools failed to include those statistics on their
latest Clery reports.
One school was Roxbury Community College, which
acknowledged last year — amid an ongoing
federal investigation — that it had underreported crime figures
in prior Clery reports. Campus
president Valerie Roberson said in an e-mail that the college
had no reports of domestic violence,
dating violence, or stalking for 2013.
“As a commuter college, these crimes would be unusual,
especially when compared to what might
occur at a residential college,” she wrote.
Wentworth Institute of Technology also failed to include the
newly required data.
After the Globe pointed out the omission, Wentworth
spokesman Caleb Cochran said it was an
oversight. On Thursday, the school issued an updated report
saying it did not receive reports in any
of the three categories last year.
Rocheleau, Matt. "Reports of sexual assaults on area college campuses rise sharply - The Boston Globe." BostonGlobe.com. N.p., 6 Oct. 2014. Web. 9 Oct. 2014. <http://www.bostonglobe.com/metro/2014/10/05/reports-sexual-assaults-area-college-campuses-rise-sharply/F0R0BoigySPVOaWn5YXeDI/story.html?p1=Article_Facet_Related_Article>.
· Statistics for Business and Economics, Ch. 13
Methods Of Forecasting The thread has 1 unread message.
created by MaDonna Keys
Last updated Mar 07, 2015, 10:59 PM
1
· Comment on Mar 07, 2015, 10:59 PM
Message collapsed. Message unread Methods Of Forecasting
posted by MaDonna Keys at Mar 07, 2015, 10:59 PM
Last updated Mar 07, 2015, 10:59 PM
·
According to Investopedia, there are a few different methods by which forecasts can be made, and every method falls into one of two categories: qualitative and quantitative.
Qualitative models are usually short-term predictions with a limited forecast scope. Qualitative forecasts rest on expert opinion; they depend on market mavens, or on the market as a whole, and on a weighing of informed consensus. They are best used for predicting short-term company success, and their reliability is only as good as the opinions behind them.
Quantitative methods discount expert judgment and try to remove the human element from the analysis. This kind of analysis is concerned only with data, such as gross sales, gross domestic product, and other measurable variables, rather than relying on people. It is used for longer horizons, measured in months to years.
Qualitative methods include market research, which polls large numbers of people about a specific product and how many would purchase it, and the Delphi method, which gathers the analysis of field experts.
Quantitative methods include the indicator approach, econometric modeling, and time series analysis. The indicator approach uses the relationships between particular indicators, such as leading or lagging performance data. Econometric modeling tests the consistency of data over a particular period of time. Time series analysis uses collections of past, related data to predict future events.
Retrieved from Investopedia, "The Basics of Business Forecasting," accessed March 8, 2015.
·
index numers The thread has 4 unread messages.
created by ARIEL SMITH
Last updated Mar 07, 2015, 9:19 PM
4
· Comment on Mar 04, 2015, 12:42 PM
Message collapsed. Message unread index numers
posted by ARIEL SMITH at Mar 04, 2015, 12:42 PM
Last updated Mar 04, 2015, 12:42 PM
·
According to the text, a common technique for characterizing a
business or economic time series is to compute index numbers.
Index numbers measure how time series values change relative
to a preselected time period, called the base period. There are
two types of indexes used in business and economics: price and quantity indexes. Price indexes measure changes in the price of a commodity or group of commodities over time. One example of a price index is the Consumer Price Index (CPI), which measures the price changes of a group of commodities intended to reflect the typical purchases of American consumers. An index constructed to measure the change in the total number of automobiles produced annually by American manufacturers would be an example of a quantity index.
McClave, J. T., Benson, P. G., & Sincich, T. (2010). Statistics
for Business and Economics, 11th Edition. [VitalSource
Bookshelf version].
Retrieved from
http://online.vitalsource.com/books/9781269882163/id/ch13tab0
1
· Comment on Mar 05, 2015, 6:45 PM
Message collapsed. Message read Re: index numers
posted by LOUIS DAILY at Mar 05, 2015, 6:45 PM
Last updated Mar 05, 2015, 6:45 PM
Ariel,
Yes, and the CPI is the usual measure of inflation in the
economy.
thanks
Lou
· Comment on Mar 06, 2015, 1:13 PM
Message collapsed. Message unread Re: index numers
posted by ARACHEAL VENTRESS at Mar 06, 2015, 1:13 PM
Last updated Mar 06, 2015, 1:13 PM
·
Ariel
Thanks for your thorough explanation of index numbers. An
index number is one simple number that we can look at to give
us a general overview of what is happening in a particular
field. Another example of a real-world index number is the Dow Jones Industrial Average. The Dow Jones Industrial Average, or DJIA for short, is a stock index. It measures the performance of 30 large public companies in the United States. These 30 companies are treated as representative of companies across the United States, and therefore this index is mentioned in relation to how businesses in the United States are doing. If this index goes down, that means companies in the United States are not doing well. If the index goes up, that means businesses in the United States are doing well. This index is very important in forecasting how well our economy is doing in terms of profit or deficit.
Reference
http://study.com/academy/lesson/index-numbers-in-statistics-
uses-examples.html
· Comment on Mar 06, 2015, 2:04 PM
Message collapsed. Message unread Re: index numers
posted by STEPHANIE RECTOR at Mar 06, 2015, 2:04 PM
Last updated Mar 06, 2015, 2:04 PM
·
Hello Ariel and Aracheal, thanks for the insight on index
numbers and examples. According to our readings, "Methods of
calculating index numbers range from very simple to extremely
complex, depending on the numbers and types of commodities
represented by the index" (McClave, 2010). An index number based on the price or quantity of a single commodity is referred to as a simple index number. This number describes the relative changes through time in the price or quantity of that commodity. An example would be constructing a simple index to describe the relative changes in gold prices over the last 15 years. The year 2000 would be considered the "base period." The simple index number for any chosen year is calculated by dividing that year's price by the price during the base year 2000, then multiplying the result by 100.
McClave, J. T., Benson, P. G., & Sincich, T. (2010). Statistics
for Business and Economics, 11th Edition.
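A minimal sketch of the calculation described above, with invented gold prices rather than actual market data.

```python
# Simple price index: each year's price relative to the base period, times 100
prices = {2000: 280.0, 2005: 445.0, 2010: 1225.0, 2015: 1160.0}  # hypothetical $/oz
base_year = 2000

index = {year: round(price / prices[base_year] * 100, 1) for year, price in prices.items()}
print(index)  # {2000: 100.0, 2005: 158.9, 2010: 437.5, 2015: 414.3}
```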
· Comment on Mar 07, 2015, 9:19 PM
Message collapsed. Message unread Re: index numers
posted by SAID SHEIK ABDI at Mar 07, 2015, 9:19 PM
Last updated Mar 07, 2015, 9:19 PM
·
Economists frequently use index numbers when making
comparisons over time. An index starts in a given year, the base
year, at an index number of 100. In subsequent years,
percentage increases push the index number above 100, and
percentage decreases push the figure below 100. An index
number of 102 means a 2% rise from the base year, and an index
number of 98 means a 2% fall.
Using an index makes quick comparisons easy. For example,
when comparing house prices from the base year of 2005, an
index number of 110 in 2006 indicates an increase in house
prices of 10% in 2006.
The best-known index in the United States is the consumer price
index, which gives a sort of "average" value for inflation based
on price changes for a group of selected products. The Dow Jones Industrial Average and the Nasdaq Composite are also index numbers.
http://mathworld.wolfram.com/IndexNumber.html
·
Forecasting The thread has 9 unread messages.
created by LOUIS DAILY
Last updated Mar 07, 2015, 9:13 PM
9
· Comment on Mar 02, 2015, 11:41 AM
Message collapsed. Message unread Forecasting
posted by LOUIS DAILY at Mar 02, 2015, 11:41 AM
Last updated Mar 02, 2015, 11:41 AM
In the previous chapters, we studied Linear Regression. Let's
say our Y variable is sales and our X variable is time (perhaps
Jan, Feb, March, April, etc). If the X variable is time, we call
this a time series. Can you think of a use for a regression line
with this time series?
If we had three years of data, how many years ahead do you
think we could predict using a regression line?
If each December sales went sky high because of Christmas,
how could we adjust for this before constructing the regression
line?
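One common way to handle the December question is to add a seasonal indicator to the regression so the trend and the holiday spike are estimated separately. The sketch below uses invented monthly sales, not data from the course materials.

```python
import numpy as np

# Hypothetical monthly sales for three years; every December (months 12, 24, 36) spikes
months = np.arange(1, 37)
sales = 100 + 2.0 * months + np.where(months % 12 == 0, 80, 0)

# Design matrix: intercept, time trend, and a December indicator
december = (months % 12 == 0).astype(float)
X = np.column_stack([np.ones_like(months, dtype=float), months, december])

# Least-squares fit gives the trend with the seasonal spike modeled separately
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, trend, dec_effect = coef
print(round(trend, 2), round(dec_effect, 2))  # ~2.0 per month, ~80 extra in December

# Forecast one year ahead (month 48, a December) using the fitted equation
forecast = intercept + trend * 48 + dec_effect * 1
print(round(forecast, 1))
```

The fit is exact here because the data were built from the same equation; with real, noisy sales the same design still separates the trend from the December effect, and forecasts much beyond a year would, as Lou notes below, be on shaky ground.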
· Comment on Mar 03, 2015, 5:42 AM
Message collapsed. Message read Re: Forecasting
posted by ERIK SEIDEL at Mar 03, 2015, 5:42 AM
Last updated Mar 03, 2015, 5:42 AM
·
A regression line with a time series would be helpful in
predicting future sales levels or other trends. For example,
medical expenses for our health insurance company tend to
fluctuate each year. Some years the flu is worse, and other
years there is just heavier utilization than others. But if you
create a time series of this data over a few years, you will see in
almost every case that there is an overall increasing trend to
medical expenses per member. This may be related to increases
in our provider contracts, increased utilization of services, or a
combination of the two. If we had three years of data, you
could probably predict another five years into the future.
However, consideration would have to be given to any expected material changes in the data due to a change in the business, industry, etc. If trying to project sales and sales are higher each
December, I think the best way to account for this would be to
either include total year sales as the x variable or only look at
year-to-year December sales as the x variable.
· Comment on Mar 03, 2015, 8:23 AM
Message collapsed. Message read Re: Forecasting
posted by LOUIS DAILY at Mar 03, 2015, 8:23 AM
Last updated Mar 03, 2015, 8:23 AM
Erik,
Time is usually the x variable, the independent variable. Then
we plot the change in the y variable over time. After that, we
"smooth" the data for random fluctuations, seasonality, etc.
Finally, we use regression to model the trend (if any).
thanks
Lou
· Comment on Mar 04, 2015, 4:25 PM
Message collapsed. Message unread Re: Forecasting
posted by STEPHANIE RECTOR at Mar 04, 2015, 4:25 PM
Last updated Mar 04, 2015, 4:25 PM
·
Can you think of a use for a regression line with this time
series?
If we had three years of data, how many years ahead do you
think we could predict using a regression line?
If each December sales went sky high because of Christmas,
how could we adjust for this before constructing the regression
line?
A sequence of data points makes up a time series, with successive measurements made over a particular time frame. According to Wikipedia, "Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements" (wikipedia.org). We can use time series forecasting to predict future values based on previous values. If we had three years of data we could predict ahead the next three to five years, although over the long term a time series forecast can lag. Before constructing a regression line, we could adjust for the high December sales by "smoothing," which involves removing irregular fluctuations in a time series. This method essentially "smooths" out most residual effects.
References:
http://en.wikipedia.org/wiki/Time_series
· Comment on Mar 05, 2015, 6:39 PM
Message collapsed. Message unread Re: Forecasting
posted by LOUIS DAILY at Mar 05, 2015, 6:39 PM
Last updated Mar 05, 2015, 6:39 PM
Stephanie,
With three years of data, I think we can be comfortable with
predicting one year ahead--any further out would be "on a
limb".
thanks
Lou
· Comment on Mar 06, 2015, 1:06 PM
Message collapsed. Message unread Re: Forecasting
posted by ARACHEAL VENTRESS at Mar 06, 2015, 1:06 PM
Last updated Mar 06, 2015, 1:06 PM
·
Professor:
A time series is a sequence of data points, typically consisting
of successive measurements made over a time interval.
Examples of time series are ocean tides, counts of sunspots, and
the daily closing value of the Dow Jones Industrial Average.
Time series are very frequently plotted via line charts. Time
series are used in statistics, signal processing, pattern
recognition, econometrics, mathematical finance, weather
forecasting, earthquake prediction, astronomy, and largely in
any domain of applied science and engineering which involves
temporal measurements.
Time series analysis comprises methods for analyzing time
series data in order to extract meaningful statistics and other
characteristics of the data. Time series forecasting is the use of
a model to predict future values based on previously observed
values. While regression analysis is often employed in such a
way as to test theories that the current values of one or more
independent time series affect the current value of another time
series, this type of analysis of time series is not called "time
series analysis", which focuses on comparing values of a single
time series or multiple dependent time series at different points
in time.
Reference
http://en.wikipedia.org/wiki/Time_series
· Comment on Mar 06, 2015, 11:56 PM
Message collapsed. Message unread Re: Forecasting
posted by Jynx Gresser at Mar 06, 2015, 11:56 PM
Last updated Mar 06, 2015, 11:56 PM
·
According to Gellert (n. d.), "it can be highly beneficial for
companies to develop a forecast of the future values of some
important metrics, such as demand for its products or variables
that describe the economic climate" (para. 1). At my company,
we have an actual daily, weekly, and monthly sales goal as well
as a forecast which is based on previous sales data with a little
lift. This would be an example of time series analysis because it
provides an estimate of what we could achieve in sales. Another
example could be gift set store inventory based on past dollar
sales and this is particularly evident during important sales
holidays, like Mother's Day. "It is possible to develop a linear
regression model that simply fits a line to the variables
historical performance and extrapolates that into the future"
(Gellert, n. d., para. 4). However, this line is unable to account for holidays where there are extreme values and nonlinearity (Gellert, n. d.). I would probably only predict one year out even with multiple years of data because this involves the most up-to-date knowledge for the analysis. For December sales, I would adjust for the sky-high sales by switching to a weekly sales linear regression. Unfortunately, December could be thrown out as an extreme value for the year, but if each week or another busy month were analyzed then we could understand its impact more effectively with regard to forecasting.
Reference
Gellert, A. (n. d.). Linear regression forecasting methods by
companies. The Houston Chronicle. Retrieved from
http://smallbusiness.chron.com/linear-regression-forecasting-
method-companies-73112.html
Jynx Gresser
· Comment on Mar 07, 2015, 8:26 AM
Message collapsed. Message unread Forecasting
posted by CRYSTAL RAMOS at Mar 07, 2015, 8:26 AM
Last updated Mar 07, 2015, 8:26 AM
·
Forecasting is the use of historical data to determine the direction of future trends. Companies use it to determine how to allocate their budgets for an upcoming period of time. This is
typically based on demand for the goods and services it offers,
compared to the cost of producing them. Investors utilize
forecasting to determine if events affecting a company, such as
sales expectations, will increase or decrease the price of shares
in that company. Forecasting also provides an important
benchmark for firms which have a long-term perspective of
operations.
Stock analysts use various forecasting methods to determine
how a stock's price will move in the future. They might look at
revenue and compare it to economic indicators, or may look at
other indicators, such as the number of new stores a company
opens or the number of orders for the goods it manufactures.
Economists use forecasting to extrapolate how trends, such as
GDP or unemployment, will change in the coming quarter or
year. The further out the forecast, the higher the chances that
the estimate will be less accurate.
· Comment on Mar 07, 2015, 10:16 AM
Message collapsed. Message unread Re: Forecasting
posted by ERIK SEIDEL at Mar 07, 2015, 10:16 AM
Last updated Mar 07, 2015, 10:16 AM
·
There are numerous applications of using historical data to
predict future results. Some of these include forecasting sales
or expenses for future years, months, weeks, or even days. I
think there are also many operational applications of using
statistics to forecast future events that companies should take
into consideration in order to maximize efficiency. For
example, the customer service areas of companies often have
peaks and valleys in workload. These may vary by time of day,
week, month, or year. Companies that do not take these
variations into account can be very inefficient and spend money
and resources unnecessarily. Customer service departments that
are most efficient will have a flexible staff and use statistical
data from past experience to determine when to increase or
decrease staffing.
· Comment on Mar 07, 2015, 5:25 PM
Message collapsed. Message unread Re: Forecasting
posted by TAMARQUES PORTER at Mar 07, 2015, 5:25 PM
Last updated Mar 07, 2015, 5:25 PM
·
Products that exhibit slow-moving demand or have sporadic
demand require a specific type of statistical forecast model.
An intermittent model works for products with erratic demand. Products with erratic demand do not exhibit a seasonal component; instead, a graph of the product's demand shows peaks and flat periods at intermittent points along the time series. The goal of this model is to provide a safety stock value instead of a forecast value. The safety stock value allows for just enough inventory to cover needs.
Forecasting new products remains one of the toughest
forecasting tasks available. New product forecasting requires
input from human and computer generated sources. New product
forecasting methods seek to manage the high ramp up period
associated with a new product introduction. These methods also
work for maturing products approaching the end of their life
cycle.
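The post does not say which safety stock formula is intended, but a common textbook version sets the buffer at a service-level multiple of demand variability over the replenishment lead time; the numbers below are invented for illustration.

```python
import math

# Hypothetical inputs for a slow-moving product
z = 1.65                 # z-score for roughly a 95% service level
demand_std = 4.0         # standard deviation of weekly demand (units)
lead_time_weeks = 2.0    # replenishment lead time

# Classic safety stock formula: z * sigma_demand * sqrt(lead time)
safety_stock = z * demand_std * math.sqrt(lead_time_weeks)
print(round(safety_stock, 1))  # about 9.3 units kept on hand beyond the forecast
```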
· Comment on Mar 07, 2015, 9:13 PM
Message collapsed. Message unread Re: Forecasting
posted by SAID SHEIK ABDI at Mar 07, 2015, 9:13 PM
Last updated Mar 07, 2015, 9:13 PM
·
Forecasting is the prediction of future events and conditions and
is a key element in service organizations, especially banks, for
management decision-making. There are typically two types of
events: 1) uncontrollable external events - originating with the
national economy, governments, customers and competitors and
2) controllable internal events (e.g., marketing, legal, risk, new
product decisions) within the firm.
If we had three years of data, how many years ahead do you
think we could predict using a regression line? I think we can reasonably predict one year ahead.
If each December sales went sky high because of Christmas, how could we adjust for this before constructing the regression line? We know that December is the peak sales month, and we can smooth for it by seasonally adjusting the Y variable before constructing the regression line.
http://www.isixsigma.com/tools-templates/risk-
management/use-forecasting-basics-predict-future-conditions/
·
Composite index number The thread has 4 unread messages.
created by ARIEL SMITH
Last updated Mar 07, 2015, 4:54 PM
4
· Comment on Mar 04, 2015, 12:51 PM
Message collapsed. Message unread Composite index number
posted by ARIEL SMITH at Mar 04, 2015, 12:51 PM
Last updated Mar 04, 2015, 12:51 PM
·
According to the text, a composite index number represents
combinations of the prices or quantities of several commodities.
The book uses a great example: constructing an index for the total sales of two major automobile manufacturers, General Motors and Ford. The first step is to collect data on the sales of each manufacturer during the period you are interested in. To summarize the information from both time series in a single index, we add the sales of the two manufacturers for each year, forming a new time series consisting of the total number of automobiles sold by the two manufacturers. Then we construct a simple index for the total of the two series. The resulting index is called a simple composite index.
A simple composite index is a simple index for a time series
consisting of the total price or total quantity of two or more
commodities.
McClave, J. T., Benson, P. G., & Sincich, T. (2010). Statistics
for Business and Economics, 11th Edition. [VitalSource
Bookshelf version].
Retrieved from
http://online.vitalsource.com/books/9781269882163/id/ch13fig0
2
· Comment on Mar 05, 2015, 6:47 PM
Message collapsed. Message unread Re: Composite index
number
posted by LOUIS DAILY at Mar 05, 2015, 6:47 PM
Last updated Mar 05, 2015, 6:47 PM
Ariel,
That example is a time series showing the fluctuation in the
price of silver.
thanks
Lou
· Comment on Mar 07, 2015, 8:25 AM
Message collapsed. Message unread Composite index number
posted by CRYSTAL RAMOS at Mar 07, 2015, 8:25 AM
Last updated Mar 07, 2015, 8:25 AM
·
A composite index is a grouping of equities, indexes, or other factors combined in a standardized way, providing a useful statistical measure of overall market or sector performance over time. It is also known simply as a "composite."
Usually, a composite index has a large number of factors which
are averaged together to form a product representative of an
overall market or sector. For example, the Nasdaq Composite
index is a market capitalization-weighted grouping of
approximately 5,000 stocks listed on the Nasdaq market. These
indexes are useful tools for measuring and tracking price level
changes to an entire stock market or sector. Therefore, they
provide a useful benchmark against which to measure an
investor's portfolio. The goal of a well diversified portfolio is
usually to outperform the main composite indexes.
· Comment on Mar 07, 2015, 4:54 PM
Message collapsed. Message unread Re: Composite index
number
posted by JUDEENE WALKER at Mar 07, 2015, 4:54 PM
Last updated Mar 07, 2015, 4:54 PM
·
Composite index numbers are indices calculated to reflect the change in activity of a number of items from the base period to the period under consideration. A composite index number allows us to measure, with a single number, the relative variations within a group of variables upon moving from one situation to another. The aim of using composite index numbers is to summarize all the simple index numbers for a group of items into just one index.
Example: the index of prices for taxis is 100 compared to a given base year. The index for rental is 160 (using the same base year). Calculate the composite index for taxi and rental using a weighting of 55 for taxi and 45 for rental with the composite index formula: total of weighted indexes / total weighting.
CI = (100*55 + 160*45) / (55+45) = 12,700/100 = 127
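A minimal sketch that reproduces the weighted calculation above, using the taxi and rental figures from that example.

```python
# Weighted composite index: sum of (index * weight) divided by total weight
indexes = {"taxi": 100, "rental": 160}
weights = {"taxi": 55, "rental": 45}

composite = sum(indexes[k] * weights[k] for k in indexes) / sum(weights.values())
print(composite)  # 127.0, matching the worked example above
```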
Statistics for Business and Economics, Ch. 11
·
· Multiple Linear Regression The thread has 7 unread messages.
created by LOUIS DAILY
Last updated Mar 07, 2015, 10:40 AM
7 · Comment on Mar 02, 2015, 11:39 AM
Message collapsed. Message unread Multiple Linear Regression
posted by LOUIS DAILY at Mar 02, 2015, 11:39 AM
Last updated Mar 02, 2015, 11:39 AM
The simple Linear Regression equation is a relationship between
one independent variable (X) and one dependent variable (Y).
X is used to predict Y. But X might only help in predicting Y,
not predict it perfectly. If we had other variables, say, X2 and
X3, that also can independently contribute to the prediction, we
might have a much better prediction equation. We might have
something like: Y = slope X1 + slope X2 + slope X3 +
intercept. This would be a multiple regression equation.
Why is this an improvement over a simple linear equation
model?
How many variables do you think we would need to make a real
good prediction?
What if the relationship between the three independent variables
in this equation and the dependent variable is not linear at all,
but a curved relationship (curvilinear)?
· Comment on Mar 04, 2015, 1:31 PM
Message collapsed. Message unread Re: Multiple Linear
Regression
posted by STEPHANIE RECTOR at Mar 04, 2015, 1:31 PM
Last updated Mar 04, 2015, 1:31 PM
·
Why is this an improvement over a simple linear equation
model?
How many variables do you think we would need to make a real
good prediction?
What if the relationship between the three independent variables
in this equation and the dependent variable is not linear at all,
but a curved relationship (curvilinear)?
Realistically, some research requires the use of multiple
variables because some issues include several influential
factors. A simple linear equation model will not be able to
include those other factors, so we must use a multiple
regression model to determine a conclusion. On a side note,
multiple regression will not adequately explain the relationship
between independent and dependent variables if they are not
linear. The steps in multiple regression are basically the same as for simple regression. According to csulb.edu, "Regression with
only one dependent and one independent variable normally
requires a minimum of 30 observations. A good rule of thumb is
to add at least an additional 10 observations for each additional
independent variable added to the equation. The number of
independent variables in the equation should be limited by two
factors. First, the independent variables should be included in
the equation only if they are based on the researcher's theory
about what factors influence the dependent variable. Second,
variables that do not contribute very much to explaining the
variance in the dependent variable (i.e., to the total R2), should
be eliminated" (csulb.edu, 2015). When the relationship
between the three independent variables in this equation and the
dependent variable is not linear at all, but a curved relationship
(curvilinear) then the data points will increase together to a
certain point. Then as one continues to increase, the other
decreases or the other way around. You basically can see on a
scatter plot a line rising to a peak, then declining.
Reference:
http://web.csulb.edu/~msaintg/ppa696/696regmx.htm
· Comment on Mar 05, 2015, 6:51 PM
Message collapsed. Message unread Re: Multiple Linear
Regression
posted by LOUIS DAILY at Mar 05, 2015, 6:51 PM
Last updated Mar 05, 2015, 6:51 PM
Stephanie,
Yes, sometimes the relationship between variables is not linear,
but curvilinear--we can fit a curve to the data.
thanks
Lou
· Comment on Mar 07, 2015, 10:40 AM
Message collapsed. Message unread Re: Multiple Linear
Regression
posted by ERIK SEIDEL at Mar 07, 2015, 10:40 AM
Last updated Mar 07, 2015, 10:40 AM
·
An example of a curvilinear relationship that quickly came to
mind when reading about this was predicting our company's
administrative expense ratio for a new line of business. An
administrative expense ratio is calculated by simply taking total
administrative expenses (sales, customer services, overhead,
etc.) as a percentage of total revenue. Our health insurance
company has invested in recent years in going into new markets
and lines of business. One thing I have learned is that it takes a
certain critical mass of membership in order to gain sufficient
economies of scale to lower the administrative expense ratio to
the point where the line of business is profitable. The slope of
this based on my experience is curvilinear. As we gain the first
5,000 to 10,000 members for a particular line of business, the
curve moves downward significantly due to being able to spread
overhead, marketing, and other fixed and start-up costs over a
larger base of membership. After 10,000 members or so, the
slope continues downward but at a much slower rate.
· Comment on Mar 06, 2015, 8:26 PM
Message collapsed. Message unread Re: Multiple Linear
Regression
posted by CRYSTAL RAMOS at Mar 06, 2015, 8:26 PM
Last updated Mar 06, 2015, 8:26 PM
· Multiple Linear Regression
Multiple linear regression attempts to model the relationship between two or more explanatory variables and a response variable by fitting a linear equation to observed data. Every value of the independent variable x is associated with a value of the dependent variable y. The population regression line for p explanatory variables x1, x2, ..., xp is defined to be μ_y = β0 + β1x1 + β2x2 + ... + βpxp. This line describes how the mean response μ_y changes with the explanatory variables. The observed values for y vary about their means μ_y and are assumed to have the same standard deviation σ. The fitted values b0, b1, ..., bp estimate the parameters β0, β1, ..., βp of the population regression line.
Since the observed values for y vary about their means μ_y, the multiple regression model includes a term for this variation. In words, the model is expressed as DATA = FIT + RESIDUAL, where the "FIT" term represents the expression β0 + β1x1 + β2x2 + ... + βpxp. The "RESIDUAL" term represents the deviations of the observed values y from their means μ_y, which are normally distributed with mean 0 and variance σ². The notation for the model deviations is ε.
Formally, the model for multiple linear regression, given n observations, is yi = β0 + β1xi1 + β2xi2 + ... + βpxip + εi for i = 1, 2, ..., n.
In the least-squares model, the best-fitting line for the observed data is calculated by minimizing the sum of the squares of the vertical deviations from each data point to the line (if a point lies on the fitted line exactly, then its vertical deviation is 0). Because the deviations are first squared, then summed, there are no cancellations between positive and negative values. The least-squares estimates b0, b1, ..., bp are usually computed by statistical software.
The values fit by the equation b0 + b1xi1 + ... + bpxip are denoted ŷi, and the residuals ei are equal to yi - ŷi, the difference between the observed and fitted values. The sum of the residuals is equal to zero.
The variance σ² may be estimated by s² = Σei²/(n - p - 1), also known as the mean squared error (MSE). The estimate of the standard error, s, is the square root of the MSE.
· Comment on Mar 06, 2015, 11:14 PM
Message collapsed. Message unread Re: Multiple Linear
Regression
posted by Jynx Gresser at Mar 06, 2015, 11:14 PM
Last updated Mar 06, 2015, 11:14 PM
·
According to McClave, Benson, and Sincich (2011), "most
practical applications of regression analysis employ models that
are more complex than the simple straight-line model" (p. 625).
Most probabilistic models involve one or more independent variables that affect the dependent variable (McClave et al., 2011). This is where multiple linear regression comes into the statistical equation. Princeton University Library (n. d.) found that the
simple linear regression displays the relationship between the
independent and dependent variable while the multiple linear
regression predicts the relationship with multiple independent
variables, for example, height, weight, and gender. The
predictive value of the relationship between two or more
variables can be increased and overall purpose of multiple
regressions is to prove a relationship (Princeton University
Library, n. d.). As the number of variables increases, the
statistical probability of the relationship between the variables
is strengthened. What is the best number of relationships to
have? I couldn't find any specific numbers to answer this
question. Curvilinear relationships can occur in multiple regressions, especially when two or more variables stop having an effect on one another. The predictive value of these variables
starts to decrease; a good example would be friends and
happiness (Princeton University Library, n. d.). Happiness can
increase to a certain value of friends then start to decrease, thus
making the line curvilinear.
References
McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics
for business and economics (11th ed.). Boston, MA: Prentice
Hall. Retrieved from the University of Phoenix eBook
Collection database.
Princeton University Library. (n. d.). Data and statistical
services. Retrieved
from http://dss.princeton.edu/online_help/analysis/regression_in
tro.htm
Jynx Gresser
· Comment on Mar 07, 2015, 9:29 AM
Message collapsed. Message unread Re: Multiple Linear
Regression
posted by KIM DUNLAP at Mar 07, 2015, 9:29 AM
Last updated Mar 07, 2015, 9:29 AM
·
Hi Professor Lou,
Multiple Linear Regression models are important because in
most research there is more than one variable x that affects the
outcome y.
The multiple linear regression model allows us to take this into consideration mathematically and allows us to statistically demonstrate the effects of multiple variables. Where it looks like it becomes very complicated is when you can have multiple pairs, or options of pairs, of variables that have to be ranked in the analysis. While I understand the need to do that and it would be helpful in relation to complex studies involving humans - I have to admit I feel it is nothing that I could accomplish on my own at this point in time.
I never did find any type of information that suggested there is
an optimal number of variables to provide us with the best
predictability.
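Pulling the thread together, here is a minimal sketch of fitting a multiple regression by least squares and estimating the MSE described earlier in the thread. The advertising, price, and sales figures are invented for illustration.

```python
import numpy as np

# Hypothetical data: sales predicted from advertising spend and unit price
advertising = np.array([10.0, 12.0, 15.0, 18.0, 20.0, 24.0, 25.0, 30.0])
price = np.array([9.5, 9.0, 9.2, 8.8, 8.5, 8.6, 8.0, 7.5])
sales = np.array([52.0, 58.0, 63.0, 71.0, 78.0, 84.0, 90.0, 101.0])

# Design matrix with an intercept column: y = b0 + b1*advertising + b2*price
X = np.column_stack([np.ones_like(sales), advertising, price])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Fitted values and residuals, as in the model DATA = FIT + RESIDUAL
fitted = X @ coef
residuals = sales - fitted
mse = (residuals ** 2).sum() / (len(sales) - 2 - 1)  # n - p - 1 degrees of freedom

print(coef.round(2), round(mse, 3))
```

Adding a squared term such as advertising**2 as another column is one simple way to handle the curvilinear case Lou raises.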
·
Multiple Regression Models The thread has 2 unread messages.
created by BEAU KUSH
Last updated Mar 05, 2015, 6:49 PM
2 · Comment on Mar 04, 2015, 3:02 PM
Message collapsed. Message unread Multiple Regression
Models
posted by BEAU KUSH at Mar 04, 2015, 3:02 PM
Last updated Mar 04, 2015, 3:02 PM
·
According to McClave, Benson, and Sincich (2011), most
practical applications of regression analysis employ models that
are more complex than the simple straight-line model.
Moreover, these are typically realistic probabilistic models that contain many variables that might be related to the factor being analyzed (McClave, Benson, & Sincich, 2011). Probabilistic models that include more than one independent variable are called multiple regression models. Utilizing the independent variables in these models helps researchers make accurate predictions about the outcomes or results. Essentially, researchers will utilize multiple regression models when there is a requirement to predict the value of a variable based on the values of two or more other variables.
Multiple regression allows researchers to make logical
predictions based on the data provided. This method saves time
and provides a reliable method for obtaining accuracy.
Reference:
McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics
for Business and Economics (11th ed.). Boston, MA: Prentice
Hall.
· Comment on Mar 05, 2015, 6:49 PM
Message collapsed. Message unread Re: Multiple Regression
Models
posted by LOUIS DAILY at Mar 05, 2015, 6:49 PM
Last updated Mar 05, 2015, 6:49 PM
Beau,
Yes, multiple regression uses more than one independent
variable, or predictor variables. This can lead to a much better
predictive model than simple linear regression.
thanks
Lou
Statistics for Business and Economics, Ch. 10
·
· Caution for Regression Analysis The thread has 4 unread
messages.
created by ERIK SEIDEL
Last updated Mar 07, 2015, 8:17 PM
4
· Comment on Mar 04, 2015, 5:34 AM
Message collapsed. Message read Caution for Regression
Analysis
posted by ERIK SEIDEL at Mar 04, 2015, 5:34 AM
Last updated Mar 04, 2015, 5:34 AM
·
When preparing a linear regression analysis, the resulting
correlation, or lack of correlation, can be misleading to users.
The purpose of a linear regression model is to demonstrate
whether an increase or decrease in variable x has a positive or
negative impact on variable y. If you see a clear slope that is
increasing or decreasing, the logical conclusion is that variable
x is directly impacting variable y, or vice versa. However, it is
not necessarily true that a change in variable x is the reason for
the change in variable y. For example, a study of the impact of
drinking more water to someone's level of health may show that
drinking more water improves people's health overall.
However, it may not be drinking water that is actually
improving health. It may be that eating less as a result of
drinking more water is actually the cause of the improved
health.
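To make this caution concrete, here is a small simulated sketch (invented numbers, not real health data) in which water intake never enters the equation for the health score, yet the two end up strongly correlated because both depend on a third, lurking variable.

import numpy as np

rng = np.random.default_rng(1)
n = 500
calories_eaten = rng.normal(2200, 300, n)                          # the lurking variable
water_intake = 3.5 - 0.001 * calories_eaten + rng.normal(0, 0.2, n)
health_score = 100 - 0.02 * calories_eaten + rng.normal(0, 3, n)   # water plays no role here

# Water and health are still correlated, because both respond to calories eaten.
print(round(np.corrcoef(water_intake, health_score)[0, 1], 2))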
· Comment on Mar 05, 2015, 9:43 PM
Message collapsed. Message read Re: Caution for Regression
Analysis
posted by ARACHEAL VENTRESS at Mar 05, 2015, 9:43 PM
Last updated Mar 05, 2015, 9:43 PM
·
Erik - thanks for your post and your example; relating things to
real-life examples makes the applications more clear. Reading a
little further in this week's chapter, there is a caution about
making inferences based on one variable impacting another.
Our text explains that when using the sample correlation
coefficient, r, to infer the nature of the relationship
between x and y, two caveats exist: (1) a high correlation does
not necessarily imply that a causal relationship exists
between x and y--only that a linear trend may exist; (2) a low
correlation does not necessarily imply that x and y are
unrelated--only that x and y are not strongly linearly related.
Reference
McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics
for Business and Economics (11th ed.). Boston, MA: Prentice
Hall.
· Comment on Mar 07, 2015, 5:30 PM
Message collapsed. Message unread Re: Caution for Regression
Analysis
posted by LOUIS DAILY at Mar 07, 2015, 5:30 PM
Last updated Mar 07, 2015, 5:30 PM
Aracheal,
Yes, some variables are related curvilinearly.
thanks
Lou
· Comment on Mar 07, 2015, 3:11 PM
Message collapsed. Message read Re: Caution for Regression
Analysis
posted by DELILAH HENDERSON at Mar 07, 2015, 3:11 PM
Last updated Mar 07, 2015, 3:11 PM
·
Erik, that is a good example. Do you think it would help if we
were careful to have variable x be as specific as variable y, in
order for the linear regression analysis to be more valid to
consumers? Maybe something more defined, such as the impact
of drinking more water on kidney function instead of on
someone's level of health overall? Improving someone's level of
health seems much broader than the variable of drinking more
water.
Your example was really good at showing that just because two
variables are put together doesn't necessarily mean that y
naturally follows x. I thought this example was along the lines
of our first chapter, which showed some surveys that were
either skewed or false because the survey was conducted
improperly (if one was conducted at all).
I liked your examples on this post and the one regarding sales
and marketing. Thanks for the examples - they help me
understand the concept better.
d
· Comment on Mar 07, 2015, 5:31 PM
Message collapsed. Message unread Re: Caution for Regression
Analysis
posted by LOUIS DAILY at Mar 07, 2015, 5:31 PM
Last updated Mar 07, 2015, 5:31 PM
Delilah,
Finer measurements are always better when constructing a
regression line or curve.
thanks
Lou
· Comment on Mar 07, 2015, 5:29 PM
Message collapsed. Message unread Re: Caution for Regression
Analysis
posted by LOUIS DAILY at Mar 07, 2015, 5:29 PM
Last updated Mar 07, 2015, 5:29 PM
Erik,
Yep "correlation does not imply causation"
thanks
Lou
· Comment on Mar 07, 2015, 8:17 PM
Message collapsed. Message unread Caution for Regression
Analysis
posted by PATRICIA MARCUS at Mar 07, 2015, 8:17 PM
Last updated Mar 07, 2015, 8:17 PM
·
Great post, Erik. I agree that when preparing a linear regression
analysis, the resulting correlation or lack of correlation can be
misleading. Regression analysis is a parametric test used for
inference from a sample to a population. The goal of regression
analysis is to investigate how effective one or more variables
are at predicting the value of a dependent variable. The aim in
relating two variables is to establish that changes in the
explanatory variable cause changes in the response variable.
Even when a strong association is visible, the conclusion that
the association is due to a causal link between the variables is
elusive. Correlation and regression only describe linear
relationships and are not resistant to unusual observations.
Therefore, you should always plot the data before interpreting
the regression or correlation.
·
Simple Linear Regression The thread has 11 unread messages.
created by LOUIS DAILY
Last updated Mar 07, 2015, 5:53 PM
11
· Comment on Mar 02, 2015, 11:35 AM
Message collapsed. Message read Simple Linear Regression
posted by LOUIS DAILY at Mar 02, 2015, 11:35 AM
Last updated Mar 02, 2015, 11:35 AM
You may recall your high school algebra about the equation for
a straight line: Y = m X + k where m is the slope of the line
and k is the y intercept. Let's say we have two variables X and
Y. Let's say they represent height (Y) and weight (X). They
can be put into Excel as two columns. We can construct a
scatter plot with Excel. The scatter plot will show us the shape
of this relationship. The taller you are, on the average, the
heavier you are, so there will probably be an upward sloping
linear scatter plot. The relationship is not perfect, so you won't
be able to draw a line that contains all the points. But what is
the best fitting line that we can draw? This assumes of course
that a line is in fact a good description of the relationship. The
technique of Linear Regression answers this question. Linear
Regression will compute the line which best describes the
relationship. It does this by computing the slope and intercept
of this best fitting line. It is the best fit in the sense that it is
the line that is closest on the average to all of the data points
(the line of least squares). So we have a "model"--the best
description of the relationship between the two variables
assuming the relationship is linear (like a line). Can you think
of any uses for such a model--for this best fitting line and the
equation that describes it?
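As one concrete illustration of the idea in this post, here is a minimal Python sketch (made-up height and weight numbers) that computes the slope and intercept of the best-fitting, least-squares line and then uses the fitted line for a prediction.

import numpy as np

rng = np.random.default_rng(2)
height = rng.normal(68, 3, 50)                        # inches (hypothetical)
weight = 5.0 * height - 180 + rng.normal(0, 12, 50)   # pounds, with scatter

m, k = np.polyfit(height, weight, 1)   # degree-1 least-squares fit: slope m, intercept k
print(f"best-fitting line: weight = {m:.2f} * height + {k:.2f}")
print(m * 70 + k)                      # predicted weight for someone 70 inches tall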
· Comment on Mar 03, 2015, 5:35 AM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by ERIK SEIDEL at Mar 03, 2015, 5:35 AM
Last updated Mar 03, 2015, 5:35 AM
·
Our health insurance company uses regression analysis to
review the correlation between various medical expense related
metrics. Our actuarial informatics department is probably the
heaviest user of regression analysis. For example, regression
analysis can be used to determine if certain physical traits or
health conditions are related to other health conditions. We
may pull together the medical record data from all of our health
plan members and look at variables such as high blood pressure
and weight. We would chart all members that have blood
pressure records above a certain level. The blood pressure
readings become the x variable. The y variable would then be
the weight of these members. If we see a positive correlation
line between weight and high blood pressure, then our medical
management department would know to pay particular attention
to members with a higher BMI if we are trying to encourage
healthy blood pressure among our members.
· Comment on Mar 03, 2015, 8:20 AM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by LOUIS DAILY at Mar 03, 2015, 8:20 AM
Last updated Mar 03, 2015, 8:20 AM
Erik,
Thanks for that inside look. Yes, there are lots and lots of uses
for regression.
thanks
Lou
· Comment on Mar 03, 2015, 8:38 PM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by KIM DUNLAP at Mar 03, 2015, 8:38 PM
Last updated Mar 03, 2015, 8:38 PM
·
Hi Erik,
Thank you for your example. I always have a much better
understanding of topics when I hear how people use them in
their everyday work. Linear regression can be used for
predictive analysis. Linear regression helps us describe and
explain the relationship between one dependent variable and at
least one independent variable. According to Statistics
Solutions, there are three major uses for linear regression: 1)
causal analysis, 2) forecasting an effect, and 3) trend
forecasting. One important thing to note is that regression
analysis assumes a causal relationship between one or more
independent variables and one dependent variable.
https://www.statisticssolutions.com/what-is-linear-regression/
· Comment on Mar 06, 2015, 8:50 AM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by LOUIS DAILY at Mar 06, 2015, 8:50 AM
Last updated Mar 06, 2015, 8:50 AM
Kim,
Regression is mathematically closely related to correlation.
While we create causal "models" all the time, we still have to
be careful about the direction of causality. Correlation does not
imply causation is still the rule.
thanks
Lou
· Comment on Mar 04, 2015, 12:47 PM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by STEPHANIE RECTOR at Mar 04, 2015, 12:47 PM
Last updated Mar 04, 2015, 12:47 PM
·
Hello Class,
According to Wikipedia, "In statistics, linear regression is an
approach for modeling the relationship between a
scalar dependent variable y and one or more explanatory
variables (or independent variables) denoted X. The case of one
explanatory variable is called simple linear regression. For more
than one explanatory variable, the process is called multiple
linear regression" (Wikipedia, 2015). This type of analysis is
commonly used for practical applications. Linear regression
requires finding the best-fitting straight line through the data
points. A scatter plot is often a useful tool in determining how
closely the two variables relate. Reading Erik's response
reminded me of how my previous sleep company used linear
regression in relation to sleep studies. With the research our
tech managers implemented into our sleep questionnaires, I
quickly realized the strong correlation between a patient's BMI
(body mass index) and a diagnosis of sleep apnea. The higher a
patient's BMI was, the more likely they were to be diagnosed
with sleep apnea. The obstruction of the airway was worse the
higher the BMI was. In addition, if untreated with surgery or a
CPAP, a patient with sleep apnea had increased chances of
suffering from severe heart conditions in the long term.
References:
http://en.wikipedia.org/wiki/Linear_regression
· Comment on Mar 06, 2015, 8:53 AM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by LOUIS DAILY at Mar 06, 2015, 8:53 AM
Last updated Mar 06, 2015, 8:53 AM
Stephanie,
Thanks for sharing that information on your sleep study. That
apparently was a correlational study, but correlation is
mathematically very related to regression. So...besides
calculating the correlation coefficient, there could be a
regression line modelling the relationship.
thanks
Lou
· Comment on Mar 05, 2015, 8:48 AM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by CRYSTAL RAMOS at Mar 05, 2015, 8:48 AM
Last updated Mar 05, 2015, 8:48 AM
·
In statistics, simple linear regression is the least squares
estimator of a linear regression model with a single explanatory
variable. In other words, simple linear regression fits a straight
line through the set of n points in such a way that makes the
sum of squared residuals of the model (that is, vertical distances
between the points of the data set and the fitted line) as small as
possible.
The adjective simple refers to the fact that this regression is one
of the simplest in statistics. The slope of the fitted line is equal
to the correlation between y and x corrected by the ratio of
standard deviations of these variables. The intercept of the
fitted line is such that the line passes through the center of mass
of the data points, that is, the point (mean of x, mean of y).
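Both statements in this post are easy to check numerically. The sketch below (made-up data) fits the least-squares line and verifies that its slope equals r times the ratio of the standard deviations, and that the line passes through the point of means.

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(50, 10, 200)
y = 1.5 * x + rng.normal(0, 8, 200)

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]

# slope == r * (sd of y / sd of x), and the line passes through (mean x, mean y)
print(np.isclose(slope, r * y.std() / x.std()))
print(np.isclose(slope * x.mean() + intercept, y.mean()))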
· Comment on Mar 07, 2015, 5:53 PM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by LOUIS DAILY at Mar 07, 2015, 5:53 PM
Last updated Mar 07, 2015, 5:53 PM
Crystal,
It is "simple" because there is only one independent variable.
thanks
Lou
· Comment on Mar 05, 2015, 7:11 PM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by JUDEENE WALKER at Mar 05, 2015, 7:11 PM
Last updated Mar 05, 2015, 7:11 PM
·
Hi Professor, now that you mention it, I do remember the
equation for a straight line, Y = mX + K. This equation expresses
the fact that if we plot Y against X, and the variables obey a
relationship of this form, one will obtain a straight-line graph
with slope m and intercept K.
Simple linear regression is one of the most commonly used
techniques for determining how one variable of interest is
affected by changes in another variable. From what I remember
and read from our text, simple linear regression is used for three
main purposes:
a) To predict values of one variable from values of another
for which more data are available.
b) To describe the linear dependence of one variable on
another
c) To correct for linear dependence of one variable on
another
· Comment on Mar 06, 2015, 7:46 PM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by SAID SHEIK ABDI at Mar 06, 2015, 7:46 PM
Last updated Mar 06, 2015, 7:46 PM
·
When you think of regression, think prediction. A regression
uses the historical relationship between an independent and a
dependent variable to predict the future values of the dependent
variable. Businesses use regression to predict such things as
future sales, stock prices, currency exchange rates, and
productivity gains resulting from a training program.
Types of regression: A regression models the past relationship
between variables to predict their future behavior. As an example,
imagine that your company wants to understand how past
advertising expenditures have related to sales in order to make
future decisions about advertising. The dependent variable in
this instance is sales and the independent variable is advertising
expenditures. Usually, more than one independent variable
influences the dependent variable. You can imagine in the above
example that sales are influenced by advertising as well as other
factors, such as the number of sales representatives and the
commission percentage paid to sales representatives. When one
independent variable is used in a regression, it is called
a simple regression; when two or more independent variables
are used, it is called a multiple regression.
Regression models can be either linear or nonlinear. A linear
model assumes the relationships between variables are straight-
line relationships, while a nonlinear model assumes the
relationships between variables are represented by curved lines.
In business you will often see the relationship between the
return of an individual stock and the returns of the market
modeled as a linear relationship, while the relationship between
the price of an item and the demand for it is often modeled as a
nonlinear relationship.
http://www.dummies.com/how-to/content/how-to-use-a-linear-
regression-to-identify-market-.html
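A short sketch can show the linear-versus-nonlinear point with the price/demand example. The numbers below are invented; the idea is that a straight line fitted to a curved relationship leaves much larger residuals than a model that is linear in a transformed variable (here, 1/price).

import numpy as np

rng = np.random.default_rng(4)
price = np.linspace(1, 20, 60)
demand = 500 / price + rng.normal(0, 3, 60)   # a curved (nonlinear) relationship

line = np.polyfit(price, demand, 1)           # straight-line fit: demand vs. price
curve = np.polyfit(1 / price, demand, 1)      # linear in the transformed variable 1/price

sse_line = np.sum((demand - np.polyval(line, price)) ** 2)
sse_curve = np.sum((demand - np.polyval(curve, 1 / price)) ** 2)
print(sse_line, sse_curve)                    # the curved model fits far better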
· Comment on Mar 06, 2015, 7:55 PM
Message collapsed. Message unread Re: Simple Linear
Regression
posted by Jynx Gresser at Mar 06, 2015, 7:55 PM
Last updated Mar 06, 2015, 7:55 PM
·
I do remember this relationship, specifically y = mx + b and its
interpretations with a line graph. Foremost, simple linear
regression is a unique concept in statistics and is considered to
be "the methodology of estimating and using a straight line
relationship" (McClave, Benson, & Sincich, 2011, p. 561).
Furthermore, it describes the relationship between the two
variables X [independent] and Y [dependent] (California State
University-Long Beach, n. d.). The regression equation is
written as y = a + bx + e, with a (alpha) being the constant and b
(beta) being the coefficient of x and the slope of the regression
line (California State University-Long Beach, n. d.). There are
multiple uses of simple linear regression, and they include
analyzing the effect of pricing on consumer behavior and
analyzing the risk involved in business decisions. When
companies change prices, they can monitor the amount of
product sold at a particular price and then describe the
relationship between pricing and purchasing with a linear
relationship (Hamel, n. d.). I work in retail and I see this
relationship quite frequently; for example, higher prices can
reduce the number of clients who purchase the product, and
lower prices equate to a higher number of purchases. This can
help with future pricing decisions. There is a correlation
between these two variables, but that doesn't mean pricing is the
only cause of the relationship.
References
California State University-Long Beach. (n. d.). Simple
regression. Retrieved from
http://web.csulb.edu/~msaintg/ppa696/696regs.htm
Hamel, G. (n. d.). What are some ways that linear regression
can be applied in business settings?. The Houston Chronicle.
Retrieved from http://smallbusiness.chron.com/ways-linear-
regression-can-applied-business-settings-35431.html
McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics
for business and economics (11th ed.). Boston, MA: Prentice
Hall. Retrieved from the University of Phoenix eBook
Collection database.
Jynx Gresser
·
Confidence intervals vs Prediction Intervals The thread has 7
unread messages.
created by ARACHEAL VENTRESS
Last updated Mar 07, 2015, 5:51 PM
7
· Comment on Mar 05, 2015, 10:03 PM
Message collapsed. Message unread Confidence intervals vs
Prediction Intervals
posted by ARACHEAL VENTRESS at Mar 05, 2015, 10:03 PM
Last updated Mar 05, 2015, 10:03 PM
·
Confidence intervals tell you about how well you have
determined the mean. Assume that the data really are randomly
sampled from a Gaussian distribution. If you do this many
times, and calculate a confidence interval of the mean from each
sample, you'd expect about 95% of those intervals to include
the true value of the population mean. The key point is that the
confidence interval tells you about the likely location of the
true population parameter.
Prediction intervals tell you where you can expect to see the
next data point sampled. Assume that the data really are
randomly sampled from a Gaussian distribution. Collect a
sample of data and calculate a prediction interval. Then sample
one more value from the population. If you do this many times,
you'd expect that next value to lie within that prediction interval
in 95% of the samples. The key point is that the prediction
interval tells you about the distribution of values, not the
uncertainty in determining the population mean.
Prediction intervals must account for both the uncertainty in
knowing the value of the population mean, plus data scatter. So
a prediction interval is always wider than a confidence interval.
Reference
http://www.graphpad.com
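Here is a minimal Python sketch (hypothetical data) that computes both intervals around a regression prediction with statsmodels; the "mean" columns give the confidence interval for the average response at a chosen x, and the "obs" columns give the wider prediction interval for a single new observation.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 80)
y = 3.0 * x + 5 + rng.normal(0, 4, 80)

model = sm.OLS(y, sm.add_constant(x)).fit()

new_x = np.array([[1.0, 5.0]])            # [intercept term, x value] for the prediction
frame = model.get_prediction(new_x).summary_frame(alpha=0.05)

print(frame[["mean_ci_lower", "mean_ci_upper"]])  # 95% confidence interval for the mean
print(frame[["obs_ci_lower", "obs_ci_upper"]])    # 95% prediction interval (always wider)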
· Comment on Mar 06, 2015, 5:44 AM
Message collapsed. Message unread Re: Confidence intervals vs
Prediction Intervals
posted by KIM DUNLAP at Mar 06, 2015, 5:44 AM
Last updated Mar 06, 2015, 5:44 AM
·
Hi Aracheal,
Thank you for your very clear post on confidence intervals vs.
prediction intervals. The other thing I picked up on confidence
intervals vs. prediction intervals is that the prediction interval
is wider than the confidence interval because of the added
uncertainty of predicting a single response, or point, vs. a mean
response. For example, I looked at a study where they were
predicting the life expectancy of men who smoked 20 cigarettes
a day. The confidence interval for the data was (68.70, 77.61)
and the prediction interval for the same data was (55.36, 90.95)
- a much wider spread.
http://www.real-statistics.com/wp-
content/uploads/2012/12/confidence-prediction-intervals-
excel.jpg
· Comment on Mar 06, 2015, 7:50 PM
Message collapsed. Message unread Re: Confidence intervals vs
Prediction Intervals
posted by SAID SHEIK ABDI at Mar 06, 2015, 7:50 PM
Last updated Mar 06, 2015, 7:50 PM
·
A prediction interval is an interval associated with a random
variable yet to be observed, with a specified probability of the
random variable lying within the interval. For example, I might
give an 80% interval for the forecast of GDP in 2014. The
actual GDP in 2014 should lie within the interval with
probability 0.8. Prediction intervals can arise in Bayesian or
frequentist statistics.
A confidence interval is an interval associated with a parameter
and is a frequentist concept. The parameter is assumed to be
non-random but unknown, and the confidence interval is
computed from data. Because the data are random, the interval
is random. A 95% confidence interval will contain the true
parameter with probability 0.95. That is, with a large number of
repeated samples, 95% of the intervals would contain the true
parameter.
http://www.real-statistics.com/regression/confidence-and-
prediction-intervals/
· Comment on Mar 07, 2015, 9:57 AM
Message collapsed. Message unread Re: Confidence intervals vs
Prediction Intervals
posted by ERIK SEIDEL at Mar 07, 2015, 9:57 AM
Last updated Mar 07, 2015, 9:57 AM
·
I can see the importance of calculating and understanding
confidence intervals. There are many uncertainties in the
business world, especially in today's fast-paced environment
where things change almost every day. When management is
preparing to make a strategic decision that will impact a
company's future, there will always be some uncertainties. The
best thing managers can do when making decisions is to base
those decisions on the best data available. While management cannot
be certain about the outcome of a decision, they at least want to
be reasonably confident about the success of the project or
strategic change in the company. By using data to determine
whether there can be 95% certainty of an outcome, management
can be confident that the outcome will most likely have the
result that is expected.
· Comment on Mar 07, 2015, 2:47 PM
Message collapsed. Message unread Re: Confidence intervals vs
Prediction Intervals
posted by DELILAH HENDERSON at Mar 07, 2015, 2:47 PM
Last updated Mar 07, 2015, 2:47 PM
·
Aracheal, that was a good post telling of the differences
between confidence intervals and prediction intervals. I found
an article that shows how to calculate the two and includes two
good graphs to help explain the difference. The article,
Confidence and prediction intervals for forecasted values, from
a website showing how to use Excel, showed that the graph on
the left was a confidence interval. It said that with a confidence
interval, "This means that there is a 95% probability that the
true linear regression line of the population will lie within the
confidence interval of the regression line calculated from the
sample data.
In the graph on the left of Figure 1, a linear regression line is
calculated to fit the sample data points. The confidence interval
consists of the space between the two curves (dotted lines).
Thus there is a 95% probability that the true best-fit line for the
population lies within the confidence interval (e.g. any of the
lines in the figure on the right above)." The graph on the right
shows the prediction interval. The article mentioned, "There is
also a concept called prediction interval. Here we look at any
specific value of x, x0, and find an interval around the predicted
value ŷ0 for x0 such that there is a 95% probability that the real
value of y (in the population) corresponding to x0 is within this
interval (see the graph on the right side of Figure 1)."
The two graphs helped me understand the two a little better.
Reference:
http://www.real-statistics.com/regression/confidence-and-prediction-intervals/
· Comment on Mar 07, 2015, 5:51 PM
Message collapsed. Message unread Re: Confidence intervals vs
Prediction Intervals
posted by LOUIS DAILY at Mar 07, 2015, 5:51 PM
Last updated Mar 07, 2015, 5:51 PM
Great illustration Delilah,
thanks
Lou
· Comment on Mar 07, 2015, 5:47 PM
Message collapsed. Message unread Re: Confidence intervals vs
Prediction Intervals
posted by LOUIS DAILY at Mar 07, 2015, 5:47 PM
Last updated Mar 07, 2015, 5:47 PM
Aracheal,
Good job in distinguishing between confidence intervals and
prediction intervals
thanks
Lou
·
The least square approach The thread has 5 unread messages.
created by ARIEL SMITH
Last updated Mar 07, 2015, 5:38 PM
5
· Comment on Mar 04, 2015, 1:02 PM
Message collapsed. Message unread The least square approach
posted by ARIEL SMITH at Mar 04, 2015, 1:02 PM
Last updated Mar 04, 2015, 1:02 PM
·
The least squares approach is a mathematical procedure for
finding the best-fitting curve to a given set of points by
minimizing the sum of the squares of the offsets ("the
residuals") of the points from the curve. The sum of
the squares of the offsets is used instead of the offset absolute
values because this allows the residuals to be treated as a
continuous differentiable quantity. However, because squares of
the offsets are used, outlying points can have a disproportionate
effect on the fit, a property which may or may not be desirable
depending on the problem at hand. The linear least squares
fitting technique is the simplest and most commonly applied
form of linear regression and provides a solution to the problem
of finding the best fitting straight line through a set of points.
In fact, if the functional relationship between the two quantities
being graphed is known to within additive or multiplicative
constants, it is common practice to transform the data in such a
way that the resulting plot is a straight line, say by plotting the
squared period against the length, rather than the period against
the length, in the case of analyzing the period of a pendulum as a
function of its length.
Wolfram Math World. (2013). Retrieved from
http://mathworld.wolfram.com/LeastSquaresFitting.html
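One way to see what "least" means here is to compare the sum of squared vertical offsets for the fitted line against any other line. The sketch below uses made-up data; perturbing the fitted slope or intercept always increases the sum of squares.

import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 40)
y = 2.0 * x + 1 + rng.normal(0, 2, 40)

def sse(slope, intercept):
    # Sum of squared vertical offsets (residuals) from the line y = slope*x + intercept.
    return np.sum((y - (slope * x + intercept)) ** 2)

best_slope, best_intercept = np.polyfit(x, y, 1)
print(sse(best_slope, best_intercept))        # the minimum achievable value
print(sse(best_slope + 0.3, best_intercept))  # any other line does worse
print(sse(best_slope, best_intercept - 1.0))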
· Comment on Mar 05, 2015, 7:40 PM
Message collapsed. Message unread Re: The least square
approach
posted by JUDEENE WALKER at Mar 05, 2015, 7:40 PM
Last updated Mar 05, 2015, 7:40 PM
·
After further reading, I found that linear regression was the first
type of regression analysis to be studied rigorously and used
extensively in practical applications. This is so because models
that depend linearly on their unknown parameters are easier to
fit than models that are nonlinearly related to their parameters.
When two variables are highly correlated, the points on the
scatter diagram more or less follow a diagonal line. What the
regression method does is find the line that minimizes the
average distance in the vertical direction from the line to all
points. If the goal is forecasting or error reduction, linear
regression can be used to fit a predictive model to an observed
data set of x and y values.
· Comment on Mar 07, 2015, 5:38 PM
Message collapsed. Message unread Re: The least square
approach
posted by LOUIS DAILY at Mar 07, 2015, 5:38 PM
Last updated Mar 07, 2015, 5:38 PM
Judeene
Good. Just to be exact, the least squares technique minimizes
the squared distances, thus "least squares".
thanks
Lou
· Comment on Mar 06, 2015, 8:37 PM
Message collapsed. Message unread Re: The least square
approach
posted by CRYSTAL RAMOS at Mar 06, 2015, 8:37 PM
Last updated Mar 06, 2015, 8:37 PM
·
The least squares method is a statistical technique to determine
the line of best fit for a model; the fit is specified by an equation
with certain parameters adjusted to match observed data. This method is
extensively used in regression analysis and estimation.
In the most common application - linear or ordinary least
squares - a straight line is sought to be fitted through a number
of points to minimize the sum of the squares of the distances
(hence the name "least squares") from the points to this line of
best fit.
In contrast to a linear problem, a non-linear least squares
problem has no closed solution and is generally solved by
iteration. The earliest description of the least squares method
was by Carl Friedrich Gauss in 1795.
· Comment on Mar 07, 2015, 2:57 PM
Message collapsed. Message unread Re: The least square
approach
posted by DELILAH HENDERSON at Mar 07, 2015, 2:57 PM
Last updated Mar 07, 2015, 2:57 PM
·
Thanks, Ariel, for your post. I looked up some more information
on the least squares approach and found an article, "Least
Squares Method," that explained it a little bit more. The article
said "A statistical technique to determine the line of best fit for
a model. The least squares method is specified by an equation
with certain parameters to observed data. This method is
extensively used in regression analysis and estimation." The
article also gave a little more information, "In the most common
application - linear or ordinary least squares - a straight line is
sought to be fitted through a number of points to minimize the
sum of the squares of the distances (hence the name "least
squares") from the points to this line of best fit.
In contrast to a linear problem, a non-linear least squares
problem has no closed solution and is generally solved by
iteration. The earliest description of the least squares method
was by Carl Friedrich Gauss in 1795."
Reference:
http://www.investopedia.com/terms/l/least-squares-method.asp
·
Linear Regression for Marketing The thread has 4 unread
messages.
created by ERIK SEIDEL
Last updated Mar 07, 2015, 5:34 PM
4
· Comment on Mar 06, 2015, 6:34 AM
Message collapsed. Message unread Linear Regression for
Marketing
posted by ERIK SEIDEL at Mar 06, 2015, 6:34 AM
Last updated Mar 06, 2015, 6:34 AM
·
Due to many changes and financial pressures in the health
insurance industry, our company (and I'm sure others as well)
is taking a harder look at administrative expenses and
determining where budgets can be reduced. Without being
administratively efficient, companies will go out of business.
We're seeing more and more consolidations of health care
organizations because of this. One area our company has
focused on recently is marketing. It is often very difficult to
determine the impact of increased or decreased advertising
spending on new sales and retention. A linear regression model
may help with this. If you have a few years of historical data,
you can use marketing spending as the x variable and new sales
as the y variable. You could do a similar analysis with
marketing spending again as the x variable and retention of
current customers as the y variable.
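A sketch of what that analysis might look like, with invented yearly figures and hypothetical variable names: regress new sales on historical marketing spend, then use the fitted line to forecast sales under a proposed budget.

import numpy as np

marketing_spend = np.array([1.2, 1.5, 1.8, 2.0, 2.4, 2.6])   # $ millions per year (made up)
new_sales = np.array([310, 360, 395, 430, 480, 505])          # new customers, in hundreds (made up)

slope, intercept = np.polyfit(marketing_spend, new_sales, 1)
proposed_budget = 3.0
print(slope * proposed_budget + intercept)   # forecast new sales at a $3.0M budget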
· Comment on Mar 07, 2015, 3:03 PM
Message collapsed. Message unread Re: Linear Regression for
Marketing
posted by DELILAH HENDERSON at Mar 07, 2015, 3:03 PM
Last updated Mar 07, 2015, 3:03 PM
·
Erik, that is a good example. I was trying to apply linear
regression where I work, since I work at a university. I think we
have the same kind of relationship as in your example from the
health insurance industry. I was thinking that instead of using
marketing spending vs. retention of current customers, in our
case it could be the variety of classes offered vs. student
enrollment. In our industry, the types of classes or degrees
offered would have a direct relation to student enrollment. I
know we have studied this in depth, because the classes and
degrees offered have changed over the years based on student
enrollment in these classes. One definitely affects the other.
d
· Comment on Mar 07, 2015, 5:34 PM
Message collapsed. Message unread Re: Linear Regression for
Marketing
posted by LOUIS DAILY at Mar 07, 2015, 5:34 PM
Last updated Mar 07, 2015, 5:34 PM
Delilah,
What university do you work for?
thanks
Lou
· Comment on Mar 07, 2015, 5:33 PM
Message collapsed. Message unread Re: Linear Regression for
Marketing
posted by LOUIS DAILY at Mar 07, 2015, 5:33 PM
Last updated Mar 07, 2015, 5:33 PM
Erik,
Sure. Good example of time series analysis in business.
thanks
Lou
·
Sales Price and Linear Regression The thread has 5 unread
messages.
created by ERIK SEIDEL
Last updated Mar 07, 2015, 2:54 PM
5
· Comment on Mar 05, 2015, 5:06 AM
Message collapsed. Message unread Sales Price and Linear
Regression
posted by ERIK SEIDEL at Mar 05, 2015, 5:06 AM
Last updated Mar 05, 2015, 5:06 AM
·
As I think about linear regression further, I understand more
and more how many applications it can have. Linear regression
models can be a great tool for any company's sales department.
For sales, linear regression can be used in tandem with a break-
even analysis to determine the appropriate price to charge for a
product. If a company gradually raises the price of a product
but the regression line of sales volume on price stays flat or
even slopes slightly upward, this means that people are
continuing to purchase the product regardless of the price
increase. On the other hand, if
the slope is moving downward, the company may need to pull
back on its price increases. This can help a company determine
just how much it may be able to increase its product prices
without losing sales volume and market share.
· Comment on Mar 05, 2015, 8:44 AM
Message collapsed. Message unread Re: Sales Price and Linear
Regression
posted by CRYSTAL RAMOS at Mar 05, 2015, 8:44 AM
Last updated Mar 05, 2015, 8:44 AM
·
In statistics, linear regression is an approach for modeling the
relationship between a scalar dependent variable y and one or
more explanatory variables (or independent variable) denoted X.
The case of one explanatory variable is called simple linear
regression. For more than one explanatory variable, the process
is called multiple linear regression. (This term should be
distinguished from multivariate linear regression, where
multiple correlated dependent variables are predicted, rather
than a single scalar variable.)
In linear regression, data are modeled using linear predictor
functions, and unknown model parameters are estimated from
the data. Such models are called linear models. Most
commonly, linear regression refers to a model in which the
conditional mean of y given the value of X is an affine function
of X. Less commonly, linear regression could refer to a model
in which the median, or some other quantile of the conditional
distribution of y given X is expressed as a linear function of X.
Like all forms of regression analysis, linear regression focuses
on the conditional probability distribution of y given X, rather
than on the joint probability distribution of y and X, which is
the domain of multivariate analysis.
· Comment on Mar 05, 2015, 9:19 PM
Message collapsed. Message unread Re: Sales Price and Linear
Regression
posted by ARACHEAL VENTRESS at Mar 05, 2015, 9:19 PM
Last updated Mar 05, 2015, 9:19 PM
·
Thanks, Erik, for your example of how linear regression is used
by businesses such as the retail industry when making strategic
moves. I conducted more research about linear regression and
business and found that linear regression can also be used
to analyze risk.
The example that I found discussed how health insurance
companies might conduct a linear regression plotting the
number of claims per customer against age and discover that
older customers tend to make more health insurance claims. The
results of such an analysis might guide important business
decisions made to account for risk.
Reference
http://smallbusiness.chron.com/ways-linear-regression-can-applied-business-settings-35431.html
· Comment on Mar 07, 2015, 9:11 AM
Message collapsed. Message unread Re: Sales Price and Linear
Regression
posted by Pierre at Mar 07, 2015, 9:11 AM
Last updated Mar 07, 2015, 9:11 AM
·
Great post Erik,
Linear regressions are useful tools when it comes to predicting
sales and the future growth of sales by analyzing the
independent and dependent variables that affect the relationship.
There are two types of linear regressions. A simple regression
has only one independent variable, and the second type, a
multiple regression, has two or more independent variables.
Using multiple independent variables makes the relationship
between the independent variables and the dependent variable
in a multiple regression more complicated than the relationship
between the two variables in a simple regression.
According to the Columbia University Business School,
"the most basic type of regression is that of simple linear
regression. A simple linear regression uses only one
independent variable, and it describes the relationship between
the independent variable and dependent variable as a straight
line."
Reference:
Statistical Sampling and Regression: Simple Linear Regression.
(n.d.). Retrieved March 7, 2015, from
https://www0.gsb.columbia.edu/premba/analytical/s7/s7_6.cfm
· Comment on Mar 07, 2015, 2:54 PM
Message collapsed. Message unread Re: Sales Price and Linear
Regression
posted by DELILAH HENDERSON at Mar 07, 2015, 2:54 PM
Last updated Mar 07, 2015, 2:54 PM
·
Thanks, Erik. That was a good post. I found an article,
Introduction to Linear Regression, that said that "In simple
linear regression, we predict scores on one variable from the
scores on a second variable. The variable we are predicting is
called the criterion variable and is referred to as Y. The variable
we are basing our predictions on is called the predictor
variable and is referred to as X. When there is only one
predictor variable, the prediction method is called simple
regression. In simple linear regression, the topic of this section,
the predictions of Y when plotted as a function of X form a
straight line."
I believe this is what you were stating in your example on sales.
That one variable can affect a second variable. The article also
mentioned that "Linear regression consists of finding the best-
fitting straight line through the points. The best-fitting line is
called a regression line."
Reference:
http://onlinestatbook.com/2/regression/intro.html
  • 1. Business Research Methods, Ch. 19 New Message · Chapter 19: Cluster Analysis The thread has 1 unread message. created by Jynx Gresser Last updated Mar 07, 2015, 11:34 PM 1 · Comment on Mar 07, 2015, 11:34 PM Message collapsed. Message unread Chapter 19: Cluster Analysis posted by Jynx Gresser at Mar 07, 2015, 11:34 PM Last updated Mar 07, 2015, 11:34 PM · According to Cooper and Schindler (2011), cluster analysis is "a set of interdependence techniques for grouping similar objects or people" (p. 550). This method is often utilized in the fields of medicine, biology, and marketing (Cooper & Schindler, 2011). Within the field of marketing, one can divide customers into groups based on buying behaviors as well as age, lifestyle, and financial characteristics (Cooper & Schindler, 2011). Cluster analysis is often compared to a factor analysis, but differs in the ways that correlations are treated; they are similarity measures rather than control variables on a linear model (Cooper & Schindler, 2011). There are five basic steps in the application of cluster analysis and they include selection of the sample to be clustered, definition of the variables on which to measure objects, events, or people, computation of similarities through correlations, selection of mutually exclusive clusters, and cluster comparison and validation (Cooper & Schindler, 2011). The biggest takeaway from this analysis method is clustering similar groups together to provide a heightened awareness of links between data amongst different
  • 2. demographic variables. A dendogram provides a visual representation on how to categorize clusters and understand their differences. I look forward to utilizing this type of analysis when I begin my marketing classes that involves product development and how it effects buying behavior. Does anyone in class utilize this method and have some insight into how it is applied? Reference Cooper, D. R., & Schindler, P. S. (2011). Business Research Methods (11th ed.). New York, NY: McGraw's/Irwin. Retrieved from the University of Phoenix eBook Collection database. Jynx Gresser · SEM The thread has 3 unread messages. created by ARACHEAL VENTRESS Last updated Mar 07, 2015, 10:03 PM 3 · Comment on Mar 06, 2015, 1:25 PM Message collapsed. Message unread SEM posted by ARACHEAL VENTRESS at Mar 06, 2015, 1:25 PM Last updated Mar 06, 2015, 1:25 PM · Structural equation modeling (SEM) implies a structure for the covariances between observed variables, and accordingly it is sometimes called covariance structure modeling. More
  • 3. commonly, researchers refer to structural equation models as LISREL (linear structural relations) models--the name of the first and most widely cited SEM computer program. SEM is a powerful alternative to other multivariate techniques, which are limited to representing only a single relationship between the dependent and independent variables. The major advantages of SEM are (1) that multiple and interrelated dependence relationships can be estimated simultaneously and (2) that it can represent unobserved concepts, or latent variables, in these relationships and account for measurement error in the estimation process. Reference Cooper, D. R., & Schindler, P. S. (2011). Business Research Methods (11th ed.). New York, NY: McGraw's/Irwin. Retrieved from the University of Phoenix eBook Collection database. · Comment on Mar 07, 2015, 8:36 PM Message collapsed. Message unread SEM posted by PATRICIA MARCUS at Mar 07, 2015, 8:36 PM Last updated Mar 07, 2015, 8:36 PM · Great post Arachael. SEM is a flexible and extensive method for testing theory. It is a statistical modeling modeling technique to establish relationships among variables. It is best developed on the basis of substantive theory. Statistical estimates of these hypothesized covariance indicates within a margin of error how well the models fit with days. Structural equation models subsume factor analysis, regression, and path analysis. Integration of that types of analysis is an important advancement because it helps make possibleempirical specification of the linkage between imperfectly measured variables and rhetorical constructs of interest. A key feature of
  • 4. · Comment on Mar 07, 2015, 10:03 PM Message collapsed. Message unread Re: SEM posted by MaDonna Keys at Mar 07, 2015, 10:03 PM Last updated Mar 07, 2015, 10:03 PM · According to StatSoft, SEM (structural equation modeling) is a very powerful analysis technique because it includes a number of specialized methods used in particular analysis situations. There are six major applications of structural equation modeling: causal modeling, confirmatory factor analysis, second-order factor analysis, regression models, covariance structure models, and correlation structure models. Causal modeling, or path analysis, hypothesizes causal relationships among latent and manifest variables and tests the causal models with a system of linear equations. Confirmatory factor analysis tests specific hypotheses about the structure of the factors underlying a set of intercorrelations. Second-order factor analysis analyzes the correlation matrix of the common factors themselves to produce a second order of factors. Regression models extend linear regression analysis by allowing regression weights to be constrained, for example to equal one another or to take particular numerical values.
  • 5. Covariance structure models hypothesize that a covariance matrix has a particular form; for example, one can test the hypothesis that a set of variables all have equal variances. Correlation structure models likewise hypothesize that the correlation matrix has a specific structure or form; a circumplex is one example. Retrieved from Structural Equation Modeling, accessed March 8, 2015. · Multivariate Analysis The thread has 2 unread messages. created by STEPHANIE RECTOR Last updated Mar 07, 2015, 8:25 PM 2 · Comment on Mar 05, 2015, 10:26 AM Message collapsed. Message unread Multivariate Analysis posted by STEPHANIE RECTOR at Mar 05, 2015, 10:26 AM Last updated Mar 05, 2015, 10:26 AM · Many businesses today rely on multiple independent and multiple dependent variables because of how complex consumer preferences are. According to our readings, multivariate analyses are "those statistical techniques which focus upon, and bring out in bold relief, the structure of simultaneous relationships among three or more phenomena" (Cooper, 2011). Dependency and interdependency are the two types of multivariate techniques, and selecting the correct one is of high importance. You would utilize a dependency technique if the criterion and predictor variables are clear in the research question. Cooper also states, "Alternatively, if the variables are interrelated without designating some as dependent and others independent, then interdependence of the variables is assumed. Factor analysis,
  • 6. cluster analysis, and multidimensional scaling are examples of interdependency techniques" (Cooper, 2011). Multivariate analysis can become complicated when physics-based analyses are included; these help calculate how variables influence hierarchical "systems-of-systems." Reference: Cooper, D. R., & Schindler, P. S. (2011). Business Research Methods (11th ed.). New York, NY: McGraw-Hill/Irwin. Retrieved from the University of Phoenix eBook Collection database. · Comment on Mar 05, 2015, 6:34 PM Message collapsed. Message unread Re: Multivariate Analysis posted by LOUIS DAILY at Mar 05, 2015, 6:34 PM Last updated Mar 05, 2015, 6:34 PM Stephanie, Yes, when we have more than one dependent variable, we can use multivariate techniques to analyze the design. thanks Lou · Conjoint Analysis The thread has 3 unread messages. created by KIM DUNLAP Last updated Mar 07, 2015, 8:57 PM 3 · Comment on Mar 06, 2015, 6:26 PM Message collapsed. Message unread Conjoint Analysis posted by KIM DUNLAP at Mar 06, 2015, 6:26 PM Last updated Mar 06, 2015, 6:26 PM ·
  • 7. Conjoint analysis is one that seems to be used often in the marketing aspect of business. It is a method or tool that allows us to determine the relative importance of combinations of attributes and rank them in order of importance. In other words, on almost any product you can define different attributes such as cost, brand, color options, size, price, and options available. You could then assign different levels to combinations of those attributes and go out and survey the public. By doing this you can rank the importance of the different combinations to your customers and market the qualities that matter most to your customer base. Statistical software that assists the researcher in analyzing conjoint data is helpful. · Comment on Mar 07, 2015, 5:20 PM Message collapsed. Message unread Re: Conjoint Analysis posted by JUDEENE WALKER at Mar 07, 2015, 5:20 PM Last updated Mar 07, 2015, 5:20 PM · Conjoint analysis is a popular marketing research technique that marketers use to determine what features a new product should have and how it should be priced. The main steps involved in using conjoint analysis include determining the salient attributes for the given product from the consumers' points of view. In simple terms, conjoint analysis is an advanced market research technique that gets to "the root of the matter" of how people make decisions and what they really value in products and services. Using this technique, one presents people with choices and then analyzes the data. A benefit of using conjoint analysis is that it evaluates product/service attributes in a way that no other method can. Conjoint analysis also provides the ability to use the results to develop market simulation models that can be used well into the future.
  • 8. In this ever-changing competitive market it is important to use conjoint analysis, as it allows changes to be incorporated into a simulation model showing predictions of how buyers will respond to different changes. · Comment on Mar 07, 2015, 8:57 PM Message collapsed. Message unread Re: Conjoint Analysis posted by PATRICIA MARCUS at Mar 07, 2015, 8:57 PM Last updated Mar 07, 2015, 8:57 PM · Great post, Judeene. What I gather from conjoint analysis, and I am in agreement with you, is that it presents people with choices and then allows us to analyze those choices. It allows businesses to work out and quantify the hidden rules people use to make trade-offs between the different attributes or components that make up an offer. The principle behind conjoint analysis is that it starts by breaking down everything about a product or service into attributes and then tests combinations of them to find out what customers prefer. Conjoint analysis can be relatively complex because it requires an understanding of how to create and use attributes and levels. Even then, conjoint analysis doesn't always fit; depending on the product or service, certain approaches may not be suitable and other methods are needed.
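As a rough sketch of how conjoint data can be analyzed in practice, the snippet below estimates part-worth utilities from invented ratings of four product profiles using dummy-coded ordinary least squares; real conjoint studies rely on dedicated software and more elaborate experimental designs:

# Hedged sketch: estimating part-worth utilities from hypothetical conjoint ratings.
import numpy as np

# Four product profiles: (brand A vs B) x (low vs high price), each rated by a respondent.
# Columns: intercept, brand_B, price_high (dummy coding; brand A and low price are the baselines).
design = np.array([
    [1, 0, 0],   # brand A, low price
    [1, 0, 1],   # brand A, high price
    [1, 1, 0],   # brand B, low price
    [1, 1, 1],   # brand B, high price
])
ratings = np.array([9.0, 5.0, 7.0, 3.5])  # invented preference ratings

# Ordinary least squares gives the part-worth (utility) of each attribute level.
partworths, *_ = np.linalg.lstsq(design, ratings, rcond=None)
baseline, brand_b, price_high = partworths
print(f"baseline utility: {baseline:.2f}")
print(f"switching to brand B changes utility by {brand_b:.2f}")
print(f"moving to the high price changes utility by {price_high:.2f}")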
  • 9. Interpreting Statistics Write a one-paragraph (2-3 complete sentences) summary of the article, demonstrating that you read and understood it. (Look at your notes on Summarizing.) Researchers at UCLA measured the number of same-sex couples that are married or have registered civil unions or domestic partnerships. The researchers also looked at the number of couples that are legally ending their relationships, in contrast to the divorce rate for straight couples. Based on their research findings, they drew several conclusions about the current and possible future trends of gay marriage and divorce. STATISTIC # 1 1. Explain the statistic you selected from your article. (Look at your class notes on interpreting statistics and read through the Statistics PPT again.) About two-thirds of registered or married same-sex couples are lesbians and only about one-third are gay men. 2. What are the two pieces of data being compared in the statistic? The two pieces of data being compared are the number of registered same-sex civil unions or marriages among women and the number of registered same-sex civil unions or marriages among men. 3. Describe at least one conclusion that can be drawn from your statistic. One conclusion that can be drawn is that marriage is more appealing to women than to
  • 10. men. Another possible conclusion that one could draw is that gay women are more comfortable publicly proclaiming their sexual orientation than gay men. STATISTIC #2 1. Explain the statistic you chose. About 1% of married or registered same-sex couples get divorced each year, while approximately 2% of straight couples get divorced each year. 2. What are the two pieces of data being compared in the statistic? The two pieces of data being compared are the divorce rates for same-sex couples and the divorce rates for straight couples. 3. Describe at least one conclusion that can be drawn from your statistic. One conclusion that can be drawn is that partners in same-sex marriages are happier than partners in straight marriages. These findings are based on the article cited below. Hertz, Frederick. "Frederick Hertz: Divorce & Marriage Rates for Same-Sex Couples." Breaking News and Opinion on The Huffington Post. N.p., n.d. Web. 10 Oct. 2013. <http://www.huffingtonpost.com/frederick-hertz/divorce-marriage-rates-fo_b_1085024.html>.
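The statistics above are simple rate comparisons; as a tiny sketch of the arithmetic (using only the rounded figures quoted in the assignment), the second statistic can be expressed as a ratio:

# Hedged sketch: comparing the two rounded divorce rates quoted above.
same_sex_rate = 0.01    # about 1% of married or registered same-sex couples divorce each year
straight_rate = 0.02    # about 2% of straight couples divorce each year
print(f"Straight couples divorce at roughly {straight_rate / same_sex_rate:.1f} times the same-sex rate.")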
  • 11. Sexual assault reports climb at area colleges N.E. schools' data tied to a greater awareness By Matt Rocheleau | Globe Correspondent | October 06, 2014 Reports of sexual assaults on area college campuses rose markedly last year, an increase that safety specialists attribute primarily to heightened national awareness of the problem, which prompts more victims to come forward. A survey of information from more than two dozen of the largest New England colleges found that reports of "forcible sex offenses" climbed by 40 percent overall between 2012 and 2013, according to a Globe review of data that colleges provided in annual federally mandated reports released last week. Last year, there were a total of 289 reports of the offenses at the colleges. That compares with 206 in 2012. The 2013 total was more than twice as many as reported five years before. Sexual assaults on and around college campuses, long considered a vastly underreported crime, have received increased attention in recent months from the Obama administration, the schools, and
  • 12. students. As a result, campuses have stepped up training, support, and outreach, and the rising number for 2013 is seen as signaling that victims are more comfortable reporting assaults. “It means that students are coming forward and reporting crimes that are happening and ending that culture of silence,” said Alison Kiss, director of the Clery Center for Security On Campus, a nonprofit that trains colleges to comply with the federal Clery Act. Specialists also believe the spike in reporting may indicate that colleges are becoming more thorough and transparent in collecting and disclosing sexual assault data. At all but four of the campuses in the review, the number of sexual assault reports rose or held steady last year. Hampshire College said its reports of assaults increased from 13 to 20, giving the school the highest rate of sexual assaults per 1,000 students in 2013 — 13.6 — among colleges surveyed by the Globe. The next highest rate was 5.5 assaults at Dartmouth College; the school reported 24 assaults in 2012, and 35 in 2013. The average rate among schools in the survey was 0.99 per 1,000 students in 2013. Diana Fernandez, Hampshire’s coordinator for Title IX, a federal law that mandates gender equality in campus life, attributed the above-average rate at the college to its new efforts to inform students about how to report the crime and about the increased resources available for victims.
  • 13. “It’s continued education and really working with the community on that education and training,” Fernandez said. She said Hampshire recently added online training for students on sexual assault prevention. Hampshire has also expanded workshops that teach students about consent and how to intervene if they see a friend in a vulnerable situation. “We really look at how can we promote these conversations and how can we work with our students to have these conversations,” she said. University of Connecticut saw sexual assault reports nearly double from 13 in 2012 to 25 last year, putting its rate for 2013 at 0.88 per 1,000 students. “I think the numbers will continue to rise here and at other campuses for years to come, and that’s really important because it gives us a better picture of what’s happening,” said UConn’s Title IX coordinator, Elizabeth Conklin. Meanwhile, Amherst College was one of just four schools in the survey that saw sexual assault reports decline. The college reported 14 sexual assaults in both 2010 and 2011, 17 offenses in 2012, and nine in 2013, which put its rate for 2013 at 5.04 per 1,000 students. Campus spokeswoman Caroline Hanna said in an e-mail that the college has taken numerous steps to better prevent and respond to sexual assault and encourages
  • 14. students to report the crime. But, “as these numbers tend to fluctuate across institutions and year over year, it is difficult for us to speculate about the reason for the change in last year’s numbers,” Hanna said. Under the Clery Act, colleges are required by Oct. 1 each year to issue a report that includes statistics of allegations of crimes that occurred on campus, including dorms and other public property; at property owned by, but separated from, the main campus; and at fraternities and sororities. They exclude other off-campus housing. An estimated 88 percent of college victims do not formally report sexual assaults, federal studies suggest. Even so, some area schools have reported surprisingly low numbers of assault allegations, given their size. Suffolk University, which enrolled 8,800 students last year, said there were no reports of sexual assaults in 2013. Since 2008, the downtown university reported two cases, both in 2010. School spokesman Greg Gatlin said the low numbers may be because fewer than 15 percent of Suffolk students live on campus. The university “works closely with students, faculty, and staff to encourage the reporting of any incident and to make our community aware of available resources,” he said in an e-mail. Bunker Hill Community College, with an enrollment of about 14,000, listed one report of sexual
  • 15. assault in both 2008 and 2012, and zero reports in the four years in between. Spokeswoman Karen Norton said the college is confident its data is accurate. “We have a lot of processes in place about how we gather the information and report it,” Norton said. The two-year school is different from more traditional colleges, she said: There are no dormitories; two-thirds of the students are part time; and about one-third take at least some of their classes online. The average age of students is 27, she said. Without singling out specific schools, the Clery Center’s Kiss and other experts say that low numbers may indicate colleges are not doing enough to raise awareness about the issue or to educate how victims can report. It may also mean colleges are not being honest or thorough in their reporting, experts said. In recent years, numerous colleges have been caught or accused of failing to disclose individual reports of sexual assault or for releasing inaccurate data. Under new federal guidelines, colleges were required for the first time in 2013 to also compile and disclose the number of reports of dating violence, domestic violence, and stalking on their campuses. But two local schools failed to include those statistics on their latest Clery reports. One school was Roxbury Community College, which
  • 16. acknowledged last year, amid an ongoing federal investigation, that it had underreported crime figures in prior Clery reports. Campus president Valerie Roberson said in an e-mail that the college had no reports of domestic violence, dating violence, or stalking for 2013. "As a commuter college, these crimes would be unusual, especially when compared to what might occur at a residential college," she wrote. Wentworth Institute of Technology also failed to include the newly required data. After the Globe pointed out the omission, Wentworth spokesman Caleb Cochran said it was an oversight. On Thursday, the school issued an updated report saying it did not receive reports in any of the three categories last year.
  • 17. Rocheleau, Matt. "Reports of sexual assaults on area college campuses rise sharply - The Boston Globe." BostonGlobe.com. N.p., 6 Oct. 2014. Web. 9 Oct. 2014. <http://www.bostonglobe.com/metro/2014/10/05/reports-sexual-assaults-area-college-campuses-rise-sharply/F0R0BoigySPVOaWn5YXeDI/story.html?p1=Article_Facet_Related_Article>. · Statistics for Business and Economics, Ch. 13 Methods Of Forecasting The thread has 1 unread message. created by MaDonna Keys Last updated Mar 07, 2015, 10:59 PM 1 · Comment on Mar 07, 2015, 10:59 PM Message collapsed. Message unread Methods Of Forecasting posted by MaDonna Keys at Mar 07, 2015, 10:59 PM Last updated Mar 07, 2015, 10:59 PM · According to Investopedia, there are a few different methods by which forecasts can be made, and any method falls into one of two categories: qualitative or quantitative. Qualitative models are usually used for short-term predictions, and the scope of the forecast is limited. Qualitative forecasts are based on expert opinion; they depend on market mavens, or on the market as a whole, for a weighing of informed consensus. They are best used for predicting short-term company success, and they meet reliability measures based on opinion. Quantitative methods discount expert opinion and try to remove the human element from the analysis; the analysis is concerned only with data rather than relying on people, using variables such as gross sales, gross domestic product, and others. This kind of method is used over longer terms, measured in months to years.
  • 18. Qualitative methods include market research, which involves polling large numbers of people about a specific product and how many of them would purchase it, and the Delphi method, which draws on the analysis of field experts. Quantitative methods include the indicator approach, econometric modeling, and time series analysis. The indicator approach uses the relationship between particular indicators, such as lagging or leading performance data. Econometric modeling involves testing the consistency of data over a particular period of time. Time series analysis refers to collecting past, related data to predict future events. Retrieved from The Basics of Business Forecasting, accessed March 8, 2015.
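To connect this with the quantitative, time-series side, here is a minimal Python sketch (with invented monthly sales figures) that fits a linear trend and extrapolates it a few periods ahead, the simplest version of the regression-based forecasting discussed in the forecasting thread below:

# Hedged sketch: fitting a linear trend to a short, invented sales series and forecasting ahead.
import numpy as np

sales = np.array([110, 118, 123, 131, 140, 146, 155, 161, 170, 178, 183, 192])  # invented monthly sales
months = np.arange(len(sales))  # time is the independent (x) variable

# Least-squares trend line: sales = slope * month + intercept
slope, intercept = np.polyfit(months, sales, deg=1)

# Forecast the next three months by extrapolating the trend.
future_months = np.arange(len(sales), len(sales) + 3)
forecast = slope * future_months + intercept
print(f"trend: {slope:.2f} per month, starting near {intercept:.1f}")
print("next three periods:", np.round(forecast, 1))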
  • 19. · index numbers The thread has 4 unread messages. created by ARIEL SMITH Last updated Mar 07, 2015, 9:19 PM 4 · Comment on Mar 04, 2015, 12:42 PM Message collapsed. Message unread index numbers posted by ARIEL SMITH at Mar 04, 2015, 12:42 PM Last updated Mar 04, 2015, 12:42 PM · According to the text, a common technique for characterizing a business or economic time series is to compute index numbers. Index numbers measure how time series values change relative to a preselected time period, called the base period. There are two types of indexes used in business and economics: price and quantity indexes. Price indexes measure changes in the price of a commodity or group of commodities over time. One example is the Consumer Price Index (CPI), which is a price index because it measures price changes for a group of commodities intended to reflect the typical purchases of American consumers. An index constructed to measure the change in the total number of automobiles produced annually by American manufacturers would be an example of a quantity index. McClave, J. T., Benson, P. G., & Sincich, T. (2010). Statistics for Business and Economics, 11th Edition. [VitalSource Bookshelf version]. Retrieved from http://online.vitalsource.com/books/9781269882163/id/ch13tab01 · Comment on Mar 05, 2015, 6:45 PM Message collapsed. Message read Re: index numbers posted by LOUIS DAILY at Mar 05, 2015, 6:45 PM Last updated Mar 05, 2015, 6:45 PM
  • 20. Ariel, Yes, and the CPI is the usual measure of inflation in the economy. thanks Lou · Comment on Mar 06, 2015, 1:13 PM Message collapsed. Message unread Re: index numbers posted by ARACHEAL VENTRESS at Mar 06, 2015, 1:13 PM Last updated Mar 06, 2015, 1:13 PM · Ariel, Thanks for your thorough explanation of index numbers. An index number is one simple number that we can look at to give us a general overview of what is happening in a particular field. Another example of a real-world index number is the Dow Jones Industrial Average. The Dow Jones Industrial Average, or DJIA for short, is a stock index. It measures the performance of 30 large public companies in the United States. These 30 companies are treated as representative of companies across the United States, and therefore this index is mentioned in relation to how businesses in the United States are doing. If the index goes down, that suggests companies in the United States are not doing well; if the index goes up, that suggests businesses in the United States are doing well. This index is very important in forecasting how well our economy is doing in terms of profit or deficit. Reference http://study.com/academy/lesson/index-numbers-in-statistics-uses-examples.html · Comment on Mar 06, 2015, 2:04 PM
  • 21. Message collapsed. Message unread Re: index numbers posted by STEPHANIE RECTOR at Mar 06, 2015, 2:04 PM Last updated Mar 06, 2015, 2:04 PM · Hello Ariel and Aracheal, thanks for the insight on index numbers and examples. According to our readings, "Methods of calculating index numbers range from very simple to extremely complex, depending on the numbers and types of commodities represented by the index" (McClave, 2010). An index number based on the price or quantity of a single commodity is referred to as a simple index number. This number describes the relative changes through time in the price or quantity of that commodity. An example would be to construct a simple index describing the relative changes in gold prices over the last 15 years, with the year 2000 treated as the "base period." The simple index number for any chosen year is calculated by dividing that year's price by the price during the base year 2000, then multiplying the result by 100. McClave, J. T., Benson, P. G., & Sincich, T. (2010). Statistics for Business and Economics, 11th Edition.
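A small Python sketch of the simple index calculation Stephanie describes, using invented gold prices (the figures are illustrative only, with 2000 as the base period):

# Hedged sketch: computing a simple price index from invented gold prices.
prices = {2000: 280.0, 2005: 445.0, 2010: 1225.0, 2015: 1160.0}  # illustrative prices per ounce
base_year = 2000

# Simple index: (price in year t / price in base year) * 100
index = {year: price / prices[base_year] * 100 for year, price in prices.items()}
for year, value in index.items():
    print(f"{year}: index = {value:.1f}")  # the base year prints as 100.0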
  • 22. · Comment on Mar 07, 2015, 9:19 PM Message collapsed. Message unread Re: index numbers posted by SAID SHEIK ABDI at Mar 07, 2015, 9:19 PM Last updated Mar 07, 2015, 9:19 PM · Economists frequently use index numbers when making comparisons over time. An index starts in a given year, the base year, at an index number of 100. In subsequent years, percentage increases push the index number above 100, and percentage decreases push the figure below 100. An index number of 102 means a 2% rise from the base year, and an index number of 98 means a 2% fall. Using an index makes quick comparisons easy. For example, when comparing house prices from the base year of 2005, an index number of 110 in 2006 indicates an increase in house prices of 10% in 2006. The best-known index in the United States is the Consumer Price Index, which gives a sort of "average" value for inflation based on price changes for a group of selected products. The Dow Jones Industrial Average and the NASDAQ Composite are also index numbers. http://mathworld.wolfram.com/IndexNumber.html · Forecasting The thread has 9 unread messages. created by LOUIS DAILY Last updated Mar 07, 2015, 9:13 PM 9 · Comment on Mar 02, 2015, 11:41 AM Message collapsed. Message unread Forecasting posted by LOUIS DAILY at Mar 02, 2015, 11:41 AM Last updated Mar 02, 2015, 11:41 AM In the previous chapters, we studied Linear Regression. Let's say our Y variable is sales and our X variable is time (perhaps Jan, Feb, March, April, etc.). If the X variable is time, we call this a time series. Can you think of a use for a regression line with this time series? If we had three years of data, how many years ahead do you think we could predict using a regression line? If each December sales went sky high because of Christmas, how could we adjust for this before constructing the regression
  • 23. line? · Comment on Mar 03, 2015, 5:42 AM Message collapsed. Message read Re: Forecasting posted by ERIK SEIDEL at Mar 03, 2015, 5:42 AM Last updated Mar 03, 2015, 5:42 AM · A regression line with a time series would be helpful in predicting future sales levels or other trends. For example, medical expenses for our health insurance company tend to fluctuate each year. Some years the flu is worse, and other years there is just heavier utilization than others. But if you create a time series of this data over a few years, you will see in almost every case that there is an overall increasing trend to medical expenses per member. This may be related to increases in our provider contracts, increased utilization of services, or a combination of the two. If we had three years of data, you could probably predict another five years into the future. However, consideration would have to be given to any expected material changes in the data to a change in the business, industry, etc. If trying to project sales and sales are higher each December, I think the best way to account for this would be to either include total year sales as the x variable or only look at year-to-year December sales as the x variable. · Comment on Mar 03, 2015, 8:23 AM Message collapsed. Message read Re: Forecasting posted by LOUIS DAILY at Mar 03, 2015, 8:23 AM Last updated Mar 03, 2015, 8:23 AM Erik, Time is usually the x variable, the independent variable. Then we plot the change in the y variable over time. After that, we "smooth" the data for random fluctuations, seasonality, etc.
  • 24. Finally, we use regression to model the trend (if any). thanks Lou · Comment on Mar 04, 2015, 4:25 PM Message collapsed. Message unread Re: Forecasting posted by STEPHANIE RECTOR at Mar 04, 2015, 4:25 PM Last updated Mar 04, 2015, 4:25 PM · Can you think of a use for a regression line with this time series? If we had three years of data, how many years ahead do you think we could predict using a regression line? If each December sales went sky high because of Christmas, how could we adjust for this before constructing the regression line? A sequence of data points make up a time series and the successive measurements are made over a particular time frame. According to Wikipedia, "Time series are used in statistics, signal processing, pattern recognition, econometrics,mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy,communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements" (wikipedia.org). We can use time series forecasting to predict future value based on previous values. If we had three years of data we could predict ahead the next three to five years. It is when it is long term that time series can lag. Before constructing a regression line, we could adjust for the high December sales by "smoothing," which involves removing irregular fluctuations in a time series. This
  • 25. method essentially "smooths" out most residual effects. References: http://en.wikipedia.org/wiki/Time_series · Comment on Mar 05, 2015, 6:39 PM Message collapsed. Message unread Re: Forecasting posted by LOUIS DAILY at Mar 05, 2015, 6:39 PM Last updated Mar 05, 2015, 6:39 PM Stephanie, With three years of data, I think we can be comfortable with predicting one year ahead--any further out would be "on a limb". thanks Lou · Comment on Mar 06, 2015, 1:06 PM Message collapsed. Message unread Re: Forecasting posted by ARACHEAL VENTRESS at Mar 06, 2015, 1:06 PM Last updated Mar 06, 2015, 1:06 PM · Professor: A time series is a sequence of data points, typically consisting of successive measurements made over a time interval. Examples of time series are ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. Time series are very frequently plotted via line charts. Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, astronomy, and largely in any domain of applied science and engineering which involves temporal measurements.
  • 26. Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test theories that the current values of one or more independent time series affect the current value of another time series, this type of analysis of time series is not called "time series analysis", which focuses on comparing values of a single time series or multiple dependent time series at different points in time. Reference http://en.wikipedia.org/wiki/Time_series · Comment on Mar 06, 2015, 11:56 PM Message collapsed. Message unread Re: Forecasting posted by Jynx Gresser at Mar 06, 2015, 11:56 PM Last updated Mar 06, 2015, 11:56 PM · According to Gellert (n. d.), "it can be highly beneficial for companies to develop a forecast of the future values of some important metrics, such as demand for its products or variables that describe the economic climate" (para. 1). At my company, we have an actual daily, weekly, and monthly sales goal as well as a forecast which is based on previous sales data with a little lift. This would be an example of time series analysis because it provides an estimate of what we could achieve in sales. Another example could be gift set store inventory based on past dollar sales and this is particularly evident during important sales holidays, like Mother's Day. "It is possible to develop a linear regression model that simply fits a line to the variables historical performance and extrapolates that into the future" (Gellert, n. d., para. 4). Although, this line is unable to account for holidays where there are extreme values and nonlinearity
  • 27. (Gellert, n. d.). I would probably only predict one year out even with multiple years of data, because this involves the most up-to-date knowledge for analysis. For December sales, I would adjust for the sky-high sales by switching to a weekly sales linear regression. Unfortunately, December could be thrown out as an extreme value for the year, but if each week or another busy month were analyzed, then we could understand its impact more effectively with regard to forecasting. Reference Gellert, A. (n. d.). Linear regression forecasting methods by companies. The Houston Chronicle. Retrieved from http://smallbusiness.chron.com/linear-regression-forecasting-method-companies-73112.html Jynx Gresser · Comment on Mar 07, 2015, 8:26 AM Message collapsed. Message unread Forecasting posted by CRYSTAL RAMOS at Mar 07, 2015, 8:26 AM Last updated Mar 07, 2015, 8:26 AM · Forecasting is the use of historic data to determine the direction of future trends. Forecasting is used by companies to determine how to allocate their budgets for an upcoming period of time. This is typically based on demand for the goods and services a company offers, compared with the cost of producing them. Investors utilize forecasting to determine whether events affecting a company, such as sales expectations, will increase or decrease the price of shares
  • 28. in that company. Forecasting also provides an important benchmark for firms which have a long-term perspective of operations. Stock analysts use various forecasting methods to determine how a stock's price will move in the future. They might look at revenue and compare it to economic indicators, or may look at other indicators, such as the number of new stores a company opens or the number of orders for the goods it manufactures. Economists use forecasting to extrapolate how trends, such as GDP or unemployment, will change in the coming quarter or year. The further out the forecast, the higher the chances that the estimate will be less accurate. · Comment on Mar 07, 2015, 10:16 AM Message collapsed. Message unread Re: Forecasting posted by ERIK SEIDEL at Mar 07, 2015, 10:16 AM Last updated Mar 07, 2015, 10:16 AM · There are numerous applications of using historical data to predict future results. Some of these include forecasting sales or expenses for future years, months, weeks, or even days. I think there are also many operational applications of using statistics to forecast future events that companies should take into consideration in order to maximize efficiency. For example, the customer service areas of companies often have peaks and valleys in workload. These may vary by time of day, week, month, or year. Companies that do not take these variations into account can be very inefficient and spend money and resources unnecessarily. Customer service departments that are most efficient will have a flexible staff and use statistical data from past experience to determine when to increase or decrease staffing. · Comment on Mar 07, 2015, 5:25 PM
  • 29. Message collapsed. Message unread Re: Forecasting posted by TAMARQUES PORTER at Mar 07, 2015, 5:25 PM Last updated Mar 07, 2015, 5:25 PM · Products that exhibit slow-moving demand or have sporadic demand require a specific type of statistical forecast model. Intermittent model works for products with erratic demand. Products with erratic demand do not exhibit a seasonal component; instead a graph drawn of the products demand attributes shows peaks and flat periods at intermittent points along the time series. The goal of this model is to provide a safety stock value instead of a forecast value. The safety stock value allows for just enough inventories to cover needs. Forecasting new products remains one of the toughest forecasting tasks available. New product forecasting requires input from human and computer generated sources. New product forecasting methods seek to manage the high ramp up period associated with a new product introduction. These methods also work for maturing products approaching the end of their life cycle. · Comment on Mar 07, 2015, 9:13 PM Message collapsed. Message unread Re: Forecasting posted by SAID SHEIK ABDI at Mar 07, 2015, 9:13 PM Last updated Mar 07, 2015, 9:13 PM · Forecasting is the prediction of future events and conditions and is a key element in service organizations, especially banks, for management decision-making. There are typically two types of events: 1) uncontrollable external events - originating with the national economy, governments, customers and competitors and 2) controllable internal events (e.g., marketing, legal, risk, new product decisions) within the firm.
  • 30. If we had three years of data, how many years ahead do you think we could predict using a regression line? I think we can easily predict a year ahead. If each December sales went sky high because of Christmas, how could we adjust for this before constructing the regression line? We know December is the peak sales month, so we can smooth the Y variable for that seasonality and then construct the regression line. http://www.isixsigma.com/tools-templates/risk-management/use-forecasting-basics-predict-future-conditions/ · Composite index number The thread has 4 unread messages. created by ARIEL SMITH Last updated Mar 07, 2015, 4:54 PM 4 · Comment on Mar 04, 2015, 12:51 PM Message collapsed. Message unread Composite index number posted by ARIEL SMITH at Mar 04, 2015, 12:51 PM Last updated Mar 04, 2015, 12:51 PM · According to the text, a composite index number represents combinations of the prices or quantities of several commodities. The book used a great example: constructing an index for the total number of sales of two major automobile manufacturers, General Motors and Ford. The first step is to collect data on the sales of each manufacturer during the period you are interested in. To summarize the information from both time series in a single index, we add the sales of the two manufacturers for each year, forming a new time series consisting of the total number of automobiles sold by the two
  • 31. manufacturers. Then we construct a simple index for the total of the two series. The resulting index is called a simple composite index. A simple composite index is a simple index for a time series consisting of the total price or total quantity of two or more commodities. McClave, J. T., Benson, P. G., & Sincich, T. (2010). Statistics for Business and Economics, 11th Edition. [VitalSource Bookshelf version]. Retrieved from http://online.vitalsource.com/books/9781269882163/id/ch13fig0 2 · Comment on Mar 05, 2015, 6:47 PM Message collapsed. Message unread Re: Composite index number posted by LOUIS DAILY at Mar 05, 2015, 6:47 PM Last updated Mar 05, 2015, 6:47 PM Ariel, That example is a time series showing the fluctuation in the price of silver. thanks Lou · Comment on Mar 07, 2015, 8:25 AM Message collapsed. Message unread Composite index number posted by CRYSTAL RAMOS at Mar 07, 2015, 8:25 AM Last updated Mar 07, 2015, 8:25 AM
  • 32. · A composite index is a grouping of equities, indexes, or other factors combined in a standardized way, providing a useful statistical measure of overall market or sector performance over time; it is also known simply as a "composite." Usually, a composite index has a large number of factors which are averaged together to form a product representative of an overall market or sector. For example, the Nasdaq Composite index is a market capitalization-weighted grouping of approximately 5,000 stocks listed on the Nasdaq market. These indexes are useful tools for measuring and tracking price-level changes across an entire stock market or sector. Therefore, they provide a useful benchmark against which to measure an investor's portfolio. The goal of a well-diversified portfolio is usually to outperform the main composite indexes. · Comment on Mar 07, 2015, 4:54 PM Message collapsed. Message unread Re: Composite index number posted by JUDEENE WALKER at Mar 07, 2015, 4:54 PM Last updated Mar 07, 2015, 4:54 PM · Composite index numbers are indices calculated to reflect the change in activity of a number of items from the base period to the period under consideration. A composite index number allows us to measure, with a single number, the relative variations within a group of variables upon moving from one situation to another. The aim of using composite index numbers is to summarize all the simple index numbers contained in a complex set into just one index. Example: the price index for taxis is 100 compared to a given base year, and the index for rental is 160 (using the same base year). Calculate the composite index for taxi and rental using a weighting of 55 for taxi and 45 for rental with the composite index formula: total of (index × weighting) / total weighting. CI = (100 × 55 + 160 × 45) / (55 + 45) = 12,700 / 100 = 127.
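A short Python sketch of the weighted composite index calculation in Judeene's example (same figures, 55/45 weighting):

# Hedged sketch: weighted composite index, reproducing the taxi/rental example above.
indexes = {"taxi": 100, "rental": 160}
weights = {"taxi": 55, "rental": 45}

weighted_total = sum(indexes[item] * weights[item] for item in indexes)
composite_index = weighted_total / sum(weights.values())
print(composite_index)  # (100*55 + 160*45) / 100 = 127.0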
  • 33. · The first message in the thread has been removed created by ARIEL SMITH Last updated Mar 04, 2015, 12:51 PM · Comment on Mar 04, 2015, 12:50 PM This message has been removed Message collapsed. Statistics for Business and Economics, Ch. 11 · · Multiple Linear Regression The thread has 7 unread messages. created by LOUIS DAILY Last updated Mar 07, 2015, 10:40 AM 7 · Comment on Mar 02, 2015, 11:39 AM Message collapsed. Message unread Multiple Linear Regression posted by LOUIS DAILY at Mar 02, 2015, 11:39 AM Last updated Mar 02, 2015, 11:39 AM The simple linear regression equation is a relationship between one independent variable (X) and one dependent variable (Y). X is used to predict Y. But X might only help in predicting Y, not predict it perfectly. If we had other variables, say X2 and X3, that can also independently contribute to the prediction, we might have a much better prediction equation. We might have something like: Y = slope1*X1 + slope2*X2 + slope3*X3 + intercept. This would be a multiple regression equation. Why is this an improvement over a simple linear equation model? How many variables do you think we would need to make a real good prediction?
  • 34. What if the relationship between the three independent variables in this equation and the dependent variable is not linear at all, but a curved relationship (curvilinear)?· Comment on Mar 04, 2015, 1:31 PM Message collapsed. Message unread Re: Multiple Linear Regression posted by STEPHANIE RECTOR at Mar 04, 2015, 1:31 PM Last updated Mar 04, 2015, 1:31 PM · Why is this an improvement over a simple linear equation model? How many variables do you think we would need to make a real good prediction? What if the relationship between the three independent variables in this equation and the dependent variable is not linear at all, but a curved relationship (curvilinear)? Realistically, some research requires the use of multiple variables because some issues include several influential factors. A simple linear equation model will not be able to include those other factors, so we must use a multiple regression model to determine a conclusion. On a side note, multiple regression will not adequately explain the relationship between independent and dependent variables if they are not linear. The steps in multiple regression are basically the same for simple regression. According to csulb.edu, "Regression with only one dependent and one independent variable normally requires a minimum of 30 observations. A good rule of thumb is to add at least an additional 10 observations for each additional independent variable added to the equation. The number of independent variables in the equation should be limited by two factors. First, the independent variables should be included in the equation only if they are based on the researcher's theory about what factors influence the dependent variable. Second, variables that do not contribute very much to explaining the
  • 35. variance in the dependent variable (i.e., to the total R2), should be eliminated" (csulb.edu, 2015). When the relationship between the three independent variables in this equation and the dependent variable is not linear at all, but a curved relationship (curvilinear) then the data points will increase together to a certain point. Then as one continues to increase, the other decreases or the other way around. You basically can see on a scatter plot a line rising to a peak, then declining. Reference: http://web.csulb.edu/~msaintg/ppa696/696regmx.htm· Comment on Mar 05, 2015, 6:51 PM Message collapsed. Message unread Re: Multiple Linear Regression posted by LOUIS DAILY at Mar 05, 2015, 6:51 PM Last updated Mar 05, 2015, 6:51 PM Stephanie, Yes, sometimes the relationship between variables is not linear, but curvilinear--we can fit a curve to the data. thanks Lou· Comment on Mar 07, 2015, 10:40 AM Message collapsed. Message unread Re: Multiple Linear Regression posted by ERIK SEIDEL at Mar 07, 2015, 10:40 AM Last updated Mar 07, 2015, 10:40 AM · An example of a curvilinear relationship that quickly came to mind when reading about this was predicting our company's administrative expense ratio for a new line of business. An administrative expense ratio is calculated by simply taking total
  • 36. administrative expenses (sales, customer services, overhead, etc.) as a percentage of total revenue. Our health insurance company has invested in recent years in going into new markets and lines of business. One thing I have learned is that it takes a certain critical mass of membership in order to gain sufficient economies of scale to lower the administrative expense ratio to the point where the line of business is profitable. The slope of this, based on my experience, is curvilinear. As we gain the first 5,000 to 10,000 members for a particular line of business, the curve moves downward significantly due to being able to spread overhead, marketing, and other fixed and start-up costs over a larger base of membership. After 10,000 members or so, the slope continues downward but at a much slower rate. · Comment on Mar 06, 2015, 8:26 PM Message collapsed. Message unread Re: Multiple Linear Regression posted by CRYSTAL RAMOS at Mar 06, 2015, 8:26 PM Last updated Mar 06, 2015, 8:26 PM · Multiple Linear Regression Multiple linear regression attempts to model the relationship between two or more explanatory variables and a response variable by fitting a linear equation to observed data. Every value of the independent variable x is associated with a value of the dependent variable y. The population regression line for p explanatory variables x1, x2, ..., xp is defined to be μy = β0 + β1x1 + β2x2 + ... + βpxp. This line describes how the mean response μy changes with the explanatory variables. The observed values for y vary about their means μy and are assumed to have the same standard deviation σ. The fitted values b0, b1, ..., bp estimate the parameters β0, β1, ..., βp of the population regression line. Since the observed values for y vary about their means μy, the multiple regression model includes a term for this variation. In words, the model is expressed as DATA = FIT + RESIDUAL, where the "FIT" term represents the expression β0 + β1x1 + β2x2
  • 37. + ... + βpxp. The "RESIDUAL" term represents the deviations of the observed values y from their means μy, which are normally distributed with mean 0 and variance σ². The notation for the model deviations is ε. Formally, the model for multiple linear regression, given n observations, is yi = β0 + β1xi1 + β2xi2 + ... + βpxip + εi for i = 1, 2, ..., n. In the least-squares model, the best-fitting line for the observed data is calculated by minimizing the sum of the squares of the vertical deviations from each data point to the line (if a point lies on the fitted line exactly, then its vertical deviation is 0). Because the deviations are first squared, then summed, there are no cancellations between positive and negative values. The least-squares estimates b0, b1, ..., bp are usually computed by statistical software. The values fit by the equation b0 + b1xi1 + ... + bpxip are denoted ŷi, and the residuals ei are equal to yi - ŷi, the difference between the observed and fitted values. The sum of the residuals is equal to zero. The variance σ² may be estimated by s² = Σei² / (n - p - 1), also known as the mean squared error (MSE). The estimate of the standard error s is the square root of the MSE. · Comment on Mar 06, 2015, 11:14 PM Message collapsed. Message unread Re: Multiple Linear Regression posted by Jynx Gresser at Mar 06, 2015, 11:14 PM Last updated Mar 06, 2015, 11:14 PM · According to McClave, Benson, and Sincich (2011), "most practical applications of regression analysis employ models that are more complex than the simple straight-line model" (p. 625). Most probabilistic models involve one or more independent variables that affect the dependent variable (McClave et al., 2011). This is where multiple linear regression comes into the statistical equation. Princeton University Library (n. d.) found that the
  • 38. simple linear regression displays the relationship between the independent and dependent variable, while multiple linear regression predicts the relationship with multiple independent variables, for example, height, weight, and gender. The predictive value of the relationship between two or more variables can be increased, and the overall purpose of multiple regression is to prove a relationship (Princeton University Library, n. d.). As the number of variables increases, the statistical probability of the relationship between the variables is strengthened. What is the best number of relationships to have? I couldn't find any specific numbers to answer this question. Curvilinear relationships can occur in multiple regressions, especially when two or more variables stop having an effect on one another. The predictive value of these variables starts to decrease; a good example would be friends and happiness (Princeton University Library, n. d.). Happiness can increase up to a certain number of friends and then start to decrease, thus making the line curvilinear. References McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics for business and economics (11th ed.). Boston, MA: Prentice Hall. Retrieved from the University of Phoenix eBook Collection database. Princeton University Library. (n. d.). Data and statistical services. Retrieved from http://dss.princeton.edu/online_help/analysis/regression_intro.htm Jynx Gresser · Comment on Mar 07, 2015, 9:29 AM
  • 39. Message collapsed. Message unread Re: Multiple Linear Regression posted by KIM DUNLAP at Mar 07, 2015, 9:29 AM Last updated Mar 07, 2015, 9:29 AM · Hi Professor Lou, Multiple linear regression models are important because in most research there is more than one variable x that affects the outcome y. The multiple linear regression model allows us to take this into consideration mathematically and allows us to statistically prove the effects of multiple variables. Where it looks like it becomes very complicated is where you can have multiple pairs, or options of pairs, of variables that have to be ranked in the analysis. While I understand the need to do that, and it would be helpful in relation to complete studies involving humans, I have to admit I feel it is nothing that I could accomplish on my own at this point in time. I never did find any information that suggested there is an optimal number of variables to provide us with the best predictability. · Multiple Regression Models The thread has 2 unread messages. created by BEAU KUSH Last updated Mar 05, 2015, 6:49 PM 2 · Comment on Mar 04, 2015, 3:02 PM Message collapsed. Message unread Multiple Regression Models posted by BEAU KUSH at Mar 04, 2015, 3:02 PM Last updated Mar 04, 2015, 3:02 PM · According to McClave, Benson, and Sincich (2011), most practical applications of regression analysis employ models that
  • 40. are more complex than the simple straight-line model. Moreover, the models typically utilize realistic probabilistic models which contain many variables that might be related to one of the factors being analyzed (McClave, Benson, & Sincich, 2011). Probabilistic models that include more than one independent variable are called multiple regression models. Utilizing the independent variables in these models helps researchers make accurate predictions about the outcomes or results. Essentially, researchers will utilize multiple regression models when there is a requirement to predict the value of a variable based on the values of two or more other variables. Multiple regression allows researchers to make logical predictions based on the data provided. This method saves time and provides a reliable way of obtaining accuracy. Reference: McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics for Business and Economics (11th ed.). Boston, MA: Prentice Hall. · Comment on Mar 05, 2015, 6:49 PM Message collapsed. Message unread Re: Multiple Regression Models posted by LOUIS DAILY at Mar 05, 2015, 6:49 PM Last updated Mar 05, 2015, 6:49 PM Beau, Yes, multiple regression uses more than one independent, or predictor, variable. This can lead to a much better predictive model than simple linear regression. thanks Lou
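As a small illustration of the multiple regression discussion above, here is a hedged Python sketch that fits Y = b0 + b1*X1 + b2*X2 by least squares on invented data and reports the mean squared error; real course work would use Excel or a statistics package instead, and the variable names are purely illustrative:

# Hedged sketch: multiple linear regression on invented data via least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.uniform(0, 10, n)          # e.g., advertising spend (invented)
x2 = rng.uniform(0, 5, n)           # e.g., number of promotions (invented)
y = 3.0 + 2.0 * x1 + 1.5 * x2 + rng.normal(0, 1.0, n)  # true relationship plus noise

# Design matrix with an intercept column, then solve for b0, b1, b2.
X = np.column_stack([np.ones(n), x1, x2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X @ coefs
residuals = y - fitted
mse = (residuals ** 2).sum() / (n - 2 - 1)   # s^2 with p = 2 predictors
print("estimated b0, b1, b2:", np.round(coefs, 2))
print("estimated s^2 (MSE):", round(mse, 3))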
  • 41. Statistics for Business and Economics, Ch. 10 · · Caution for Regression Analysis The thread has 4 unread messages. created by ERIK SEIDEL Last updated Mar 07, 2015, 8:17 PM 4 · Comment on Mar 04, 2015, 5:34 AM Message collapsed. Message read Caution for Regression Analysis posted by ERIK SEIDEL at Mar 04, 2015, 5:34 AM Last updated Mar 04, 2015, 5:34 AM · When preparing a linear regression analysis, the resulting correlation, or lack of correlation, can be misleading to users. The purpose of a linear regression model is to demonstrate whether an increase or decrease in variable x has a positive or negative impact on variable y. If you see a clear slope that is increasing or decreasing, the logical conclusion is that variable x is directly impacting variable y, or vice versa. However, it is not necessarily true that a change in variable x is the reason for the change in variable y. For example, a study of the impact of drinking more water to someone's level of health may show that drinking more water improves people's health overall. However, it may not be drinking water that is actually improving health. It may be that eating less as a result of drinking more water is actually the cause of the improved health. · Comment on Mar 05, 2015, 9:43 PM Message collapsed. Message read Re: Caution for Regression Analysis posted by ARACHEAL VENTRESS at Mar 05, 2015, 9:43 PM
  • 42. Last updated Mar 05, 2015, 9:43 PM · Erik, thanks for your post and your example; relating things to real-life examples makes the applications clearer. Reading a little further in this week's chapter, there is caution in making inferences based on one variable impacting another. Our text explains that when using the sample correlation coefficient, r, to infer the nature of the relationship between x and y, two caveats exist: (1) a high correlation does not necessarily imply that a causal relationship exists between x and y, only that a linear trend may exist; (2) a low correlation does not necessarily imply that x and y are unrelated, only that x and y are not strongly linearly related. Reference McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics for Business and Economics (11th ed.). Boston, MA: Prentice Hall. · Comment on Mar 07, 2015, 5:30 PM Message collapsed. Message unread Re: Caution for Regression Analysis posted by LOUIS DAILY at Mar 07, 2015, 5:30 PM Last updated Mar 07, 2015, 5:30 PM Aracheal, Yes, some variables are related curvilinearly. thanks Lou · Comment on Mar 07, 2015, 3:11 PM Message collapsed. Message read Re: Caution for Regression Analysis
  • 43. posted by DELILAH HENDERSON at Mar 07, 2015, 3:11 PM Last updated Mar 07, 2015, 3:11 PM · Erik, that is a good example. Do you think it would help if we were careful to have variable x be as specific as variable y in order for the linear regression analysis to be more true or valid to consumers? Maybe something more defined as the impact of drinking more water to improving kidney function instead of improving someone's level of health overall? Improving someone's level of health seems much broader than the variable of drinking more water. Your example was really good on showing that just because two variables are put together doesn't necessarily mean that y naturally follows x. I thought this example was along the lines of our first chapter showing some of the surveys that were either skewed or false because of improperly conducting the survey (if one was conducted at all.) I liked your examples on this post and the one regarding sales and marketing. Thanks for the examples - they help me understand the concept better. d · Comment on Mar 07, 2015, 5:31 PM Message collapsed. Message unread Re: Caution for Regression Analysis posted by LOUIS DAILY at Mar 07, 2015, 5:31 PM Last updated Mar 07, 2015, 5:31 PM Delilah, Finer measurements are always better when constructing a regression line or curve.
  • 44. thanks Lou · Comment on Mar 07, 2015, 5:29 PM Message collapsed. Message unread Re: Caution for Regression Analysis posted by LOUIS DAILY at Mar 07, 2015, 5:29 PM Last updated Mar 07, 2015, 5:29 PM Erik, Yep, "correlation does not imply causation" thanks Lou · Comment on Mar 07, 2015, 8:17 PM Message collapsed. Message unread Caution for Regression Analysis posted by PATRICIA MARCUS at Mar 07, 2015, 8:17 PM Last updated Mar 07, 2015, 8:17 PM · Great post, Erik. I agree that when preparing a linear regression analysis the resulting correlation, or lack of correlation, can be misleading. Regression analysis is a parametric test used for inference from a sample to a population. The goal of regression analysis is to investigate how effective one or more variables are in predicting the value of a dependent variable. The point of relating two variables is to establish that changes in the explanatory variable cause changes in the response variable, but even when a strong association is visible, the conclusion that the association is due to a causal link between the variables is elusive. Correlation and regression only describe linear relationships and are not resistant to outliers. Therefore you should always plot the data before interpreting the regression or correlation.
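A quick, hedged Python illustration of why plotting matters: with an invented, perfectly curvilinear (quadratic) relationship, the linear correlation coefficient is essentially zero even though x and y are strongly related.

# Hedged sketch: a strong curvilinear relationship can produce a near-zero linear correlation.
import numpy as np

x = np.linspace(-5, 5, 101)
y = x ** 2                      # y is perfectly determined by x, but not linearly

r = np.corrcoef(x, y)[0, 1]
print(f"sample correlation r = {r:.3f}")  # close to 0 despite the exact relationship

# Plotting the data (e.g., with matplotlib) would immediately reveal the U-shaped pattern
# that the correlation coefficient alone cannot detect.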
  • 45. · Simple Linear Regression The thread has 11 unread messages. created by LOUIS DAILY Last updated Mar 07, 2015, 5:53 PM 11 · Comment on Mar 02, 2015, 11:35 AM Message collapsed. Message read Simple Linear Regression posted by LOUIS DAILY at Mar 02, 2015, 11:35 AM Last updated Mar 02, 2015, 11:35 AM You may recall your high school algebra about the equation for a straight line: Y = m X + k where m is the slope of the line and k is the y intercept. Let's say we have two variables X and Y. Let's say they represent height (Y) and weight (X). They can be put into Excel as two columns. We can construct a scatter plot with Excel. The scatter plot will show us the shape of this relationship. The taller you are, on the average, the heavier you are, so there will probably be an upward sloping linear scatter plot. The relationship is not perfect, so you won't be able to draw a line that contains all the points. But what is the best fitting line that we can draw? This assumes of course that a line is in fact a good description of the relationship. The technique of Linear Regression answers this question. Linear Regression will compute the line which best describes the relationship. It does this by computing the slope and intercept of this best fitting line. It is the best fit in the sense that it is the line that is closest on the average to all of the data points (the line of least squares). So we have a "model"--the best description of the relationship between the two variables assuming the relationship is linear (like a line). Can you think of any uses for such a model--for this best fitting line and the equation that describes it? · Comment on Mar 03, 2015, 5:35 AM
• 46. Message collapsed. Message unread Re: Simple Linear Regression posted by ERIK SEIDEL at Mar 03, 2015, 5:35 AM Last updated Mar 03, 2015, 5:35 AM · Our health insurance company uses regression analysis to review the correlation between various medical expense-related metrics. Our actuarial informatics department is probably the heaviest user of regression analysis. For example, regression analysis can be used to determine if certain physical traits or health conditions are related to other health conditions. We may pull together the medical record data from all of our health plan members and look at variables such as high blood pressure and weight. We would chart all members that have blood pressure records above a certain level. The blood pressure readings become the x variable. The y variable would then be the weight of these members. If we see a positive correlation line between weight and high blood pressure, then our medical management department would know to pay particular attention to members with a higher BMI if we are trying to encourage healthy blood pressure among our members. · Comment on Mar 03, 2015, 8:20 AM Message collapsed. Message unread Re: Simple Linear Regression posted by LOUIS DAILY at Mar 03, 2015, 8:20 AM Last updated Mar 03, 2015, 8:20 AM Erik, Thanks for that inside look. Yes, there are lots and lots of uses for regression. thanks Lou
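To make the best-fitting-line idea from Lou's post concrete, here is a minimal Python sketch of the height/weight illustration. The numbers are invented for demonstration only; numpy's polyfit returns the least-squares slope m and intercept k of the line Y = mX + k.

```python
# A minimal sketch of the height/weight illustration, using made-up numbers.
# np.polyfit with degree 1 computes the least-squares (best-fitting) line.
import numpy as np

weight = np.array([121, 140, 155, 170, 182, 199, 210])   # X, in pounds (hypothetical)
height = np.array([60, 63, 66, 68, 69, 71, 74])          # Y, in inches (hypothetical)

m, k = np.polyfit(weight, height, deg=1)   # degree-1 fit = simple linear regression
print(f"slope m = {m:.3f}, intercept k = {k:.2f}")

# The fitted line can then be used to predict height for a new weight:
new_weight = 165
predicted_height = m * new_weight + k
print(f"predicted height at {new_weight} lb: {predicted_height:.1f} in")
```

Erik's blood pressure example would work the same way, with the blood pressure readings as the x column and member weight as the y column.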
• 47. · Comment on Mar 03, 2015, 8:38 PM Message collapsed. Message unread Re: Simple Linear Regression posted by KIM DUNLAP at Mar 03, 2015, 8:38 PM Last updated Mar 03, 2015, 8:38 PM · Hi Erik, Thank you for your example. I always have a much better understanding of topics when I hear how people use them in their everyday work. Linear regression can be used for predictive analysis. Linear regression helps us describe and explain the relationship between one dependent variable and at least one independent variable. According to Statistics Solutions, there are three major uses for linear regression. They are (1) causal analysis, (2) forecasting an effect, and (3) trend forecasting. One important thing to note is that regression analysis assumes a causal relationship between one or more independent variables and one dependent variable. https://www.statisticssolutions.com/what-is-linear-regression/ · Comment on Mar 06, 2015, 8:50 AM Message collapsed. Message unread Re: Simple Linear Regression
• 48. posted by LOUIS DAILY at Mar 06, 2015, 8:50 AM Last updated Mar 06, 2015, 8:50 AM Kim, Regression is mathematically closely related to correlation. While we create causal "models" all the time, we still have to be careful about the direction of causality. "Correlation does not imply causation" is still the rule. thanks Lou · Comment on Mar 04, 2015, 12:47 PM Message collapsed. Message unread Re: Simple Linear Regression posted by STEPHANIE RECTOR at Mar 04, 2015, 12:47 PM Last updated Mar 04, 2015, 12:47 PM · Hello Class, According to Wikipedia, "In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variable) denoted X. The case of one
• 49. explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression" (Wikipedia, 2015). This type of analysis is commonly used for practical applications. Linear regression requires finding the best-fitting straight line through the data points. A scatter plot is often a useful tool in determining how closely the two variables relate. After reading Erik's response, this reminded me of how my previous sleep company used linear regression in relation to sleep studies. With the research our tech managers implemented into our sleep questionnaires, I quickly realized the strong correlation between a patient's BMI (body mass index) and a diagnosis of sleep apnea. The higher a patient's BMI was, the more likely they were to be diagnosed with sleep apnea. Airway obstruction was more severe the higher the BMI was. In addition, if untreated with surgery or a CPAP, a patient with sleep apnea had a higher chance of suffering from severe heart conditions in the long term. References: http://en.wikipedia.org/wiki/Linear_regression
• 50. posted by LOUIS DAILY at Mar 06, 2015, 8:53 AM Last updated Mar 06, 2015, 8:53 AM Stephanie, Thanks for sharing that information on your sleep study. That apparently was a correlational study, but correlation is mathematically closely related to regression. So...besides calculating the correlation coefficient, there could be a regression line modeling the relationship. thanks Lou · Comment on Mar 05, 2015, 8:48 AM Message collapsed. Message unread Re: Simple Linear Regression posted by CRYSTAL RAMOS at Mar 05, 2015, 8:48 AM Last updated Mar 05, 2015, 8:48 AM · In statistics, simple linear regression is the least squares estimator of a linear regression model with a single explanatory variable. In other words, simple linear regression fits a straight line through the set of n points in such a way that makes the sum of squared residuals of the model (that is, vertical distances
• 51. between the points of the data set and the fitted line) as small as possible. The adjective simple refers to the fact that this regression is one of the simplest in statistics. The slope of the fitted line is equal to the correlation between y and x multiplied by the ratio of the standard deviation of y to the standard deviation of x. The intercept of the fitted line is such that the line passes through the center of mass (x̄, ȳ) of the data points. · Comment on Mar 07, 2015, 5:53 PM Message collapsed. Message unread Re: Simple Linear Regression posted by LOUIS DAILY at Mar 07, 2015, 5:53 PM Last updated Mar 07, 2015, 5:53 PM Crystal, It is "simple" because there is only one independent variable. thanks Lou
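A quick Python check of the two facts Crystal quotes above, using randomly generated (hypothetical) data: the least-squares slope equals r times the ratio of standard deviations, and the fitted line passes through the point of means.

```python
# Verify, on made-up data, that the least-squares slope equals
# r * (s_y / s_x) and that the fitted line passes through (x-bar, y-bar).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.5 * x + rng.normal(size=200)       # hypothetical linear-plus-noise data

slope, intercept = np.polyfit(x, y, deg=1)

r = np.corrcoef(x, y)[0, 1]
slope_from_r = r * y.std() / x.std()     # slope as correlation times ratio of std devs

print(np.isclose(slope, slope_from_r))                        # True
print(np.isclose(y.mean(), slope * x.mean() + intercept))     # line passes through the means
```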
• 52. · Comment on Mar 05, 2015, 7:11 PM Message collapsed. Message unread Re: Simple Linear Regression posted by JUDEENE WALKER at Mar 05, 2015, 7:11 PM Last updated Mar 05, 2015, 7:11 PM · Hi professor, now that you mention it, I do remember the equation for a straight line, Y = mX + k. This equation expresses the fact that if we plot y against x, and the variables obey a relationship of this form, one will obtain a straight-line graph with slope m and intercept k. Simple linear regression is one of the most commonly used techniques for determining how one variable of interest is affected by changes in another variable. From what I remember and read in our text, simple linear regression is used for three main purposes: a) to predict values of one variable from values of another for which more data are available; b) to describe the linear dependence of one variable on another; and c) to correct for the linear dependence of one variable on another. · Comment on Mar 06, 2015, 7:46 PM Message collapsed. Message unread Re: Simple Linear
  • 53. Regression posted by SAID SHEIK ABDI at Mar 06, 2015, 7:46 PM Last updated Mar 06, 2015, 7:46 PM · When you think of regression, think prediction. A regression uses the historical relationship between an independent and a dependent variable to predict the future values of the dependent variable. Businesses use regression to predict such things as future sales, stock prices, currency exchange rates, and productivity gains resulting from a training program. Types of Regression A regression models the past relationship between variables to predict their future behavior. As an example, imagine that your company wants to understand how past advertising expenditures have related to sales in order to make future decisions about advertising. The dependent variable in this instance is sales and the independent variable is advertising expenditures.Usually, more than one independent variable influences the dependent variable. You can imagine in the above example that sales are influenced by advertising as well as other factors, such as the number of sales representatives and the commission percentage paid to sales representatives. When one independent variable is used in a regression, it is called a simple regression; when two or more independent variables are used, it is called a multiple regression.
• 54. Regression models can be either linear or nonlinear. A linear model assumes the relationships between variables are straight-line relationships, while a nonlinear model assumes the relationships between variables are represented by curved lines. In business you will often see the relationship between the return of an individual stock and the returns of the market modeled as a linear relationship, while the relationship between the price of an item and the demand for it is often modeled as a nonlinear relationship. http://www.dummies.com/how-to/content/how-to-use-a-linear-regression-to-identify-market-.html · Comment on Mar 06, 2015, 7:55 PM Message collapsed. Message unread Re: Simple Linear Regression posted by Jynx Gresser at Mar 06, 2015, 7:55 PM Last updated Mar 06, 2015, 7:55 PM · I do remember this relationship, specifically y = mx + b and its interpretations with a line graph. Foremost, simple linear regression is a unique concept in statistics and is considered to be "the methodology of estimating and using a straight line relationship" (McClave, Benson, & Sincich, 2011, p. 561).
• 55. Furthermore, it describes the relationship between the two variables X [independent] and Y [dependent] (California State University-Long Beach, n. d.). The regression equation is written as y = a + bx + e, with a (alpha) being the constant and b (beta) being the coefficient of x and the slope of the regression line (California State University-Long Beach, n. d.). There are multiple uses of simple linear regression, and they include the effect of pricing on consumer behavior and analyzing the risk involved in business decisions. When companies change prices, they can monitor the amount of product sold at a particular price and then determine the relationship between pricing and purchasing with a linear relationship (Hamel, n. d.). I work in retail and I see this relationship quite frequently; for example, higher prices can reduce the number of clients who purchase the product, and lower prices equate to a higher number of purchases. This can assist with future pricing decisions. Therefore, there is a correlation between these two variables, but it doesn't mean that this is the only cause for this relationship. References California State University-Long Beach. (n. d.). Simple regression. Retrieved from
• 56. http://web.csulb.edu/~msaintg/ppa696/696regs.htm Hamel, G. (n. d.). What are some ways that linear regression can be applied in business settings? The Houston Chronicle. Retrieved from http://smallbusiness.chron.com/ways-linear-regression-can-applied-business-settings-35431.html McClave, J. T., Benson, P. G., & Sincich, T. (2011). Statistics for Business and Economics (11th ed.). Boston, MA: Prentice Hall. Retrieved from the University of Phoenix eBook Collection database. Jynx Gresser · Confidence intervals vs Prediction Intervals The thread has 7 unread messages. created by ARACHEAL VENTRESS Last updated Mar 07, 2015, 5:51 PM 7 · Comment on Mar 05, 2015, 10:03 PM
• 57. Message collapsed. Message unread Confidence intervals vs Prediction Intervals posted by ARACHEAL VENTRESS at Mar 05, 2015, 10:03 PM Last updated Mar 05, 2015, 10:03 PM · Confidence intervals tell you about how well you have determined the mean. Assume that the data really are randomly sampled from a Gaussian distribution. If you do this many times, and calculate a confidence interval of the mean from each sample, you'd expect about 95% of those intervals to include the true value of the population mean. The key point is that the confidence interval tells you about the likely location of the true population parameter. Prediction intervals tell you where you can expect to see the next data point sampled. Assume that the data really are randomly sampled from a Gaussian distribution. Collect a sample of data and calculate a prediction interval. Then sample one more value from the population. If you do this many times, you'd expect that next value to lie within that prediction interval in 95% of the samples. The key point is that the prediction interval tells you about the distribution of values, not the uncertainty in determining the population mean. Prediction intervals must account for both the uncertainty in knowing the value of the population mean and the data scatter. So
• 58. a prediction interval is always wider than a confidence interval. Reference http://www.graphpad.com · Comment on Mar 06, 2015, 5:44 AM Message collapsed. Message unread Re: Confidence intervals vs Prediction Intervals posted by KIM DUNLAP at Mar 06, 2015, 5:44 AM Last updated Mar 06, 2015, 5:44 AM · Hi Aracheal, Thank you for your very clear post on confidence intervals vs. prediction intervals. The other thing I picked up on confidence intervals vs. prediction intervals is that the prediction interval is wider than the confidence interval because of the added uncertainty of predicting a single response or point vs. a mean response. For example, I looked at a study where they were predicting the life expectancy of men who smoked 20 cigarettes a day. The confidence interval for the data was (68.70, 77.61) and the prediction interval for the same data was (55.36, 90.95) - a much wider spread. http://www.real-statistics.com/wp-content/uploads/2012/12/confidence-prediction-intervals-excel.jpg
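The width difference Kim describes can be reproduced in a few lines. This is a minimal Python sketch with randomly generated (hypothetical) data, using the standard t-based formulas: the prediction interval carries an extra sqrt(1 + 1/n) factor for the scatter of a single new observation, so it is always the wider of the two.

```python
# Contrast a 95% confidence interval for the mean with a 95% prediction
# interval for one new observation, on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=72, scale=8, size=30)   # hypothetical measurements

n = sample.size
mean, sd = sample.mean(), sample.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)

ci_half = t * sd / np.sqrt(n)          # uncertainty in the mean only
pi_half = t * sd * np.sqrt(1 + 1 / n)  # mean uncertainty plus data scatter

print(f"95% confidence interval for the mean: ({mean - ci_half:.2f}, {mean + ci_half:.2f})")
print(f"95% prediction interval for one new value: ({mean - pi_half:.2f}, {mean + pi_half:.2f})")
```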
• 59. · Comment on Mar 06, 2015, 7:50 PM Message collapsed. Message unread Re: Confidence intervals vs Prediction Intervals posted by SAID SHEIK ABDI at Mar 06, 2015, 7:50 PM Last updated Mar 06, 2015, 7:50 PM · A prediction interval is an interval associated with a random variable yet to be observed, with a specified probability of the random variable lying within the interval. For example, I might give an 80% interval for the forecast of GDP in 2014. The actual GDP in 2014 should lie within the interval with probability 0.8. Prediction intervals can arise in Bayesian or frequentist statistics. A confidence interval is an interval associated with a parameter and is a frequentist concept. The parameter is assumed to be non-random but unknown, and the confidence interval is computed from data. Because the data are random, the interval is random. A 95% confidence interval will contain the true parameter with probability 0.95. That is, with a large number of repeated samples, 95% of the intervals would contain the true
• 60. parameter. http://www.real-statistics.com/regression/confidence-and-prediction-intervals/ · Comment on Mar 07, 2015, 9:57 AM Message collapsed. Message unread Re: Confidence intervals vs Prediction Intervals posted by ERIK SEIDEL at Mar 07, 2015, 9:57 AM Last updated Mar 07, 2015, 9:57 AM · I can see the importance of calculating and understanding confidence intervals. There are many uncertainties in the business world, especially in today's fast-paced environment where things change almost every day. When management is preparing to make a strategic decision that will impact a company's future, there will always be some uncertainties. The best managers can do when making decisions is base those decisions on the best data available. While management cannot be certain about the outcome of a decision, they at least want to be reasonably confident about the success of the project or strategic change in the company. By using data to determine whether there can be 95% certainty of an outcome, management can be confident that the outcome will most likely have the
• 61. result that is expected. · Comment on Mar 07, 2015, 2:47 PM Message collapsed. Message unread Re: Confidence intervals vs Prediction Intervals posted by DELILAH HENDERSON at Mar 07, 2015, 2:47 PM Last updated Mar 07, 2015, 2:47 PM · Aracheal, that was a good post describing the differences between confidence intervals and prediction intervals. I found an article that shows how to calculate the two and includes two good graphs to help explain the difference. The article, Confidence and prediction intervals for forecasted values, from a website showing how to use Excel, showed that the graph on the left was a confidence interval. It said that with a confidence interval, "This means that there is a 95% probability that the true linear regression line of the population will lie within the confidence interval of the regression line calculated from the sample data. In the graph on the left of Figure 1, a linear regression line is calculated to fit the sample data points. The confidence interval consists of the space between the two curves (dotted lines). Thus there is a 95% probability that the true best-fit line for the
• 62. population lies within the confidence interval (e.g. any of the lines in the figure on the right above)." The graph on the right shows the prediction interval. The article mentioned, "There is also a concept called prediction interval. Here we look at any specific value of x, x0, and find an interval around the predicted value ŷ0 for x0 such that there is a 95% probability that the real value of y (in the population) corresponding to x0 is within this interval (see the graph on the right side of Figure 1)." The two graphs helped me understand the two a little better. Reference: http://biostatistics/regression/confidence-and-prediction-intervals/ · Comment on Mar 07, 2015, 5:51 PM Message collapsed. Message unread Re: Confidence intervals vs Prediction Intervals posted by LOUIS DAILY at Mar 07, 2015, 5:51 PM Last updated Mar 07, 2015, 5:51 PM Great illustration Delilah,
  • 63. thanks Lou · Comment on Mar 07, 2015, 5:47 PM Message collapsed. Message unread Re: Confidence intervals vs Prediction Intervals posted by LOUIS DAILY at Mar 07, 2015, 5:47 PM Last updated Mar 07, 2015, 5:47 PM Aracheal, Good job in distinguishing between confidence intervals and prediction intervals thanks Lou · The least square approach The thread has 5 unread messages. created by ARIEL SMITH Last updated Mar 07, 2015, 5:38 PM 5 · Comment on Mar 04, 2015, 1:02 PM Message collapsed. Message unread The least square approach
• 64. posted by ARIEL SMITH at Mar 04, 2015, 1:02 PM Last updated Mar 04, 2015, 1:02 PM · The least squares approach is a mathematical procedure for finding the best-fitting curve to a given set of points by minimizing the sum of the squares of the offsets ("the residuals") of the points from the curve. The sum of the squares of the offsets is used instead of the offset absolute values because this allows the residuals to be treated as a continuous differentiable quantity. However, because squares of the offsets are used, outlying points can have a disproportionate effect on the fit, a property which may or may not be desirable depending on the problem at hand. The linear least squares fitting technique is the simplest and most commonly applied form of linear regression and provides a solution to the problem of finding the best-fitting straight line through a set of points. In fact, if the functional relationship between the two quantities being graphed is known to within additive or multiplicative constants, it is common practice to transform the data in such a way that the resulting line is a straight line, say by plotting T vs. √L instead of T vs. L in the case of analyzing the period T of a pendulum as a function of its length L. Wolfram MathWorld. (2013). Retrieved from http://mathworld.wolfram.com/LeastSquaresFitting.html
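As a small illustration of the idea Ariel describes, here is a Python sketch with made-up points (not from any source): the slope and intercept come straight from the closed-form least-squares formulas, and nudging the line in any direction only increases the sum of squared vertical offsets.

```python
# Fit a line by least squares directly from the normal-equation formulas and
# confirm that the result minimizes the sum of squared residuals.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])        # hypothetical observations

# Closed-form least-squares estimates
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()

def sse(slope, intercept):
    """Sum of squared residuals (vertical offsets) for a candidate line."""
    return np.sum((y - (intercept + slope * x)) ** 2)

print(f"slope = {b:.3f}, intercept = {a:.3f}, SSE = {sse(b, a):.4f}")
# Nudging the line away from the least-squares solution only increases the SSE:
print(sse(b + 0.1, a) > sse(b, a), sse(b, a - 0.1) > sse(b, a))   # True True
```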
• 65. · Comment on Mar 05, 2015, 7:40 PM Message collapsed. Message unread Re: The least square approach posted by JUDEENE WALKER at Mar 05, 2015, 7:40 PM Last updated Mar 05, 2015, 7:40 PM · After further reading I found that linear regression was the first type of regression analysis to be studied rigorously and used extensively in practical applications. This is because models that depend linearly on their unknown parameters are easier to fit than models that are nonlinearly related to their parameters. When two variables are highly correlated, the points on the scatter diagram more or less follow a diagonal line. What the regression method does is find the line that minimizes the average distance in the vertical direction from the line to all points. If the goal is prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of x and y values. · Comment on Mar 07, 2015, 5:38 PM Message collapsed. Message unread Re: The least square
  • 66. approach posted by LOUIS DAILY at Mar 07, 2015, 5:38 PM Last updated Mar 07, 2015, 5:38 PM Judeene Good. Just to be exact, the least squares technique minimizes the squared distances, thus "least squares". thanks Lou · Comment on Mar 06, 2015, 8:37 PM Message collapsed. Message unread Re: The least square approach posted by CRYSTAL RAMOS at Mar 06, 2015, 8:37 PM Last updated Mar 06, 2015, 8:37 PM · A statistical technique to determine the line of best fit for a model. The least squares method is specified by an equation with certain parameters to observed data. This method is extensively used in regression analysis and estimation. In the most common application - linear or ordinary least squares - a straight line is sought to be fitted through a number
• 67. of points to minimize the sum of the squares of the distances (hence the name "least squares") from the points to this line of best fit. In contrast to a linear problem, a non-linear least squares problem has no closed solution and is generally solved by iteration. The earliest description of the least squares method was by Carl Friedrich Gauss in 1795. · Comment on Mar 07, 2015, 2:57 PM Message collapsed. Message unread Re: The least square approach posted by DELILAH HENDERSON at Mar 07, 2015, 2:57 PM Last updated Mar 07, 2015, 2:57 PM · Thanks, Ariel, for your post. I looked up some more information on the least squares approach and found an article, "Least Squares Method," that explained it a little bit more. The article said "A statistical technique to determine the line of best fit for a model. The least squares method is specified by an equation with certain parameters to observed data. This method is extensively used in regression analysis and estimation." The article also gave a little more information, "In the most common application - linear or ordinary least squares - a straight line is sought to be fitted through a number of points to minimize the
• 68. sum of the squares of the distances (hence the name "least squares") from the points to this line of best fit. In contrast to a linear problem, a non-linear least squares problem has no closed solution and is generally solved by iteration. The earliest description of the least squares method was by Carl Friedrich Gauss in 1795." Reference: http://www.investopedia.com/terms/l/least-squares-method.asp · Linear Regression for Marketing The thread has 4 unread messages. created by ERIK SEIDEL Last updated Mar 07, 2015, 5:34 PM 4 · Comment on Mar 06, 2015, 6:34 AM Message collapsed. Message unread Linear Regression for Marketing posted by ERIK SEIDEL at Mar 06, 2015, 6:34 AM Last updated Mar 06, 2015, 6:34 AM · Due to many changes and financial pressures in the health
• 69. insurance industry, our company (and I'm sure others as well) is taking a harder look at administrative expenses and determining where budgets can be reduced. Without being administratively efficient, companies will go out of business. We're seeing more and more consolidations of health care organizations because of this. One area our company has focused on recently is marketing. It is often very difficult to determine the impact of increased or decreased advertising spending on new sales and retention. A linear regression model may help with this. If you have a few years of historical data, you can use marketing spending as the x variable and new sales as the y variable. You could do a similar analysis with marketing spending again as the x variable and retention of current customers as the y variable. · Comment on Mar 07, 2015, 3:03 PM Message collapsed. Message unread Re: Linear Regression for Marketing posted by DELILAH HENDERSON at Mar 07, 2015, 3:03 PM Last updated Mar 07, 2015, 3:03 PM · Erik, that is a good example. I was trying to apply linear regression where I work since I work at a university. I think we have the same relationship as your example, the health insurance
• 70. industry. I was thinking that instead of using marketing vs. retention of current customers, in our case it could be the variety of classes offered vs. student enrollment. In our industry, the type of classes or degrees offered would have a direct relation to student enrollment. I know we have studied this in depth, because the types of classes and degrees offered have changed over the years based on student enrollment in those classes. One definitely affects the other. d · Comment on Mar 07, 2015, 5:34 PM Message collapsed. Message unread Re: Linear Regression for Marketing posted by LOUIS DAILY at Mar 07, 2015, 5:34 PM Last updated Mar 07, 2015, 5:34 PM Delilah, What university do you work for? thanks Lou
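Erik's marketing idea above translates directly into a few lines of Python. The figures below are entirely hypothetical, used only to show the mechanics: regress new sales on marketing spend over a few years of history, then use the fitted line to gauge the expected payoff of next year's budget.

```python
# Regress new sales on marketing spend (hypothetical annual figures) and use
# the fitted line to project sales at a planned budget level.
import numpy as np

marketing_spend = np.array([1.2, 1.5, 1.8, 2.0, 2.4, 2.7])   # $M per year (hypothetical)
new_sales = np.array([10.1, 11.0, 12.2, 12.5, 13.9, 14.8])   # $M per year (hypothetical)

slope, intercept = np.polyfit(marketing_spend, new_sales, deg=1)
print(f"Each extra $1M of marketing spend is associated with about ${slope:.2f}M in new sales")

planned_budget = 3.0
print(f"Projected new sales at a ${planned_budget}M budget: "
      f"${slope * planned_budget + intercept:.1f}M")
# Remember Lou's caveat: this is an association, not necessarily causation.
```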
  • 71. · Comment on Mar 07, 2015, 5:33 PM Message collapsed. Message unread Re: Linear Regression for Marketing posted by LOUIS DAILY at Mar 07, 2015, 5:33 PM Last updated Mar 07, 2015, 5:33 PM Erik, Sure. Good example of time series analysis in business. thanks Lou · Sales Price and Linear Regression The thread has 5 unread messages. created by ERIK SEIDEL Last updated Mar 07, 2015, 2:54 PM 5 · Comment on Mar 05, 2015, 5:06 AM Message collapsed. Message unread Sales Price and Linear Regression posted by ERIK SEIDEL at Mar 05, 2015, 5:06 AM Last updated Mar 05, 2015, 5:06 AM
• 72. · As I think about linear regression further, I understand more and more how many applications it can have. Linear regression models can be a great tool for any company's sales department. For sales, linear regression can be used in tandem with a break-even analysis to determine the appropriate price to charge for a product. If a company gradually changes the price for a product but the linear regression line is flat or even sloping slightly upward, this means that people are continuing to purchase the product regardless of the price increase. On the other hand, if the slope is moving downward, the company may need to pull back on its price increases. This can help a company determine just how much it may be able to increase its product prices without losing sales volume and market share. · Comment on Mar 05, 2015, 8:44 AM Message collapsed. Message unread Re: Sales Price and Linear Regression posted by CRYSTAL RAMOS at Mar 05, 2015, 8:44 AM Last updated Mar 05, 2015, 8:44 AM · In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variable) denoted X.
• 73. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression. (This term should be distinguished from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.) In linear regression, data are modeled using linear predictor functions, and unknown model parameters are estimated from the data. Such models are called linear models. Most commonly, linear regression refers to a model in which the conditional mean of y given the value of X is an affine function of X. Less commonly, linear regression could refer to a model in which the median, or some other quantile of the conditional distribution of y given X is expressed as a linear function of X. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of y given X, rather than on the joint probability distribution of y and X, which is the domain of multivariate analysis. · Comment on Mar 05, 2015, 9:19 PM Message collapsed. Message unread Re: Sales Price and Linear Regression posted by ARACHEAL VENTRESS at Mar 05, 2015, 9:19 PM Last updated Mar 05, 2015, 9:19 PM
• 74. · Thanks Erik for your example of how linear regression is used by businesses such as the retail industry when making strategic moves. I conducted more research about linear regression and business and found that linear regression can also be used to analyze risk. The example that I found discussed how health insurance companies might conduct a linear regression plotting the number of claims per customer against age and discover that older customers tend to make more health insurance claims. The results of such an analysis might guide important business decisions made to account for risk. Reference http://smallbusiness.chron.com/ways-linear-regression- · Comment on Mar 07, 2015, 9:11 AM Message collapsed. Message unread Re: Sales Price and Linear Regression posted by Pierre at Mar 07, 2015, 9:11 AM Last updated Mar 07, 2015, 9:11 AM · Great post Erik,
• 75. Linear regressions are useful tools when it comes to predicting sales and the future growth of sales by analyzing the independent and dependent variables that affect the regression. There are two types of linear regressions. A simple regression has only one independent variable, while a multiple regression uses two or more independent variables. Using multiple independent variables makes the relationship between the independent variables and the dependent variable in a multiple regression more complicated than the relationship between the two variables in a simple regression. According to the Columbia University Business School, "the most basic type of regression is that of simple linear regression. A simple linear regression uses only one independent variable, and it describes the relationship between the independent variable and dependent variable as a straight line." Reference: Statistical Sampling and Regression: Simple Linear Regression. (n.d.). Retrieved March 7, 2015, from https://www0.gsb.columbia.edu/premba/analytical/s7/s7_6.cfm
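To show what the multiple-regression case Pierre describes looks like in practice, here is a brief Python sketch with entirely hypothetical figures: sales modeled on advertising spend and the number of sales representatives at the same time, fitted by ordinary least squares.

```python
# Multiple regression via ordinary least squares: two independent variables
# (advertising spend and sales-rep headcount) predicting sales. Data are made up.
import numpy as np

advertising = np.array([1.0, 1.2, 1.5, 1.7, 2.0, 2.3, 2.5])   # $M (hypothetical)
sales_reps = np.array([10, 12, 12, 14, 15, 15, 17])           # headcount (hypothetical)
sales = np.array([8.2, 9.0, 9.6, 10.8, 11.5, 12.1, 13.0])     # $M (hypothetical)

# Design matrix: an intercept column plus the two independent variables
X = np.column_stack([np.ones_like(advertising), advertising, sales_reps])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

intercept, b_advertising, b_reps = coef
print(f"intercept = {intercept:.2f}, "
      f"advertising coefficient = {b_advertising:.2f}, "
      f"sales-rep coefficient = {b_reps:.2f}")
```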
• 76. · Comment on Mar 07, 2015, 2:54 PM Message collapsed. Message unread Re: Sales Price and Linear Regression posted by DELILAH HENDERSON at Mar 07, 2015, 2:54 PM Last updated Mar 07, 2015, 2:54 PM · Thanks, Erik. That was a good post. I found an article, Introduction to Linear Regression, that said that "In simple linear regression, we predict scores on one variable from the scores on a second variable. The variable we are predicting is called the criterion variable and is referred to as Y. The variable we are basing our predictions on is called the predictor variable and is referred to as X. When there is only one predictor variable, the prediction method is called simple regression. In simple linear regression, the topic of this section, the predictions of Y when plotted as a function of X form a straight line." I believe this is what you were stating in your example on sales. That one variable can affect a second variable. The article also mentioned that "Linear regression consists of finding the best-fitting straight line through the points. The best-fitting line is called a regression line." Reference: