marketing research & applications on SPSS
Transcript of "marketing research & applications on SPSS"

  1. Presented to: Prof. Rahul Dalvi
  2. Presented by: Anshu Tiwari, Roll No. 2012098
  3. Marketing researchers often need to answer questions about a single variable. For example: Are the users of a brand characterized by brand loyalty? How familiar are consumers with the new product offering? What is the mean familiarity rating? What is the income distribution of brand users, and is it skewed towards the low-income bracket? The answers to all such questions can be determined by examining frequency distributions.
  4. Frequency distribution of the familiarity ratings:

     Value Label       Value   Frequency (N)       %   Valid %   Cumulative %
     Very Unfamiliar       1               0     0.0       0.0            0.0
                           2               2     6.7       6.9            6.9
                           3               6    20.0      20.7           27.6
                           4               6    20.0      20.7           48.3
                           5               3    10.0      10.3           58.6
                           6               8    26.7      27.6           86.2
     Very Familiar         7               4    13.3      13.8          100.0
     Missing               9               1     3.3
     Total                                30   100.0     100.0
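A minimal pandas sketch (not SPSS output) that reproduces the table above from the raw ratings; the missing-value code 9 follows the table, while the variable names are illustrative assumptions.

    import pandas as pd

    # Familiarity ratings reconstructed from the frequency table above;
    # the value 9 is the missing-value code shown on the slide.
    ratings = pd.Series([2]*2 + [3]*6 + [4]*6 + [5]*3 + [6]*8 + [7]*4 + [9])

    valid = ratings[ratings != 9]                    # exclude missing cases

    freq = ratings.value_counts().sort_index()       # frequency (N), incl. missing
    pct = (100 * freq / len(ratings)).round(1)       # % of all 30 cases
    valid_pct = (100 * valid.value_counts().sort_index() / len(valid)).round(1)
    cum_pct = valid_pct.cumsum().round(1)            # cumulative % over valid cases

    table = pd.DataFrame({"Frequency (N)": freq, "%": pct,
                          "Valid %": valid_pct, "Cumulative %": cum_pct})
    print(table)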
  5. The most commonly used statistics associated with frequencies are measures of location (mean, mode and median), measures of variability (range, interquartile range, standard deviation and coefficient of variation) and measures of shape (skewness and kurtosis). Mean: X̄ = (∑ Xi) / n, where Xi = observed values of the variable X and n = number of observations (sample size). Here X̄ = (2*2 + 6*3 + 6*4 + 3*5 + 8*6 + 4*7) / 29 = (4 + 18 + 24 + 15 + 48 + 28) / 29 = 137/29 = 4.724. Mode: the mode is the value that occurs most frequently and represents the highest peak of the distribution; the mode here is 6.
  6. Median: the median of a sample is the middle value when the data are arranged in ascending or descending order, so the median is an appropriate measure of central tendency for ordinal data. The median here is 5. Thus mean = 4.724, mode = 6 and median = 5; the three measures of central tendency describe the data in different ways, which raises the question of which measure should be used. Measures of variability: Range = X largest - X smallest = 7 - 2 = 5. Interquartile range = difference between the 75th and 25th percentiles = 6 - 3 = 3. Variance and standard deviation: the variance can never be negative, and the standard deviation is the square root of the variance, s = √( ∑ (Xi - X̄)² / (n - 1) ). Here s² = {2*(2-4.724)² + 6*(3-4.724)² + 6*(4-4.724)² + 3*(5-4.724)² + 8*(6-4.724)² + 4*(7-4.724)²} / (29 - 1) = (14.840 + 17.833 + 3.145 + 0.229 + 13.025 + 20.721) / 28 = 69.793/28 = 2.493, so s = √2.493 = 1.579.
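As a cross-check of the arithmetic on slides 5 and 6, here is a short Python sketch (NumPy, not SPSS) over the 29 valid ratings implied by the frequency table; the array x is an assumption reconstructed from those frequencies.

    import numpy as np

    # 29 valid ratings reconstructed from the frequency table on slide 4.
    x = np.array([2]*2 + [3]*6 + [4]*6 + [5]*3 + [6]*8 + [7]*4)

    print("mean   ", round(x.mean(), 3))          # 4.724
    print("mode   ", np.bincount(x).argmax())     # 6
    print("median ", np.median(x))                # 5.0
    print("range  ", x.max() - x.min())           # 7 - 2 = 5
    q75, q25 = np.percentile(x, [75, 25])
    print("IQR    ", q75 - q25)                   # 6 - 3 = 3
    print("var    ", round(x.var(ddof=1), 3))     # 2.493 (n - 1 in the denominator)
    print("std dev", round(x.std(ddof=1), 3))     # 1.579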
  7. Skewness: a distribution can be either symmetric or skewed. In a symmetric distribution, the values on either side of the center of the distribution are the same, and the mean, mode and median are equal. In a skewed distribution the positive and negative deviations from the mean are unequal. Kurtosis: kurtosis is a measure of the relative peakedness or flatness of the curve defined by the frequency distribution. The kurtosis of a normal distribution is zero. If the kurtosis is positive, the distribution is more peaked than a normal distribution; a negative value means that the distribution is flatter than a normal distribution.
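A small illustrative sketch of the two shape measures using SciPy, on the same reconstructed ratings; fisher=True (the default) is what makes a normal distribution score a kurtosis of zero, as stated above.

    import numpy as np
    from scipy.stats import skew, kurtosis

    x = np.array([2]*2 + [3]*6 + [4]*6 + [5]*3 + [6]*8 + [7]*4)

    # Positive skewness: tail to the right; negative: tail to the left.
    print("skewness", round(skew(x), 3))
    # Fisher definition: 0 for a normal distribution, > 0 more peaked, < 0 flatter.
    print("kurtosis", round(kurtosis(x, fisher=True), 3))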
  8. The concepts of the sampling distribution, the standard error of the mean or of the proportion, and the confidence interval are all relevant to hypothesis testing and should be reviewed. Examples of hypotheses generated in marketing research: the department store is being patronized by more than 10% of the households; the heavy and light users of a brand differ in terms of psychographic characteristics; familiarity with a restaurant results in a greater preference for that restaurant; one hotel has a more upscale image than its close competitor.
  9. The steps in hypothesis testing: (1) formulate the null hypothesis H0 and the alternative hypothesis H1; (2) select an appropriate statistical technique and the corresponding test statistic; (3) choose the level of significance, α; (4) determine the sample size and collect the data; (5) determine the probability associated with the test statistic (or the critical value of the test statistic); (6) compare the probability with the level of significance (or determine whether the calculated test statistic falls into the rejection region); (7) make the statistical decision to reject or not reject the null hypothesis; (8) express the statistical decision in terms of the marketing research problem.
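The steps above can be illustrated with a one-sample t test in SciPy; this is only a sketch, and the benchmark value of 4.0 and the 0.05 significance level are hypothetical choices, not taken from the slides.

    import numpy as np
    from scipy import stats

    x = np.array([2]*2 + [3]*6 + [4]*6 + [5]*3 + [6]*8 + [7]*4)

    # (1) H0: population mean <= 4.0, H1: population mean > 4.0 (hypothetical benchmark)
    # (2) technique: one-sample t test; (3) level of significance
    alpha = 0.05
    # (4)-(5) data above; compute the test statistic and its probability
    t_stat, p_value = stats.ttest_1samp(x, popmean=4.0, alternative="greater")
    # (6)-(8) compare with alpha and state the decision
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    print("Reject H0" if p_value < alpha else "Do not reject H0")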
  10. Two variables: cross-tabulation with two variables is also known as bivariate cross-tabulation. Consider again the cross-tabulation of Internet usage with sex (male and female). Three variables: the third variable clarifies the initial association (or lack of it) observed between the two variables. Reveal suppressed association: here the researcher suspected that the desire to travel abroad may be influenced by age; a cross-tabulation of the two variables alone, however, showed no association, which emerged only once the third variable was introduced. General comments on cross-tabulation: more than three variables can be cross-tabulated, but the interpretation is quite complex. Also, because the number of cells increases multiplicatively, maintaining an adequate number of respondents or cases in each cell can be problematic.
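A bivariate cross-tabulation like the Internet usage by sex example can be produced with pandas.crosstab; the respondent-level data below is entirely made up for illustration.

    import pandas as pd

    # Hypothetical respondent-level data (illustrative values only).
    df = pd.DataFrame({
        "sex":   ["Male", "Male", "Female", "Female", "Male", "Female", "Male", "Female"],
        "usage": ["Heavy", "Light", "Light", "Heavy", "Heavy", "Light", "Heavy", "Light"],
    })

    # Two-variable (bivariate) cross-tabulation with row/column totals.
    print(pd.crosstab(df["usage"], df["sex"], margins=True))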
  11. The statistical significance of the observed association is commonly measured by the chi-square statistic. The strength of association, or degree of association, is important from a practical or substantive perspective. The strength of the association can be measured by the phi correlation coefficient, the contingency coefficient, Cramer's V and the lambda coefficient. Each of these coefficients is explained on the following slides.
  12. The chi-square statistic (χ²) is used to test the statistical significance of the observed association in a cross-tabulation. The null hypothesis, H0, is that there is no association between the variables. The value of chi-square is calculated as χ² = ∑ (fo - fe)² / fe, where fo is the observed frequency and fe is the expected frequency in each cell. The chi-square distribution is a skewed distribution whose shape depends solely on the number of degrees of freedom; as the number of degrees of freedom increases, the chi-square distribution becomes more symmetrical.
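A sketch of the chi-square test on a cross-tabulation using scipy.stats.chi2_contingency; the 2×2 counts are hypothetical, and correction=False is passed so the statistic matches the formula χ² = ∑ (fo - fe)² / fe above.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 cross-tabulation (rows: usage, columns: sex).
    observed = np.array([[5, 10],
                         [10, 5]])

    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print("chi-square =", round(chi2, 3), " df =", dof, " p-value =", round(p, 4))
    print("expected frequencies:\n", expected)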
  13. The phi coefficient (φ) is used as a measure of the strength of association in the special case of a table with two rows and two columns (a 2×2 table). The phi coefficient is proportional to the square root of the chi-square statistic. For a sample of size n, it is calculated as φ = √(χ² / n).
  14. While the phi coefficient is specific to a 2×2 table, the contingency coefficient (C) can be used to assess the strength of association in a table of any size: C = √( χ² / (χ² + n) ). The contingency coefficient varies between 0 and 1, where a value of 0 occurs in the case of no association (i.e., the variables are statistically independent), but the maximum value of 1 is never achieved. A value of C close to 0 indicates that the association is not very strong. Another statistic that can be calculated for any table is Cramer's V.
  15. Cramer's V is a modified version of the phi coefficient, φ, and is used in tables larger than 2×2. When phi is calculated for a table larger than 2×2, it has no upper limit. Cramer's V is obtained by adjusting phi for either the number of rows or the number of columns in the table, based on which of the two is smaller. For a table with r rows and c columns, the relationship between Cramer's V and the phi coefficient is V = √( φ² / min(r-1, c-1) ), or equivalently V = √( (χ²/n) / min(r-1, c-1) ). Another statistic commonly estimated is the lambda coefficient.
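The three strength-of-association measures on slides 13 to 15 follow directly from the chi-square value; a minimal sketch, reusing the same hypothetical 2×2 table as before:

    import numpy as np
    from scipy.stats import chi2_contingency

    observed = np.array([[5, 10],          # hypothetical 2x2 table
                         [10, 5]])
    n = observed.sum()
    r, c = observed.shape

    chi2, _, _, _ = chi2_contingency(observed, correction=False)

    phi = np.sqrt(chi2 / n)                       # phi: meaningful for 2x2 tables
    C = np.sqrt(chi2 / (chi2 + n))                # contingency coefficient, 0 <= C < 1
    V = np.sqrt((chi2 / n) / min(r - 1, c - 1))   # Cramer's V for any r x c table

    print(f"phi = {phi:.3f}, C = {C:.3f}, Cramer's V = {V:.3f}")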
  16. Lambda assumes that the variables are measured on a nominal scale. An asymmetric lambda measures the percentage improvement in predicting the value of the dependent variable, given the value of the independent variable. Lambda also varies between 0 and 1: a value of 0 means no improvement in prediction, while a value of 1 indicates that the prediction can be made without error. The latter happens when each category of the independent variable is associated with a single category of the dependent variable.
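SciPy has no built-in lambda coefficient, so here is a small hand-rolled sketch of the asymmetric (Goodman-Kruskal) lambda described above; asymmetric_lambda is a hypothetical helper name and the table is again illustrative.

    import numpy as np

    def asymmetric_lambda(observed):
        """Percentage improvement in predicting the column (dependent) variable
        once the row (independent) variable is known."""
        observed = np.asarray(observed, dtype=float)
        n = observed.sum()
        best_without = observed.sum(axis=0).max()   # best guess ignoring the rows
        best_with = observed.max(axis=1).sum()      # best guess within each row
        return (best_with - best_without) / (n - best_without)

    # Hypothetical 2x2 table (rows: independent, columns: dependent variable).
    print(round(asymmetric_lambda([[5, 10],
                                   [10, 5]]), 3))   # 0.333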
  17. The previous section considered hypothesis testing related to associations; we now focus on hypothesis testing related to differences. Hypothesis-testing procedures can be broadly classified as parametric or nonparametric, based on the measurement scale of the variables involved. Parametric tests assume that the variables of interest are measured on at least an interval scale. Nonparametric tests assume that the variables are measured on a nominal or ordinal scale. These tests are further classified based on whether one, two, or more samples are involved. The number of samples is determined by how the data are treated for the purpose of analysis, not by how the data were collected.
  18. Parametric tests (metric data):
      One sample: t test, z test
      Two samples, independent samples: two-group t test, z test
      Two samples, paired samples: paired t test
  19. Nonparametric tests (nonmetric data):
      One sample: chi-square, K-S, runs, binomial
      Two samples, independent samples: chi-square, Mann-Whitney, median, K-S
      Two samples, paired samples: sign, Wilcoxon, McNemar, chi-square
  20. Parametric tests provide inferences for making statements about the means of parent populations. A t test is commonly used for this purpose. The test is based on the Student's t statistic. The t statistic assumes that the variable is normally distributed, that the mean is known (or assumed to be known), and that the population variance is estimated from the sample. The test can be applied to one or two samples, computing the mean and standard deviation for each sample.
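A sketch of the three parametric tests from the classification on slide 18, using SciPy; the groups a and b are simulated metric data, and the benchmark mean of 4.0 is hypothetical.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(5.0, 1.5, size=30)   # simulated metric data, group A
    b = rng.normal(4.2, 1.5, size=30)   # simulated metric data, group B

    # One-sample t test: is the mean of group A different from 4.0?
    print(stats.ttest_1samp(a, popmean=4.0))

    # Two-group (independent-samples) t test: do the two group means differ?
    print(stats.ttest_ind(a, b))

    # Paired-samples t test: the same respondents measured twice.
    print(stats.ttest_rel(a, b))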
  21. Nonparametric tests are used when the independent variables are nonmetric. Like parametric tests, nonparametric tests are available for testing variables from one sample, two independent samples, or two related samples.
  22. An important nonparametric test for examining differences in the location of two populations based on paired observations is the Wilcoxon matched-pairs signed-rank test. The test analyzes the differences between the paired observations, taking into account the magnitude of the differences. It computes the differences between the pairs of variables and ranks the absolute differences. The next step is to sum the positive and negative ranks.
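A minimal sketch of the Wilcoxon matched-pairs signed-rank test with scipy.stats.wilcoxon; the paired before/after ratings are made up for illustration.

    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical paired ratings for the same respondents (before vs. after).
    before = np.array([4, 3, 5, 2, 6, 4, 5, 3, 4, 5])
    after  = np.array([5, 4, 4, 3, 7, 5, 6, 3, 5, 7])

    # The test ranks the absolute paired differences and compares the sums
    # of the positive and negative ranks, as described above.
    stat, p = wilcoxon(before, after)
    print("W =", stat, " p =", round(p, 4))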
