
Research Methodology III

The last of the series... but many things still remain to discuss!



  1. RESEARCH METHODOLOGY PART III. Dr. Anwar Hasan Siddiqui, Senior Resident, Department of Physiology, JNMC, AMU, Aligarh
  2. Research Process: I. Define the research problem; II. Review the literature (review concepts and theories; review previous research findings); III. Formulate hypotheses; IV. Design the research (including sample design); V. Collect data (execution); VI. Analyse data (test hypotheses); VII. Interpret and report.
  3. Hypothesis Testing BASIC CONCEPTS CONCERNING TESTING OF HYPOTHESES (a) Null hypothesis and alternative hypothesis: • In the context of statistical analysis, we often talk about the null hypothesis and the alternative hypothesis. • According to Fisher, a hypothesis which is tested for possible rejection under the assumption that it is true is called the 'null hypothesis'. • If we compare drug A with drug B with regard to efficacy, and we proceed on the assumption that both drugs are equally efficacious, then this assumption is termed the null hypothesis. • Any rival hypothesis is called an 'alternative hypothesis'.
  4. Hypothesis Testing • The null hypothesis is generally symbolised as H0 and the alternative hypothesis as Ha. • Both hypotheses are chosen before the sample is drawn. • The alternative hypothesis is usually the one the researcher wishes to prove, and the null hypothesis the one he or she wishes to disprove. • Thus the null hypothesis represents the hypothesis we are trying to reject, while the alternative hypothesis represents all other possibilities.
  5. Hypothesis Testing (b) Type I error and Type II error: • In testing hypotheses, there are basically two types of error we can make. • We may reject H0 when H0 is true, which is a Type I error; or we may accept H0 when H0 is in fact not true, which is a Type II error. • In other words, a Type I error means rejecting a hypothesis which should have been accepted, and a Type II error means accepting a hypothesis which should have been rejected. • The probability of a Type I error is denoted by α (alpha), also called the level of significance of the test; the probability of a Type II error is denoted by β (beta).
  6. Hypothesis Testing Type I error • The probability of a Type I error is usually fixed in advance and is understood as the level of significance of the test. • If the Type I error rate is fixed at 5 per cent, there are about 5 chances in 100 that we will reject H0 when H0 is true.
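The claim on this slide can be checked by simulation. The sketch below (my addition, not part of the deck) repeatedly draws samples from a population for which H0 really is true and runs a two-sided z-test at α = 0.05; by construction, every rejection is a Type I error, and the observed rejection rate should come out close to 5 per cent:

```python
import random
from statistics import NormalDist, mean

# Simulate many experiments in which H0 is actually true (population mean 0,
# known sigma 1) and count how often a two-sided z-test at alpha = 0.05
# rejects H0.  Each such rejection is, by construction, a Type I error.
random.seed(1)
N_TRIALS, N, SIGMA, ALPHA = 10_000, 30, 1.0, 0.05
z_crit = NormalDist().inv_cdf(1 - ALPHA / 2)   # about 1.96

rejections = 0
for _ in range(N_TRIALS):
    sample = [random.gauss(0.0, SIGMA) for _ in range(N)]
    z = mean(sample) / (SIGMA / N ** 0.5)
    if abs(z) > z_crit:
        rejections += 1

type1_rate = rejections / N_TRIALS
print(f"Observed Type I error rate: {type1_rate:.3f}")  # close to 0.05
```

The sample size and seed are arbitrary; the point is only that the long-run false-rejection rate tracks the chosen α.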
  7. Hypothesis Testing • The p-value is the probability, computed assuming H0 is true, of obtaining a result at least as extreme as the one observed. • When a hypothesis test yields a p-value smaller than the significance level α, the result is called statistically significant. • The conventional range for α is between 0.01 and 0.10. • Although values such as 0.10, 0.05 and 0.01 are commonly used for α, there is no overriding mathematical theorem that says these are the only levels of significance we may use.
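For a z statistic, the two-sided p-value is just twice the upper-tail area of the standard normal distribution. A minimal sketch (mine, using the standard library's `NormalDist`):

```python
from statistics import NormalDist

# Two-sided p-value for an observed z statistic: the probability, assuming H0
# is true, of seeing a value at least this far from zero in either direction.
def p_value_two_sided(z: float) -> float:
    return 2 * (1 - NormalDist().cdf(abs(z)))

print(p_value_two_sided(1.96))  # about 0.05, the conventional boundary
print(p_value_two_sided(2.58))  # about 0.01
```

This also shows where the familiar critical values 1.96 and 2.58 come from: they are the z values whose two-sided p-values are 0.05 and 0.01.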
  8. Hypothesis Testing Type II error • A Type II error means accepting a hypothesis which should have been rejected; its probability is denoted by the Greek letter β. • The probability of avoiding a Type II error is called the power of the hypothesis test, given by 1 − β. • An attempt to decrease one type of error is in general accompanied by an increase in the other; the only way to reduce both types of error is to increase the sample size, which may or may not be possible.
  9. Hypothesis Testing
  10. Hypothesis Testing
  11. Hypothesis Testing Deciding what significance level to use: • The choice of significance level should be based on the consequences of Type I and Type II errors. • If the consequences of a Type I error are serious, a very small significance level is appropriate. • Example 1: two drugs are being compared for effectiveness in treating the same condition. Drug 1 is very affordable, but Drug 2 is extremely expensive. The null hypothesis is "both drugs are equally effective" and the alternative is "Drug 2 is more effective than Drug 1." Here a Type I error would mean deciding that Drug 2 is more effective when it is in fact no better than Drug 1, at much greater cost to the patient. That would be undesirable, so a small significance level is warranted.
  12. Hypothesis Testing • If the consequences of a Type I error are not very serious (and especially if a Type II error has serious consequences), a larger significance level is appropriate. • Example 2: two drugs are known to be equally effective and equally affordable for a certain condition, but there is some suspicion that Drug 2 causes a serious side effect in some patients, whereas Drug 1 has no reports of that side effect. • The null hypothesis is "the incidence of the side effect is the same for both drugs" and the alternative is "the incidence of the side effect is greater for Drug 2 than for Drug 1." • Falsely rejecting the null hypothesis when it is true (a Type I error) would have no great consequences for the consumer, but failing to reject it when the alternative is true (a Type II error) would mean deciding that Drug 2 is no more harmful than Drug 1 when it is in fact more harmful, which could have serious consequences from a public-health standpoint. So a larger significance level is appropriate.
  13. Hypothesis Testing Power: • Power is the probability of correctly rejecting a false null hypothesis: power = 1 − β. • Since power is the probability of a correct rejection, it is in our interest to increase it. • There are several ways to increase power: increase the sample size; increase the alpha level (you have a better chance of rejecting the null hypothesis at the 5% level of significance than at the 1% level); or consider an alternative that is farther away from the null hypothesis.
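The three levers listed above can be made concrete with the standard power formula for a one-sided, one-sample z-test. The sketch below is illustrative (my numbers, not the deck's); `effect` is the true shift of the mean, expressed in units of σ:

```python
from statistics import NormalDist

# Approximate power of a one-sided, one-sample z-test: the probability of
# rejecting H0 when the true mean really is shifted by `effect` sigma units.
def power(effect: float, n: int, alpha: float = 0.05) -> float:
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return 1 - NormalDist().cdf(z_crit - effect * n ** 0.5)

print(power(0.5, 10))               # modest power
print(power(0.5, 40))               # larger sample -> more power
print(power(0.5, 10, alpha=0.10))   # larger alpha -> more power
print(power(1.0, 10))               # larger effect -> more power
```

Each call changes exactly one of the slide's three factors, so you can see directly that a bigger sample, a bigger α, or an alternative farther from H0 all raise 1 − β.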
  14. Hypothesis Testing
  15. Hypothesis Testing Confidence interval: • A confidence interval is a range of values within which the population parameter is expected to lie. • The two confidence levels used most extensively are 95% and 99%. • The upper and lower limits of the confidence interval are given by X̄ ± z(s/√n), where X̄ = sample mean, z = critical value obtained from the table, s = standard deviation, and n = sample size.
  16. Hypothesis Testing
  17. The Dean of the Business School wants to estimate the mean number of hours worked per week by students. A sample of 49 students showed a mean of 24 hours with a standard deviation of 4 hours. Develop a 95 per cent confidence interval for the population mean. • Solution: X̄ ± z(s/√n) = 24 ± 1.96(4/√49) = 24 ± 1.12. • The confidence interval is 22.88 to 25.12 hours.
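The Dean's-survey arithmetic above can be reproduced in a few lines (a sketch I've added; the numbers are the slide's own):

```python
from math import sqrt

# 95% confidence interval for the mean: n = 49 students, sample mean 24 hours,
# s = 4 hours, critical value z = 1.96.
xbar, s, n, z = 24.0, 4.0, 49, 1.96
margin = z * s / sqrt(n)            # 1.96 * 4 / 7 = 1.12
lower, upper = xbar - margin, xbar + margin
print(f"95% CI: {lower:.2f} to {upper:.2f}")  # 22.88 to 25.12
```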
  18. TESTS OF HYPOTHESES Statisticians have developed several tests of hypotheses (also known as tests of significance), which can be classified as: (a) parametric tests, or standard tests of hypotheses; and (b) non-parametric tests, or distribution-free tests of hypotheses.
  19. TESTS OF HYPOTHESES Parametric tests vs non-parametric tests • Parametric tests usually assume certain properties of the parent population from which we draw samples: for example, that observations come from a normal population, that the sample size is large, or assumptions about population parameters such as the mean and variance. • Non-parametric tests do not depend on any assumption about the parameters of the parent population. • Non-parametric tests are generally less statistically powerful than the analogous parametric procedures, so a non-parametric test requires a somewhat larger sample size to have the same power as the corresponding parametric test.
  20. TESTS OF HYPOTHESES Parametric tests vs non-parametric tests The basic distinction is: • If the measurement scale is nominal or ordinal, use non-parametric statistics. • If you are using interval or ratio scales, use parametric statistics. • Parametric tests can be used only when the distribution of the data is normal; if the distribution deviates markedly from normality, you risk the statistic being inaccurate, and the safest course is to use a non-parametric statistic.
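As a concrete example of a distribution-free procedure, here is a sketch of the sign test for paired data (my addition; the data are hypothetical before/after differences). It uses only the signs of the differences, so it makes no normality assumption at all:

```python
from math import comb

# Sign test for paired data: under H0 (no treatment effect) each nonzero
# difference is positive with probability 1/2, so the count of positive
# signs follows Binomial(n, 0.5).
def sign_test_p(diffs):
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(d > 0 for d in nonzero)
    # Tail probability of a count at least as extreme as the one observed.
    tail = sum(comb(n, i) for i in range(max(k, n - k), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)   # two-sided p-value

diffs = [3, 1, 2, 4, 1, 2, 3, 1, 2, -1]   # 9 of 10 differences positive
print(sign_test_p(diffs))                  # small p-value: effect looks real
```

Because it discards the magnitudes of the differences, the sign test is less powerful than a paired t-test on normal data, which illustrates the power trade-off mentioned on slide 19.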
  21. TESTS OF HYPOTHESES
  22. TESTS OF HYPOTHESES Parametric tests • The important parametric tests are: (1) the z-test; (2) the t-test; (3) the χ²-test; and (4) the F-test. • All these tests are based on the assumption of normality, i.e., the source of the data is considered to be normally distributed.
  23. TESTS OF HYPOTHESES z-test • This is one of the most frequently used tests in research studies. • The z-test is generally used for comparing the mean of a sample with some hypothesised mean for the population when the sample is large, or when the population variance is known. • The test may also be used for judging the significance of the median, mode, and coefficient of correlation.
  24. TESTS OF HYPOTHESES z-test: one sample and two samples
  25. TESTS OF HYPOTHESES z-test illustration
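A one-sample z-test of the kind described on slide 23 can be sketched as follows (hypothetical numbers of my own; note the z-test's requirement that the population σ be known):

```python
from math import sqrt
from statistics import NormalDist

# One-sample z-test: H0 says mu = 50; a sample of n = 25 gives mean 52,
# and the population standard deviation is assumed known (sigma = 5).
mu0, sigma, n, xbar = 50.0, 5.0, 25, 52.0
z = (xbar - mu0) / (sigma / sqrt(n))            # = 2.0
p = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value
print(f"z = {z:.2f}, two-sided p = {p:.4f}")    # p < 0.05, so reject H0
```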
  26. TESTS OF HYPOTHESES t-test • The t-test is based on the t-distribution and is considered an appropriate test for judging the significance of a sample mean, or of the difference between the means of two samples, when the sample(s) are small and the population variance is not known. • 'Student' derived the t-distribution in 1908 for computing the t statistic. • When observations are made on two independent samples (control and case) and their means are compared for a significant difference, the test is known as the 'unpaired t-test'. • When observations are made on a single sample before and after treatment and compared for significance, the test is known as the 'paired t-test'.
  27. TESTS OF HYPOTHESES t-test: one sample; two samples (unpaired and paired)
  28. TESTS OF HYPOTHESES t-test illustration
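The paired t-test described on slide 26 reduces to a one-sample t-test on the before/after differences. A minimal sketch with hypothetical measurements (only the t statistic is computed here; in keeping with the slides, the p-value would come from Student's t table with n − 1 degrees of freedom):

```python
from math import sqrt
from statistics import mean, stdev

# Paired t-test: before/after measurements on the same subjects.
before = [140, 138, 150, 148, 135, 142, 146, 139]
after  = [138, 137, 147, 146, 134, 138, 144, 138]
diffs = [b - a for b, a in zip(before, after)]   # [2, 1, 3, 2, 1, 4, 2, 1]

n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))       # t = dbar / (s_d / sqrt(n))
print(f"t = {t:.2f} with {n - 1} degrees of freedom")
# The tabulated critical t for df = 7 at alpha = 0.05 (two-sided) is about
# 2.365, so a t this large would be judged significant.
```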
