# Inferential statistics (2)


1. ED 502 (Mr. Jessie Nique)
2. **Inferential Statistics**
    - Making generalizations and predicting the significance of differences in characteristics, or of relationships, among research variables
3. **Two Types of Techniques**
4. The two types of inferential techniques:
    - Parametric technique
    - Non-parametric technique
5. **Parametric Technique**
    - Makes various assumptions about the nature of the population from which the study sample is drawn
    - Capable of determining the actual difference or relationship in the study
6. **Non-Parametric Technique**
    - Makes few, if any, assumptions about the nature of the population
    - Useful for measurements of nominal and ordinal data
    - Cannot determine the relationships in a study
7. **Commonly Employed Parametric Tests in Inferring Quantitative Data**
8. The parametric tests most often used:
    - t-test
    - One-Way Analysis of Variance (1-Way ANOVA)
    - Two-Way Analysis of Variance (2-Way ANOVA)
    - Multiple Analysis of Variance (MANOVA)
    - Duncan's Multiple Range Test
    - Scheffé Test
    - Pearson Product-Moment Correlation
9. **t-Test**
    - Used to establish the significance of the difference between the means of two samples
    - The test yields a t-value, which is checked against a statistical table
10. **t-Test**
    - The t-test for INDEPENDENT MEANS compares the mean scores of two different, independent groups
    - The t-test for CORRELATED MEANS compares the mean scores of the same group before and after a treatment is given, or the mean scores of the same group under different treatments
11. Example of the t-test for INDEPENDENT MEANS:
    - In comparing the water-resistance capabilities of two groups of insulating-board substrates, namely the coconut shell-wood resin composite and the coconut bark-wood resin composite, the mean water-resistance values of the two composite groups were compared and the significance of their difference in that test variable was determined.
12. The t-test for INDEPENDENT MEANS is useful here for the following reasons:
    - Only two group means were compared
    - The two mean sources are independent of each other in their characteristics
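As a concrete sketch, an independent-means comparison like the one above can be run with `scipy.stats.ttest_ind`. The water-resistance values below are hypothetical, invented purely for illustration; they are not data from the study.

```python
from scipy import stats

# Hypothetical water-resistance scores for the two insulating-board
# composites (illustrative values only, not the study's data).
shell_composite = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
bark_composite = [10.9, 11.2, 10.7, 11.0, 11.3, 10.8]

# ttest_ind compares the means of two independent samples.
t_stat, p_value = stats.ttest_ind(shell_composite, bark_composite)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A p-value below the chosen significance level (commonly 0.05) indicates that the difference between the two group means is statistically significant.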
13. Example of the t-test for CORRELATED MEANS:
    - A study compared the initial and final dissolved-solids values of wastewater that underwent a pilot treatment process. The mean dissolved-solids values were measured before and after the wastewater underwent treatment. The effectiveness of the treatment process was judged by the significance of the difference between the mean dissolved-solids values before and after treatment.
14. The t-test for CORRELATED MEANS is useful here for the following reasons:
    - Only two means were compared
    - The two mean sources come from the same wastewater reservoir, measured before and after the treatment process
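A correlated-means (paired) comparison of this kind can be sketched with `scipy.stats.ttest_rel`. The dissolved-solids readings below are hypothetical illustration values, not measurements from the study.

```python
from scipy import stats

# Hypothetical dissolved-solids readings (mg/L) for the same wastewater
# samples measured before and after treatment (illustrative values only).
before = [520, 498, 545, 510, 532, 501, 515, 528]
after = [310, 295, 340, 305, 322, 298, 312, 330]

# ttest_rel pairs each "before" reading with its "after" reading,
# so the test operates on the within-pair differences.
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The pairing is what distinguishes this from the independent-means test: each "after" value is compared with its own "before" value rather than with the other group as a whole.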
15. **One-Way Analysis of Variance (1-Way ANOVA)**
    - Useful when the researcher wants to determine the significant differences among the means of more than two groups
    - Appropriate with three or more groups
    - The variance both within and between the groups is analyzed to establish the F value
    - Used when a single independent variable with several levels is investigated
16. Example:
    - A study compared the effect of hot-water treatment on the shelf life of newly harvested mangoes. The mangoes were divided into five groups, weighed initially, and then immersed in 60 °C hot water for 1, 2, 3, 4, and 5 minutes respectively. They were stored properly, their weights were monitored daily for one week, and the data were collated to establish the weights for each group. The aim was to gauge how effectively the varying hot-water exposure times extend the shelf life of newly harvested mangoes.
17. The 1-Way ANOVA is useful here for the following reasons:
    - More than two means were compared: the mean final weights of the groups immersed in hot water for 1, 2, 3, 4, and 5 minutes
    - Only one independent variable was involved: the exposure time of the newly harvested mangoes to hot water
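A one-way ANOVA of this shape can be sketched with `scipy.stats.f_oneway`. The final weights below are hypothetical illustration values, not the study's data.

```python
from scipy import stats

# Hypothetical final weights (g) for five mango groups immersed in
# 60 °C water for 1-5 minutes (illustrative values only).
min1 = [210, 215, 208, 212]
min2 = [205, 207, 203, 206]
min3 = [198, 200, 196, 199]
min4 = [190, 193, 188, 191]
min5 = [182, 185, 180, 183]

# f_oneway tests the null hypothesis that all five group means are equal,
# comparing between-group variance to within-group variance.
f_stat, p_value = stats.f_oneway(min1, min2, min3, min4, min5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F value says only that *some* means differ; a post hoc test (see Duncan's and Scheffé below) is needed to say which pairs.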
18. **Two-Way Analysis of Variance (2-Way ANOVA)**
    - A test for factorial ANOVA; it determines the main and simultaneous (interaction) effects of two independent factors on one or more dependent variables
19. Example:
    - In the study "Effects of Varying Levels of Oyster Shell Grits and Particle Size on the Growth and Breaking Strength of Metatarsal Bones of Broilers", the simultaneous effect of the two independent variables, (1) level of oyster shell grits and (2) particle size, was determined in terms of mean growth, mean weight, and mean vertical compression. The significance of the differences in growth, weight, and vertical compression of the broilers' metatarsal bones as affected by these independent variables was determined using the 2-Way ANOVA.
20. The 2-Way ANOVA is useful here for the following reasons:
    - More than two means were compared: five levels of the first independent variable and three particle sizes for the second
    - There were two independent variables, i.e. level of oyster shell grits and particle size, whose effects on the dependent variables were determined simultaneously
21. **Multiple Analysis of Variance (MANOVA)**
    - Another factorial ANOVA; it determines the simultaneous effects of three or more independent variables on one or more dependent variables
    - When more than two groups are compared, the F value cannot identify which pairs of means differ significantly
    - A post hoc analysis is therefore required to show where the significance lies
22. Example:
    - A study determined the acute toxicity and median lethal concentration (LC50) of ammonia on Nile tilapia fingerlings subjected to different ammonia concentrations, pH levels, and exposure times
23. The MANOVA is useful here for the following reasons:
    - More than two means were compared among the independent variables: five concentration levels, five pH levels, and four exposure periods
    - There were three independent variables, i.e. concentration levels, pH levels, and exposure periods, whose effects on the acute toxicity and median lethal concentration (LC50) of ammonia on Nile tilapia fingerlings were determined
24. **Duncan's Multiple Range Test**
    - A post hoc multiple-comparison test employed when the ANOVA shows significance
    - Makes a stepwise, pair-wise comparison to determine which of the given means differ
25. **Scheffé Test**
    - Another post hoc multiple-comparison test employed when the ANOVA shows significance
    - Performs simultaneous joint pair-wise comparisons for all possible pair-wise combinations of means
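The Scheffé comparison can be sketched directly from its formula: each pair's F statistic, (x̄ᵢ − x̄ⱼ)² / [MSE·(1/nᵢ + 1/nⱼ)], is compared against the critical value (k − 1)·F(α; k − 1, N − k). The group data below are hypothetical illustration values.

```python
import numpy as np
from scipy import stats

# Scheffe post hoc sketch: after a significant one-way ANOVA, compare
# every pair of group means. Data are hypothetical illustration values.
groups = [
    np.array([210.0, 215.0, 208.0, 212.0]),
    np.array([205.0, 207.0, 203.0, 206.0]),
    np.array([182.0, 185.0, 180.0, 183.0]),
]
k = len(groups)
n_total = sum(len(g) for g in groups)

# Mean square error (within-group variance) from the ANOVA table.
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
mse = sse / (n_total - k)

# Scheffe critical value: (k - 1) * F(alpha; k - 1, N - k).
alpha = 0.05
crit = (k - 1) * stats.f.ppf(1 - alpha, k - 1, n_total - k)

significant = {}
for i in range(k):
    for j in range(i + 1, k):
        diff = groups[i].mean() - groups[j].mean()
        f_ij = diff**2 / (mse * (1 / len(groups[i]) + 1 / len(groups[j])))
        significant[(i, j)] = f_ij > crit
        print(f"group {i + 1} vs {j + 1}: F = {f_ij:.1f}, "
              f"{'significant' if f_ij > crit else 'not significant'}")
```

Because every pairwise F is tested against the same inflated critical value, the Scheffé procedure controls the family-wise error rate across all comparisons at once.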
26. **Pearson Product-Moment Correlation**
    - Used to determine the co-variation between two variables in the study
    - The test result establishes whether the correlation is positive or negative
    - If test organisms that respond highly on one variable also respond highly on the other, the correlation is positive; otherwise it is negative
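A Pearson correlation can be sketched with `scipy.stats.pearsonr`. The paired measurements below are hypothetical illustration values in which high scores on one variable accompany high scores on the other, so the correlation comes out positive.

```python
from scipy import stats

# Hypothetical paired measurements on the same test organisms
# (illustrative values only).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3, 13.9, 16.2]

# pearsonr returns the correlation coefficient r (from -1 to +1)
# and the p-value for testing whether r differs from zero.
r, p_value = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```

An r near +1 indicates strong positive co-variation; an r near −1 would indicate that high values on one variable accompany low values on the other.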
27. **Commonly Employed Non-Parametric Tests in Inferring Quantitative Data**
28. The non-parametric tests most often used:
    - Chi-Square Test
    - Wilcoxon Signed-Rank Test
    - Friedman's Test
29. **Chi-Square Test**
    - Tabulates a given variable into categories
    - Compares the observed and expected frequencies in each category to test either that all categories contain the same proportion of values or that each category contains a user-specified proportion of values
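A goodness-of-fit chi-square of this kind can be sketched with `scipy.stats.chisquare`. The category counts below are hypothetical illustration values; with no expected frequencies supplied, the function tests the equal-proportions hypothesis described above.

```python
from scipy import stats

# Hypothetical counts of a categorical variable across four categories
# (illustrative values only).
observed = [18, 22, 20, 40]

# With no expected frequencies given, chisquare tests whether all
# categories contain the same proportion of values (here, 25 each).
chi2, p_value = stats.chisquare(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```

To test user-specified proportions instead, pass them via the `f_exp` argument.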
30. **Wilcoxon Signed-Rank Test**
    - Used with two related variables to test the hypothesis that the two variables have the same distribution, taking into account the magnitude of the differences within pairs
    - Gives more weight to pairs that show large differences than to pairs that show small ones
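A signed-rank test can be sketched with `scipy.stats.wilcoxon`. The paired scores below are hypothetical illustration values; the test ranks the magnitudes of the within-pair differences, which is how larger differences receive more weight.

```python
from scipy import stats

# Hypothetical paired scores for the same subjects under two related
# conditions (illustrative values only).
condition_a = [14, 18, 12, 16, 20, 15, 17, 19]
condition_b = [10, 11, 9, 12, 13, 10, 12, 14]

# wilcoxon ranks the absolute within-pair differences and sums the
# ranks by sign; a small statistic means the differences lean one way.
stat, p_value = stats.wilcoxon(condition_a, condition_b)
print(f"W = {stat}, p = {p_value:.4f}")
```

Because it uses ranks rather than raw values, the test does not require the normality assumption of the paired t-test.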
31. **Friedman's Test**
    - Equivalent to a two-way analysis of variance with one observation per cell
    - Tests the hypothesis that k related variables come from the same population
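Friedman's test can be sketched with `scipy.stats.friedmanchisquare`, which takes the k related samples as separate arguments. The scores below are hypothetical illustration values, one observation per subject per condition.

```python
from scipy import stats

# Hypothetical scores for the same six subjects under three related
# conditions, one observation per cell (illustrative values only).
cond1 = [9, 8, 7, 9, 8, 9]
cond2 = [6, 5, 6, 7, 5, 6]
cond3 = [2, 3, 2, 3, 2, 3]

# friedmanchisquare ranks each subject's scores across the conditions
# and tests whether the conditions have the same distribution.
stat, p_value = stats.friedmanchisquare(cond1, cond2, cond3)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
```

In this sketch every subject ranks the three conditions the same way, so the test statistic reaches its maximum for n = 6 and k = 3.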
32. **Level of Significance in Inferential Statistical Testing**
    - Should be established before the actual test is run
    - Any result with a significance level of 0.05 or less is taken as evidence against accepting the null hypothesis
    - Fixes the probability of wrongly rejecting the null hypothesis at 5 times in 100 replicated experiments
    - The probability of committing the error of rejecting a true null hypothesis (a Type I error) is 5%
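The decision rule on this slide reduces to a tiny sketch: with α fixed at 0.05 before the test is run, the null hypothesis is rejected whenever the test's p-value falls at or below α. The function name and example p-values are ours, for illustration only.

```python
ALPHA = 0.05  # significance level, fixed before the test is run


def decide(p_value, alpha=ALPHA):
    """Reject the null hypothesis when the p-value is at or below alpha."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"


print(decide(0.031))  # -> reject H0
print(decide(0.12))   # -> fail to reject H0
```

The same rule applies to every test above, parametric or non-parametric: only α changes the strictness of the decision, and it must be chosen before looking at the results.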