This document discusses quantitative data analysis methods. It begins by explaining that collected data must be analyzed using statistical methods appropriate to the objectives for which the data were collected. Various types of variables, scales of measurement, and descriptive and inferential statistics are defined. Specific statistical tests that can be used to compare groups or determine relationships among variables are also outlined, including parametric tests that assume normal distributions and nonparametric alternatives.
1. Descriptive statistics provide a simple summary of data through measures of central tendency, frequency, and variability.
2. Common measures include the mean, median, mode, and standard deviation, along with the identification of outliers.
3. Inferential statistics allow researchers to make generalizations about populations based on analyses of samples. They include t-tests, ANOVA, correlation, and regression.
This document discusses various methods for analyzing quantitative data, including coding data, creating a codebook, entering data into a grid format for analysis, checking data for accuracy, and using computers and statistical software to analyze data. It covers descriptive statistics for one and two variables, such as frequency distributions, measures of central tendency and variation, scatterplots, cross-tabulations, and measures of association between two variables.
This document discusses various statistical concepts such as measures of central tendency, measures of dispersion, standard deviation, normal distribution, and tests of significance. It provides examples and formulas to calculate range, mean deviation, standard deviation, z-scores, confidence intervals, and the chi-square test. Biological variation is common and various statistical methods can be used to analyze data and find patterns in large datasets.
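To make the z-score and confidence-interval calculations mentioned above concrete, here is a minimal sketch using only the Python standard library. The sample values are invented for illustration, and the interval uses the large-sample z critical value 1.96.

```python
# Sketch: z-score and 95% confidence interval for a sample mean,
# standard library only. The sample data below are made up.
import math
import statistics

sample = [4.8, 5.1, 5.4, 4.9, 5.2, 5.0, 5.3, 4.7]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)          # sample standard deviation (n - 1)
n = len(sample)
se = sd / math.sqrt(n)                 # standard error of the mean

# z-score of a single observation relative to the sample
z = (5.4 - mean) / sd

# 95% confidence interval for the mean (z approximation, critical value 1.96)
ci_low = mean - 1.96 * se
ci_high = mean + 1.96 * se
print(round(mean, 3), round(ci_low, 3), round(ci_high, 3))
```

For small samples one would normally swap the 1.96 for the appropriate t critical value; the structure of the calculation is otherwise the same.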
This document outlines topics related to statistics that will be covered. It is divided into 6 parts. Part 1 discusses the role of statistics in research, descriptive statistics, sampling procedures, sample size, and inferential statistics. Part 2 covers choice of statistical tests, defining variables, scales of measurements, and number of samples. Parts 3 and 4 discuss parametric and non-parametric tests. Part 5 is about goodness of fit tests. Part 6 covers choosing correct statistical tests and introduction to multiple regression. The document also provides examples and definitions of key statistical concepts like mean, median, mode, range, and different sampling methods.
This document provides an overview of statistics and statistical tests. It defines descriptive statistics as concerned with data collection, presentation and interpretation, while inferential statistics involves drawing conclusions from statistical analysis. Parametric tests can be applied to normally distributed interval/ratio data, while non-parametric tests do not require normality assumptions. Examples of parametric and non-parametric tests are provided, along with guidelines for applying a two-sample t-test to compare means between two independent groups. Two examples of applying a t-test are given to test differences between groups.
This document discusses statistical procedures for analyzing different types of data based on their structure. It describes three basic data structures: 1) a single group with one score per participant, 2) a single group with multiple variables measured per participant, and 3) multiple groups with scores measuring the same variable. For each data structure, it provides examples of descriptive and inferential statistics that can be used based on the scale of measurement (nominal, ordinal, interval/ratio).
Frequency Measures for Healthcare Professionals
Frequency distributions summarize data by grouping values of a variable and counting the number of observations in each group. This document discusses measures used to describe frequency distributions, including measures of central tendency (mode, median, mean) and measures of variability. The mode is the most frequent value, median is the middle value, and mean averages all values. These measures summarize the central or typical value in a data set.
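The three central-tendency measures just described can be computed directly with Python's standard-library `statistics` module; a frequency distribution is simply a count of observations per value. The scores below are invented.

```python
# Mode, median, and mean of a small made-up data set, plus the
# frequency distribution the measures summarize.
import statistics
from collections import Counter

scores = [3, 7, 7, 2, 9, 7, 4, 5, 6]

print(statistics.mode(scores))    # most frequent value
print(statistics.median(scores))  # middle value of the sorted data
print(statistics.mean(scores))    # arithmetic average

# A frequency distribution is just a count per value:
freq = Counter(scores)
```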
1. Dr. Ritesh Malik gave a presentation on health information and basic medical statistics at Theni Govt. Medical College in Tamil Nadu, India.
2. The presentation covered topics such as data versus information, measures of central tendency (mean, median, mode), standard deviation, standard error, and tests of significance.
3. Tests of significance allow researchers to determine whether observed differences are statistically significant or likely due to chance, such as the standard error of the mean, standard error of proportion, and chi square test.
The document provides an overview of quantitative data analysis and various statistical concepts including the normal distribution, z-tests, confidence intervals, and t-tests. It discusses how the normal distribution was developed by de Moivre and Gauss. It then explains the key properties of the normal distribution and how it can be used to describe many natural phenomena. Examples are provided to illustrate how to calculate and interpret confidence intervals and choose the appropriate statistical test.
Descriptive statistics are used to summarize and describe characteristics of a data set. It includes measures of central tendency like mean, median, and mode, measures of variability like range and standard deviation, and the distribution of data through histograms. Inferential statistics are used to generalize results from a sample to the population it represents through estimation of population parameters and hypothesis testing. Correlation and regression analysis are used to study relationships between two or more variables.
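As a sketch of the correlation and regression analysis mentioned above, the snippet below computes Pearson's r and a least-squares regression line from first principles with the standard library; the x/y data are invented for illustration.

```python
# Pearson's correlation coefficient and simple linear regression
# (y = intercept + slope * x) computed from deviation sums.
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx = sum(x) / n
my = sum(y) / n

sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)

r = sxy / math.sqrt(sxx * syy)      # Pearson's r
slope = sxy / sxx                   # least-squares slope
intercept = my - slope * mx
print(round(r, 3), round(slope, 3), round(intercept, 3))
```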
UNIVARIATE & BIVARIATE ANALYSIS
UNIVARIATE BIVARIATE & MULTIVARIATE
UNIVARIATE ANALYSIS
-One variable analysed at a time
BIVARIATE ANALYSIS
-Two variables analysed at a time
MULTIVARIATE ANALYSIS
-More than two variables analysed at a time
TYPES OF ANALYSIS
DESCRIPTIVE ANALYSIS
INFERENTIAL ANALYSIS
DESCRIPTIVE ANALYSIS
Transformation of raw data
Facilitate easy understanding and interpretation
Deals with summary measures relating to sample data
E.g., what is the average age of the sample?
INFERENTIAL ANALYSIS
Carried out after descriptive analysis
Inferences drawn on population parameters based on sample results
Generalizes results to the population based on sample results
E.g., is the average age of the population different from 35?
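The example question above (is the average age of the population different from 35?) is typically answered with a one-sample t-test. The sketch below uses invented ages and computes only the t statistic, which would then be compared against a t critical value for the given degrees of freedom.

```python
# One-sample t-test statistic, standard library only; ages are invented.
import math
import statistics

ages = [32, 36, 38, 31, 35, 39, 34, 37, 33, 40]
mu0 = 35                                   # hypothesized population mean

mean = statistics.mean(ages)
se = statistics.stdev(ages) / math.sqrt(len(ages))
t = (mean - mu0) / se                      # one-sample t statistic
df = len(ages) - 1                         # degrees of freedom
print(round(t, 3), df)
```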
DESCRIPTIVE ANALYSIS OF UNIVARIATE DATA
1. Prepare frequency distribution of each variable
Missing Data
Situation where certain questions are left unanswered
Analysis of multiple responses
Measures of central tendency
3 measures of central tendency
1.Mean
2.Median
3.Mode
MEAN
Arithmetic average of a variable
Appropriate for interval and ratio scale data
x̄ = Σx / n
MEDIAN
Calculates the middle value of the data
Computed for ratio, interval or ordinal scale.
Data needs to be arranged in ascending or descending order
MODE
Point of maximum frequency
Should not be computed for ordinal or interval data unless grouped.
Widely used in business
MEASURE OF DISPERSION
Measures of central tendency do not explain distribution of variables
4 measures of dispersion
1.Range
2.Variance and standard deviation
3.Coefficient of variation
4.Relative and absolute frequencies
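The four dispersion measures listed above can be sketched as follows, on a made-up sample and using only the standard library.

```python
# Range, variance/standard deviation, coefficient of variation,
# and absolute/relative frequencies for a small invented sample.
import statistics
from collections import Counter

data = [12, 15, 11, 18, 14, 15, 20, 15]

rng = max(data) - min(data)                      # 1. range
var = statistics.variance(data)                  # 2. sample variance
sd = statistics.stdev(data)                      #    standard deviation
cv = sd / statistics.mean(data)                  # 3. coefficient of variation

abs_freq = Counter(data)                         # 4. absolute frequencies
rel_freq = {v: c / len(data) for v, c in abs_freq.items()}  # relative
print(rng, round(sd, 3), round(cv, 3))
```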
DESCRIPTIVE ANALYSIS OF BIVARIATE DATA
Three measures are commonly used:
1.Cross tabulation
2.Spearman's rank correlation coefficient
3.Pearson's linear correlation coefficient
Cross Tabulation
Responses of two questions are combined
Spearman’s rank order correlation coefficient.
Used in case of ordinal data
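A minimal sketch of Spearman's rank-order coefficient for ordinal data: rank both variables, then apply the usual formula rho = 1 - 6Σd² / (n(n² - 1)), which holds when there are no tied values. The two rankings below are invented.

```python
# Spearman's rho for two untied rankings of the same six items.
def ranks(values):
    """Rank from 1 (smallest) to n, assuming no tied values."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

# e.g. two judges ranking the same 6 items on an ordinal scale
judge_a = [1, 2, 3, 4, 5, 6]
judge_b = [2, 1, 4, 3, 6, 5]

ra, rb = ranks(judge_a), ranks(judge_b)
n = len(ra)
d2 = sum((a - b) ** 2 for a, b in zip(ra, rb))
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(rho)
```

With ties, each tied group would instead receive the average of its ranks, which this simple helper does not handle.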
This document discusses repeated measures analysis of variance (ANOVA). It explains that repeated measures ANOVA compares measures taken on the same subjects across different treatment conditions, controlling for individual differences. It provides the computational formulas for calculating sums of squares for between treatments, between subjects, and error. It also discusses degrees of freedom, mean squares, and the F-ratio test used to determine if there are significant differences among treatment means while accounting for correlations between measures from the same subject.
This document provides an overview of behavioral statistics and the statistics course. It discusses why statistics is important, particularly for behavioral science. It also outlines course objectives like interpreting research findings, employing statistical models, and recognizing limitations. Key concepts covered include descriptive and inferential statistics, variables, scales of measurement, research methods, and statistical notation. The goal is to help students learn statistical procedures to organize, summarize, and interpret information from research studies.
How to choose the right statistical techniques in different situations. This short presentation provides a compact summary of various statistical methods, both descriptive and inferential.
For further inquiry, please reach me at bodhiyawijaya@gmail.com
The document defines various statistical measures and types of statistical analysis. It discusses descriptive statistical measures like mean, median, mode, and interquartile range. It also covers inferential statistical tests like the t-test, z-test, ANOVA, chi-square test, Wilcoxon signed rank test, Mann-Whitney U test, and Kruskal-Wallis test. It explains their purposes, assumptions, formulas, and examples of their applications in statistical analysis.
This document discusses various statistical methods used to organize and interpret data. It describes descriptive statistics, which summarize and simplify data through measures of central tendency like mean, median, and mode, and measures of variability like range and standard deviation. Frequency distributions are presented through tables, graphs, and other visual displays to organize raw data into meaningful categories.
This document provides an overview of descriptive statistics and related concepts. It begins with an introduction to descriptive analysis and then covers various types of variables and levels of measurement. It describes measures of central tendency including mean, median and mode. Measures of dispersion like range, standard deviation and normal distribution are also discussed. The document also covers measures of asymmetry, relationship and concludes with emphasizing the importance of statistical planning in research.
This document discusses inferential statistics and hypothesis testing. It provides examples of researchers formulating hypotheses and collecting data to test them. Researchers take random samples from populations to test if there are meaningful differences between groups. Hypothesis testing involves comparing experimental and control groups after exposing them to different levels of an independent variable. The goal is to determine if the independent variable caused a detectable change in the dependent variable. Inferential statistics are used to test if sample means differ significantly, which would suggest the hypothesis is supported or not supported. Proper sampling and estimating sampling distributions, standard errors, and variability are important concepts for accurately testing hypotheses about populations based on sample data.
This document discusses non-parametric tests, which are statistical tests that make fewer assumptions about the population distribution compared to parametric tests. Some key points:
1) Non-parametric tests like the chi-square test, sign test, Wilcoxon signed-rank test, Mann-Whitney U-test, and Kruskal-Wallis test are used when the population is not normally distributed or sample sizes are small.
2) They are applied in situations where data is on an ordinal scale rather than a continuous scale, the population is not well defined, or the distribution is unknown.
3) Advantages are that they are easier to compute and make fewer assumptions than parametric tests.
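As an illustration of how simple non-parametric tests can be, here is the sign test for paired data: count the positive differences and compute an exact two-sided binomial p-value under the null hypothesis that positive and negative differences are equally likely. The paired measurements are invented.

```python
# Sign test with an exact two-sided binomial p-value, stdlib only.
from math import comb

before = [72, 65, 80, 58, 77, 70, 69, 74]
after  = [68, 66, 75, 55, 70, 65, 64, 70]

diffs = [b - a for b, a in zip(before, after) if b != a]  # drop zeros
n = len(diffs)
pos = sum(1 for d in diffs if d > 0)

# Exact two-sided p-value under H0: P(positive difference) = 0.5
k = min(pos, n - pos)
tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
p_value = min(1.0, 2 * tail)
print(pos, n, round(p_value, 4))
```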
Repeated measures ANOVA is used to compare mean scores on the same individuals across multiple time points or conditions. It extends the dependent t-test to allow for more than two time points or conditions. Key assumptions include having a continuous dependent variable, at least two related groups or conditions, no outliers, normally distributed differences between groups, and sphericity. Repeated measures ANOVA separates variance into between-subjects, between-measures, and error components to test if there are differences in mean scores between related groups while accounting for correlations between measures on the same individuals.
This document discusses parametric tests used for statistical analysis. It introduces t-tests, ANOVA, Pearson's correlation coefficient, and Z-tests. T-tests are used to compare means of small samples and include one-sample, unpaired two-sample, and paired two-sample t-tests. ANOVA compares multiple population means and includes one-way and two-way ANOVA. Pearson's correlation measures the strength of association between two continuous variables. Z-tests compare means or proportions of large samples. Key assumptions and calculations for each test are provided along with examples. The document emphasizes the importance of choosing the appropriate statistical test for research.
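The unpaired two-sample t-test described above can be sketched by hand with an equal-variance pooled estimate; the two groups below are invented small samples, and the resulting t would be compared to a critical value with n1 + n2 - 2 degrees of freedom.

```python
# Independent two-sample t-test with pooled variance, stdlib only.
import math
import statistics

group1 = [23, 25, 28, 30, 26]
group2 = [20, 22, 19, 24, 21]

n1, n2 = len(group1), len(group2)
m1, m2 = statistics.mean(group1), statistics.mean(group2)
v1, v2 = statistics.variance(group1), statistics.variance(group2)

# pooled variance (assumes roughly equal group variances)
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(round(t, 3), df)
```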
Analysis of Variance and Repeated Measures Design (J P Verma)
This presentation discusses the basic concepts used in analysis of variance and shows the difference between independent-measures ANOVA and repeated-measures ANOVA.
This document discusses repeated measures designs and analyzing data from such designs using repeated measures ANOVA. It explains that repeated measures ANOVA involves comparing measures taken from the same subjects across different treatment conditions while controlling for individual differences. The document provides details on the null and alternative hypotheses, calculating variance components, and assumptions of repeated measures ANOVA.
INFERENTIAL STATISTICS: AN INTRODUCTION (John Labrador)
For instance, we use inferential statistics to try to infer from the sample data what the population might think. Or, we use inferential statistics to make judgments of the probability that an observed difference between groups is a dependable one or one that might have happened by chance in this study.
This document discusses key concepts in inferential statistics and hypothesis testing. It explains that inferential statistics allow estimating population characteristics from sample data and are used to answer questions about comparisons or relationships. Hypothesis testing involves forming a null hypothesis, which is statistically tested to determine if an observed difference is likely due to chance or a real treatment effect. Type 1 and type 2 errors in hypothesis testing are defined. The document also outlines factors like level of significance, power, and choice of parametric vs non-parametric tests based on the study design and data.
This document provides an overview of analysis of variance (ANOVA). It begins by defining parametric tests and discussing the assumptions of ANOVA. The key ideas of ANOVA are introduced, including comparing the variance between groups to the variance within groups. Calculations for one-way ANOVA are demonstrated, including sums of squares, mean squares, and the F-statistic. Examples are provided to illustrate one-way ANOVA calculations and interpretations. Violations of assumptions and extensions to two-way ANOVA are also discussed.
The document provides information on statistical techniques for comparing means between groups, including t-tests, analysis of variance (ANOVA), and their assumptions and applications. T-tests are used to compare two groups, while ANOVA allows comparison of three or more groups and controls for increased Type I error rates. Steps for conducting t-tests, ANOVA, and post-hoc tests using SPSS are outlined along with examples and interpretations.
The document discusses statistical methods for comparing means between groups, including t-tests and analysis of variance (ANOVA). It provides information on different types of t-tests (one sample, independent samples, and paired samples t-tests), assumptions of t-tests, and how to perform t-tests in SPSS. It also covers one-way ANOVA, including its assumptions, components of variation, properties of the F-test, and how to run a one-way ANOVA in SPSS. Examples are provided for each statistical test.
This document provides an overview of common statistical tests used to analyze quantitative data, including t-tests, ANOVAs, and regression. It defines the assumptions and applications of t-tests (independent samples t-test, paired t-test, one-sample t-test) and ANOVA (one-way, factorial). Linear and multiple regression are introduced as ways to model relationships between continuous variables and test predictions. Examples of research questions and outputs are provided.
This document provides an overview of parametric and nonparametric statistical methods. It defines key concepts like standard error, degrees of freedom, critical values, and one-tailed versus two-tailed hypotheses. Common parametric tests discussed include t-tests, ANOVA, ANCOVA, and MANOVA. Nonparametric tests covered are chi-square, Mann-Whitney U, Kruskal-Wallis, and Friedman. The document explains when to use parametric versus nonparametric methods and how measures like effect size can quantify the strength of relationships found.
ANOVA and meta-analysis are statistical techniques used to analyze data from multiple groups or studies. ANOVA allows researchers to determine if variability between groups is statistically significant or due to chance. It compares the means of three or more independent groups and tests the hypothesis that their means are equal. Meta-analysis systematically combines results from independent studies on a topic to obtain an overall estimate of effect. It involves identifying relevant studies, determining their eligibility, abstracting their data, and statistically analyzing the data to summarize results. Both techniques provide a more robust analysis than examining individual studies alone.
This Slides presents different types of Parametric Test- like
T-test,
Parametric Test,
Assumption of Parametric Test,
Paired T Test,
One Sample T Test,
ANOVA,
ANCOVA,
Regression,
Two Way ANOVA,
Repeated Measure ANOVA,
Multiple Regression
This document provides an overview of one-way ANOVA, including its assumptions, steps, and an example. One-way ANOVA tests whether the means of three or more independent groups are significantly different. It compares the variance between sample means to the variance within samples using an F-statistic. If the F-statistic exceeds a critical value, then at least one group mean is significantly different from the others. Post-hoc tests may then be used to determine specifically which group means differ. The example calculates statistics to compare the analgesic effects of three drugs and finds no significant difference between the group means.
Causal-comparative research aims to identify potential causes of existing differences between groups by comparing them without manipulation. It is used when experimental manipulation is not possible. Threats to internal validity like lack of randomization make causation difficult to infer. Analysis of covariance can statistically control for initial group differences, while frequency tables and t-tests are commonly used to analyze data. Results always require cautious interpretation due to limitations of the design.
The document provides an overview of different statistical analysis methods including independent ANOVA, repeated measures ANOVA, and MANOVA. It discusses key aspects of each method such as their appropriate uses, assumptions, and how to conduct the analyses and interpret results in SPSS. For ANOVA, it covers topics like F-ratio, significance levels, post-hoc tests, effect sizes, and examples. For MANOVA, it compares it to ANOVA and explains how MANOVA can assess differences across groups on multiple dependent variables simultaneously.
This document provides an overview of statistical tests that can be used based on the number and type of variables in a study. It outlines both parametric and non-parametric tests for different situations including comparing 1 or multiple groups, measuring relationships between 2 variables, and assessing within-subject effects. The appropriate statistical techniques depend on whether the data is nominal, ordinal, interval or ratio-level and whether the study has independent or related samples.
This document contains class notes from an empirical research methods course. It outlines key concepts related to sampling, statistics, experimental design, and data analysis techniques including t-tests, analysis of variance (ANOVA), and factorial ANOVA. Examples are provided to illustrate how to conduct statistical tests in SPSS and how to interpret and report results. Key terms are defined throughout to explain assumptions, computations, and interpretations of different statistical analyses.
This document contains class notes from an empirical research methods course. It outlines key concepts related to sampling, statistics, experimental design, and data analysis techniques including t-tests, analysis of variance (ANOVA), and factorial ANOVA. Examples are provided to illustrate how to conduct statistical tests in SPSS and how to interpret and report results. Key terms are defined throughout to explain assumptions, computations, and interpretations of different statistical analyses.
The document describes a study that compared two methods of instruction. One group was taught a problem-solving method directly, while the other group was told to figure it out themselves (the "discovery method"). After 3 weeks, both groups were given a novel problem to solve. The discovery method group performed better. The document discusses using a t-test to determine if the difference in performance was statistically significant or due to chance. It provides the formula for an independent samples t-test when comparing means between two unrelated groups. The t-test calculates whether the difference between two sample means is larger than would be expected by chance, given the variability in the samples.
This document discusses inferential statistics and various statistical tests used to analyze differences between groups. It describes measures of difference such as the t-test, analysis of variance (ANOVA), chi-square test, Mann-Whitney test, and Kruskal-Wallis test. It also covers regression analysis techniques like simple and multiple linear regression. Key steps are outlined for conducting t-tests, ANOVA, and interpreting their results from SPSS output. Degrees of freedom and their role in statistical tests are also explained.
ANOVA (analysis of variance) allows researchers to compare the means of three or more groups. It partitions the total variation in the data into variation between groups and variation within groups. The ANOVA F-statistic is the ratio of between-group variation to within-group variation. A large F-statistic indicates the between-group variation is larger than expected by chance, providing evidence the group means are not all equal. Researchers can then follow up with post-hoc tests to determine which specific group means are different.
This document discusses statistical tests used to analyze data from different types of study designs. It provides an overview of tests for comparing two or more groups, including ANOVA and chi-square tests. It also reviews alternatives that can be used if the assumptions of those tests, like normality, are violated. Examples are given of how to calculate ANOVA by hand and how it relates to the t-test. In summary, the document reviews best practices for selecting the appropriate statistical test based on the study design, number of groups, type of outcome variable, and whether observations are independent or correlated between groups.
This document discusses statistical tests used to analyze data from different types of study designs. It provides an overview of tests for comparing two or more groups, including ANOVA and chi-square tests. It also reviews alternatives that can be used if the assumptions of those tests, like normality, are violated. Examples are given of how to calculate ANOVA by hand and how it relates to the t-test. In summary, the document reviews best practices for selecting the appropriate statistical test based on the study design, number of groups, type of outcome variable, and whether observations are independent or correlated between groups.
The document discusses various parametric statistical tests including t-tests, ANOVA, ANCOVA, and MANOVA. It provides definitions and assumptions for parametric tests and explains how they can be used to analyze quantitative data that follows a normal distribution. Specific parametric tests covered in detail include the independent samples t-test, paired t-test, one-way ANOVA, two-way ANOVA, and ANCOVA. Examples are provided to illustrate how each test is conducted and how results are interpreted.
2. DATA ANALYSIS
Before the collected data can be utilized, appropriate analytic methods must be applied to meet the users' need for information.
Main consideration – the OBJECTIVES for which the data were collected
(Mendoza, et al. Foundations of Statistical Analysis for the Health Sciences, 2009)
3. Organization and Presentation of Data
Collected data – questionnaires, examination papers, rating scales, interview transcriptions, secondary data
Start by thoroughly reviewing all accomplished instruments and other data:
- Have respondents answered all questions?
- Are there any inconsistencies?
- Verify identification numbers
- Systematize your coding system
4. Coding
Assign numerical values to research variables
Enter the RAW data into your computer software
Encoding data into a computer facilitates computation of statistical tests
Ex. Microsoft Excel
8. After entering the data you are now ready to process them
Sample table entry
9. BIOSTATISTICS deals with both qualitative and quantitative data; either constants or variables
CONSTANT – phenomenon whose value remains the same from person to person, from time to time, from place to place
  Ex. # minutes in an hour, pull of gravity, speed of light
VARIABLE – phenomenon whose values/categories cannot be predicted with certainty
  Ex. age of gestation, smoking habit, attitudes towards certain issues, weight, educational attainment
(Mendoza, et al. Foundations of Statistical Analysis for the Health Sciences, 2009)
10. VARIABLES
QUANTITATIVE – categories can be measured and ordered according to quantity/amount; values can be expressed numerically (discrete or continuous)
  Ex. birth weight, hospital bed capacity, arm circumference, population size
QUALITATIVE – categories are used as labels to distinguish one group from another
  Ex. sex, urban-rural, religion, region, disease status, occupation
(Mendoza, et al. Foundations of Statistical Analysis for the Health Sciences, 2009)
11. Types of scales
- Nominal (QL) – numbers refer to categories, groups, labels of data. Ex. measurement scale set for data collection
- Ordinal (QL/QN) – categories can be ranked or ordered. Ex. disease severity (mild, moderate, severe)
- Interval – exact distance between 2 categories can be determined; zero point is arbitrary. Ex. temperature, IQ
- Ratio – zero point is fixed. Ex. weight, money
12. It is important to distinguish the type of variable one is dealing with
- major determinant of the type of statistical technique
- type of graph that can be constructed
- statistical measure that can be computed
(Mendoza, et al. Foundations of Statistical Analysis for the Health Sciences, 2009)
13. DESCRIPTIVE STATISTICS
Describe the characteristics of the members of one group
No attempt to compare or relate these to the characteristics of another group
Measures of central tendency and variation
14. MEASURES OF CENTRAL TENDENCY
Method of compressing a mass of numerical data for better comprehension and description of what it tends to portray
MEAN, MEDIAN, MODE – "typical" or average values which may be utilized to represent a series of observations
(Mendoza, et al. Foundations of Statistical Analysis for the Health Sciences, 2009)
15. MEASURES OF CENTRAL TENDENCY
A. MEAN (X̄) – arithmetic mean that represents a set of scores with a single number
Computed by dividing the sum of all scores by the number of scores
16. MEASURES OF CENTRAL TENDENCY
B. MEDIAN (Md) – 50th percentile
- Point above and below which half of the scores fall
- Better choice than the MEAN if there are extreme values

17. MEASURES OF CENTRAL TENDENCY
C. MODE (Mo) – most frequently occurring score in the distribution
18. MEASURES OF SPREAD OR DISPERSION
A. RANGE – difference between the highest and lowest scores plus 1
19. MEASURES OF SPREAD OR DISPERSION
B. VARIANCE – average of the squared deviations from the MEAN
Computing for VARIANCE:
1. Get the deviation score for each score by subtracting the mean from it
2. Square each resulting deviation
3. Get the sum of all squared deviations
4. Divide the result by the number of subjects for the population (N), or the number of subjects minus 1 (n − 1) for a sample
21. MEASURES OF SPREAD OR DISPERSION
C. STANDARD DEVIATION
- indicates how much scores are spread around the mean
- square root of the variance
22. Ex. Scores of 2 groups of students:
Grp 1: 46 60 65 65 70 80 90
Grp 2: 62 66 68 70 70 70 70

          GROUP 1           GROUP 2
MEAN      68                68
MEDIAN    65                70
MODE      65                70
RANGE     90-46+1 = 45      70-62+1 = 9
S.D.      14.1              3.06
23. Variance and standard deviation in the sample distribution of scores (Group 1)

SCORES    Deviation of score from X̄    Square of the deviation
46        46 − 68 = −22                484
60        60 − 68 = −8                 64
65        65 − 68 = −3                 9
65        65 − 68 = −3                 9
70        70 − 68 = 2                  4
80        80 − 68 = 12                 144
90        90 − 68 = 22                 484

VARIANCE = sum of squared deviations / (n − 1)
         = (484 + 64 + 9 + 9 + 4 + 144 + 484) / (7 − 1) = 1198 / 6 = 199.67
STD. DEVIATION = square root of variance = √199.67 = 14.1
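The descriptive measures above can be reproduced with Python's standard `statistics` module; the scores are the two groups from slide 22 (this sketch is an addition, not part of the original deck):

```python
import statistics

grp1 = [46, 60, 65, 65, 70, 80, 90]
grp2 = [62, 66, 68, 70, 70, 70, 70]

mean1 = statistics.mean(grp1)      # sum of all scores / number of scores
median1 = statistics.median(grp1)  # 50th percentile
mode1 = statistics.mode(grp1)      # most frequently occurring score
var1 = statistics.variance(grp1)   # sample variance: divides by n - 1
sd1 = statistics.stdev(grp1)       # square root of the variance
sd2 = statistics.stdev(grp2)

print(mean1, median1, mode1)          # the slide's values: 68, 65, 65
print(round(var1, 2), round(sd1, 1))  # 199.67 and 14.1
print(round(sd2, 2))                  # 3.06
```

Note that `statistics.variance` and `statistics.stdev` use the sample (n − 1) divisor, matching step 4 of the slide; `statistics.pvariance`/`pstdev` are the population (N) versions.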
24. TESTS
- Also used to make inferences
- PARAMETRIC tests – for interval and ratio variables, assuming that:
  - the sample was drawn from a normally distributed population
  - if two groups are analyzed, they have the same variance
25. TESTS COMPARING GROUPS
1. Tests to determine the difference between TWO groups
2. Tests to determine the difference among THREE or more groups
26. TESTS COMPARING GROUPS
1. Tests to determine the difference between TWO groups
a. T-test for independent groups
b. T-test for paired data
27. TESTS COMPARING GROUPS
1. Tests to determine the difference between TWO groups
a. T-test for independent groups
- detects statistically significant differences between means
- for static group comparison or randomized control group design (compare scores of 2 unmatched groups)
28. TESTS COMPARING GROUPS
a. T-test for independent groups
Ex. 2 groups of slow learners:
- Oral instruction group → mean post-instruction scores
- Videotape group → mean post-instruction scores
T-test for independent groups → which is more effective?
Dominguez (1985) "A comparative study of the achievement of slow learners taught by oral tutorials with those taught by self-instructional programmed videotapes."
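A minimal sketch of the independent-groups t-test computation (pooled variance), using made-up post-instruction scores rather than Dominguez's actual data:

```python
import math
import statistics

def t_independent(a, b):
    """Student's t for two independent groups, pooled-variance form."""
    n1, n2 = len(a), len(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)  # sample variances (n - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))  # standard error of the difference
    t = (statistics.mean(a) - statistics.mean(b)) / se
    df = n1 + n2 - 2
    return t, df

# Hypothetical scores for two unmatched groups of slow learners
oral = [5, 6, 7, 8, 9]
video = [2, 3, 4, 5, 6]
t, df = t_independent(oral, video)
print(round(t, 2), df)  # 3.0 8
```

The resulting t is then compared against the critical t value for the chosen significance level at that df.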
29. TESTS COMPARING GROUPS
1. Tests to determine the difference between TWO groups
b. T-test for paired data
- identify statistically significant changes in a single group
- or between matched groups
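The paired version works on the per-subject differences: divide the mean difference by its standard error. A sketch with hypothetical pre/post scores (not from the deck):

```python
import math
import statistics

def t_paired(before, after):
    """Paired t: mean of the per-subject differences over its standard error."""
    d = [b - a for a, b in zip(before, after)]  # change for each subject
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical pre-test and post-test scores for one group of 5 subjects
pre = [10, 12, 9, 14, 8]
post = [11, 14, 12, 18, 13]
t, df = t_paired(pre, post)
print(round(t, 2), df)  # 4.24 4
```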
30. TESTS COMPARING GROUPS
2. Tests to determine the difference among THREE or more groups
a. Univariate analysis of variance (ANOVA)
b. Analysis of covariance (ANCOVA)
c. Multivariate analysis of variance (MANOVA)
31. a. ANOVA – Univariate analysis of variance
- to determine significant difference among 3 or more group means (1 variable)
Ex. Posttest scores of students to compare effectiveness of 3 instructional strategies
32. ANOVA – Univariate analysis of variance
N = 168 2nd-yr med students + 23 2nd-yr physician assistant students, divided into 4 groups:
- Written materials (n = 47)
- Written + videotape (n = 46)
- Written + small group practice (n = 43)
- Written + videotape + small group practice (n = 55)
Students' knowledge and skills were assessed after instruction to determine any significant difference among the groups through ONE-WAY ANOVA.
"Teaching a screening musculoskeletal examination: A randomized control trial of different instructional methods." Lawry et al. (1999)
33. ANOVA – Univariate analysis of variance
- One-way ANOVA
- Two-way ANOVA
- 4 × 2 ANOVA
- Three-way ANOVA
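One-way ANOVA reduces to comparing between-group variance to within-group variance. A self-contained sketch with three hypothetical score groups (the data are illustrative, not from the Lawry study):

```python
def one_way_anova(*groups):
    """F = (between-group variance) / (within-group variance)."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Sum of squares between groups: each group mean vs the grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Sum of squares within groups: each score vs its own group mean
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ssb / df_between) / (ssw / df_within)
    return f, df_between, df_within

# Hypothetical posttest scores under 3 instructional strategies
f, df1, df2 = one_way_anova([1, 2, 3], [2, 3, 4], [3, 4, 5])
print(round(f, 2), df1, df2)  # 3.0 2 6
```

A large F (relative to the critical value at df1, df2) indicates at least one group mean differs from the others.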
34. b. ANCOVA – Analysis of covariance
- Used to control differences among groups that existed before the study
- Usually used in quasi-experimental designs
Ex. When pre-test means of groups are significantly different from each other, ANCOVA can be used to adjust pretest scores so they can be treated as identical.
35. c. MANOVA – Multivariate analysis of variance
- Groups are compared with respect to 2 or more dependent variables
36. TESTS TO DETERMINE THE RELATIONSHIP AMONG VARIABLES IN A GROUP
1. Pearson product moment correlation coefficient (interval / ratio variables)
2. Regression
37. TESTS TO DETERMINE THE RELATIONSHIP AMONG VARIABLES IN A GROUP
1. Pearson product moment correlation coefficient (interval / ratio variables)
- when there are 2 scores per subject
- study intends to determine how these scores are related
Ex. Survey of pharmacists to determine work patterns and whether other factors (age, gender, # years in work force) affected the work patterns (Knapp et al. 1992)
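Pearson's r is the covariance of the deviation scores divided by the product of their spreads. A sketch on two hypothetical paired score lists (variable names are illustrative only):

```python
import math

def pearson_r(x, y):
    """Pearson product moment correlation from deviation scores."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical pairs of scores: 2 scores per subject
years_in_workforce = [1, 2, 3, 4, 5]
workload_score = [2, 1, 4, 3, 5]
r = pearson_r(years_in_workforce, workload_score)
print(round(r, 2))  # 0.8
```

r ranges from −1 (perfect inverse relationship) through 0 (no linear relationship) to +1 (perfect direct relationship).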
38. TESTS TO DETERMINE THE RELATIONSHIP AMONG VARIABLES IN A GROUP
1. Pearson product moment correlation coefficient (interval / ratio variables)
2. Regression
- Simple regression – predicting one variable from another variable
- Multiple regression – predicting values of 1 variable on the basis of the values of 2 or more variables
39. TESTS TO DETERMINE THE RELATIONSHIP AMONG VARIABLES IN A GROUP
1. Pearson product moment correlation coefficient
2. Regression
Ex. Study to identify predictors of dental skill development – whether commonly examined fine motor ability tests (steadiness tester, mirror trace test) and maturational tests (hand length, index finger length, wrist width) were associated with early scaling and root-planing skills in 120 dental students (Wilson, Waldman and McDonald 1991)
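Simple regression fits a least-squares line so one variable can be predicted from another. A sketch on deliberately clean hypothetical data (not from the dental study):

```python
def simple_regression(x, y):
    """Least-squares slope and intercept for predicting y from x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
             sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical predictor scores and outcome scores
x = [1, 2, 3, 4]
y = [2, 4, 6, 8]
slope, intercept = simple_regression(x, y)
print(slope, intercept)       # 2.0 0.0
print(slope * 5 + intercept)  # predicted y for a new subject with x = 5: 10.0
```

Multiple regression generalizes this to two or more predictors, each with its own coefficient.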
40. COMMONLY USED PARAMETRIC TESTS

USES → APPROPRIATE TESTS
Determining the differences among groups:
- Between 2 related or matched groups → T-test for paired data
- Between 2 independent groups → T-test for independent groups
- Among 3 or more groups → ANOVA (1 dependent variable); ANCOVA (1 dependent variable; quasi-experimental design); MANOVA (2 or more dependent variables)
Determining the relationship among variables in a group → Pearson product moment correlation coefficient; Regression
41. NONPARAMETRIC TESTS
- for nominal and ordinal variables
- when underlying assumptions for parametric tests are not met
- for small sample sizes
42. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(1) McNemar change test
(2) Wilcoxon matched-pairs signed-ranks test
(3) Permutation test for paired replicates
43. NONPARAMETRIC TESTS
(4) Fisher exact test for 2 × 2 table
(5) Chi-square test (χ² test)
(6) Wilcoxon-Mann-Whitney test
(7) Robust rank-order test
(8) Kolmogorov-Smirnov two-sample test
(9) Permutation test for two independent samples
44. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(1) McNemar change test
- for 2 related/matched nominal variables
(Ex. Responses of a group of students on which of 2 types of instructional methods they prefer, asked before and after being exposed to such methods)
Observed frequencies of students' preferred instructional method:

Preferred method      Preferred method after exposure      Total
before exposure       Method A          Method B
Method A
Method B
Total
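The McNemar statistic uses only the two discordant cells of that 2 × 2 table (subjects who changed preference one way vs the other). A sketch with hypothetical counts, using the common continuity-corrected formula:

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square on the two discordant cells of a before/after
    2 x 2 table, with Yates' continuity correction."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts: 15 students switched A -> B, 5 switched B -> A
chi2 = mcnemar_chi2(15, 5)
print(round(chi2, 2))  # 4.05
# Compared against the chi-square critical value with 1 df (3.84 at alpha = .05),
# this would indicate a significant change in preference.
```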
45. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(2) Wilcoxon matched-pairs signed-ranks test
- for 2 related samples; ordinal data
- determines the direction of differences within pairs or related samples, and the relative magnitude of those differences
46. NONPARAMETRIC TESTS
(2) Wilcoxon matched-pairs signed-ranks test
Ex. To determine whether there is a significant difference in perceptions of graduates on their degree of preparedness in various aspects of training during their clinical fellowship, and degree of importance in clinical practice of those same aspects. (Atienza 2001)

Perceived degree of preparedness and importance of graduates:
Graduates     Perceived degree of preparedness     Perceived degree of importance     Difference
Graduate A
Graduate B
etc.
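The test ranks the absolute pair differences and sums the ranks by sign; the smaller sum is the statistic W. A sketch with hypothetical ratings for 5 graduates (the data are illustrative only):

```python
def wilcoxon_w(before, after):
    """Wilcoxon matched-pairs signed-ranks statistic W (smaller rank sum)."""
    d = [b - a for a, b in zip(before, after) if b != a]  # drop zero differences
    # Rank the absolute differences, averaging ranks across tied values
    abs_sorted = sorted(abs(x) for x in d)
    rank = {v: (2 * abs_sorted.index(v) + 1 + abs_sorted.count(v)) / 2
            for v in set(abs_sorted)}
    w_pos = sum(rank[abs(x)] for x in d if x > 0)
    w_neg = sum(rank[abs(x)] for x in d if x < 0)
    return min(w_pos, w_neg)

# Hypothetical preparedness vs importance ratings for 5 graduates
prepared = [10, 12, 9, 14, 8]
important = [12, 15, 8, 21, 13]
w = wilcoxon_w(prepared, important)
print(w)  # 1.0
```

A small W (below the tabled critical value for n pairs) means the differences lean heavily in one direction.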
47. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(3) Permutation test for paired replicates
- one of the most powerful tests for paired observations
- variables on an interval scale
- small sample size
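For paired replicates, the permutation test enumerates every possible sign assignment of the pair differences and asks how often a result at least as extreme as the observed one occurs. A sketch with hypothetical differences (feasible exactly because n is small):

```python
from itertools import product

def paired_permutation_p(diffs):
    """Exact two-sided p: fraction of all sign assignments whose total
    difference is at least as extreme as the observed one."""
    observed = abs(sum(diffs))
    count = 0
    for signs in product([1, -1], repeat=len(diffs)):
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed:
            count += 1
    return count / 2 ** len(diffs)

# Hypothetical paired differences (interval scale, small n)
p = paired_permutation_p([2, 3, 1, 7, 5])
print(p)  # 0.0625
```

With 5 pairs there are 2^5 = 32 sign patterns; here only the two all-same-sign patterns are as extreme as the data, giving p = 2/32.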
48. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(4) Fisher exact test for 2 × 2 table
- nominal or ordinal data
- two independent samples
- sample size is small (n < 20)
- subjects fall in one of two classes
Ex. Number of students who passed and failed:

Variable     Group I     Group II     Combined
Pass
Fail
49. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(5) Chi-square test (χ² test)
- nominal or ordinal data
- to determine the difference between 2 independent groups (n > 20; each of the expected frequencies is 5 or more)
- for examining the differences among 3 or more groups
- for testing association between 2 or more categorical variables
50. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(5) Chi-square test (χ² test)
Ex. Cross-sectional survey of 545 doctors to examine young physicians' views on professional issues (professional regulation, multidisciplinary teamwork, priority setting, clinical autonomy, private practice). These variables were tested against demographic variables like sex. Specialty choice revealed marked sex bias.
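The chi-square statistic sums (observed − expected)² / expected over every cell, where each expected count comes from the row and column totals. A sketch on a hypothetical 2 × 2 table (the counts are invented, not from the survey):

```python
def chi_square(table):
    """Chi-square statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # under independence
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical 2 x 2 table: specialty choice (rows) by sex (columns)
chi2 = chi_square([[30, 10],
                   [20, 40]])
print(round(chi2, 2))  # 16.67
```

Degrees of freedom are (rows − 1) × (columns − 1); the statistic is compared against the chi-square critical value at that df.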
51. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(6) Wilcoxon-Mann-Whitney test
- one of the most powerful tests for data on an ordinal scale
- alternative to the t-test
- used to test the difference between 2 independent samples from the same population, or from populations with the same/equal variances
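The Mann-Whitney U statistic ranks the pooled observations and works from the rank sum of one group. A sketch with hypothetical ordinal scores (no tied values, to keep the ranking simple):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U from rank sums (assumes no tied values)."""
    combined = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(combined)}  # ranks 1..N
    r_a = sum(rank[v] for v in a)                      # rank sum of group a
    u_a = r_a - len(a) * (len(a) + 1) / 2
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)

# Hypothetical ordinal scores for two independent groups
u = mann_whitney_u([1, 3, 5], [2, 4, 6])
print(u)  # 3.0
```

A small U (below the tabled critical value for the two sample sizes) indicates the two groups' distributions differ.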
52. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(7) Robust rank-order test
- does not assume that the 2 independent samples come from the same population
- does not require equal variances for the 2 populations from which the samples were taken
53. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(8) Kolmogorov-Smirnov two-sample test
- tests whether 2 independent samples were drawn from the same population, or populations with the same distributions
- powerful for small samples
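The K-S statistic D is the largest vertical gap between the two samples' empirical cumulative distribution functions. A sketch on two hypothetical small samples:

```python
def ks_statistic(a, b):
    """Largest vertical distance D between the two empirical CDFs."""
    points = sorted(set(a) | set(b))
    d = 0.0
    for x in points:
        f_a = sum(v <= x for v in a) / len(a)  # empirical CDF of sample a at x
        f_b = sum(v <= x for v in b) / len(b)  # empirical CDF of sample b at x
        d = max(d, abs(f_a - f_b))
    return d

# Hypothetical small samples from two groups (completely separated here)
d = ks_statistic([1, 2, 3], [4, 5, 6])
print(d)  # 1.0
```

D ranges from 0 (identical empirical distributions) to 1 (completely non-overlapping samples, as in this example).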
54. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
a. Tests to determine the difference between TWO groups
(9) Permutation test for two independent samples
- powerful for testing the difference between the means of two independent samples when their sample sizes are small
- requires interval measurement
- no special assumptions about the distributions of the populations
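For two independent samples, the permutation test pools the observations and enumerates every way of splitting them into groups of the original sizes. A sketch with hypothetical small samples, enumerated exhaustively:

```python
from itertools import combinations

def permutation_p(a, b):
    """Exact two-sided p: fraction of all regroupings whose mean
    difference is at least as extreme as the observed one."""
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    count, total = 0, 0
    for idx in combinations(range(len(pooled)), len(a)):
        new_a = [pooled[i] for i in idx]
        new_b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        if abs(sum(new_a) / len(new_a) - sum(new_b) / len(new_b)) >= observed:
            count += 1
        total += 1
    return count / total

# Hypothetical small interval-scale samples
p = permutation_p([5, 6, 7], [1, 2, 3])
print(round(p, 3))  # 0.1
```

With 3 + 3 observations there are C(6, 3) = 20 regroupings; only the observed split and its mirror image are as extreme, giving p = 2/20. For larger samples, a random subset of permutations is typically used instead of full enumeration.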
55. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
b. Tests to determine the difference between THREE or more groups
(1) Cochran Q test
(2) Friedman two-way analysis of variance by ranks
(3) Kruskal-Wallis one-way analysis of variance
56. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
b. Tests to determine the difference between THREE or more groups
(1) Cochran Q test
- extension of the McNemar test, used for 2 or more related samples (nominal variables)
- used to analyze responses to a test or questionnaire
57. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
b. Tests to determine the difference between THREE or more groups
(2) Friedman two-way analysis of variance by ranks
- for ordinal data
- to test if a number of repeated measures or matched groups come from the same population, or populations with the same median
58. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
b. Tests to determine the difference between THREE or more groups
(2) Friedman two-way analysis of variance by ranks
Three groups of subjects in four conditions:

Group       Conditions / Variables
            Variable A     Variable B     Variable C     Variable D
Group I
Group II
Group III
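The Friedman statistic ranks each subject's scores across the conditions and compares the per-condition rank sums. A sketch on hypothetical repeated-measures data (3 subjects, 3 conditions, no ties within a row):

```python
def friedman_chi2(data):
    """Friedman statistic: rank each subject's scores across the k
    conditions, then compare the column rank sums."""
    n, k = len(data), len(data[0])
    col_rank_sums = [0.0] * k
    for row in data:
        order = sorted(row)
        for j, value in enumerate(row):
            col_rank_sums[j] += order.index(value) + 1  # rank within the row
    return (12 / (n * k * (k + 1)) * sum(r ** 2 for r in col_rank_sums)
            - 3 * n * (k + 1))

# Hypothetical scores: 3 subjects (rows) measured under 3 conditions (columns)
scores = [[10, 20, 30],
          [15, 25, 35],
          [12, 22, 32]]
chi2 = friedman_chi2(scores)
print(chi2)  # 6.0
```

Here every subject ranks the conditions the same way, so the statistic is as large as it can be for n = 3, k = 3.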
59. NONPARAMETRIC TESTS
1. TESTS TO COMPARE GROUPS
b. Tests to determine the difference between THREE or more groups
(3) Kruskal-Wallis one-way analysis of variance
- for testing 3 or more independent groups; ordinal data
Ex. Testing for significant differences of socioeconomic scores or attitudinal scores, based on specified criteria, of students from different regions in the country
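Kruskal-Wallis is the rank-based counterpart of one-way ANOVA: pool and rank all observations, then compare the groups' rank sums. A sketch with hypothetical scores from three groups (no tied values):

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H from rank sums (assumes no tied values)."""
    combined = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(combined)}  # ranks 1..N
    n = len(combined)
    h = 12 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups) - 3 * (n + 1)
    return h

# Hypothetical attitudinal scores from students in 3 regions
h = kruskal_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
print(round(h, 1))  # 7.2
```

For moderate group sizes, H is referred to the chi-square distribution with (number of groups − 1) degrees of freedom.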
60. NONPARAMETRIC TESTS
2. MEASURES OF ASSOCIATION
(a) Pearson product moment correlation coefficient (interval, ratio)
(b) Phi coefficient (nominal)
(c) Kappa coefficient of agreement (nominal)
(d) Spearman rank-order correlation coefficient (ordinal)
(e) Kendall coefficient (ordinal)
(f) Gamma statistic (ordinal)
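Spearman's rho is simply the Pearson correlation computed on the ranks of the data, which is why it suits ordinal variables. A sketch with hypothetical scores, including one tie handled by rank averaging:

```python
import math

def ranks(values):
    """Ranks 1..n, averaging ranks across tied values."""
    sorted_vals = sorted(values)
    return [(2 * sorted_vals.index(v) + 1 + sorted_vals.count(v)) / 2
            for v in values]

def spearman_rho(x, y):
    """Spearman rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

# Hypothetical ordinal scores with one tied pair in y
rho = spearman_rho([1, 2, 3, 4, 5], [5, 6, 7, 8, 7])
print(round(rho, 2))  # 0.82
```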
61. USES BY LEVEL OF MEASUREMENT

Determining the difference among groups:
- Between 2 related/matched groups – Nominal: McNemar change test; Ordinal: Wilcoxon signed-ranks test; Interval: Permutation test for paired replicates
- Between 2 independent groups – Nominal: Fisher exact test for 2 × 2 table, Chi-square test; Ordinal: Wilcoxon-Mann-Whitney test, Robust rank-order test, Kolmogorov-Smirnov two-sample test; Interval: Permutation test for 2 independent samples
- Among 3 or more related groups – Nominal: Cochran Q test; Ordinal: Friedman 2-way analysis of variance by ranks
- Among 3 or more independent groups – Nominal: Chi-square test; Ordinal: Kruskal-Wallis one-way analysis of variance

Determining association – Nominal: Cramer coefficient, Phi coefficient, Kappa coefficient of agreement; Ordinal: Spearman rank-order correlation coefficient, Gamma statistic
So we have already collected our data. But all these are just raw information. For it to be of any use to us, we have to apply analytic methods to the data. What is our main consideration? Of course, it is the OBJECTIVES of our study.
Have respondents answered all questions? Are there any inconsistencies? Verify identification numbers. Systematize your coding system.
CODING – assigning numerical values to research variables. After coding you are ready to enter the RAW data into your computer software. Encoding research data into a computer facilitates computation of statistical tests (through selected software).
This is an example of data encoding using the 2004 study by Salvacion on the stress profile of students in the UP College of Dentistry. The researcher used questionnaires, tests, and inventories. The questionnaire asked 149 students for basic demographic data, like ID #, year level, sex, civil status and residence. These were some of the variables the researcher hypothesized to be related to the stress profile of the students. Since ID #s and year level are real numbers, they could be entered into EXCEL without any coding. Sex can be coded as 1 for male, 2 for female; civil status is coded as 1 for single, 2 for married, etc. These numbers can now be entered in the Excel spreadsheet. So let's take the entry for the respondent with ID # 1, who is a 3rd-yr student, male, single, and lives in a dormitory within Ermita.
QUANTITATIVE VARIABLES – categories can be measured and ordered according to quantity/amount; values can be expressed NUMERICALLY (discrete – whole numbers; or continuous – fractions and decimals). QUALITATIVE VARIABLES – categories are used as labels to distinguish one group from another (not a basis for saying that one group is greater or less, higher or lower, better or worse than another).
It is important to distinguish the type of variable one is dealing with. It is a major determinant of the type of statistical technique applied to the data. It also determines the type of graph that can be constructed, as well as the statistical measure that can be computed from a given set of data.
In the next slide we will review the formula for getting the variance for a population and for a sample.
MEAN (X̄) – computed by dividing the sum of all scores by the number of scores. MEDIAN (Md) – 50th percentile; point above and below which half of the scores fall. MODE (Mo) – most frequently occurring score in the distribution. RANGE – difference between the highest and lowest scores plus 1. STANDARD DEVIATION – indicates how much scores are spread around the mean; square root of the variance.
n − 1, because we are using a sample, not a population.
Normally distributed population
b. T-test for paired data – identifies statistically significant changes in a single group (e.g. pre-test and post-test), or between matched groups (e.g. pre-test scores of matched members of 2 groups, experimental and comparison).
- to determine significant difference among 3 or more group means (1 variable) Ex. Posttest scores of students to compare effectiveness of 3 instructional strategies
Randomized post-test-only control design. N = 168 2nd-yr med students + 23 2nd-yr physician assistant students, randomly divided into 4 groups given the different instructional methods. Students' knowledge and skills were assessed after instruction to determine any significant difference among the groups through ONE-WAY ANOVA.
Ex. When pre-test means of groups are significantly different from each other, ANCOVA can be used to adjust pretest scores so they can be treated as identical.
Ex. Survey of pharmacists to determine work patterns and whether other factors (age, gender, # years in work force) affected the work patterns (Knapp et al. 1992)
Quasi-experimental design
For nominal and ordinal variables. Applicable when underlying assumptions for parametric tests are not met. PARAMETRIC tests – for interval and ratio variables, assuming that: the sample was drawn from a normally distributed population; if two groups are to be analyzed, they have the same variance. Useful for small sample sizes.