This is my Ph.D. viva voce PowerPoint. My thesis title is "Effectiveness of E-Learning Modules in Teaching Mathematics among Secondary Teacher Education Level".
Factor analysis was conducted on 8 variables related to reasons for choosing a university. The analysis revealed 2 key factors:
1) Academic Quality captured variables like quality of education, experts/labs, and international recognition.
2) Campus Life captured variables like campus/security, years operating, graduates employed, and accommodations.
The analysis showed that the reasons for university choice could be understood in terms of these two underlying dimensions or factors of Academic Quality and Campus Life experience. This provides useful insight into how to segment and target prospective students.
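The two-factor structure described above can be sketched on simulated data. Everything below (the item loadings, sample size, and noise level) is invented for illustration and is not taken from the thesis; the sketch just shows how a Kaiser-criterion eigenvalue screen recovers two underlying dimensions from eight correlated items:

```python
import numpy as np

# Hedged sketch of factor extraction via eigenvalues of the correlation
# matrix (Kaiser criterion). The eight "survey items" are simulated:
# four load on a latent "Academic Quality" factor, four on "Campus Life".
rng = np.random.default_rng(0)
n = 300
academic = rng.normal(size=n)   # latent factor 1
campus = rng.normal(size=n)     # latent factor 2

def item(latent):
    # each observed item = latent factor + measurement noise
    return latent + rng.normal(scale=0.5, size=n)

X = np.column_stack([item(academic), item(academic), item(academic), item(academic),
                     item(campus), item(campus), item(campus), item(campus)])

R = np.corrcoef(X, rowvar=False)              # 8 x 8 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
retained = int((eigvals > 1.0).sum())         # Kaiser: keep eigenvalues > 1
print(retained)  # two dominant factors emerge
```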
- A hypothesis is a proposed explanation for a phenomenon that can be tested. A research hypothesis makes a specific, testable prediction about the relationship between variables in a scientific study.
- Hypotheses can be simple, predicting the relationship between one independent and one dependent variable, or complex, predicting relationships between multiple variables. They can also be directional, predicting a direction of relationship, or non-directional.
- The null hypothesis proposes no relationship or difference between variables and is tested against the alternative or research hypothesis, which proposes an expected relationship. Statistical analysis determines whether to reject or fail to reject the null hypothesis.
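The reject / fail-to-reject decision can be illustrated with an independent-samples t-test from scipy.stats; the scores below are invented purely for this sketch:

```python
from scipy import stats

# Hypothetical scores for two groups; H0: the population means are equal.
group_a = [72, 75, 78, 80, 74, 77, 79, 76]
group_b = [68, 70, 65, 72, 69, 67, 71, 66]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
alpha = 0.05
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(decision)
```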
This document discusses various statistical concepts including descriptive statistics, inferential statistics, univariate analysis, bivariate analysis, and multivariate analysis. It provides examples of common statistical tests like t-tests, correlation analysis, ANOVA, and discusses how to identify independent and dependent variables and choose appropriate parametric or non-parametric tests based on the nature of the variables and data. Key topics covered include descriptive versus inferential analysis, different types of correlations, using ANOVA to compare multiple groups, and how to determine assumptions and select the correct statistical test for different research questions and study designs.
This document provides an overview of regression analysis, including:
- Regression analysis estimates relationships between dependent and independent variables and can be used for prediction and assessing variable influence.
- It mathematically determines which factors impact an outcome and which can be ignored.
- Simple and multiple linear regression fit linear equations to relate one or more explanatory variables to an outcome variable.
- Assumptions include linearity, homoscedasticity, independence, and normality.
- Regression analysis is widely used for forecasting; on its own, however, it establishes association rather than causation, so causal claims require additional design assumptions.
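The points above can be made concrete with a minimal least-squares sketch of simple linear regression; x and y are invented, generated from y = 50 + 2x plus small residual noise:

```python
import numpy as np

# Minimal sketch of simple linear regression via ordinary least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 50 + 2 * x + np.array([0.3, -0.2, 0.1, -0.4, 0.2, 0.0])

A = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
(intercept, slope), *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - (intercept + slope * x)          # inspect these to check assumptions
```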
This document defines and provides examples of several non-parametric tests: Mann-Whitney U test, Kruskal-Wallis H test, Mood's median test, and Friedman test. The Mann-Whitney U test compares two independent groups on a continuous or ordinal dependent variable. The Kruskal-Wallis H test compares three or more independent groups. Mood's median test compares the medians of two independent groups. The Friedman test compares three or more related groups on a continuous or ordinal dependent variable. Examples of using each test are also provided.
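Two of the tests named above are available directly in scipy.stats; the ordinal-like ratings below are made up for illustration:

```python
from scipy import stats

# Invented ordinal-like ratings for three independent groups.
module_a = [3, 4, 4, 5, 3, 4]
module_b = [2, 2, 3, 1, 2, 3]
module_c = [4, 5, 5, 4, 5, 4]

# Mann-Whitney U: compares two independent groups
u_stat, p_two = stats.mannwhitneyu(module_a, module_b, alternative="two-sided")

# Kruskal-Wallis H: compares three or more independent groups
h_stat, p_three = stats.kruskal(module_a, module_b, module_c)
```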
DATA PROCESSING AND STATISTICAL TREATMENT - Adolf Odani
This document discusses various statistical concepts and techniques for data processing and analysis. It covers levels of measurement, descriptive statistics like frequency counts and percentages, averages, spreads, and inferential statistics including parametric tests like z-tests, t-tests, F-tests and non-parametric tests like chi-square. Correlation techniques such as Pearson product-moment correlation coefficient and Spearman rank-order correlation coefficient are also summarized. Common statistical tests for comparison including t-tests, F-tests, ANOVA, ANCOVA and chi-square are briefly explained.
Commonly Used Statistics in Survey Research - Pat Barlow
This is a version of our "commonly used statistics" presentation that has been modified to address the commonly used statistics in survey research and analysis. It is intended to give an *overview* of the various uses of these tests as they apply to survey research questions rather than the point-and-click calculations involved in running the statistics.
The document discusses various topics related to classroom testing including high-stakes testing, criterion vs norm-referenced tests, teacher responsibilities under IDEA, instructional objectives, test blueprints, essay scoring, test reliability and validity, and assigning grades. It provides details on how these different elements are used to develop effective classroom tests and assessments.
This document discusses parametric and nonparametric statistical tests. Parametric tests like the t-test and ANOVA assume a normal distribution of data and compare population means. Nonparametric tests do not assume a normal distribution and can be used when sample sizes are small or distributions are unknown. Specific parametric tests covered include the t-test for comparing two groups, one-way ANOVA for comparing three or more groups on one factor, and two-way ANOVA for examining two factors. Examples of how and when to use these various tests are provided.
This presentation is related to tools of educational research. The slides deal with various tools of educational research such as rating scales, opinionnaires, checklists, aptitude tests, inventories, observation, interviews, and schedules. They also describe item analysis, the steps involved in item analysis, and online survey tools.
Statistics is the science of collecting, organizing, presenting, analyzing, and interpreting numerical data. It helps make better decisions by extracting information from data. There are two main types: descriptive statistics which describe data through methods like averages and distributions, and inferential statistics which make estimates, predictions, or generalizations about a population based on a sample. Key concepts in statistics include populations, samples, parameters which describe populations, and statistics which describe samples. The level of measurement of data, such as nominal, ordinal, interval, or ratio, determines what calculations and tests can be done.
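The descriptive/inferential split described above can be shown in a few lines; the sample scores are invented, and the confidence interval is the inferential step that generalizes from sample to population:

```python
import numpy as np
from scipy import stats

# Invented sample of 10 scores.
sample = np.array([61, 74, 68, 80, 72, 65, 77, 70, 69, 75])

# Descriptive: summarize the sample itself.
mean = sample.mean()
sd = sample.std(ddof=1)

# Inferential: a 95% confidence interval for the *population* mean.
sem = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
```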
This document provides an overview of various statistical techniques for analyzing data, including descriptive statistics, inferential statistics, and different types of tests. It discusses scales of measurement, variables, and statistical methods like t-tests, ANOVA, ANCOVA, MANOVA, and regression. It also covers topics like degrees of freedom, interpretation of results based on table values and p-values, and use of SPSS for conducting analyses like independent and paired t-tests, one-way ANOVA, and post-hoc tests. The document aims to define key statistical concepts and summarize different analytical procedures for working with univariate, bivariate and multivariate data.
This document provides an overview of statistical tests of significance used to analyze data and determine whether observed differences could reasonably be due to chance. It defines key terms like population, sample, parameters, statistics, and hypotheses. It then describes several common tests including z-tests, t-tests, F-tests, chi-square tests, and ANOVA. For each test, it outlines the assumptions, calculation steps, and how to interpret the results to evaluate the null hypothesis. The goal of these tests is to determine if an observed difference is statistically significant or could reasonably be expected due to random chance alone.
This document discusses parametric and non-parametric statistical tests. It begins by defining different types of data and the standard normal distribution curve. It then covers hypothesis testing, including the different types of errors. Both parametric and non-parametric tests are examined. Parametric tests discussed include z-tests, t-tests, and ANOVA, while non-parametric tests include chi-square, sign tests, McNemar's test, and Fisher's exact test. Examples are provided to illustrate several of the tests.
Deciphering the dilemma of parametric and nonparametric tests - Ramachandra Barik
This document discusses the differences between parametric and nonparametric statistical tests and provides guidance on selecting the appropriate test. Parametric tests make assumptions about the population distribution, while nonparametric tests make fewer assumptions. The key factors in deciding which test to use are the scale of measurement, population distribution, homogeneity of variances, and independence of samples. Although nonparametric tests are more flexible, parametric tests often have more statistical power. The document provides examples and guidelines to help researchers select the right test for their data and research questions.
The document discusses different types of variables in experimental research:
- Independent variable: Factor manipulated by researcher to determine its effect
- Dependent variable: Factor observed and measured to determine effect of independent variable
- Moderator variable: Factor that modifies relationship between independent and dependent variables
- Control variable: Factors controlled by researcher to neutralize their effects
- Intervening variable: Factor that theoretically affects phenomena but cannot be directly observed
It also discusses data types, central tendency measures, data variability measures, and statistical techniques like correlation analysis, t-tests, ANOVA that are used for quantitative analysis.
Choosing the appropriate statistical test (Hippokratia journal, 2019) - Vaggelis Vergoulas
This document provides a step-by-step guide for choosing the appropriate statistical test for data analysis. It outlines 7 key steps: 1) determining if the analysis is univariate or multivariable, 2) identifying if the study examines differences or correlations, 3) determining if the data is paired or independent, 4) characterizing the type of outcome variable, 5) assessing the normality of distribution for continuous variables, 6) identifying the number of groups for independent variables, and 7) selecting valid statistical tests that match the characteristics identified in the previous steps, such as t-tests, ANOVA, regression analyses. Examples of applying this process are provided.
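A few of the steps above can be condensed into code. The helper below is a simplified, hypothetical sketch, not an exhaustive decision tree, and the function name and category labels are invented for this example:

```python
# Hypothetical test-selection helper (simplified sketch of the decision steps).
def suggest_test(outcome, normal, paired, groups):
    """outcome: 'continuous' or 'categorical'; groups: number of groups compared."""
    if outcome == "categorical":
        return "chi-square test"
    if normal:  # continuous outcome, roughly normal distribution
        if groups == 2:
            return "paired t-test" if paired else "independent t-test"
        return "repeated-measures ANOVA" if paired else "one-way ANOVA"
    # continuous outcome, normality not assumed
    if groups == 2:
        return "Wilcoxon signed-rank" if paired else "Mann-Whitney U"
    return "Friedman test" if paired else "Kruskal-Wallis H"

print(suggest_test("continuous", normal=True, paired=False, groups=2))
# → independent t-test
```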
This document provides an overview of analysis of variance (ANOVA) techniques, including one-way and two-way ANOVA. It defines key terms like factors, interactions, F distribution, and multiple comparison tests. For one-way ANOVA, it explains how to test if three or more population means are equal. For two-way ANOVA, it notes you must first test for interactions between two factors before testing their individual effects. The Tukey test is introduced for identifying specifically which group means differ following rejection of a one-way ANOVA null hypothesis.
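The one-way case can be sketched with scipy.stats; the three groups of scores are invented, and the H0 under test is that all three population means are equal:

```python
from scipy import stats

# Invented scores for three hypothetical teaching methods.
method_1 = [70, 72, 68, 75, 71]
method_2 = [80, 82, 78, 85, 81]
method_3 = [69, 71, 70, 68, 72]

f_stat, p_value = stats.f_oneway(method_1, method_2, method_3)
# If p < 0.05, reject H0 that all means are equal; a post-hoc test such as
# Tukey's HSD then identifies which specific pairs of means differ.
```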
This document provides guidance on choosing appropriate statistical tests based on characteristics of the data and research questions. It outlines initial questions to consider, such as the number of samples, whether the data is parametric or non-parametric, and the number and independence of groups. Tables show which tests are suited for different data scales (nominal, ordinal, interval/ratio) and sample configurations (one sample, two independent samples, two related samples, more than two samples). The assumptions of various statistical tests like t-tests, ANOVA, chi-square, and correlation analyses are also reviewed.
This document provides an overview of non-parametric statistics. It begins by explaining that non-parametric statistics do not specify conditions about a population's parameters and can be used when distributions are not normal or variables are measured nominally or ordinally. Some common non-parametric tests are then described for comparing independent groups (like the Mann-Whitney U test) or dependent groups (like the Wilcoxon matched-pairs test). The document concludes by contrasting parametric and non-parametric tests, noting that non-parametric tests are less powerful but also have fewer assumptions.
This document provides an overview of parametric and nonparametric statistical methods. It defines key concepts like standard error, degrees of freedom, critical values, and one-tailed versus two-tailed hypotheses. Common parametric tests discussed include t-tests, ANOVA, ANCOVA, and MANOVA. Nonparametric tests covered are chi-square, Mann-Whitney U, Kruskal-Wallis, and Friedman. The document explains when to use parametric versus nonparametric methods and how measures like effect size can quantify the strength of relationships found.
This document provides an overview of parametric statistical tests, including the z-test, t-tests, chi-square test, F-test, and Bartlett's test. It discusses the history and development of the Student's t-test, including its creation by William Gosset under the pseudonym "Student." The t-test is used to compare means between two samples or between a sample and a theoretical population. The document outlines the assumptions, calculations, and interpretations of one-sample, unpaired, and paired t-tests.
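The one-sample and paired variants mentioned above look like this in scipy.stats; the benchmark of 70 and all scores are invented for illustration:

```python
from scipy import stats

# One-sample t-test: does a hypothetical class mean differ from a benchmark of 70?
scores = [72, 75, 69, 78, 74, 71, 76]
t1, p1 = stats.ttest_1samp(scores, popmean=70)

# Paired t-test: before/after scores for the same five students.
before = [60, 62, 58, 65, 61]
after = [66, 67, 64, 70, 65]
t2, p2 = stats.ttest_rel(before, after)
```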
Types of variables in research: dependent and independent, moderator, quantitative and qualitative, continuous and discontinuous, demographic, extraneous, confounding, intervening, and control.
This presentation describes the one-sample t-test, independent-samples t-test, and paired-samples t-test, and also covers the procedure for running these t-tests in SPSS.
This document discusses various statistical techniques used for inferential statistics, including parametric and non-parametric techniques. Parametric techniques make assumptions about the population and can determine relationships, while non-parametric techniques make few assumptions and are useful for nominal and ordinal data. Commonly used parametric tests are t-tests, ANOVA, MANOVA, and correlation analysis. Non-parametric tests mentioned include Chi-square, Wilcoxon, and Friedman tests. Examples are provided to illustrate the appropriate uses of each technique.
This document discusses different types of statistical analysis techniques. It begins by defining descriptive analysis as studying distributions of one variable and bivariate/multivariate analysis as studying relationships between two or more variables. It then discusses various types of statistical analyses including correlation analysis, causal analysis, multiple regression analysis, multiple discriminant analysis, multivariate ANOVA, and canonical analysis. It also covers inferential analysis, characteristics and importance of statistical methods, assumptions of parametric tests, examples of parametric and non-parametric tests, and provides details on the chi-square test.
Methods of Statistical Analysis & Interpretation of Data.pptx - heencomm
The document discusses various statistical analysis techniques for making sense of numerical data, including descriptive statistics like measures of central tendency and dispersion to describe basic features of data, and inferential statistics to make predictions about a larger population based on a sample. Common inferential techniques covered are correlation, regression analysis, analysis of variance, and hypothesis testing to compare data against assumptions. The goal of these statistical methods is to derive meaningful insights from research data.
This document provides an introduction to biostatistics and key concepts. It defines biostatistics as the development and application of statistical techniques to scientific research relating to human life and health. Some key terms discussed include:
- Population, which is the totality of individuals of interest
- Sample, which is a subset of a population
- Variables, which can be qualitative (non-numerical) or quantitative (numerical)
- Levels of measurement for variables, including nominal, ordinal, interval, and ratio scales
- Descriptive methods for qualitative data, including frequency distributions
Biostatistics plays an important role in modern medicine, including determining disease burden, finding new drug treatments, planning resource allocation, and measuring
Parametric and non-parametric statistical tests are used to analyze data and test hypotheses. Parametric tests assume the data is normally distributed, while non-parametric tests do not. Common parametric tests include t-tests, ANOVA, and correlation tests. Common non-parametric tests include the Wilcoxon rank-sum test, Kruskal-Wallis test, chi-square test, Friedman test, and Spearman's rank correlation. Choosing the appropriate test depends on the research question, type of data, and whether the assumptions of parametric tests are met.
When to Use What Statistical Test for Data Analysis (modified).pptx
This document discusses choosing the appropriate statistical test for data analysis. It begins by defining key terminology like independent and dependent variables. It then discusses the different types of variables, including quantitative, categorical, and their subtypes. Hypothesis testing and its key steps are explained. The document outlines assumptions that statistical tests make and categorizes common parametric and non-parametric tests. It provides guidance on choosing a test based on the research question, data structure, variable type, and whether the data meets necessary assumptions. Specific statistical tests are matched to questions about differences between groups, association between variables, and agreement between assessment techniques.
This document provides an overview of hypothesis testing and choosing the appropriate statistical test. It discusses types of data, research questions, and common statistical tests such as t-tests, ANOVA, regression, and their applications. The key steps in hypothesis testing are to determine the null hypothesis, state it, choose a statistical test, and use the results to either support or reject the null hypothesis. Resources for determining the right statistical test for different study designs are also provided.
1. The document discusses quantitative research methods, including comparing groups, examining relationships between variables, different types of data and levels of measurement, sampling techniques, and common statistical tools.
2. Key statistical tools covered include t-tests, ANOVA, correlation analysis, chi-square tests, and non-parametric equivalents for comparing groups and examining relationships.
3. The purpose of quantitative research is to systematically investigate phenomena through collecting and analyzing numerical data.
ANOVA STATISTICAL ANALYSIS USING SPSS AND ITS IMPACT IN SOCIETY
This document discusses various statistical analysis techniques used in SPSS, including ANOVA, MANOVA, and ANCOVA. It defines one-way and two-way ANOVA as comparing mean differences between three or more groups with a single continuous dependent variable. One-way ANOVA compares a single factor while two-way compares two factors. MANOVA extends ANOVA to assess the effect of one or more independent variables on two or more dependent variables. ANCOVA is similar to ANOVA but includes a continuous covariate. The document provides examples and outlines of how to apply these techniques.
Chapter 13: Data Analysis, Inferential Methods and Analysis of Time Series
This document discusses inferential statistics and time series analysis. It defines inferential statistics as ways to generalize statistics from a sample to a larger population. Common inferential methods include correlation, linear regression, ANOVA, and time series analysis. Correlation measures relationships between variables while regression predicts outcomes. ANOVA compares group means. Time series analysis models trends, seasonality, and irregular patterns over time.
Evaluation Unit 4
Statistics from the Viewpoint of Evaluation
Unit 4 Syllabus-
4.2.1- Measuring Scales- Meaning and Statistical Use
4.2.2- Conversion and interpretation of Test Score
4.2.3- Normal Probability Curve
4.2.4- Central Tendency and its importance in Evaluation.
4.2.5- Dimensions of Deviation
Unit 4 is all about statistics.
Statistics is the study of the collection, analysis, interpretation, presentation, and organization of data.
In other words, it is a mathematical discipline for collecting and summarizing data; it can also be described as a branch of applied mathematics.
Simply defined, statistics is the study and manipulation of data. As discussed in the introduction, statistics deals with the analysis and computation of numerical data.
Projective methods of Evaluation through Statistics-
"Measurement is a process of assigning numbers to individuals or their characteristics according to specific rules" (Eble and Frisbie, 1991, p. 25).
This is a very common and simple definition of the term 'measurement'.
You can say that measurement is a quantitative description of one's performance. Gay (1991) further simplified the term as a process of quantifying the degree to which someone or something possesses a given trait, i.e., a quality, characteristic, or feature.
Measurement assigns a numeral to quantify certain aspects of human and non-human beings.
It is a numerical description of objects, traits, attributes, characteristics, or behaviours.
Measurement is not an end in itself but definitely a means to evaluate the abilities of a person in education and other fields as well.
Measurement Scale-
Whenever we measure anything, we assign a numerical value; the scheme by which these values are assigned is known as a scale of measurement. A scale is a system or scheme for assigning values or scores to the characteristics being measured (Sattler, 1992). For example, to measure some aspect of a human being we assign a numeral to quantify it; if we have similar measurements for other members of the group we can also order them, and with equal-interval scores we can group them as well.
Psychologist Stanley Stevens developed the four common scales of measurement:
Nominal
Ordinal
Interval &
Ratio
Each scale of measurement has properties that determine how to properly analyze the data.
Nominal scale-
In the nominal scale, a numeral or label is assigned to characterize an attribute of a person or thing.
It provides no order for describing the attribute as high-low, more-less, big-small, superior-inferior, etc.
In the nominal scale, assigning a numeral is purely an individual matter; it has nothing to do with group scores or group measurement.
Statistics such as frequencies, percentages, mode, and chi-square tests are used in nominal measurement.
Examples include gender (male, female), colors (red, blue, green), or types of fruit (apple, banana, orange).
Ordinal scale-
Ordinal scale is synonymous with ranking or grading.
Here are the steps to solve this hypothesis testing problem:
1. State the null and alternative hypotheses:
H0: There is no significant difference between the means under stress and no stress conditions.
H1: There is a significant difference between the means under stress and no stress conditions.
2. Choose the level of significance: Given as α = 0.01
3. Select the appropriate statistical test: Since this involves comparing the means of two independent groups, use a two-sample t-test.
4. Compute the test statistic and p-value: Follow the t-test formula and calculation.
5. Make a decision: Reject H0 if p-value < α; fail to reject H0 if p-value ≥ α.
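The five steps above can be sketched end to end in code. This is a minimal sketch with made-up data (the stress/no-stress scores below are invented for illustration, not taken from the document), computing a pooled two-sample t statistic with only the Python standard library and comparing it to a tabled critical value:

```python
from statistics import mean, variance

stress = [12, 15, 14, 16, 18, 13, 17, 15]   # hypothetical scores
no_stress = [10, 9, 11, 12, 10, 8, 11, 9]

n1, n2 = len(stress), len(no_stress)
# Step 3-4: pooled two-sample t-test (assumes equal variances in both groups)
sp2 = ((n1 - 1) * variance(stress) + (n2 - 1) * variance(no_stress)) / (n1 + n2 - 2)
t_stat = (mean(stress) - mean(no_stress)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# Step 5: two-tailed critical value for alpha = 0.01, df = 14 (from a t table)
t_crit = 2.977
decision = "Reject H0" if abs(t_stat) > t_crit else "Fail to reject H0"
print(round(t_stat, 3), decision)   # prints: 5.916 Reject H0
```

With these invented scores the computed t (about 5.92) far exceeds the critical value, so the null hypothesis of equal means would be rejected at α = 0.01.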
1) The document discusses parametric tests and the t-test/Student's t-test. It provides examples of different types of parametric tests and explains what assumptions are made.
2) There are several types of t-tests that are used to compare means, including independent samples t-tests, paired samples t-tests, and one-sample t-tests. The t-test calculates a t-value to determine if there is a significant difference between group means.
3) The assumptions of the independent samples t-test include independent observations, normally distributed data, equal variances between groups, and random sampling. The paired t-test assumes independence of differences and a normal distribution of differences.
April Heyward Research Methods Class Session - 8-5-2021
This document provides an overview of key concepts in research methods for public administration, including:
1. Levels of measurement for variables, including nominal, ordinal, interval, and ratio levels. Examples are provided for each level.
2. Common research designs such as experimental, quasi-experimental, cross-sectional, and longitudinal designs.
3. Quantitative data analysis techniques including descriptive statistics, inferential statistics like ANOVA and regression, and correlation analysis. Frequency distributions, measures of central tendency and variability are covered.
4. Confidence intervals and how they are used to estimate population parameters more accurately than point estimates, by providing a probability assessment through setting a confidence level, such as 90%, 95%, or 99%.
This document discusses different scales of measurement used in research including nominal, ordinal, interval, and ratio scales. It provides examples and characteristics of each scale. Nominal scales involve categories without order, ordinal scales involve ordered categories without defined intervals, interval scales have equal intervals but an arbitrary zero point, and ratio scales have an absolute zero point and allow calculations such as proportions. The document also covers topics such as questionnaire design, open-ended and closed-ended question types, and methods of administering questionnaires.
This document discusses key concepts in biostatistics used in biomedical research. It covers topics like types of variables, measures of central tendency and dispersion, distributions of data, statistical tests for different situations, hypotheses testing and errors, measures of association, diagnostic tests, and regression analysis. Understanding biostatistics is important for evidence-based medicine and improving patient lives through rigorous research. Sample size, confidence intervals, and avoiding bias and confounding are important considerations in study design and interpretation.
This document provides an overview of quantitative research design and methods. It discusses quantitative research as aiming to discover how many people think, act or feel in a specific way using large sample sizes and standardized questions. The summary then describes quantitative research designs as descriptive (measuring subjects once) or experimental (measuring subjects before and after treatment). It also summarizes key aspects of quantitative data analysis including descriptive statistics, inferential statistics, and some common parametric and non-parametric statistical tests.
This document provides an outline and definitions for key concepts in statistics. It begins by defining statistics as a branch of applied mathematics dealing with collecting, organizing, analyzing, and interpreting quantitative data. It then distinguishes between descriptive statistics, which summarizes data, and inferential statistics, which makes predictions based on data analysis. It defines variables, scales of measurement, populations and samples, and parameters. The last section discusses common methods for collecting data, including interviews, questionnaires, observation, tests, and mechanical devices.
The document discusses quantitative research design and methodology. It describes different quantitative research methods such as surveys, interviews, and physical counts. It explains that quantitative research aims to discover how many people think, act, or feel in a certain way by using large sample sizes. The document also summarizes different quantitative research designs like descriptive, experimental, correlational, and quasi-experimental designs. It provides details on data analysis methods in quantitative research including descriptive and inferential statistics.
This document provides an overview of various statistical tests used for hypothesis testing, including parametric and non-parametric tests. It defines key terms like population, sample, mean, median, mode, and standard deviation. It explains the stages of hypothesis testing including creating the null and alternative hypotheses, determining the significance level, and deciding which statistical test to use based on the type of data and number of samples. Specific tests covered include the z-test, t-test, ANOVA, chi-square test, Wilcoxon signed-rank test, Mann-Whitney U test, Kruskal-Wallis test, and Friedman test.
Basic stat tools
1. BASIC STATISTICAL TOOLS IN RESEARCH
Mr. Jerome L. Buhay
Mathematics and Statistics Department
DLSU-Dasmariñas
2. Objectives
At the end of this webinar, the participants will be able to:
• Identify and describe some basic terms in Statistics
• Differentiate parametric and non-parametric tests
• Demonstrate the use of different statistical tests
• Interpret statistical results
3. Basic Terms
1. Population is the set of all individuals or entities under consideration or study.
2. Variable is a characteristic of interest, measurable in every member of the population, that varies. It may change from group to group, person to person, or even within one person over time.
Types of Variables
Qualitative Variable – consists of categories or attributes, which have non-numerical characteristics.
Quantitative Variable – consists of numbers representing counts or measurements.
4. Basic Terms
3. Sample is a part of the population or a sub-collection of elements drawn from a population.
4. Parameter is a numerical measurement describing some characteristic of a population.
5. Statistic is a numerical measurement describing some characteristic of a sample.
5. Basic Terms
6. Survey is often conducted to gather opinions or feedback about a variety of topics.
- A census survey, referred to as a census, is conducted to gather information from the entire population.
- A sampling survey, referred to as a survey, is conducted to gather information from only a part of the population.
6. Basic Terms
7. Hypothesis is a statement or a tentative theory that is assumed to be true, usually tested using sample data.
Null hypothesis – denoted by Ho; it is the hypothesis of "no difference" and the hypothesis that is being tested.
Alternative hypothesis – denoted by Ha or H1; it contradicts the null hypothesis and is assumed to be true when Ho is rejected.
7. Identify whether the statement is a null or alternative hypothesis.
▪ Drug X is not effective in treating COVID-19.
Ans. Ho
▪ There is a significant difference between the academic performance of male and female students.
Ans. Ha
▪ The monthly salary of factory workers is dependent on their educational attainment.
Ans. Ha
▪ There is no significant relationship between patients' age and the number of days of recovery from COVID-19.
Ans. Ho
▪ There is no significant difference among the mathematics performance of students under different learning modalities.
Ans. Ho
8. Measurement Scales/Levels
The Nominal Scale
• simply represents qualitative differences in the variable measured
• can only tell us that a difference exists, without the possibility of telling the direction or magnitude of the difference
• e.g. program in college, race, gender, occupation, religion, etc.
9. Measurement Scales/Levels
The Ordinal Scale
• the categories that make up an ordinal scale form an ordered sequence
• can tell us the direction of the difference but not the magnitude
• e.g. coffee cup sizes, socioeconomic class, T-shirt sizes, food preferences
10. Measurement Scales/Levels
The Interval Scale
• categories on an interval scale are organized sequentially, and all categories are numerically measured
• we can determine the direction and the magnitude of a difference
• may have an arbitrary zero (a convenient point of reference) but has no true zero point
• e.g. temperature in Fahrenheit, time of day
11. Measurement Scales/Levels
The Ratio Scale
• consists of equal, ordered categories anchored by a zero point that is not arbitrary but meaningful (representing absence of the variable)
• allows us to determine the direction, the magnitude, and the ratio of the difference
• e.g. reaction time, number of errors on a test, scores in a test, speed of cars, weight loss, etc.
12. Classification of Data Analytic Methods
Dependence Methods
The dependence methods test for the presence or absence of a relationship between two sets of variables – the dependent and the independent variables. Common dependence methods are the t-test, ANOVA, ANCOVA, regression analysis, chi-square test, MANOVA, discriminant analysis, and logistic regression.
13. Classification of Data Analytic Methods
Interdependence Methods
Some data sets exist for which it is impossible to conceptually designate one set of variables as dependent and another as independent. For these data sets the objective is to identify how and why the variables are related among themselves. Common examples are correlation analysis, principal component analysis, and factor analysis.
16. Parametric vs Non-Parametric Tests
Parametric Tests:
• Independent observations
• Normal distribution
• Interval/ratio scale data
Non-Parametric Tests:
• Independent observations
• Easy to use and understand
• Distribution-free
• Ordinal/nominal scale data
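As a concrete illustration of the parametric/non-parametric split: the Mann-Whitney U test is the usual non-parametric counterpart of the independent-samples t-test, and its statistic can be computed directly from its pairwise definition. A minimal sketch with invented ordinal-style scores (not from the document):

```python
def mann_whitney_u(a, b):
    # U for group a: count of pairs (x, y) with x > y; ties count as 0.5
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

group_a = [7, 9, 12, 15]   # invented ordinal-style scores
group_b = [3, 5, 8, 10]

u_a = mann_whitney_u(group_a, group_b)
u_b = mann_whitney_u(group_b, group_a)
print(u_a, u_b)   # the two U statistics always sum to len(a) * len(b)
```

No distributional assumption is made here: the statistic depends only on the ordering of the observations, which is why the test suits ordinal data.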
17. Interpreting Statistical Results
Important Terms
✓ The test statistic is a value computed from the sample data, and it is used in making the decision about the rejection of the null hypothesis.
✓ The critical region (or rejection region) is the set of all values of the test statistic that cause us to reject the null hypothesis. Its boundary is the critical value.
✓ The significance level (denoted by α) is the probability that the test statistic will fall in the critical region when the null hypothesis is actually true. Common choices for α are 0.05, 0.01, and 0.10.
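The critical-region idea can be shown numerically. A minimal sketch for a two-tailed z-test (the test statistic value 2.3 is hypothetical), using only `statistics.NormalDist` from the Python standard library:

```python
from statistics import NormalDist

alpha = 0.05
crit = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed critical value, ~1.96

test_statistic = 2.3                         # hypothetical computed value
in_critical_region = abs(test_statistic) > crit
print(round(crit, 2), in_critical_region)    # prints: 1.96 True
```

Since 2.3 falls beyond the critical value of about 1.96, it lies in the critical region and the null hypothesis would be rejected at α = 0.05.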
18. Interpreting Statistical Results
➢ The statement of the problem/hypothesis is the basis for interpreting results.
➢ The null hypothesis is either rejected or not rejected.
➢ A result is significant when the null hypothesis is rejected, and not significant when the null hypothesis is not rejected.
19. Interpreting Statistical Results
Significance can mean any of the following:
– There is a relationship.
– There is an association between or among variables.
– There is an effect.
– The treatment is effective.
– A variable is dependent on the other variable/s.
– There is a difference/different effect.
21. Traditional method
➢ Reject H0 if the test statistic falls within the critical region.
➢ Fail to reject H0 if the test statistic does not fall within the
critical region.
[Figure: two-tailed sampling distribution showing the critical values that bound the rejection (critical) regions]
22. P-value method
➢ Reject H0 if P-value ≤ α (where α is the
significance level, such as 0.05).
➢ Fail to reject H0 if P-value > α.
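The decision rule above can be sketched as a tiny helper function (a hypothetical illustration, not SPSS output):

```python
def decide(p_value, alpha=0.05):
    """Apply the P-value method: reject H0 when the
    P-value is at most the significance level alpha."""
    return "Reject H0" if p_value <= alpha else "Fail to reject H0"

# With alpha = 0.05, a P-value of 0.001 is significant,
# while a P-value of 0.178 is not.
print(decide(0.001))  # Reject H0
print(decide(0.178))  # Fail to reject H0
```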
24. T-test
• The t-test is a parametric test commonly used to
test the difference between two group means. The
means may come from independent or dependent groups.
• It is a dependence method, usually a univariate test,
and is most effective when the independent
variable is non-metric.
Example: testing the relationship between level of
job satisfaction and gender.
25. One-sample T-test
➢ Used to test a single population mean
➢ Usually compares the sample mean to an existing
population mean or to a standard norm
➢ Example: comparing the performance in
the board exam of a certain school to the
national result
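The one-sample test can be sketched with SciPy's `ttest_1samp`; the exam scores and the national mean of 75 below are made up for illustration:

```python
from scipy import stats

# Hypothetical board-exam scores for one school; the
# national mean is assumed to be 75 (invented numbers).
scores = [82, 75, 88, 79, 91, 84, 77, 86, 80, 83]
result = stats.ttest_1samp(scores, popmean=75)

print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
# A small p-value suggests the school's mean differs from 75.
```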
26. Sample SPSS output
T-Test

One-Sample Statistics
                N    Mean   Std. Deviation  Std. Error Mean
Time to effect  200  4.366  2.68660         0.18997

One-Sample Test (Test Value = 5)
                t       df   Sig. (2-tailed)  Mean Difference  95% CI Lower  95% CI Upper
Time to effect  -3.337  199  0.001            -0.63400         -1.0086       -0.2594
27. T-test for Independent Samples
✓ Also called the two-sample t-test for independent
samples
✓ Assumptions may be equal or unequal variances
✓ It tests whether there is a significant
difference between the means of two unrelated
groups
✓ It is used to test the null hypothesis:
H0: 𝜇1 = 𝜇2
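A minimal sketch with SciPy's `ttest_ind`, using made-up data for two unrelated groups; `equal_var=True` mirrors SPSS's "equal variances assumed" row and `equal_var=False` gives Welch's test ("not assumed"):

```python
from scipy import stats

# Hypothetical scores for two unrelated groups.
group_a = [5.1, 4.8, 5.5, 5.0, 4.7, 5.3, 4.9, 5.2]
group_b = [6.9, 7.4, 7.1, 6.8, 7.6, 7.0, 7.3, 6.7]

# Pooled-variance test ("equal variances assumed").
pooled = stats.ttest_ind(group_a, group_b, equal_var=True)
# Welch's test ("equal variances not assumed").
welch = stats.ttest_ind(group_a, group_b, equal_var=False)

print(f"pooled: t = {pooled.statistic:.3f}, p = {pooled.pvalue:.4f}")
print(f"Welch:  t = {welch.statistic:.3f}, p = {welch.pvalue:.4f}")
```

In practice, Levene's test (as in the SPSS output) guides which row to report.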
28. Sample SPSS output
T-Test

Group Statistics (Time to effect)
Gender  N    Mean   Std. Deviation  Std. Error Mean
Female  101  4.620  2.820           0.281
Male    99   4.107  2.531           0.254

Independent Samples Test (Time to effect)
Levene's Test for Equality of Variances: F = 2.651, Sig. = 0.105
t-test for Equality of Means:
                             t      df       Sig. (2-tailed)  Mean Diff.  Std. Error Diff.  95% CI Lower  95% CI Upper
Equal variances assumed      1.352  198      0.178            0.513       0.379             -0.235        1.260
Equal variances not assumed  1.354  196.491  0.177            0.513       0.379             -0.234        1.260
29. Sample SPSS output
T-Test

Group Statistics (Fuel efficiency)
Vehicle type  N    Mean   Std. Deviation  Std. Error Mean
Truck         40   19.70  3.107           0.491
Automobile    114  25.30  3.646           0.341

Independent Samples Test (Fuel efficiency)
Levene's Test for Equality of Variances: F = 0.004, Sig. = 0.948
t-test for Equality of Means:
                             t       df      Sig. (2-tailed)  Mean Diff.  Std. Error Diff.  95% CI Lower  95% CI Upper
Equal variances assumed      -8.664  152     0.000            -5.597      0.646             -6.874        -4.321
Equal variances not assumed  -9.356  79.405  0.000            -5.597      0.598             -6.788        -4.407
30. T-test for dependent samples
➢ Also called the paired t-test
➢ It tests whether there is a significant
difference between the means from the same
group.
➢ Mostly used in comparing pre-test and post-
test results
➢ It is used to test the null hypothesis:
H0: 𝜇 𝑏𝑒𝑓𝑜𝑟𝑒 = 𝜇 𝑎𝑓𝑡𝑒𝑟
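The paired comparison can be sketched with SciPy's `ttest_rel`; the pre-test and post-test scores below are invented for illustration:

```python
from scipy import stats

# Hypothetical pre-test and post-test scores for the same group.
pre = [68, 72, 75, 70, 66, 74, 71, 69]
post = [75, 78, 80, 77, 72, 81, 76, 74]

result = stats.ttest_rel(pre, post)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
# Rejecting H0 here means the mean changed from pre- to post-test.
```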
31. Sample SPSS Output
T-Test

Paired Samples Statistics
                            Mean    N   Std. Deviation  Std. Error Mean
Pair 1  Triglyceride        138.44  16  29.040          7.260
        Final triglyceride  124.38  16  29.412          7.353
Pair 2  Weight              198.38  16  33.472          8.368
        Final weight        190.31  16  33.508          8.377

Paired Samples Test (Paired Differences)
                                          Mean    Std. Deviation  95% CI Lower  95% CI Upper  t       df  Sig. (2-tailed)
Pair 1  Triglyceride - Final triglyceride 14.063  46.875          -10.915       39.040        1.200   15  0.249
Pair 2  Weight - Final weight             8.063   2.886           6.525         9.600         11.175  15  0.000
32. ANOVA – Analysis of Variance
➢ It is an appropriate technique for estimating the
parameters of a linear model, Y = α + βx + ε, when the
independent variables are nominal or categorical.
➢ In practice, it is used to test for significant differences
among group means (more than 2 groups).
➢ Mostly used in experimental research, especially when a
design of experiments is applied.
➢ Example: consider the case where a medical
researcher is interested in the effect of occupation
on cholesterol level. The independent variable,
occupation, is nominal.
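A minimal sketch of the occupation/cholesterol example with SciPy's `f_oneway`; the three occupation groups and all cholesterol values are invented for illustration:

```python
from scipy import stats

# Hypothetical cholesterol levels for three occupations.
clerical = [190, 205, 198, 210, 202]
manual = [220, 235, 228, 240, 232]
managerial = [250, 265, 258, 270, 262]

# One-way ANOVA: H0 is that all three group means are equal.
f_stat, p_value = stats.f_oneway(clerical, manual, managerial)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F only says that at least one group mean differs; a post-hoc test is needed to say which pairs differ.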
33. Sample SPSS Output

Descriptives (SEXUALITY)
            N    Mean   Std. Deviation
RELIGION 1  50   2.441  0.765
RELIGION 2  50   2.129  0.677
RELIGION 3  50   1.993  0.467
RELIGION 4  50   2.313  0.534
Total       200  2.219  0.640

Test of Homogeneity of Variances (SEXUALITY)
Levene Statistic  df1  df2  Sig.
5.175             3    196  0.002

ANOVA (SEXUALITY)
                Sum of Squares  df   Mean Square  F      Sig.
Between Groups  5.868           3    1.956        5.062  0.002
Within Groups   75.735          196  0.386
Total           81.603          199
35. Correlation Analysis
❖ Correlation is a measure of the direction
and strength of the linear relationship between
two variables.
➢ Direction means positive or negative.
➢ Strength can be perfect, strong/high,
moderate, low, or zero/no correlation.
❖ Correlation between two variables does not
prove that X causes Y or that Y causes X.
36. What is Correlation? – Degree/Strength and Direction of Relationship
❖ How well do the data fit a specific form?
❖ Typically we look for how well the data fit a straight line.
❖ A scatter diagram is an illustrative way to determine
the strength and direction of a relationship.
❖ The Pearson correlation coefficient is a numerical
measure that can also be used to determine the
strength and direction of a relationship.
38. Pearson correlation coefficient r
Pearson Correlation coefficient is a numerical
value that measures strength and direction of
linear relationship
Symbol: r
✓ r can range from -1.0 to +1.0
✓ Sign (+/-) indicates “direction”
✓ Value indicates “strength”
✓ Measures a “linear” relationship only
✓ Significance of the Pearson r can be tested using t-
test
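A minimal sketch with SciPy's `pearsonr`, using made-up data with a strong positive linear trend; `pearsonr` also returns the p-value of the significance test on r directly:

```python
from scipy import stats

# Hypothetical data with a strong positive linear trend.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0, 15.9, 18.2, 20.0]

# r measures direction (+/-) and strength of the linear relationship.
r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
```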
39. Pearson correlation coefficient r
Illustration: a number line running from -1 (Perfect
Negative Correlation) through 0 (No/Zero Correlation)
to +1 (Perfect Positive Correlation).
➢ Closer to 0 = weaker
➢ Closer to ±1.0 = stronger
➢ r close to ±1.0 indicates a near-perfect correlation
➢ r ≈ 0 could mean many things:
❖ No correlation at all between X & Y
❖ A non-linear relationship between X & Y
❖ A restricted range on X and/or Y
❖ An outlier may be causing problems
40. Activity: Interpret the following r coefficient
1) r = 0.85
2) r = -0.69
3) r = -0.37
4) r = -0.11
5) r = 0.09
6) r = 0.32
7) r = -0.92
8) r = 0.75
41. Activity: Interpret the following r coefficient
1) r = 0.85 Ans.: Very Strong Positive
2) r = -0.69 Ans.: Moderate/Strong Negative
3) r = -0.37 Ans.: Weak Negative
4) r = -0.11 Ans.: No/Very weak
5) r = 0.09 Ans.: No/Very weak
6) r = 0.32 Ans.: Weak Positive
7) r = -0.92 Ans.: Very Strong Negative
8) r = 0.75 Ans.: Strong Positive
42. Interpreting r
r Verbal Interpretation
-1 Perfect Negative Correlation
-0.8 to -0.99 Very Strong Negative Correlation
-0.6 to -0.79 Strong Negative Correlation
-0.4 to -0.59 Moderate Negative Correlation
-0.2 to -0.39 Weak Negative Correlation
-0.01 to -0.19 Very Weak Negative Correlation
0 No Correlation
0.01 to 0.19 Very Weak Positive Correlation
0.2 to 0.39 Weak Positive Correlation
0.4 to 0.59 Moderate Positive Correlation
0.6 to 0.79 Strong Positive Correlation
0.8 to 0.99 Very Strong Positive Correlation
1 Perfect Positive Correlation
Interpreting Correlation (Evans, 1996)
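The Evans (1996) bands above can be sketched as a small helper function (a hypothetical illustration for checking the activity answers):

```python
def interpret_r(r):
    """Verbal interpretation of Pearson's r
    following the Evans (1996) bands."""
    if r == 0:
        return "No Correlation"
    direction = "Positive" if r > 0 else "Negative"
    a = abs(r)
    if a == 1:
        strength = "Perfect"
    elif a >= 0.8:
        strength = "Very Strong"
    elif a >= 0.6:
        strength = "Strong"
    elif a >= 0.4:
        strength = "Moderate"
    elif a >= 0.2:
        strength = "Weak"
    else:
        strength = "Very Weak"
    return f"{strength} {direction} Correlation"

print(interpret_r(0.85))   # Very Strong Positive Correlation
print(interpret_r(-0.37))  # Weak Negative Correlation
```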
43. Sample SPSS Output

Correlations (Pearson Correlation, with Sig. 2-tailed in parentheses; N = 34 for all pairs)
Variables: (1) Relationship towards administrators, (2) Relationship towards fellow
employees, (3) Attitude towards work, (4) Professionalism, (5) Public relations.

      (1)             (2)             (3)             (4)             (5)
(1)   1               -0.093 (0.610)  0.191 (0.278)   0.222 (0.207)   .574** (0.005)
(2)   -0.093 (0.610)  1               .518* (0.004)   0.327 (0.059)   .429* (0.011)
(3)   0.191 (0.278)   .518* (0.004)   1               .665** (0.000)  .794** (0.000)
(4)   0.222 (0.207)   0.327 (0.059)   .665** (0.000)  1               .687** (0.000)
(5)   .574** (0.005)  .429* (0.011)   .794** (0.000)  .687** (0.000)  1

**. Correlation is significant at the 0.01 level (2-tailed).
*. Correlation is significant at the 0.05 level (2-tailed).
44. Common Nonparametric Tests
Chi-square Test
Wilcoxon Signed rank Test
Wilcoxon Rank-Sum Test
Kruskal-Wallis Test
Wilcoxon-Mann-Whitney Test
Spearman Rank-order Correlation
45. Chi-Square Test
The chi-square test takes two common forms: the
chi-square test of goodness of fit and the chi-square
test of independence. In the chi-square test of
independence, the frequency of one nominal
variable is compared across the values of a
second nominal variable.
The chi-square test of independence is used
when we want to test for an association between
two categorical variables.
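The test of independence can be sketched with SciPy's `chi2_contingency`; the 2×2 contingency table below (e.g., gender by a yes/no response) is made up for illustration:

```python
from scipy import stats

# Hypothetical 2x2 contingency table of observed frequencies.
observed = [[30, 10],
            [20, 40]]

# Returns the statistic, p-value, degrees of freedom, and the
# expected frequencies under independence.
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# Rejecting H0 means the two categorical variables are associated.
```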
47. Wilcoxon Signed Rank Test
The Wilcoxon signed rank test is a frequently
used nonparametric test for paired data (e.g.,
consisting of pre- and post-treatment
measurements on the same units of analysis).
It is a nonparametric alternative to the paired t-test.
It is a test about the median of the paired
differences.
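A minimal sketch with SciPy's `wilcoxon`, using invented paired before/after measurements:

```python
from scipy import stats

# Hypothetical paired measurements (e.g., before/after treatment).
before = [140, 155, 138, 160, 149, 158, 145, 152]
after = [132, 148, 130, 151, 140, 150, 138, 143]

# Tests whether the paired differences are centered at zero.
stat, p = stats.wilcoxon(before, after)
print(f"W = {stat}, p = {p:.4f}")
# All eight differences are positive, so W = 0 and p is small.
```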
48. Wilcoxon Rank-Sum Test
The Wilcoxon rank-sum test is a
nonparametric alternative to the two
sample t-test which is based solely on the
order in which the observations from the
two samples fall.
49. Kruskal-Wallis Test
The Kruskal-Wallis one-way analysis of
variance by ranks is a non-parametric method for
testing the equality of population medians among
groups.
It is identical to a one-way analysis of variance
with the data replaced by their ranks.
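A minimal sketch with SciPy's `kruskal`, using three invented independent groups:

```python
from scipy import stats

# Hypothetical scores for three independent groups.
g1 = [12, 15, 14, 11, 13]
g2 = [22, 25, 24, 21, 23]
g3 = [32, 35, 34, 31, 33]

# H0: the population medians of the groups are equal.
h_stat, p = stats.kruskal(g1, g2, g3)
print(f"H = {h_stat:.2f}, p = {p:.4f}")
# The three groups occupy disjoint rank ranges, so H is large.
```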
50. Wilcoxon-Mann-Whitney Test
The Wilcoxon-Mann-Whitney test uses the ranks
of the data to test the hypothesis that two samples
of sizes m and n might come from the same
population.
The Mann-Whitney test is nonparametric: it does
not rest on any assumption concerning the
underlying distributions, and is therefore more
widely applicable than the t-test.
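The rank-sum idea of slides 48 and 50 can be sketched with SciPy's `mannwhitneyu`, using two invented samples of sizes m = n = 5:

```python
from scipy import stats

# Hypothetical samples from two independent groups.
sample_x = [1.2, 2.5, 3.1, 4.0, 4.8]
sample_y = [6.3, 7.1, 8.4, 9.0, 9.9]

# Rank-based test of whether the two samples come
# from the same population.
u_stat, p = stats.mannwhitneyu(sample_x, sample_y,
                               alternative='two-sided')
print(f"U = {u_stat}, p = {p:.4f}")
# U = 0 here: every value in sample_x ranks below sample_y.
```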
51. Spearman Rank-Order Correlation
➢ Spearman's rank correlation is a technique used
to test the direction and strength of the relationship
between two variables; in other words, it shows
whether one set of numbers is correlated
with another set of numbers.
➢ It uses the statistic rs, which falls between -1 and
+1.
➢ It is equivalent to the Pearson correlation r
computed on the ranks of the data.
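A minimal sketch with SciPy's `spearmanr`, using invented monotonic but non-linear data, and checking the relation to Pearson's r on the ranks:

```python
from scipy import stats

# Hypothetical monotonic but non-linear data (y = x**3).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1, 8, 27, 64, 125, 216, 343, 512]

rho, p = stats.spearmanr(x, y)
print(f"rho = {rho:.3f}")
# rho = 1: the relationship is perfectly monotonic,
# even though it is not linear.

# Spearman's rho equals Pearson's r computed on the ranks:
r_on_ranks, _ = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))
print(f"Pearson r on ranks = {r_on_ranks:.3f}")
```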
52. Summary of Parametric and Nonparametric Tests

                      Nonparametric tests                                                     Parametric tests
                      Nominal data                 Ordinal data                               Interval, ratio data
One group             Chi-square goodness of fit   Wilcoxon signed rank test                  One-group t-test
Two unrelated groups  Chi-square test              Wilcoxon rank-sum test, Mann-Whitney test  Student's t-test
Two related groups    McNemar's test               Wilcoxon signed rank test                  Paired Student's t-test
K unrelated groups    Chi-square test              Kruskal-Wallis one-way analysis of         ANOVA
                                                   variance
K related groups      —                            Friedman matched samples                   ANOVA with repeated
                                                                                              measurements
53. References
Altares, P. (2012). Elementary statistics with computer applications (2nd ed.). Manila,
    PH: Rex Bookstore.
Anderson, D. R., & Sweeney, D. J. (2018). Statistics for business and economics.
    Boston, MA: Cengage Learning.
Anderson, D. R., & Sweeney, D. J. (2016). Essentials of modern business statistics
    with Microsoft Excel. Boston, MA: Cengage Learning.
Bluman, A. (2013). Elementary statistics (6th ed.). Singapore: McGraw-Hill Education.
Cuesta, H. (2016). Practical data analysis. Birmingham: Packt Publishing.
Dando, P. (2014). Say it with data: A concise guide to making your case and getting
    results (ALA ed.). Chicago.
Levin, J. A., Fox, J. A., & Forde, D. R. (2009). Elementary statistics in social
    research: The essentials (11th ed.). Singapore: Pearson Education South Asia Pte.