The document summarizes key concepts from chapters 6 and 7 of a statistics textbook. Chapter 6 discusses sampling and calculating standard error for infinite and finite populations. Chapter 7 introduces estimation, including interval estimates and point estimates. It provides examples of calculating standard error and confidence intervals. The document also lists SPSS tips for t-tests.
This presentation addresses sample size determination for the social sciences. A simple example is provided to help everyone understand how sample size is determined.
This presentation covers the use of repeated measures designs in the social sciences, behavioural sciences, management, sports, physical education, and related fields.
This document discusses sample size estimation and determination. It defines key terms like population, statistic, parameter, sampling error, and confidence interval. It explains that sample size determination is calculating the number of subjects needed in a study to make inferences about a reference population. An appropriate sample size allows for valid analysis and the desired level of accuracy. The sample should be representative of the population and large enough to minimize errors and bias. Both too large and too small samples have disadvantages. Common methods for calculating sample size are using formulas, ready-made tables, nomograms, and computer software. Formulas are provided for estimating proportions, differences in proportions, means, and differences in means.
The statistical confidence level (C.L.) is the probability that the corresponding confidence interval covers the true (but unknown) value of a population parameter. Such a confidence interval is often used as a measure of uncertainty about estimates of population parameters.
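The relationship between confidence level and interval width can be illustrated with a short Python sketch (the function name and the example numbers are invented for illustration, not taken from the summarized document); it uses the z-based interval for a mean with known population standard deviation:

```python
from math import sqrt
from statistics import NormalDist

def z_confidence_interval(xbar, sigma, n, conf=0.95):
    """Two-sided CI for a population mean when sigma is known."""
    z = NormalDist().inv_cdf((1 + conf) / 2)  # 1.96 for a 95% level
    margin = z * sigma / sqrt(n)
    return xbar - margin, xbar + margin

# sample mean 50, population sd 10, n = 100
lo, hi = z_confidence_interval(50, 10, 100)
```

Raising `conf` widens the interval: a 99% interval must cover the unknown parameter in a larger share of repeated samples than a 95% interval does.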
This document discusses sample size determination and calculation. It defines sample size as the subset of a population chosen for a study to make inferences about the total population. The key factors in determining sample size are the desired level of accuracy, allowing for appropriate analysis, and validity of significance tests. The document provides formulas and methods for calculating sample size for different study designs and populations, including using formulas, readymade tables, nomograms, and computer software. Accurately determining sample size is essential for research.
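The standard formula for sample size when estimating a single proportion, mentioned in the summaries above, can be sketched in Python (function name and example values are illustrative; the formula assumes an effectively infinite population):

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_proportion(p, margin, conf=0.95):
    """n = z^2 * p * (1 - p) / e^2, rounded up to a whole subject."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

# worst-case p = 0.5, 5-percentage-point margin, 95% confidence
n = sample_size_for_proportion(0.5, 0.05)
```

Using p = 0.5 maximizes p(1 - p) and so gives the most conservative (largest) sample size when no prior estimate of the proportion is available.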
The document provides an overview of analysis of variance (ANOVA). It defines ANOVA and discusses its key concepts, including how it was developed by Ronald Fisher. It also covers one-way and two-way ANOVA, describing their techniques and providing examples. The uses, advantages and limitations of ANOVA are outlined.
Univariate and bivariate analysis in SPSS, by Subodh Khanal
This slide deck shows how to perform various tests in SPSS for univariate and bivariate analysis, along with how to enter and analyze multiple-response data.
This document provides an overview of analysis of variance (ANOVA) techniques. It discusses one-way ANOVA, which evaluates differences between three or more population means. Key aspects covered include partitioning total variation into between- and within-group components, assumptions of normality and equal variances, and using the F-test to test for differences. Randomized block ANOVA and two-factor ANOVA are also introduced as extensions to control for additional variables. Post-hoc tests like Tukey and Fisher's LSD are described for determining specific mean differences.
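The partition of total variation into between- and within-group components described above can be computed by hand; here is a minimal Python sketch (function name and data are illustrative):

```python
def one_way_anova_f(groups):
    """F statistic from the between/within partition of total variation."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    ms_between = ss_between / (k - 1)   # between-group mean square
    ms_within = ss_within / (n - k)     # within-group mean square
    return ms_between / ms_within

f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

The resulting F is compared to an F distribution with (k - 1, n - k) degrees of freedom; a large F suggests the group means are not all equal.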
Statistical tests can be used to analyze data in two main ways: descriptive statistics provide an overview of data attributes, while inferential statistics assess how well data support hypotheses and generalizability. There are different types of tests for comparing means and distributions between groups, determining if differences or relationships exist in parametric or non-parametric data. The appropriate test depends on the question being asked, number of groups, and properties of the data.
Estimation and hypothesis testing 1 (graduate statistics 2), by Harve Abella
This document discusses two main areas of statistical inference: estimation and hypothesis testing. It provides details on point estimation and confidence interval estimation when estimating population parameters. It also explains the key concepts involved in hypothesis testing such as the null and alternative hypotheses, types of errors, critical regions, test statistics, and p-values. Examples are provided to illustrate estimating population means and proportions as well as conducting hypothesis tests.
1. The document discusses hypothesis testing using a one-sample t-test when the population variance is unknown.
2. It provides examples of when to use a z-test or t-test, and walks through the steps of conducting a one-sample t-test including stating hypotheses, determining critical values, computing test statistics, and making conclusions.
3. An example problem demonstrates these steps, testing if a therapy reduces test anxiety below a population mean of 20, finding the sample mean is significantly lower.
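The one-sample t-test steps above can be sketched in Python using only the standard library (the scores, the population mean of 20, and the critical value are illustrative; the critical value t(.05, df = 9) = -1.833 is taken from a standard t table):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """t statistic for H0: population mean == mu0 (unknown variance)."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / sqrt(n))

# hypothetical post-therapy anxiety scores; population mean is 20
scores = [14, 16, 15, 18, 17, 16, 14, 15, 17, 18]
t = one_sample_t(scores, mu0=20)
# one-tailed critical value at alpha = 0.05 with df = 9 is -1.833
```

Since t falls well below -1.833, the null hypothesis would be rejected: the sample mean is significantly lower than 20.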
Sampling Variability And The Precision Of A Sample, by Dr Sindhu Almas
What use is all this stuff about variability?
Sampling – the big idea
Sampling In Practice
Need For Sampling
Disadvantages Of Sampling
Types Of Sampling
Factors Affecting Sample Size
Sampling Distribution
Calculating A Confidence Interval Using Software
1) The document discusses commonly used statistical tests in research such as descriptive statistics, inferential statistics, hypothesis testing, and tests like t-tests, ANOVA, chi-square tests, and normal distributions.
2) It provides examples of how to determine sample sizes needed for adequate power in hypothesis testing and how to perform t-tests to analyze sample means.
3) Key statistical concepts covered include parameters, statistics, measurement scales, type I and II errors, and interpreting results of hypothesis tests.
Standard error is used in place of the standard deviation; it shows how variation among samples relates to sampling error. The document lists the formulas used for the standard error of different statistics and describes applications of tests of significance in the biological sciences.
1. The document discusses Granger causality testing within the context of bivariate analysis of stationary time series.
2. It defines Granger causality as when one time series can better predict another by including information from its own past, and describes three main tests for Granger causality between two stationary time series: the direct Granger test, Sims test, and modified Sims test.
3. The direct Granger test involves regressing each variable on lagged values of itself and the other variable, and using an F-test to examine if including lags of the other variable improves predictions compared to only using own lags.
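The direct Granger test described above can be sketched with numpy (all variable names and the simulated data are illustrative; a real application would also check stationarity and choose the lag length carefully):

```python
import numpy as np

def direct_granger_f(x, y, lag=1):
    """F statistic for H0: the lag of x adds nothing to predicting y
    beyond y's own lag (direct Granger test, one lag, with intercept)."""
    y_t = y[lag:]
    ones = np.ones(len(y_t))
    X_r = np.column_stack([ones, y[:-lag]])            # restricted: own lag only
    X_u = np.column_stack([ones, y[:-lag], x[:-lag]])  # add the other series' lag

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        return float(np.sum((y_t - X @ beta) ** 2))

    q = 1                      # number of restrictions (one extra lag)
    n, k = len(y_t), X_u.shape[1]
    return ((rss(X_r) - rss(X_u)) / q) / (rss(X_u) / (n - k))

# simulate y so that lagged x genuinely helps predict it
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * x[t - 1] + rng.normal(scale=0.5)

f_xy = direct_granger_f(x, y)  # large F: x Granger-causes y here
```

The F statistic is compared to an F(q, n - k) distribution; by construction of the simulated data, the lag of x improves the prediction of y substantially.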
This document provides an overview of parametric statistical tests, including the z-test, t-tests, chi-square test, F-test, and Bartlett's test. It discusses the history and development of the Student's t-test, including its creation by William Gosset under the pseudonym "Student." The t-test is used to compare means between two samples or between a sample and a theoretical population. The document outlines the assumptions, calculations, and interpretations of one-sample, unpaired, and paired t-tests.
ANOVA is a statistical technique used to determine whether the means of groups are statistically different from each other. It can be used to establish cause-and-effect relationships with a certain degree of certainty. There are different types of ANOVA for different study designs. The basic parts of an ANOVA include sums of squares, degrees of freedom, mean squares, and the F-statistic. ANOVA can be performed in Excel using the data analysis tool. An example shows how ANOVA was used to analyze measurement data from multiple inspectors.
Research method ch08: statistical methods 2 (ANOVA), by naranbatn
1) The document discusses various statistical methods including one-way ANOVA, repeated measures ANOVA, and ANCOVA.
2) One-way ANOVA is used to compare the means of three or more independent groups when you have one independent variable with three or more categories and one continuous dependent variable.
3) Repeated measures ANOVA is used when the same subjects are measured under different conditions to assess for main effects and interactions while accounting for the dependency of measurements within subjects.
Cluster sampling refers to a method where the population is divided into groups called clusters. A simple random sample of these clusters is selected, and then all or a subset of elements within the selected clusters are included in the final sample. It is cheaper than simple random sampling but has a higher chance of sampling error. The key aspects are that the population is divided into clusters, a random sample of clusters is taken, and then data is collected from elements within those clusters.
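The one-stage cluster sampling procedure described above can be sketched in Python (function name, cluster size, and the toy population are illustrative):

```python
import random

def one_stage_cluster_sample(population, cluster_size, clusters_to_pick, seed=0):
    """Split the population into clusters, randomly select whole clusters,
    and take every element in the selected clusters."""
    rng = random.Random(seed)
    clusters = [population[i:i + cluster_size]
                for i in range(0, len(population), cluster_size)]
    picked = rng.sample(clusters, clusters_to_pick)
    return [unit for cluster in picked for unit in cluster]

# 100 units grouped into 10 clusters of 10; sample 3 whole clusters
sample = one_stage_cluster_sample(list(range(100)), cluster_size=10,
                                  clusters_to_pick=3)
```

Randomness enters only at the cluster-selection stage, which is what makes the method cheap but raises the sampling error when units within a cluster resemble each other.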
The presentation discusses null and alternative hypotheses. The null hypothesis expresses no difference or no relationship between variables, while the alternative hypothesis expresses a difference. The null hypothesis is the default position assumed true until the evidence says otherwise, while the alternative hypothesis is what researchers seek evidence for. Examples are provided of null hypotheses stating that children who eat oily fish do not show higher IQ increases than others, and that extroverts and introverts are equally healthy. The corresponding alternative hypotheses are that children eating oily fish will show higher IQ increases, and that introverts are not healthier than extroverts.
The document defines key concepts in hypothesis testing such as critical value, significance level, p-value, type I and type II errors, and power. It states that the critical value divides the normal distribution into regions for rejecting or failing to reject the null hypothesis. The significance level corresponds to the critical region. A p-value less than 0.05 indicates the result is statistically significant. Type I error occurs when the null hypothesis is rejected when it is true, while type II error is failing to reject a false null hypothesis. Power is defined as 1 - β, where β is the probability of a type II error.
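The power relationship described above (power = 1 - β) can be computed directly for a one-sided z-test; a minimal sketch with the standard library (function name and example numbers are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def one_sided_z_power(delta, sigma, n, alpha=0.05):
    """Power = 1 - beta for a one-sided z-test of a mean shift `delta`."""
    z_crit = NormalDist().inv_cdf(1 - alpha)   # critical value
    effect = delta / (sigma / sqrt(n))         # standardized shift
    beta = NormalDist().cdf(z_crit - effect)   # P(type II error)
    return 1 - beta

# detecting a half-sd shift with n = 25 gives roughly 80% power
power = one_sided_z_power(delta=0.5, sigma=1.0, n=25)
```

When `delta` is zero the "power" collapses to α, the type I error rate, which is a handy sanity check on the formula.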
This document discusses nested case-control studies, case-cohort studies, and case-crossover studies. It provides examples and discusses the advantages and disadvantages of each study design. Nested case-control studies select controls from within a prospective cohort study. Case-cohort studies select a random subcohort of controls from the entire cohort. Case-crossover studies use individuals as their own controls by comparing exposure during case periods to control periods.
This document discusses sample size estimation and determination. It begins by defining what a sample is and why sample size is important. It describes factors that affect sample size, such as desired level of accuracy and precision. Several methods for calculating sample size are presented, including formulas for cross-sectional, case-control, and comparative studies using both qualitative and quantitative variables. Considerations like power, effect size, and study design are discussed. Examples are provided to demonstrate how to use formulas and tables to estimate sample size for different study designs.
Survival analysis is a branch of statistics used to analyze time-to-event data, such as time until death or failure. It estimates the probability that an individual survives past a given time and compares survival times between groups. Objectives include estimating survival probabilities, comparing survival between groups, and assessing how covariates relate to survival time. Survival data can be complete or censored. The Kaplan-Meier estimator is used to estimate survival when there is censoring. The log-rank test compares survival curves between treatment groups, and Cox regression incorporates covariates to predict survival probabilities.
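The Kaplan-Meier estimator mentioned above is simple enough to implement by hand; here is a minimal sketch (function name and the toy data are illustrative; the convention used is that subjects censored at time t are still counted as at risk at t):

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each time with at least one event.
    events[i] = 1 for a death/failure, 0 for a censored observation."""
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        j, deaths = i, 0
        while j < n and data[j][0] == t:   # group ties at the same time
            deaths += data[j][1]
            j += 1
        at_risk = n - i
        if deaths:
            s *= 1 - deaths / at_risk      # product-limit step
            curve.append((t, s))
        i = j
    return curve

# five subjects: two deaths at t=6, one censored at 6, death at 7, censored at 10
km = kaplan_meier([6, 6, 6, 7, 10], [1, 1, 0, 1, 0])
```

Censored observations never produce a step in the curve, but they do shrink the risk set for later event times, which is exactly how censoring is handled.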
The document defines a sampling distribution of sample means as a distribution of means from random samples of a population. The mean of sample means equals the population mean, and the standard deviation of sample means is smaller than the population standard deviation, equaling it divided by the square root of the sample size. As sample size increases, the distribution of sample means approaches a normal distribution according to the Central Limit Theorem.
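The two facts above (the mean of sample means equals μ, and their standard deviation equals σ/√n) can be checked by simulation; a short sketch with the standard library (population parameters and sample counts are illustrative):

```python
import random
from statistics import mean, pstdev

rng = random.Random(1)
mu, sigma, n = 50, 12, 36

# draw many samples of size n and record each sample's mean
sample_means = [mean(rng.gauss(mu, sigma) for _ in range(n))
                for _ in range(5000)]

center = mean(sample_means)    # should be close to mu = 50
spread = pstdev(sample_means)  # should be close to sigma / sqrt(n) = 2
```

By the Central Limit Theorem the histogram of `sample_means` would also look approximately normal even if the population itself were not.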
This document provides an overview of the Z test for two sample means. It defines the Z test, outlines when it is used, and provides the formula and steps to conduct a hypothesis test using the Z test. An example problem is included that tests if there is a significant difference in average monthly family incomes between two neighborhoods using census data from random samples of 100 families each.
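The two-sample Z test described above reduces to one formula; a minimal Python sketch (function name and the income figures are illustrative, not the document's census data):

```python
from math import sqrt
from statistics import NormalDist

def two_sample_z(x1, s1, n1, x2, s2, n2):
    """z statistic and two-sided p-value for H0: mu1 = mu2 (large samples)."""
    z = (x1 - x2) / sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# hypothetical neighbourhood incomes: mean, sd, and n = 100 for each group
z, p = two_sample_z(5000, 800, 100, 4700, 750, 100)
```

With p below 0.05, the difference in average incomes between the two hypothetical neighbourhoods would be declared statistically significant.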
This document provides an overview of randomized control trials (RCTs). It discusses key aspects of RCT design including types of RCTs based on interventions evaluated (explanatory vs pragmatic), participants exposed (parallel vs crossover), number of participants (from n-of-1 trials to mega-trials), blinding of investigators/participants, and accounting for participant preferences. It also covers randomization techniques and their advantages, sample size calculations, and references for further information.
This document provides an overview of receiver operating characteristic (ROC) curves. It defines an ROC curve as a graphical plot that illustrates the performance of a binary classifier system by varying its discrimination threshold. An ROC curve plots the true positive rate against the false positive rate. The area under the ROC curve (AUC) provides a single measure of classifier performance, where an AUC of 1 represents a perfect classifier and 0.5 represents a random classifier. The document discusses how ROC curves can be used to compare multiple classifiers and select optimal threshold values to balance sensitivity and specificity.
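The AUC described above has a useful probabilistic reading: it equals the probability that a randomly chosen positive case scores above a randomly chosen negative case. A minimal sketch of that pairwise computation (function name and scores are illustrative):

```python
def auc(pos_scores, neg_scores):
    """AUC = probability a random positive outscores a random negative
    (ties count half) -- equal to the area under the ROC curve."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

A perfect classifier ranks every positive above every negative (AUC = 1), while a classifier indistinguishable from chance gives AUC = 0.5.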
The document provides an overview of hypothesis testing with one sample. It introduces key concepts such as the null and alternative hypotheses, types of errors, level of significance, test statistics, p-values, and the nature of hypothesis tests. Examples are provided to demonstrate how to state hypotheses based on a claim, identify types of errors, and determine if a test is left-tailed, right-tailed, or two-tailed. The document serves as an introduction for students to the basic framework and terminology of hypothesis testing with one sample.
This document outlines how to perform hypothesis tests to compare the means of two independent samples. It discusses using a two-sample z-test when samples are large and normally distributed, and a two-sample t-test when samples are small. The key steps are to state the null and alternative hypotheses, calculate the test statistic, find the critical value, make a decision to reject or fail to reject the null hypothesis, and interpret the results. Examples are provided to demonstrate these tests.
This document discusses the concepts of reliability and validity in measurement. Reliability refers to the consistency of a measurement and is assessed through stability and equivalence. Stability looks at consistency over repeated measurements using test-retest reliability and parallel forms. Equivalence examines consistency between two equivalent test forms using split-half reliability. Validity refers to how accurately an instrument measures a construct and is assessed through predictive validity, concurrent validity, and content validity.
The document discusses an investment strategy that utilizes quantitative techniques to generate alpha from multiple uncorrelated signals. It examines factors like valuations, momentum, and reversions across equities to construct a market neutral portfolio. The strategy aims to maximize returns while minimizing risks by optimizing weights between the various alpha signals. It takes a rules-based approach to ranking stocks and implementing the portfolio.
Level of Measurement, Frequency Distribution, Stem & Leaf, by Qasim Raza
This document discusses multivariate data analysis and techniques. It begins by defining qualitative and quantitative data, and the different levels of measurement - nominal, ordinal, interval, and ratio. It then discusses frequency distributions, stem and leaf plots, and demonstrates their use in SPSS. Finally, it defines multivariate data analysis as involving two or more variables, and provides examples of multivariate techniques such as multiple regression, discriminant analysis, MANOVA, and their appropriate uses depending on the level of measurement of the variables.
The document discusses various concepts related to measurement and error including:
- Defining accuracy as closeness to the true value and precision as reproducibility of measurements.
- Types of errors such as determinate/systematic errors which can be corrected and indeterminate/random errors which average out with multiple trials.
- Assessing total error by treating a reference standard as a sample and calculating differences from the reference value.
- Expressing accuracy and precision using terms like mean, percent error, range, standard deviation, and percent coefficient of variation.
Lecture 3: Measurement, Reliability and Validity (La Islaa)
This document discusses measurement, reliability, and validity in the context of selection assessments. It defines key concepts like measurement, scores, and correlation. It explains the importance of standardized, objective measures and discusses different types of reliability like test-retest and inter-rater reliability. The document also defines validity as the degree to which a measure assesses the intended attribute. It describes types of validity like criterion-related, content, and construct validity and how validity is determined through validation studies.
This document outlines key concepts for constructing confidence intervals for a population mean when sample sizes are large or small. It discusses how to find point estimates and margins of error, and how to construct confidence intervals using z-scores or t-statistics depending on sample size. Examples are provided to demonstrate how to calculate critical values, margins of error, and minimum sample sizes needed to estimate population means within a given level of confidence.
This document discusses the importance of reliability and validity in psychological measurement. Reliability refers to the consistency and repeatability of measurements. It is influenced by measurement error from factors like a participant's mood or fatigue. Validity indicates how well a measure assesses the intended construct. There are several types of validity including face validity, construct validity, convergent validity, discriminant validity, and criterion-related validity. Reliability is necessary for validity and can be estimated using methods like test-retest reliability, internal consistency reliability, and inter-rater reliability. Validity compares a measure to other related and unrelated constructs to determine if it is measuring what it intends to measure.
Identify Variable and Measurement of Scale (Janisha Gandhi)
This document discusses variables, scales of measurement, and key concepts in research methods. It defines variables as factors that can change or take on different values. There are three main types of variables: independent variables which are manipulated by researchers, dependent variables which are measured to assess the effect of independent variables, and control variables which are kept constant. It also describes four scales of measurement - nominal, ordinal, interval, and ratio - which differ in the types of comparisons permitted between values. Key variables and their relationships are illustrated with examples from experimental studies.
Factor analysis is a statistical method used to describe variability among observed correlated variables in terms of a potentially lower number of unobserved variables called factors. It identifies patterns of correlations between observed variables and groups variables that are highly correlated into factors. There are two main types: exploratory factor analysis, which is used to uncover the underlying structure of a relatively large set of variables without making prior assumptions, and confirmatory factor analysis, which tests whether measures of a construct load on factors as expected based on pre-existing theories. Factor analysis involves calculating factor loadings, eigenvalues, rotation methods, and determining the number of factors to extract.
The document discusses the history of chocolate, describing how it originated from cacao beans grown by the Olmecs and Mayans in Mexico and Central America. It then explains how Spanish conquistadors brought cacao back to Europe in the 16th century, where it eventually became popular as a drink among the elite. Over time, chocolate became widely consumed in powder and solid forms across Europe and North America.
This document outlines a course on multivariate data analysis. It introduces key topics that will be covered, including matrix algebra, the multivariate normal distribution, principal component analysis, factor analysis, cluster analysis, discriminant analysis, and canonical correlations. The course workload consists of 40% theory and 60% practice, including a group project and weekly presentations. R will be the main software used. Examples of multivariate data and applications in various fields like business, health, and education are also provided.
The document discusses the history and development of chocolate over centuries. It details how cocoa beans were first used as currency by the Maya and Aztecs before being introduced to Europe in the 16th century. Chocolate became popularized as a drink in Europe in the 17th century and the first chocolate factory was opened in England in the late 1600s.
There are two main types of errors in measurement: systematic errors, which always produce results in the same direction, and random errors, which occur unpredictably due to various factors. The accuracy of a measurement indicates how close it is to the accepted value, while the precision refers to the agreement between multiple measurements of the same quantity. Taking the average of repeated measurements reduces the impact of random errors, but the uncertainty in any measurement must be reported using plus-and-minus values to indicate the possible variance.
The document discusses different types of errors that can occur in measurement. It describes gross errors, systematic errors like instrumental errors and environmental errors, and random errors. It also defines key terms used to analyze errors like limit of reading, greatest possible error, and discusses analyzing measurement data using statistical methods like the mean, standard deviation, variance and histograms. Measurement errors can occur due to issues like parallax, calibration, limits of the measuring device, and are analyzed statistically.
This document provides an overview of Chapter 14 from a microeconomics textbook. The chapter discusses monopoly and antitrust policy. It begins with definitions of monopoly and explores the four main reasons monopolies can arise: from government entry barriers, control of a key resource, network externalities, or large economies of scale. The chapter then examines how a monopoly chooses price and output by equating marginal revenue to marginal cost. It uses graphs to illustrate how a monopoly reduces economic efficiency through deadweight loss compared to perfect competition. The chapter concludes by covering government antitrust laws and enforcement policies aimed at promoting competition.
Overviews non-parametric and parametric approaches to (bivariate) linear correlation. See also: http://en.wikiversity.org/wiki/Survey_research_and_design_in_psychology/Lectures/Correlation
This document provides an overview of multivariate analysis techniques, including dependency techniques like multiple regression, discriminant analysis, and MANOVA, as well as interdependency techniques like factor analysis, cluster analysis, and multidimensional scaling. It describes the uses and processes for each technique, such as using multiple regression to predict values, discriminant analysis to classify groups, and factor analysis to reduce variables. The document is signed off with warm wishes from the owner of Power Group.
This document discusses factors that influence the selection of data analysis strategies and provides a classification of statistical techniques. It notes that the previous research steps, known data characteristics, statistical technique properties, and researcher background all impact strategy selection. Statistical techniques can be univariate, analyzing single variables, or multivariate, analyzing relationships between multiple variables simultaneously. Multivariate techniques are further classified as dependence techniques, with identifiable dependent and independent variables, or interdependence techniques examining whole variable sets. The document provides examples of common univariate and multivariate techniques.
Nonparametric statistics show up in all sorts of places with fuzzy, ranked, or labeled data. These techniques handle messy data more robustly than methods that assume normality. This talk describes the basics of nonparametric analysis and shows some examples with the Kolmogorov-Smirnov test, one of the most commonly used.
Data analysis is the process of bringing order, structure and meaning to the mass of collected data. It is a messy, ambiguous, time-consuming, creative, and fascinating process. It does not proceed in a linear fashion; it is not neat. Qualitative data analysis is a search for general statements about relationships among categories of data
Capstone Project - Nicholas Imholte - Final Draft (Nick Imholte)
This document summarizes a capstone project analyzing how to optimize a baseball lineup to maximize runs scored given a fixed payroll. The analysis uses regression to model how each event impacts runs scored. Clustering is then used to group players into types based on their hitting abilities. Optimization determines the optimal arrangement of hitter clusters for different payrolls. A simulation complements the analysis by comparing results to the optimization approach.
This document provides an overview of Chapter 8 in a statistics textbook. The chapter covers statistical inference for estimating parameters of single populations, including: point and interval estimation, estimating the population mean when the standard deviation is known or unknown, estimating the population proportion, estimating the population variance, and estimating sample size. Key concepts introduced include confidence intervals, the t-distribution, chi-square distribution, and determining necessary sample size. The chapter outline and learning objectives are also summarized.
The document discusses sampling and how samples can be used to represent populations. It explains the difference between parameters and statistics, and introduces the central limit theorem which states that the distribution of sample means will approach a normal distribution as the sample size increases. Several examples are provided to illustrate concepts like determining the probability that a sample mean differs from the population mean based on the standard error.
This document provides guidance on using statistical tests to determine which process inputs (X's) are critical and impact the process output (Y). It outlines common statistical tests for continuous and discrete data, including tests for normality, 1-sample t-tests, and 1-sample sign tests. Steps are provided to gather input data, apply appropriate hypothesis tests to verify which X's are critical, and list the critical X's.
This document provides guidance on using statistical tests to determine which process inputs (X's) are critical and influence outcomes (Y's). It outlines common statistical tests for continuous and discrete data, including tests for normality, one-sample t-tests to compare a mean to a target, and one-sample sign tests to compare a median when data is not normal. Examples are provided to illustrate how to use Minitab to conduct these tests and interpret the results.
The document provides information on using SPSS and PSPP statistical software to analyze data and conduct statistical tests. It includes 4 lessons:
1. How to define and input data into the software.
2. How to generate descriptive statistics like measures of central tendency and variability to describe data.
3. How to examine relationships between variables using correlation, regression, and graphs.
4. How to perform statistical inference tests for means using one-sample t-tests, independent two-sample t-tests, and paired t-tests. Examples of hypotheses testing and interpreting results are provided.
This document provides an overview of key concepts in sampling, including population, sample, sampling frame, probability sampling, and non-probability sampling. It discusses the qualities of a probability sample, including how findings from a random sample can be generalized to the population. It also covers sample size considerations and different types of error in sampling, such as sampling error and non-sampling error.
The document provides an overview of key concepts related to estimation in statistics, including:
- Estimation involves using sample data to estimate unknown population parameters. Common estimators include the sample mean, proportion, and standard deviation.
- There are two main types of estimates - point estimates and interval estimates. Point estimates are single values while interval estimates specify a range.
- The process of estimation involves identifying the parameter, selecting a random sample, choosing an estimator, and calculating the estimate.
- Estimates can differ from the true population value due to sampling error and non-sampling error. Bias occurs when the expected value of the estimate differs from the true parameter value.
This document discusses the normal distribution and related concepts. It begins with an introduction to the normal distribution and its properties. It then covers the probability density function and cumulative distribution function of the normal distribution. The rest of the document discusses key properties like the 68-95-99.7 rule, using the standard normal distribution, and how to determine if a data set follows a normal distribution including using a normal probability plot. Examples are provided throughout to illustrate the concepts.
Lecture 4: Applied Econometrics and Economic Modeling (stone55)
The document discusses different methods for selecting random samples from a population, including simple random sampling, stratified sampling, cluster sampling, and systematic sampling. It provides examples of how to generate random samples in Excel and calculate summary statistics. The central limit theorem is also introduced, showing how the distribution of sample means approaches a normal distribution as sample size increases.
This document covers various statistical hypothesis tests, including:
- Small-sample tests for a population mean using the Student's t-distribution.
- Tests for the difference between two population means.
- Tests for a population proportion using the binomial distribution.
- Tests for the difference between two population proportions.
Examples of each type of test are provided and the R functions for conducting the tests are outlined.
Making Statistics Work For Us: Item Bias, Decision Making, and Data-Driven Si... (Quinn Lathrop)
This talk is about a common problem in a business context: imposing a minimum sample size before making decisions. By using Bayesian approaches, we can drastically increase the speed of decision making.
This quote cited in the talk sums it up well:
"An ironic property about effect estimates with relatively large standard errors is that they are more likely to produce effect estimates that are larger in magnitude than effect estimates with relatively smaller standard errors.... There is a tendency sometimes towards downplaying a large standard error (which might increase the p-value of their estimate) by pointing out that, however, the magnitude of the estimate is quite large. In fact, this 'large effect' is likely a byproduct of this standard error."
This chapter discusses sampling distributions and their properties. It covers the sampling distribution of the mean and the proportion. The key points are:
- A sampling distribution describes the distribution of a statistic like the mean from random samples of a population.
- The Central Limit Theorem states that as sample size increases, the sampling distribution of the mean will approach a normal distribution, even if the population is not normal.
- For the mean, the sampling distribution has a mean equal to the population mean and standard deviation that decreases as sample size increases.
- For a proportion, the sampling distribution can be approximated as normal if the sample size n is large enough that both np and n(1-p) are sufficiently large.
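These points can be illustrated with a short simulation. This is a sketch, not from the document: the uniform population, sample size of 50, and 2000 trials are arbitrary illustrative choices.

```python
import random
import statistics

# Draw many samples of size n from a uniform population (mean 0.5,
# standard deviation about 0.289) and collect the sample means.
random.seed(1)
n, trials = 50, 2000
sample_means = [
    statistics.mean(random.random() for _ in range(n))
    for _ in range(trials)
]

# The mean of the sampling distribution matches the population mean,
# and its standard deviation is roughly sigma / sqrt(n), about 0.041 --
# much smaller than the population standard deviation.
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

Even though the population here is far from normal, a histogram of `sample_means` would already look close to a bell curve, which is the Central Limit Theorem at work.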
Accurate Campaign Targeting Using Classification - Poster (Jieming Wei)
This document summarizes research on using machine learning algorithms to classify potential donors for fundraising campaigns. The researchers built a binary classification model using neural networks to identify likely donors. They found that a neural network approach had the lowest false positive rate compared to other models. Testing different thresholds, they determined that a threshold of -0.1 achieved the most cost-effective balance between identifying donors and minimizing mailing costs.
NTU DBME5028 Week 5: Introduction to Machine Learning (Sean Yu)
This document provides an introduction and overview of machine learning. It discusses the core idea of machine learning, the general workflow of the machine learning process including defining the problem, collecting and cleaning data, selecting and building a model, evaluating key metrics, and creating presentations. It also covers common machine learning tasks like classification, data normalization, addressing overfitting, hyperparameter tuning, and evaluation metrics. The document uses examples from medical imaging to illustrate machine learning concepts and processes.
The document discusses population distributions, sampling distributions, and key concepts related to sampling. Some main points:
- A population distribution shows the probability of each possible value in the entire population. A sampling distribution shows the probability of getting each sample statistic value, such as the mean, from random samples of a given size.
- The mean of the sampling distribution of the sample mean is always equal to the population mean. The standard deviation of the sampling distribution decreases as sample size increases.
- For samples from a normally distributed population, the sampling distribution of the mean is normally distributed regardless of sample size. For large samples from non-normal populations, the central limit theorem implies the sampling distribution will be approximately normal.
This document contains a PowerPoint presentation on inductive statistics covering topics like probability distributions, sampling distributions, estimation, hypothesis testing for means and proportions, and two-sample hypothesis tests. It provides an overview of the chapters that will be covered, examples of hypothesis tests for means and proportions when the population standard deviation is known and unknown, and examples of independent and dependent two-sample hypothesis tests for differences in means and proportions with both large and small sample sizes. Step-by-step explanations are given for conducting hypothesis tests.
The document provides an overview of topics to be covered in Chapter 16 on time series and forecasting, including using trend equations to forecast future periods and develop seasonally adjusted forecasts, determining and interpreting seasonal indexes, and deseasonalizing data using a seasonal index. It also includes examples of calculating seasonal indices and adjusting sales data to remove seasonal variation. The document is a lecture outline and review for a class on international business taught by Dr. Ning Ding at Hanze University of Applied Sciences Groningen.
Here are the steps to solve this problem:
1) Code the year as t = 1 for 1999, t = 2 for 2000, etc.
2) Calculate the sums: Σt = 15, ΣY = 211.9, Σt² = 30, ΣtY = 332.5
3) b = (ΣtY − ΣtΣY/n) / (Σt² − (Σt)²/n) = 6.55
4) a = Ȳ − b·t̄ = 29.4 − 6.55(1) = 22.85
5) Ŷ = 22.85 + 6.55t
To estimate vending sales
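The least-squares steps above can be sketched in code. The yearly sales figures below are hypothetical stand-ins (the worked numbers in the summary are not fully reproducible); only the formulas mirror the steps.

```python
# Least-squares trend line Y-hat = a + b*t, using the sums from the
# steps above. The sales figures here are hypothetical.
years = [1, 2, 3, 4, 5]                  # t = 1 for 1999, t = 2 for 2000, ...
sales = [23.0, 30.1, 36.2, 42.0, 49.9]   # hypothetical sales per year

n = len(years)
sum_t = sum(years)
sum_y = sum(sales)
sum_t2 = sum(t * t for t in years)
sum_ty = sum(t * y for t, y in zip(years, sales))

# Slope and intercept from the normal equations
b = (sum_ty - sum_t * sum_y / n) / (sum_t2 - sum_t ** 2 / n)
a = sum_y / n - b * (sum_t / n)

print(round(b, 2), round(a, 2))          # slope and intercept
print(round(a + b * 6, 2))               # forecast for the next period, t = 6
```

The forecast step is the same "plug the next t into Ŷ = a + bt" move the summary describes.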
This document provides an overview of simple linear regression and correlation. It discusses key concepts such as dependent and independent variables, scatter diagrams, regression analysis, the least-squares estimating equation, and the coefficients of determination and correlation. Scatter diagrams are used to determine the nature and strength of relationships between variables. Regression analysis finds relationships of association but not necessarily of cause and effect. The least-squares estimating equation models the dependent variable as a function of the independent variable.
This document provides an overview of central tendency measures that will be covered in Chapter 3-A, including the mean, mode, and median for both ungrouped and grouped data. It also includes examples of calculating the mean, weighted mean, and mode. The document reviews key concepts such as the difference between parameters and statistics. Overall, the document previews and reviews important concepts related to measures of central tendency that will be covered in the upcoming chapter.
Lesson 06: Chapter 9 Two-Samples Test and Chapter 11 Chi-Square Test (Ning Ding)
This document is a PowerPoint presentation about hypothesis testing for two samples and chi-square tests. It covers topics like independent and dependent sample tests, testing differences between proportions, one-tailed and two-tailed tests. Examples are provided to demonstrate how to perform two-sample t-tests, tests of proportions, and chi-square tests using contingency tables with 2 rows and 3 rows. Step-by-step instructions and formulas are given. Key chapters from the textbook are reviewed.
This document provides an outline and overview of topics covered in a course on inductive statistics, including probability distributions, sampling distributions, estimation, and hypothesis testing. Key topics discussed include interval estimation for means and proportions, using t-distributions when sample sizes are small and variances are unknown, and the basics of hypothesis testing such as null and alternative hypotheses. Examples are provided to illustrate concepts like confidence intervals for means, proportions, and hypothesis testing.
This document provides an overview and summary of topics covered in a research methods course. It discusses reviewing concepts from prior lectures, including different types of research and variables. Today's lecture will cover instrumentation, validity and reliability, and threats to internal validity. Instrumentation discusses how to collect and measure data. Validity and reliability refer to the accuracy and consistency of measurements. Threats to internal validity could interfere with determining the true effect of independent variables on dependent variables.
This document provides an overview of content covered in Statistics 2, including a review of chapter 5 on sampling distributions. It includes examples of questions from quizzes on topics like the normal distribution and binomial approximation. The document also provides tips on using SPSS for descriptive statistics, such as inputting and defining variable data, and analyzing frequencies.
This document summarizes a course on research methods and techniques. It outlines the structure and requirements of the course, including reading a textbook and attending lectures. It discusses different types of research and variables. The document covers defining research problems, formulating hypotheses, research ethics, and instrumentation. Self-check exercises are provided to help students understand key concepts.
1. Statistics 2 Dr. Ning DING IBS I.007 [email_address] You’d better use the full-screen mode to view this PPT file.
2. Table of Contents Chapter 6 Sampling - Review: Sampling and Standard Error - Calculating Standard Error-Infinite Population - Calculating Standard Error-Finite Population Chapter 7 Introduction to Estimation - Types of Estimates - Interval Estimates SPSS Tips for t-test
3. Sampling and Sampling Distribution. Population = all the items under study; sample = a portion chosen from the population. Population parameters are written with Greek or capital letters; sample statistics with lowercase Roman letters.
4. Sampling and Sampling Distribution. Population (parameters): N = number, μ = mean, σ = standard deviation. Sample (statistics): n = number, X̄ = mean, SD = standard deviation.
5. Sampling Distribution (figure: several distributions of sample means).
6. Standard Error = the standard deviation of the distribution of a sample statistic. Larger standard error or smaller standard error: which one is better? The smaller one, because the sample statistic then clusters more tightly around the population parameter.
7. Standard Error
8. Standard Error. As the sample size increases, the dispersion of the sample means decreases, and with it the standard error.
9. Standard Error. Example: a population with µ = 100 and σ = 25, whose values range from 80 to 240, yields sample means such as 95, 106, and 101, ranging only from 90 to 120: the standard error of the mean is smaller than the standard deviation of the population.
10. Calculating the Standard Error (figure: frequency distribution of sample means).
11. Calculating the Standard Error. Example: individual savings accounts with µ = $2000 and σ = $600; a sample of 100 accounts is drawn. What is the probability that the sample mean lies between $1900 and $2050? The standard error of the mean equals the population standard deviation divided by the square root of the sample size: σx̄ = σ/√n = 600/√100 = $60.
12. Calculating the Standard Error. What is the probability that the sample mean lies between $1900 and $2050? Convert the bounds to z scores using z = (x̄ − µ)/σx̄, giving z = −1.67 and z = 0.83. The corresponding table areas are 0.4525 and 0.2967, and 0.4525 + 0.2967 = 0.7492, so 74.92% of the sample means lie between $1900 and $2050.
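The same table-lookup calculation can be checked with the exact normal CDF. A standard-library sketch; the small difference from 0.7492 comes from table rounding:

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, n = 2000, 600, 100
se = sigma / math.sqrt(n)              # standard error = 600/10 = 60

# P(1900 < sample mean < 2050)
z_low = (1900 - mu) / se               # about -1.67
z_high = (2050 - mu) / se              # about 0.83
p = phi(z_high) - phi(z_low)
print(round(p, 3))                     # close to the 0.7492 read from tables
```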
13. Calculating the Standard Error (Exercise 6-30, Chapter 6, p. 321). Known: normal distribution, μ = 375, σ = 48, P = 95%; n = ?
14. Calculating the Standard Error (Exercise 6-30, Chapter 6, p. 321). Known: normal distribution, μ = 375, σ = 48, P = 95%; n = ? Step 1: P = Pz1 + Pz2 = 0.950, so z1 = −1.96 and z2 = 1.96; the interval 370 < X̄ < 380 corresponds to −1.96 < z < 1.96. Step 2: 1.96 = 5/(48/√n). Step 3: n = 354.04, so the sample size must be at least 355.
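The three steps collapse into the sample-size formula n = (z·σ/E)², rounded up. A minimal sketch with the numbers from the exercise:

```python
import math

# Required sample size so that the sample mean falls within +/- 5
# of mu = 375 with 95% probability (z = 1.96), given sigma = 48.
sigma, margin, z = 48, 5, 1.96
n_exact = (z * sigma / margin) ** 2
n = math.ceil(n_exact)                 # always round up
print(round(n_exact, 2), n)            # 354.04 355
```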
15. Calculating the Standard Error: infinite population vs. finite population.
16. The Finite Population Multiplier. F.P.M. = √((N − n)/(N − 1)), where N is the population size and n is the sample size.
17. The Finite Population Multiplier. Examples: 1) N = 20, n = 5: F.P.M. = 0.888; 2) N = 20, n = 19: 0.229; 3) N = 20, n = 20: 0; 4) N = 1000, n = 20: 0.99. When to use the F.P.M.? When the sampling fraction n/N exceeds 0.05.
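The four cases can be verified directly. A sketch; the slide rounds the results to three figures:

```python
import math

def fpm(N, n):
    """Finite population multiplier: sqrt((N - n) / (N - 1))."""
    return math.sqrt((N - n) / (N - 1))

# The four cases from the slide; note how the multiplier approaches 1
# when the sample is a tiny fraction of the population, which is why
# it can be ignored when n/N <= 0.05.
for N, n in [(20, 5), (20, 19), (20, 20), (1000, 20)]:
    print(N, n, round(fpm(N, n), 3))
```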
18. Calculating the Standard Error (SC 6-7a, Chapter 6, p. 327). Known: N = 125, n = 64, μ = 105, σ = 17; σx̄ = ? Step 1: n/N = 64/125 = 0.512 > 0.05, so the F.P.M. should be used. Step 2: σx̄ = (σ/√n) × √((N − n)/(N − 1)) = (17/8) × √(61/124) = 1.4904.
19. Calculating the Standard Error (SC 6-7b, Chapter 6, p. 327). Known: N = 125, n = 64, μ = 105, σ = 17, σx̄ = 1.4904; P(107.5 < X̄ < 109) = ? Step 1: visualize and calculate the z scores: z1 = (107.5 − 105)/1.4904 = 1.68 and z2 = (109 − 105)/1.4904 = 2.68, giving P1 = 0.4535 and P2 = 0.4963, so P = 0.4963 − 0.4535 = 0.0428.
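Both parts of SC 6-7 can be reproduced end to end. A standard-library sketch; the exact CDF differs from the table value 0.0428 only in the fourth decimal:

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

N, n, mu, sigma = 125, 64, 105, 17

# Standard error with the finite population multiplier (n/N > 0.05)
se = (sigma / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))
print(round(se, 4))                    # 1.4904

# P(107.5 < sample mean < 109)
p = phi((109 - mu) / se) - phi((107.5 - mu) / se)
print(round(p, 3))
```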
20. Calculating the Standard Error (SC 6-8, Chapter 6, p. 327). Known: n = 36, μ unknown, σ = 1.25 pounds. What is the probability that the sample mean is within one-half pound of the population mean? Step 1: visualize the problem and set up the z scores. Step 2: calculate the standard error of the sample means: σx̄ = σ/√n = 1.25/6 = 0.2083.
21. Calculating the Standard Error SC 6-8 Chapter 6, SC No. 6-8 P.327 Known: n=36, μ=?, σ=1.25 pounds, σx̄ ≈ 0.2083 Step 3: Calculate the z scores: z = ±0.5/0.2083 = ±2.4 Step 4: Convert to P values: Pz1 = 0.4918, Pz2 = 0.4918, P = 0.4918 + 0.4918 = 0.9836 Step 5: Finalize your answer: the probability is about 0.9836.
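The SC 6-8 steps above can be sketched in a few lines of Python (an illustration added here, not from the slides):

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n, sigma = 36, 1.25

# Step 2: standard error of the sample mean
se = sigma / math.sqrt(n)    # 1.25 / 6 ≈ 0.2083 pounds

# Steps 3-4: half a pound expressed in z units, then a symmetric probability
z = 0.5 / se                 # = 2.4
p = 2 * phi(z) - 1           # ≈ 0.9836
```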
22. Chapter 7 Introduction to Estimation: confidence level, confidence interval
23. Types of Estimates: Interval Estimates, Point Estimates
24. Interval Estimates: Basic Concepts P.354 The standard deviation is 10; we interviewed 200 people; according to them, the mean is 36 months. Standard error of the mean from an infinite population: σx̄ = σ/√n = 10/√200 ≈ 0.707 (σ is the standard deviation of the population, n is the sample size). 36 + 0.707 = 36.707, 36 − 0.707 = 35.293
25. Interval Estimates: Basic Concepts P.354 (same example: σ = 10, n = 200, x̄ = 36 months) z = 1.0, P = 0.3413 on each side, so we are about 68.3% confident that the population mean lies between 35.293 and 36.707.
26. Interval Estimates: Basic Concepts P.354 (same example) z = 2.0, P = 0.4775 on each side, so we are about 95.5% confident that the population mean lies between 34.586 and 37.414.
27. Interval Estimates: Basic Concepts P.354 (same example) z = 3.0, P = 0.4987 on each side, so we are about 99.7% confident that the population mean lies between 33.879 and 38.121.
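The three intervals on the preceding slides (z = 1, 2, 3) can be reproduced with a small Python sketch (added for illustration, not from the slides):

```python
import math

sigma, n, xbar = 10, 200, 36
se = sigma / math.sqrt(n)   # ≈ 0.707

# Intervals for z = 1, 2, 3 (≈ 68.3%, 95.5%, 99.7% confidence)
for z in (1, 2, 3):
    lo, hi = xbar - z * se, xbar + z * se
    print(f"z={z}: {lo:.3f} to {hi:.3f}")
```

This prints 35.293 to 36.707, then 34.586 to 37.414, then 33.879 to 38.121, matching the slides.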
28. Interval Estimates: Basic Concepts P.354
29. Interval Estimates: Basic Concepts Interval = x̄ ± z·σx̄
30. Interval Estimates: Basic Concepts EX 7-27a Chapter 7, No. 7-27 P. 365 Known: n=40, x̄=1416 EX 7-27b P=90%, z=1.645, σx̄ ≈ 4.743, so the margin is 1.645 × 4.743 ≈ 7.8029 Upper limit = 1416 + 7.8029 ≈ 1424 Lower limit = 1416 − 7.8029 ≈ 1408 We are 90% confident that our population mean lies between 1408 and 1424. Interval = x̄ ± z·σx̄
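A Python sketch of EX 7-27 (an illustration added here, not from the slides). The slide does not show σ; the value 30 below is reconstructed from the margin of error (7.8029 = 1.645 · σ/√40), so treat it as an assumption rather than a value stated in the exercise:

```python
import math

# n and x-bar are from the slide; sigma = 30 is back-solved from the
# slide's margin of error and is an assumption, not a stated value.
n, xbar, sigma = 40, 1416, 30
z = 1.645                        # z value for 90% confidence

se = sigma / math.sqrt(n)        # ≈ 4.743
margin = z * se                  # ≈ 7.803
lower, upper = xbar - margin, xbar + margin   # ≈ 1408 and 1424
```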
31. Interval Estimates: Basic Concepts If σ is unknown, what then? Use the estimated standard errors (P.368): Estimated standard error of the mean: σ̂x̄ = s/√n Estimated standard error of the proportion: σ̂p = √(p̂q̂/n) Interval = x̄ ± z·σ̂x̄ Interval = p̂ ± z·σ̂p
32. Interval Estimates: Basic Concepts EX 7-35a Chapter 7, No. 7-35 P. 369 Known: n=200, p=0.05, q=0.95, so σ̂p = √(pq/n) ≈ 0.0154 EX 7-35b P=98%, z=2.33 Interval = 0.05 ± 2.33 × 0.0154 Answer: about 0.01 to 0.09
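The EX 7-35 proportion interval can be verified with this Python sketch (added for illustration, not from the slides):

```python
import math

n, p, q = 200, 0.05, 0.95
z = 2.33                          # z value for a 98% confidence level

se_p = math.sqrt(p * q / n)       # estimated standard error of the proportion ≈ 0.0154
margin = z * se_p                 # ≈ 0.036
lower, upper = p - margin, p + margin   # ≈ 0.014 and 0.086, i.e. roughly 0.01 to 0.09
```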
33. Interval Estimates: Basic Concepts If the sample size is ≤ 30 AND σ is unknown, use the t-distribution. You can read the t value from Appendix Table 2.
34. Interval Estimates: Basic Concepts How to read the t-table? e.g. n=10, df=9, P=90% (0.05 in each tail): t = 1.833
35. Interval Estimates: Basic Concepts How to use the t value? Interval = x̄ ± t·σ̂x̄, where σ̂x̄ = s/√n
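A minimal Python sketch of a t-based interval, using a hypothetical sample of 10 observations (the data and variable names here are invented for illustration; the t value 1.833 is the Appendix Table 2 entry for df = 9 with 0.05 in each tail):

```python
import math
import statistics

# Hypothetical sample of n = 10 observations (illustration only)
sample = [102, 98, 105, 101, 99, 97, 103, 100, 104, 96]
n = len(sample)
xbar = statistics.mean(sample)          # 100.5
s = statistics.stdev(sample)            # sample std. dev. (n - 1 denominator)

t = 1.833                               # df = 9, 90% confidence
se_hat = s / math.sqrt(n)               # estimated standard error of the mean
lower, upper = xbar - t * se_hat, xbar + t * se_hat
```

Note that `statistics.stdev` already divides by n − 1, which is exactly the s required by the estimated standard error formula.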
36. Summary
37. The Normal Distribution SPSS Tip: t-test The data can be downloaded from: Blackboard – Inductive Statistics STA2—SPSS-- Week 3
38. SPSS Tip: t-test 3 types of t-test: One Sample t-test: tests whether the population mean differs from a constant. Paired-Samples t-test: tests whether the population mean of the differences between paired scores is zero. Independent Samples t-test: tests whether the mean of a quantitative variable differs between two groups (categories).
39. SPSS Tip: t-test One Sample t-test Example: A researcher wants to evaluate whether customers believe price change is more a function of natural fluctuations in inflation or of effects caused by human interventions. Thirty customers are assessed on the Price Change Attitude Scale, which yields scores ranging from 0 (due solely to natural fluctuations in inflation) to 100 (due solely to human interventions). A score of 50 is the test value and represents an equal contribution of the two effects. The data can be downloaded from: Blackboard – Inductive Statistics STA2—SPSS-- Week 3 One-Sample t-test.sav Variable: PCAS Description: Price Change Attitude Scale
40. SPSS Tip: t-test One Sample t-test (example continued) Null Hypothesis: The population mean is equal to 50. Variable: PCAS Description: Price Change Attitude Scale
41. SPSS Tip: t-test One Sample t-test Step 1: Choose Analyze --> Compare Means --> One-Sample T Test
42. SPSS Tip: t-test One Sample t-test Step 2: Move the variable you want to test into the box "Test Variable(s)". Enter the value in the box "Test Value"; in this example, the PCAS middle value is 50. Click OK and a popup window will appear.
43. SPSS Tip: t-test One Sample t-test Read the next slide to know how to interpret it!
44. (SPSS output for the One-Sample t-test; the screenshot is not preserved in this text version.)
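As a cross-check on what SPSS computes here, this Python sketch calculates the same one-sample t statistic by hand. The scores below are hypothetical (the real data live in One-Sample t-test.sav):

```python
import math
import statistics

# Hypothetical PCAS scores (illustration only; not the .sav data)
pcas = [55, 48, 62, 51, 47, 58, 53, 49, 60, 57]
test_value = 50                          # the "Test Value" entered in SPSS

n = len(pcas)
xbar = statistics.mean(pcas)
s = statistics.stdev(pcas)

# The statistic SPSS reports, with df = n - 1
t = (xbar - test_value) / (s / math.sqrt(n))
```

If the two-tailed p-value for this t (at df = n − 1) falls below your significance level, the null hypothesis that the population mean equals 50 is rejected.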
45. SPSS Tip: t-test Paired-Samples t-test Example: A researcher is interested in determining whether customers' satisfaction with DOVE body lotion improves when exposed to a new TV commercial. Thirty customers are assessed on the Satisfaction Scale for Customers (SSC) prior to and after the new TV commercial. The data can be downloaded from: Blackboard – Inductive Statistics STA2—SPSS-- Week 3 Paired-Sample t-test.sav Variables: Pre_SSC: score on the Satisfaction Scale for Customers prior to the new TV commercial; Post_SSC: score on the Satisfaction Scale for Customers after the new TV commercial
46. SPSS Tip: t-test Paired-Samples t-test (example continued) Null Hypothesis: The population mean of the differences between paired scores is zero.
47. SPSS Tip: t-test Paired-Samples t-test Step 1: Choose Analyze --> Compare Means --> Paired-Samples T Test
48. SPSS Tip: t-test Paired-Samples t-test Step 2: Move the first variable into the box "Paired Variables" as Variable 1, and the second variable as Variable 2. Click OK and a popup window will appear.
49. SPSS Tip: t-test Paired-Samples t-test Read the next slide to know how to interpret it!
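The paired-samples t statistic SPSS reports is just a one-sample t-test on the pre/post differences. This Python sketch shows the mechanics with hypothetical scores (the real data live in Paired-Sample t-test.sav):

```python
import math
import statistics

# Hypothetical pre/post SSC scores (illustration only; not the .sav data)
pre  = [60, 55, 70, 65, 58, 62, 68, 59]
post = [66, 58, 75, 64, 63, 70, 72, 61]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
dbar = statistics.mean(diffs)
sd = statistics.stdev(diffs)

# The paired t statistic SPSS reports, with df = n - 1
t = dbar / (sd / math.sqrt(n))
```

A significantly positive t here would indicate that satisfaction scores rose after the commercial, consistent with rejecting the null hypothesis that the mean difference is zero.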