Histograms and Descriptive Statistics Scoring Guide
Each criterion is scored at one of four levels: Non-performance, Basic, Proficient, or Distinguished.

Criterion: Apply the appropriate SPSS procedures for creating histograms to generate relevant output.
- Non-performance: Does not provide SPSS output.
- Basic: Provides SPSS output with errors.
- Proficient: Applies the appropriate SPSS procedures for creating histograms to generate relevant output.
- Distinguished: Analyzes the histogram output, demonstrating insight and understanding of relevant data.

Criterion: Interpret histogram results, including concepts of skew, kurtosis, outliers, symmetry, and modality.
- Non-performance: Does not provide an interpretation of histogram results.
- Basic: Provides an interpretation of histogram results.
- Proficient: Interprets histogram results, including concepts of skew, kurtosis, outliers, symmetry, and modality.
- Distinguished: Evaluates histogram results, including concepts of skew, kurtosis, outliers, symmetry, and modality.

Criterion: Analyze the strengths and limitations of examining a distribution of scores with a histogram.
- Non-performance: Does not identify the strengths and limitations of examining a distribution of scores with a histogram.
- Basic: Identifies the strengths and limitations of examining a distribution of scores with a histogram.
- Proficient: Analyzes the strengths and limitations of examining a distribution of scores with a histogram.
- Distinguished: Evaluates the strengths and limitations of examining a distribution of scores with a histogram. Demonstrates insight and understanding of relevant data.

Criterion: Apply the appropriate SPSS procedure for generating descriptive statistics to generate relevant output.
- Non-performance: Does not provide SPSS output.
- Basic: Includes some, but not all, of the required output. Numerous errors in SPSS output.
- Proficient: Applies the appropriate SPSS procedure for generating descriptive statistics to generate relevant output.
- Distinguished: Applies the appropriate SPSS procedure for generating descriptive statistics to generate relevant output. Includes all relevant output; no irrelevant output is included. No errors in SPSS output.

Criterion: Analyze meaningful versus meaningless variables reported in descriptive statistics.
- Non-performance: Does not identify meaningful versus meaningless variables reported in descriptive statistics.
- Basic: Identifies meaningful versus meaningless variables reported in descriptive statistics.
- Proficient: Analyzes meaningful versus meaningless variables reported in descriptive statistics.
- Distinguished: Evaluates meaningful versus meaningless variables reported in descriptive statistics.

Criterion: Interpret descriptive statistics for meaningful variables.
- Non-performance: Does not identify meaningful variables.
- Basic: Identifies meaningful variables.
- Proficient: Interprets descriptive statistics for meaningful variables.
- Distinguished: Evaluates descriptive statistics for meaningful variables.

Criterion: Apply the appropriate SPSS procedures for creating z scores and descriptive statistics to generate relevant output.
- Non-performance: Does not provide SPSS output.
- Basic: Provides SPSS output with errors.
- Proficient: Applies the appropriate SPSS procedures for creating z scores and descriptive statistics to generate relevant output.
- Distinguished: Analyzes the z scores and descriptive statistics output, demonstrating insight and understanding of relevant data.
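The final criterion above refers to creating z scores in SPSS. As a minimal illustration of the computation behind that output (the score list here is hypothetical, not taken from the assessment data):

```python
import statistics

# Standardize a small hypothetical score list: z = (x - mean) / sd.
scores = [10, 12, 14, 16, 18]
mean = statistics.mean(scores)   # 14
sd = statistics.stdev(scores)    # sample standard deviation, as SPSS reports
z_scores = [(x - mean) / sd for x in scores]
print([round(z, 3) for z in z_scores])  # [-1.265, -0.632, 0.0, 0.632, 1.265]
```

By construction, z scores always have a mean of 0 and a sample standard deviation of 1, which is what the SPSS descriptive statistics for the saved standardized variable should show.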
This document provides an overview of descriptive statistics and inferential statistics. Descriptive statistics are used to describe basic features of data through simple summaries, while inferential statistics are used to make inferences about populations based on samples. Examples of descriptive statistics include measures of central tendency, dispersion, frequency distributions and contingency tables. Inferential statistics allow for comparisons between groups and populations through techniques like t-tests, analysis of variance, regression analysis, and other general linear models.
This document provides an overview of descriptive statistics and inferential statistics. Descriptive statistics are used to describe basic features of data through simple summaries, while inferential statistics are used to make generalizations beyond the sample data. Key concepts covered include measures of central tendency and dispersion, the general linear model, dummy variables, experimental and quasi-experimental designs, analysis of variance, analysis of covariance, and regression analysis.
The document discusses descriptive statistics and inferential statistics. Descriptive statistics are used to describe basic features of data through simple summaries, while inferential statistics are used to make inferences beyond the sample data to general populations. Some common descriptive statistics are measures of central tendency, dispersion, frequency, and contingency tables. Inferential statistics allow for comparisons between groups and determining the probability of observed differences occurring by chance. Regression analysis is also discussed as a technique used to model relationships between dependent and independent variables and understand how changes in independent variables impact the dependent variable.
1. Descriptive statistics provide a simple summary of data through measures of central tendency, frequency, and variability.
2. Common measures include the mean, median, mode, and standard deviation; identifying outliers is also part of a descriptive summary.
3. Inferential statistics allow researchers to make generalizations about populations based on analyses of samples. They include t-tests, ANOVA, correlation, and regression.
ANOVA is a hypothesis testing technique used to compare the equality of means for two or more groups; for example, it can be used to test that the mean number of computer chips produced by a company on each of the day, evening, and night shifts is the same. Give an example of an application of ANOVA in an industrial, operations, or manufacturing setting that is different from the examples provided in the overview. Discuss and share this information with your classmates.
In responding to your peers, select responses that use an ANOVA application that is different from your own. Are the results of the ANOVA application statistically significant? Why are the results significant or not significant? Explain your reasoning. Consider how ANOVA could be applied to the final project case study.
Support your initial posts and response posts with scholarly sources cited in APA style.
https://statistics4beginners.wordpress.com/2015/02/18/how-to-calculate-anova-in-excel-2013/
PLEASE GIVE A 1-2 PARAGRAPH RESPONSE TO THE FOLLOWING:
1.
In this module, our goal is to learn the statistical process of comparing several population means through a procedure called "analysis of variance," or ANOVA. ANOVA uses the variance from the mean of two or more sample populations to see if there is a statistically significant difference between them (Sharpe, De Veaux, & Velleman, 2016). We've learned that this is a valuable tool in all sorts of areas of study, including the automotive, chemical, and medical industries.
There are many practical examples of ANOVA throughout business. As previously mentioned, the medical field can benefit from the use of this statistics tool. For example, a drug company may be interested in the results of clinical trials for a few new drugs they plan to release. Medicine A, B, and C are all now in the clinical testing phase, so the instances in which each cures a specific ailment can be summed up using ANOVA. Each of the individual drugs, through the course of multiple trials, will have a number of "cured" patients. The following is an example of what the results may be, in table format:
Trial    A    B    C
  1      4    9    2
  2      5    8    7
  3      7    1    6
  4      6    1    5
  5      6    4    9
Using ANOVA to evaluate the variance from the mean for each medicine, the ultimate goal would be to compare the three medicines to one another. By comparing the variance, we can say, with statistical confidence, whether one medicine may be more effective than the others.
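As a rough illustration of the comparison described above, the one-way ANOVA F statistic for the hypothetical trial counts can be computed by hand. The group values come from the table; everything else is the standard between/within sums-of-squares formulae:

```python
# Hand-rolled one-way ANOVA on the hypothetical clinical-trial counts above.
a = [4, 5, 7, 6, 6]
b = [9, 8, 1, 1, 4]
c = [2, 7, 6, 5, 9]
groups = [a, b, c]

all_values = [x for g in groups for x in g]
grand_mean = sum(all_values) / len(all_values)

# Between-group sum of squares: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: spread of observations around their own group mean.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1          # 2
df_within = len(all_values) - len(groups)  # 12
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(round(f_stat, 3))  # 0.278
```

With an F this small relative to the within-group spread, these particular made-up numbers would not show a statistically significant difference among the three medicines.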
This document provides instructions for answering questions about a dataset examining differences in client functioning (GAF) and satisfaction (Satisfaction) between public and private mental health agencies. The researcher wants to know if type of agency (independent variable) is related to GAF and Satisfaction scores (dependent variables). The assistant screened the data, found some missing values and outliers, and determined one variable violated the normality assumption.
Assessment 3 – Hypothesis, Effect Size, Power, and t Tests
Complete the following problems within this Word document. Do not submit other files. Show your work for problem sets that require calculations. Ensure that your answer to each problem is clearly visible. You may want to highlight your answer or use a different text color to set it apart.
Hypothesis, Effect Size, and Power
Problem Set 3.1: Sampling Distribution of the Mean Exercise
Criterion:
Interpret population mean and variance.
Instructions:
Read the information below and answer the questions.
Suppose a researcher wants to learn more about the mean attention span of individuals in some hypothetical population. The researcher cites that the attention span (the time in minutes attending to some task) in this population is normally distributed with the following characteristics: μ = 20, σ² = 36. Based on the parameters given in this example, answer the following questions:
1. What is the population mean (μ)? __________________________
2. What is the population variance (σ²)? __________________________
3. Sketch the distribution of this population. Make sure you draw the shape of the distribution and label the mean plus and minus three standard deviations.
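For the sketch in question 3, the tick marks can be worked out directly. Assuming the parameters above are μ = 20 and σ² = 36 (so σ = 6):

```python
import math

# Assumed parameters from the problem above: mean mu = 20, variance = 36.
mu, variance = 20, 36
sigma = math.sqrt(variance)  # standard deviation = 6

# Tick marks for the sketch: mean plus and minus one, two, and three SDs.
ticks = [mu + k * sigma for k in range(-3, 4)]
print(ticks)  # [2.0, 8.0, 14.0, 20.0, 26.0, 32.0, 38.0]
```

So the bell curve would be centered at 20, with the ±3σ endpoints labeled 2 and 38.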
Problem Set 3.2: Effect Size and Power
Criterion:
Explain effect size and power.
Instructions:
Read each of the following three scenarios and answer the questions.
Two researchers make a test concerning the effectiveness of a drug use treatment. Researcher A determines that the effect size in the population of males is d = 0.36; Researcher B determines that the effect size in the population of females is d = 0.20. All other things being equal, which researcher has more power to detect an effect? Explain. ______________________________________________________________________
Two researchers make a test concerning the levels of marital satisfaction among military families. Researcher A collects a sample of 22 married couples (n = 22); Researcher B collects a sample of 40 married couples (n = 40). All other things being equal, which researcher has more power to detect an effect? Explain. ______________________________________________________________________
Two researchers make a test concerning standardized exam performance among senior high school students in one of two local communities. Researcher A tests performance from the population in the northern community, where the standard deviation of test scores is 110 (σ = 110); Researcher B tests performance from the population in the southern community, where the standard deviation of test scores is 60 (σ = 60). All other things being equal, which researcher has more power to detect an effect? Explain. ______________________________________________________________________
Problem Set 3.3: Hypothesis, Direction, and Population Mean
Criterion:
Explain the relationship between hypothesis tests and the population mean.
Instructions:
Read the following and answer the questions.
Ashford 2: Week 1 - Instructor Guidance
Week Overview:
The following video series, Against All Odds: Inside Statistics, is helpful if you would like to watch it.
http://www.learner.org/resources/series65.html?pop=yes&pid=3138
For this week, we’ll learn that statistics is the science of collecting, organizing, presenting, analyzing, and interpreting numerical data to assist in making more effective decisions.
In today’s world, numerical information is everywhere. Statistical techniques are used to make decisions that affect our daily lives. The knowledge of statistical methods will help you understand how decisions are made and give you a better understanding of how they affect you. No matter what line of work you select, you will find yourself faced with decisions where an understanding of data analysis is helpful.
The concepts introduced this week include levels of measurement, measures of center, and measures of variation. The normal distribution and related calculations are also introduced this week.
Measurements
You should be able to distinguish among the nominal, ordinal, interval, and ratio levels of measurement.
Nominal level - data that is classified into categories and cannot be arranged in any particular order.
EXAMPLES: eye color, gender, religious affiliation.
Ordinal level – data arranged in some order, but the differences between data values cannot be determined or are meaningless.
EXAMPLE: During a taste test of 4 soft drinks, Mellow Yellow was ranked number 1, Sprite number 2, Seven-up number 3, and Orange Crush number 4.
Interval level - similar to the ordinal level, with the additional property that meaningful amounts of differences between data values can be determined. There is no natural zero point.
EXAMPLE: Temperature on the Fahrenheit scale.
Ratio level - the interval level with an inherent zero starting point. Differences and ratios are meaningful for this level of measurement.
EXAMPLES: Monthly income of surgeons, or distance traveled by manufacturer’s representatives per month.
Why do you need to know the level of measurement of your data? Because the level of measurement dictates the calculations that can be done to summarize and present the data, and it determines the statistical tests that should be performed on the data.
Probability
PROBABILITY is a value between zero and one, inclusive, describing the relative possibility (chance or likelihood) an event will occur.
There are three ways of assigning probability:
1. Classical Probability
This is based on the assumption that the outcomes of an experiment are equally likely.
2. Empirical Probability
The probability of an event happening is the fraction of the time similar events happened in the past.
Example: On February 1, 2003, the Space Shuttle Columbia exploded. This was the second disaster in 113 space missions for NASA. On the basis of this information, what is the probability that a future mission is successfully completed?
Probability of successful flight ...
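Using the empirical definition above, the Columbia example works out from the figures given (2 disasters in 113 missions, so 111 successes):

```python
# Empirical probability: the fraction of past missions that succeeded.
missions = 113
disasters = 2
p_success = (missions - disasters) / missions
print(round(p_success, 4))  # 0.9823
```

That is, on this (admittedly crude) empirical basis, the probability that a future mission is successfully completed is 111/113, about 0.98.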
This document discusses statistical models and inferential statistics. It defines statistical modeling as using mathematical tools and statistical conclusions to understand real-life situations. There are three main types of statistical models: parametric models which have known parameters; nonparametric models which have flexible parameters; and semi-parametric models which are a blend of the two. Inferential statistics are used to draw conclusions about populations based on samples, while descriptive statistics describe sample characteristics. Common inferential statistics techniques include hypothesis testing, regression analysis, z-tests, t-tests, f-tests, and confidence intervals.
This document provides information about getting fully solved assignments for MBA students. It details contact information for an assignment help service via email or phone call, and provides a sample assignment question document. The sample assignment covers topics in statistics, including probability, sampling, hypothesis testing, analysis of variance, index numbers, and includes 6 questions with sub-questions and evaluation criteria. Students are instructed to answer all questions, with approximately 400 word answers for 10 mark questions.
This document discusses using multiple regression analysis to predict real estate sale prices. Several independent variables are considered as predictors, including floor height, distance from elevator, ocean view, whether it is an end unit, and whether furniture is included. The analysis finds some variables like ocean view and floor height are statistically significant in predicting sale price, while others like the interaction between distance from elevator and ocean view are also important. The regression model provides insight into how real estate businesses can focus their resources based on which factors most influence prices.
MELJUN CORTES research lectures_evaluating_data_statistical_treatment
This document discusses the importance of statistics in research and the proper treatment of data. It notes that statistics are the backbone of research and help organize data in tables and graphs to guide meaningful interpretations. The document outlines the data analysis process and different levels of measurement for variables. It provides a matrix for statistical treatment of different types of data and describes common statistical operations like measures of central tendency, variance, correlation, and statistical tests. Dangers of misusing statistics are also discussed.
This document provides an overview of a presentation on statistical hypothesis testing using the t-test. It discusses what a t-test is, how to perform a t-test, and provides an example of a t-test comparing spelling test scores of two groups that received different teaching strategies. The document outlines the six steps for conducting statistical hypothesis testing using a t-test: 1) stating the hypotheses, 2) choosing the significance level, 3) determining the critical values, 4) calculating the test statistic, 5) comparing the test statistic to the critical values, and 6) writing a conclusion.
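Step 4 of the six steps summarized above (calculating the test statistic) can be sketched for an independent-samples t test. The spelling scores below are made up for illustration, since the presentation's actual data are not given here; the formula is the standard pooled-variance two-sample t:

```python
import math

# Hypothetical spelling scores for two groups taught with different strategies.
group1 = [85, 78, 92, 88, 75]
group2 = [70, 82, 68, 74, 79]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    """Sample variance (n - 1 in the denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(group1), len(group2)
# Pooled variance, then the standard error of the difference in means.
sp2 = ((n1 - 1) * var(group1) + (n2 - 1) * var(group2)) / (n1 + n2 - 2)
t = (mean(group1) - mean(group2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(df)  # 8
```

Step 5 would then compare this t value against the critical value for df = 8 at the chosen significance level before writing the conclusion.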
Assignment 2: Tests of Significance
Throughout this assignment you will review mock studies. You will need to follow the directions outlined in each section using SPSS and decide whether there is significance between the variables. You will need to list the five steps of hypothesis testing (as covered in the lesson for Week 6) to see how every question should be formatted. You will complete all of the problems. Be sure to cut and paste the appropriate test result boxes from SPSS under each problem and explain what you will do with your research hypotheses. All calculations should come from SPSS. You will need to submit the SPSS output file to get credit for this assignment. This file will save as a .spv file and must be a single file; in other words, you are not allowed to submit more than one output file for this assignment.
The five steps of hypothesis testing when using SPSS are as follows:
1. State your research hypothesis (H1) and null hypothesis (H0).
2. Identify your significance level (.05 or .01).
3. Conduct your analysis using SPSS.
4. Look for the value used for comparison. This value is usually under "Sig. (2-tailed)". We will call this p.
5. Compare the two and apply the following rule:
a. If p is less than or equal to the significance level, then you reject the null.
Be sure to explain to the reader what this means in regards to your study. (Ex: will you recommend counseling services?)
* Be sure that your answers are clearly distinguishable. Perhaps you bold your font or use a different color.
ASSIGNMENT 2 (200 WORD MINIMUM)
1. They allow us to see if our relationship is "statistically significant". (Remember that this only shows us that there is or is not a relationship but does NOT show us if it is big, small, or in-between.)
2. It lets us know whether our findings can be generalized to the population from which our sample was selected and which it represents.
This week you will decide which test of significance you will use for your project. For this class your choices for tests will include one of the following:
· Chi-square
· t Test
· ANOVA
We will be using a process for hypothesis testing which outlines five steps researchers can follow to complete this process:
1. Write your research hypothesis (H1) and your null hypothesis (H0).
2. Identify and record your significance level. This is usually .05 (95% confidence) or .01 (99% confidence).
3. Complete the test using SPSS.
4. Identify the number under "Sig. (2-tailed)". This will be represented by p.
5. Compare the numbers in steps 2 and 4 and apply the following rule:
1. If p is less than or equal to the significance level, then you reject the null hypothesis.
Determine what to do with your null and explain this to your reader. Be sure to go beyond the phrase "reject or fail to reject the null" and explain how that impacts your research and best describes the relationship between variables.
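The decision rule in step 5 can be written out as a tiny sketch, assuming the p-value has already been read from the SPSS output under "Sig. (2-tailed)":

```python
# Minimal sketch of the step-5 decision rule: compare p to alpha.
def decide(p_value: float, alpha: float = 0.05) -> str:
    """State the hypothesis-test decision for a given p-value and alpha."""
    if p_value <= alpha:
        return "Reject the null hypothesis (H0)"
    return "Fail to reject the null hypothesis (H0)"

print(decide(0.031))        # Reject the null hypothesis (H0)
print(decide(0.200))        # Fail to reject the null hypothesis (H0)
print(decide(0.031, 0.01))  # Fail to reject the null hypothesis (H0)
```

Note the third call: the same p-value can lead to different decisions depending on the significance level chosen in step 2, which is why that level must be recorded before the test is run.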
TEST QUESTIONS-NEED FULL ANSWERS
Q1
Make up and discuss research examples corresponding to the various ...
Statistical analysis involves investigating trends, patterns, and relationships using quantitative data. It requires careful planning from the start, including specifying hypotheses and designing the study. After collecting sample data, descriptive statistics summarize and organize the data, while inferential statistics are used to test hypotheses and make estimates about populations. Key steps in statistical analysis include planning hypotheses and research design, collecting a sufficient sample, summarizing data with measures of central tendency and variability, and testing hypotheses or estimating parameters with techniques like regression, comparison tests, and confidence intervals. The results must be interpreted carefully in terms of statistical significance, effect sizes, and potential decision errors.
This document provides an overview of descriptive statistics, inferential statistics, and regression analysis using PASW Statistics software. It discusses topics such as frequency analysis, measures of central tendency, hypothesis testing, t-tests, ANOVA, chi-square tests, correlation, and linear regression. The document is divided into multiple parts that cover opening and manipulating data files, descriptive statistics, tests of significance, regression analysis, and chi-square/ANOVA. It also discusses importing/exporting data and using scripts in PASW Statistics.
SPSS (Statistical Package for the Social Sciences) is statistical software used for data management and analysis. It allows users to process questionnaires, report data in tables and graphs, and analyze data through various tests like means, chi-square, and regression. Originally called SPSS Inc., it is now owned by IBM and known as IBM SPSS Statistics. The document provides an introduction to SPSS and outlines how to define variables, enter data, select cases, run descriptive statistics like frequencies and crosstabs, and manipulate output files.
This document discusses statistical analysis using SPSS. It describes descriptive statistics, which present data in a usable form by describing frequency, central tendency, and dispersion. Inferential statistics make broader generalizations from samples to populations using hypothesis testing. Hypothesis testing involves research hypotheses, null hypotheses, levels of significance, and type I and II errors. Choosing an appropriate statistical test depends on the hypothesis and measurement levels of the variables. SPSS is a comprehensive system for statistical analysis that can analyze many file types and generate reports and statistics.
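The descriptive statistics these tools report — frequency, central tendency, and dispersion — can be reproduced outside SPSS. A minimal sketch using Python's standard library (the scores are made-up illustration data):

```python
from collections import Counter
from statistics import mean, median, mode, stdev

# Hypothetical sample of test scores (illustration only)
scores = [70, 75, 75, 80, 80, 80, 85, 90]

frequencies = Counter(scores)                           # frequency distribution
central = (mean(scores), median(scores), mode(scores))  # central tendency
dispersion = stdev(scores)                              # sample standard deviation

print(frequencies[80])        # 3 (the score 80 occurs three times)
print(central)                # (79.375, 80.0, 80)
print(round(dispersion, 2))   # 6.23
```

SPSS's Frequencies and Descriptives procedures report these same quantities, along with skewness and kurtosis, in one output table.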
Between Black and White Population
1. Comparing annual percent of Medicare enrollees having at least one ambulatory visit between B and W
2. Comparing average annual percent of diabetic Medicare enrollees age 65-75 having hemoglobin A1c between B and W
3. Comparing average annual percent of diabetic Medicare enrollees age 65-75 having eye examination between B and W
4. Comparing average annual percent of diabetic Medicare enrollees age 65-75 having
Students will develop an analysis report, in five main sections, including introduction, research method (research questions/objective, data set, research method, and analysis), results, conclusion and health policy recommendations. This is a 5-6 page individual project report.
Here are the main steps for this assignment.
Step 1: Students are required to submit the topic using the topic selection discussion forum by the end of week 1 and wait for instructor approval.
Step 2: Develop the research question and
Step 3: Run the analysis using EXCEL (RStudio for BONUS points) and report the findings using the assignment instruction.
The Report Structure:
Start with the
1. Cover page (1 page, including running head).
Please look at the example http://www.apastyle.org/manual/related/sample-experiment-paper-1.pdf (you can download the file from the class) and http://www.umuc.edu/library/libhow/apa_tutorial.cfm to learn more about the APA style.
In the title page include:
· Title, this is the approved topic by your instructor.
· Student name
· Class name
· Instructor name
· Date
2. Introduction
Introduce the problem or topic being investigated. Include relevant background information, for example;
· Indicate why this is an issue or topic worth researching;
· Highlight how others have researched this topic or issue (whether quantitatively or qualitatively); and
· Specify how others have operationalized this concept and measured these phenomena.
Note: Introduction should not be more than one or two paragraphs.
Literature Review
There is no need for a literature review in this assignment.
3. Research Question or Research Hypothesis
What is the Research Question or Research Hypothesis?
***Just in time information: Here are a few points for Research Question or Research Hypothesis
There are basically two kinds of research questions: testable and non-testable. Neither is better than the other, and both have a place in applied research.
Examples of non-testable questions are:
How do managers feel about the reorganization?
What do residents feel are the most important problems facing the community?
Respondents' answers to these questions could be summarized in descriptive tables and the results might be extremely valuable to administrators and planners. Business and social science researchers often ask non-testable research questions. The shortcoming with these types of questions is that they do not provide objective cut-off points for decision-makers.
In order to overcome this problem, researchers often seek to answer o ...
https://www.azed.gov/oelas/elps/
Use this to see the English Language Proficiency Standards of Arizona. Pick a grade level.
https://cms.azed.gov/home/GetDocumentFile?id=54de1d88aadebe14a87070f0
http://www.corestandards.org/ELA-Literacy/introduction/how-to-read-the-standards/
how to read standards
Week 04
Acquisition and Customer Lifetime Value (CLV)
https://www.smh.com.au/politics/federal/nbn-customers-face-higher-prices-or-poorer-internet-connection-audit-warns-20190813-p52go7.html
Customer Relationship Management?
CRM is the process of carefully managing detailed information about individual customers and all customer touch points to maximize customer loyalty. It is now closely associated with data warehousing and mining.
Identifying good customers: RFM Model
· Recency (R): time/purchase occasions since the last purchase
· Frequency (F): number of purchase occasions since the first purchase
· Monetary Value (M): amount spent since the first purchase
Total RFM Score = R Score + F Score + M Score
CASE: Database for BookBinders Book Club
Predict response to a mailing for the book, Art History of Florence, based on the following variables accumulated in the database and the responses to a test mailing:
Gender
Amount purchased
Months since first purchase
Months since last purchase
Frequency of purchase
Past purchases of art books
Past purchases of children’s books
Past purchases of cook books
Past purchases of DIY books
Past purchases of youth books
Example: RFM Model Scoring Criteria

R (months from last purchase): 13+ = 5 pts | 10-12 = 10 | 7-9 = 15 | 3-6 = 20 | 0-2 = 25
F (frequency of purchase):     >30 = 25 pts | 21-30 = 20 | 16-20 = 15 | 11-15 = 10 | 0-10 = 5
M (amount purchased):          >400 = 50 pts | 301-400 = 45 | 201-300 = 30 | 101-200 = 15 | 100 or less = 10
Implement using Nested If statements in Excel
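The same nested-IF logic can be sketched in Python; the thresholds below are taken from the scoring table above (the function names are my own):

```python
def r_score(months_since_last):
    # Recency: more recent purchases earn more points
    if months_since_last <= 2:  return 25
    if months_since_last <= 6:  return 20
    if months_since_last <= 9:  return 15
    if months_since_last <= 12: return 10
    return 5  # 13 months or more

def f_score(purchases):
    # Frequency: more frequent buyers earn more points
    if purchases > 30: return 25
    if purchases > 20: return 20
    if purchases > 15: return 15
    if purchases > 10: return 10
    return 5  # 0-10 purchases

def m_score(amount):
    # Monetary value: bigger spenders earn more points
    if amount > 400: return 50
    if amount > 300: return 45
    if amount > 200: return 30
    if amount > 100: return 15
    return 10  # 100 or less

def rfm_total(months_since_last, purchases, amount):
    return r_score(months_since_last) + f_score(purchases) + m_score(amount)

# Example: last purchase 4 months ago, 18 purchases, $250 spent
print(rfm_total(4, 18, 250))  # 20 + 15 + 30 = 65
```

In Excel each function corresponds to one nested IF chain per column, summed into a Total RFM Score column.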
Decile Classification
• Standard assessment method
• Apply the results of the approach and calculate the "score" of each individual
• Order the customers based on "score" from the highest to the lowest
• Divide into deciles
• Calculate profits per decile
(Slide illustration: customers ranked by score from highest to lowest — e.g. Customer 1 = 1.00, Customer 2 = 0.99, ..., Customer 230 = 0.92, ..., Customer 2300 = 0.00 — then split into Decile 1 through Decile 10.)
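The ranking-and-splitting procedure above can be sketched as follows (the scores are assumed to be precomputed, e.g. RFM totals):

```python
def assign_deciles(scores):
    """Order customers by score (highest first) and split them into 10 deciles.

    Returns (customer_index, score, decile) tuples, where decile 1 holds the
    best-scoring 10% of customers and decile 10 the worst-scoring 10%.
    """
    ranked = sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True)
    n = len(ranked)
    # rank * 10 // n maps the sorted position onto 0..9; +1 gives deciles 1..10
    return [(idx, score, min(rank * 10 // n + 1, 10))
            for rank, (idx, score) in enumerate(ranked)]

# 20 customers with scores 0..19: the two highest land in decile 1
print(assign_deciles(list(range(20)))[0])   # (19, 19, 1)
```

Profit per decile then follows by summing each decile's revenue and cost, as in the output table below.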
Output for Bookbinders club

Decile   Score RFM   No. of Mailings   Cost of Mailing   RFM Units Sold   RFM Profit
  10       17.6%          5,000            $3,250               783          $4,733
  20       34.8%         10,000            $6,500             1,543          $9,243
  30       46.1%         15,000            $9,750             2,043         $11,093
  40       53.4%         20,000           $13,000             2,370         $11,170
  50       65.2%         25,000           $16,250             2,891         $13,241
  60       77.9%         30,000           $19,500             3,457         $15,757
  70       83.3%         35,000           $22,750             3,696         $14,946
  80       91.7%         40,000           $26,000             4,065         $15,465
  90       97.5%         45,000           $29,250             4,326         $14,876
 100      100.0%         50,000           $32,500             4,435         $12,735
Note: Market Potential = 4435 units and margin = $10.20
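Each profit figure is (units sold × margin) minus mailing cost. A quick check of the decile-10 row, assuming $3,250 / 5,000 mailings implies $0.65 per piece (small differences from the table come from rounding in the slide):

```python
MARGIN = 10.20           # contribution margin per unit, from the note above
COST_PER_MAILING = 0.65  # assumed: $3,250 / 5,000 mailings = $0.65 each

def mailing_profit(units_sold, mailings):
    """Profit at a given mailing depth: gross margin minus mailing cost."""
    return units_sold * MARGIN - mailings * COST_PER_MAILING

print(round(mailing_profit(783, 5000), 2))   # 4736.6, vs $4,733 in the table
```

The table also shows why mailing everyone is suboptimal: profit peaks around the 60% depth ($15,757) and declines as deeper deciles add mailing cost faster than revenue.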
Leaky bucket
· Inflows: new customer acquisition; purchase increases by current customers
· Outflows: purchase decreases by current customers; lost customers
Credit Card Rewards Program ...
The 30 June 2019 local elections in Albania took place in a context of deep political polarization and crisis. The main opposition parties boycotted the elections and called on voters to abstain. As a result, many mayoral races were uncontested. The elections suffered from a lack of trust in the impartiality of the election administration due to its unbalanced composition. While voting and counting were carried out efficiently on election day, the broader process failed to provide voters with a genuine choice between political alternatives. The elections did not resolve the underlying political disputes and the country remained in a state of political uncertainty.
Field Methods, 2005; 17: 30
DOI: 10.1177/1525822X04269550
Don A. Dillman and Leah Melani Christian
Survey Mode as a Source of Instability in Responses across Surveys
The online version of this article can be found at: http://fmx.sagepub.com/cgi/content/abstract/17/1/30
Published by SAGE Publications (http://www.sagepublications.com); journal home: http://fmx.sagepub.com
Survey Mode as a Source of Instability
in Responses across Surveys
DON A. DILLMAN
LEAH MELANI CHRISTIAN
Washington State University
Changes in survey mode for conducting panel surveys may contribute significantly to
survey error. This article explores the causes and consequences of such changes in
survey mode. The authors describe how and why the choice of survey mode often
causes changes to be made to the wording of questions, as well as the reasons that
identically worded questions often produce different answers when administered
through different modes. The authors provide evidence that answers may change as a
result of different visual layouts for otherwise identical questions and suggest ways
to keep measurement the same despite changes in survey mode.
Keywords: survey mode; questionnaire; panel survey; measurement; survey error
Most panel studies require measurement of the same variables at different times. Often, participants are asked questions several days, weeks, months, or years apart to measure change in some characteristics of interest to the investigation. These characteristics might include political attitudes, satisfaction with a health care provider, frequency of a behavior, ownership of financial resources, or level of educational attainment. Whatever the characteristic of interest, it is important that the question used to ascertain it perform the same across multiple data collections.
In addition, declining survey response rates, particularly for telephone surveys, have encouraged researchers to use multiple modes of data collection during the administration of a single cross-sectional survey. Encouraged by the availability of more survey modes than in the past and evidence that a change in modes produces higher response rates (Dillman 2002), surveyors
This is a revision of a paper presented at t ...
https://iexaminer.org/fake-news-personal-responsibility-must-trump-intellectual-laziness/
Fake news: Personal responsibility must trump intellectual laziness
By Matt Chan January 4, 2017
Where do you get your news? That question has become incredibly important given the results of our Presidential Election. How many times have you heard, “I read a news story on Facebook and …” The problem: Facebook is not a news service; it’s a “social media” site whose purpose is to connect like-minded friends and family, to provide you with social connections, and online entertainment.
For Asian Americans social media provides an important and useful way of connecting socially and in some cases politically, but there is a downside. The downside is how social media actually works. These sites employ elaborate algorithms to track and analyze your posts, likes, and dislikes to provide you with a custom experience unique to you. The truth is you are being marketed to, not informed. What looks like news, is not really news, it’s personal validation. All in an attempt to keep you on the site longer, to click a few more things, to make you feel good about what you’re reading. It makes it seem like most people agree with you because you’re only fed information and stories that validate your worldview.
On the other hand, real news is hard work. It's fact-based information presented by people who have checked, researched, and documented what they are presenting as the truth. Real news can be verified.
“Fake News” is, well, fake, often times entirely made-up or containing a hint of truth. Social media was largely responsible for pushing “fake news” stories that were entirely made up to drive clicks on websites. These clicks in turn generated money for the people promoting the stories. The more outrageous the story, the more clicks, the more revenue. When you factor in the algorithms that feed you what you like, you can clearly see the more “fake news” you consume on social media, the more is pushed your way. There’s an abundance of pseudo news sites that merely re-post and curate existing stories, adding their bias to validate their audience’s beliefs, no matter how crazy or mainstream. It is curated solely for you. Now factor in that nearly 44% of Americans obtain some or most of their news from social media and you have a very toxic mix.
The mainstream news media has also fallen into this validation trap. You have one news network that solely reflects the right wing, others that take the view of the left-center leaning, and what is lost are the facts and context, the balance we need to evaluate, learn, and understand the world. People seeking fact-based journalism lose, because the more extreme the media becomes to entice consumers with provocative headlines and click-bait to earn more money, the less their news is fact-based and becomes more opinion driven.
There was a time when fact-based reporting was required of broadcast news. It was called “The Fairness Doctrin ...
http://1500cms.com/
BECAUSE THIS FORM IS USED BY VARIOUS GOVERNMENT AND PRIVATE HEALTH PROGRAMS, SEE SEPARATE INSTRUCTIONS ISSUED BY
APPLICABLE PROGRAMS.
NOTICE: Any person who knowingly files a statement of claim containing any misrepresentation or any false, incomplete or misleading information may
be guilty of a criminal act punishable under law and may be subject to civil penalties.
REFERS TO GOVERNMENT PROGRAMS ONLY
MEDICARE AND CHAMPUS PAYMENTS: A patient’s signature requests that payment be made and authorizes release of any information necessary to process
the claim and certifies that the information provided in Blocks 1 through 12 is true, accurate and complete. In the case of a Medicare claim, the patient’s signature
authorizes any entity to release to Medicare medical and nonmedical information, including employment status, and whether the person has employer group health
insurance, liability, no-fault, worker’s compensation or other insurance which is responsible to pay for the services for which the Medicare claim is made. See 42
CFR 411.24(a). If item 9 is completed, the patient’s signature authorizes release of the information to the health plan or agency shown. In Medicare assigned or
CHAMPUS participation cases, the physician agrees to accept the charge determination of the Medicare carrier or CHAMPUS fiscal intermediary as the full charge,
and the patient is responsible only for the deductible, coinsurance and noncovered services. Coinsurance and the deductible are based upon the charge
determination of the Medicare carrier or CHAMPUS fiscal intermediary if this is less than the charge submitted. CHAMPUS is not a health insurance program but
makes payment for health benefits provided through certain affiliations with the Uniformed Services. Information on the patient’s sponsor should be provided in those
items captioned in “Insured”; i.e., items 1a, 4, 6, 7, 9, and 11.
BLACK LUNG AND FECA CLAIMS
The provider agrees to accept the amount paid by the Government as payment in full. See Black Lung and FECA instructions regarding required procedure and
diagnosis coding systems.
SIGNATURE OF PHYSICIAN OR SUPPLIER (MEDICARE, CHAMPUS, FECA AND BLACK LUNG)
I certify that the services shown on this form were medically indicated and necessary for the health of the patient and were personally furnished by me or were furnished
incident to my professional service by my employee under my immediate personal supervision, except as otherwise expressly permitted by Medicare or CHAMPUS
regulations.
For services to be considered as “incident” to a physician’s professional service, 1) they must be rendered under the physician’s immediate personal supervision
by his/her employee, 2) they must be an integral, although incidental part of a covered physician’s service, 3) they must be of kinds commonly furnished in physician’s
offices, and 4) the services of nonphysicians must be included on the physician’s bills.
For CHA ...
https://www.medicalnewstoday.com/articles/323444.php
https://ascopubs.org/doi/full/10.1200/JCO.2008.16.0333
https://journals.lww.com/co-hematology/Abstract/2007/03000/Influence_of_new_molecular_prognostic_markers_in.5.aspx
Influence of new molecular prognostic markers in patients with karyotypically normal acute myeloid leukemia: recent advances
Mrózek, Krzysztof; Döhner, Hartmut; Bloomfield, Clara D.
Current Opinion in Hematology: March 2007 - Volume 14 - Issue 2 - p 106–114
doi: 10.1097/MOH.0b013e32801684c7
Myeloid disease
Purpose of review Molecular study of cytogenetically normal acute myeloid leukemia is among the most active areas of leukemia research. Despite having the same normal karyotype, adults with de-novo cytogenetically normal acute myeloid leukemia, who constitute the largest cytogenetic group of acute myeloid leukemia, are very diverse with respect to acquired gene mutations and gene expression changes. These genetic alterations affect clinical outcome and may assist in selection of proper treatment. Herein we critically summarize recent clinically relevant molecular genetic studies of cytogenetically normal acute myeloid leukemia.
Recent findings NPM1 gene mutations causing aberrant cytoplasmic localization of nucleophosmin have been demonstrated to be the most frequent submicroscopic alterations in cytogenetically normal acute myeloid leukemia and to confer improved prognosis, especially in patients without a concomitant FLT3 gene internal tandem duplication. Overexpressed BAALC, ERG and MN1 genes and expression of breast cancer resistance protein have been shown to confer poor prognosis. A gene-expression signature previously suggested to separate cytogenetically normal acute myeloid leukemia patients into prognostic subgroups has been validated on a different microarray platform, although gene-expression signature-based classifiers predicting outcome for individual patients with greater accuracy are still needed.
Summary The discovery of new prognostic markers has increased our understanding of leukemogenesis and may lead to improved prognostication and generation of novel risk-adapted therapies.
http://www.bloodjournal.org/content/127/1/53?sso-checked=true
An update of current treatments for adult acute myeloid leukemia
Hervé Dombret and Claude Gardin
Abstract
Recent advances in acute myeloid leukemia (AML) biology and its genetic landscape should ultimately lead to more subset-specific AML therapies, ideally tailored to each patient's disease. Although a growing number of distinct AML subsets have been increasingly characterized, patient management has remained disappointingly uniform. If one excludes acute promyelocytic leukemia, current AML management still relies largely on intensive chemotherapy and allogeneic hematopoietic stem cell transplantation (HSCT), at least in younger patients who can tolerate such intensive treatments. Nevertheless, progress has been made, notably in terms of standard drug dose in ...
https://theater.nytimes.com/mem/theater/treview.html?res=9902e6db1639f931a25753c1a962948260
THEATER: WILSON'S 'MA RAINEY'S' OPENS
By FRANK RICH
Published: October 12, 1984, Friday
LATE in Act I of ''Ma Rainey's Black Bottom,'' a somber, aging band trombonist (Joe Seneca) tilts his head heavenward to sing the blues. The setting is a dilapidated Chicago recording studio of 1927, and the song sounds as old as time. ''If I had my way,'' goes the lyric, ''I would tear this old building down.''
Once the play has ended, that lyric has almost become a prophecy. In ''Ma Rainey's Black Bottom,'' the writer August Wilson sends the entire history of black America crashing down upon our heads. This play is a searing inside account of what white racism does to its victims - and it floats on the same authentic artistry as the blues music it celebrates. Harrowing as ''Ma Rainey's'' can be, it is also funny, salty, carnal and lyrical. Like his real-life heroine, the legendary singer Gertrude (Ma) Rainey, Mr. Wilson articulates a legacy of unspeakable agony and rage in a spellbinding voice.
The play is Mr. Wilson's first to arrive in New York, and it reached here, via the Yale Repertory Theater, under the sensitive hand of the man who was born to direct it, Lloyd Richards. On Broadway, Mr. Richards has honed ''Ma Rainey's'' to its finest form. What's more, the director brings us an exciting young actor - Charles S. Dutton - along with his extraordinary dramatist. One wonders if the electricity at the Cort is the same that audiences felt when Mr. Richards, Lorraine Hansberry and Sidney Poitier stormed into Broadway with ''A Raisin in the Sun'' a quarter-century ago.
As ''Ma Rainey's'' shares its director and Chicago setting with ''Raisin,'' so it builds on Hansberry's themes: Mr. Wilson's characters want to make it in white America. And, to a degree, they have. Ma Rainey (1886-1939) was among the first black singers to get a recording contract - albeit with a white company's ''race'' division. Mr. Wilson gives us Ma (Theresa Merritt) at the height of her fame. A mountain of glitter and feathers, she has become a despotic, temperamental star, complete with a retinue of flunkies, a fancy car and a kept young lesbian lover.
The evening's framework is a Paramount-label recording session that actually happened, but whose details and supporting players have been invented by the author. As the action swings between the studio and the band's warm-up room - designed by Charles Henry McClennahan as if they might be the festering last-chance saloon of ''The Iceman Cometh'' - Ma and her four accompanying musicians overcome various mishaps to record ''Ma Rainey's Black Bottom'' and other songs. During the delays, the band members smoke reefers, joke around and reminisce about past gigs on a well-traveled road stretching through whorehouses and church socials from New Orleans to Fat Back, Ark.
The musicians' speeches are like improvised band solos - variously fiz ...
https://fitsmallbusiness.com/employee-compensation-plan/
The puzzle of motivation | Dan Pink [Video file]. Retrieved from https://www.youtube.com/watch?v=rrkrvAUbU9Y
Refining the total rewards package through employee input at MillerCoors [Video file]. Retrieved from https://www.youtube.com/watch?v=_I7nv0B4_NU&feature=youtu.be
How to design an employee compensation plan [SlideShare slides]. Retrieved from http://www.slideshare.net/FitSmallBusiness/how-to-design-a-compensation-plan-dave?ref=http://fitsmallbusiness.com/how-to-pay-employees/
Compensation strategies [Video file]. Retrieved from https://youtu.be/U2wjvBigs7w
· Expectations for PowerPoint Presentations in Units IV and V
I would like to provide information about what needs to be included in presentations. Please review the rubric prior to submitting any assignment. If you don't know where to find this, please contact me.
1. You need a title slide.
2. You need an overview of the presentation slide (slide after the title slide). This is how you would organize a presentation if you were presenting it at work.
3. You need a summary slide (before the reference slide); same reason as above.
4. Please do not forget to cite on slides where you are writing about something related to what you have read. Please consider each slide a paragraph. You can cite on the slides or in the notes. If you do not cite, you will not get credit for the slide.
- Direct quotes should not be used in this presentation as they are not analysis.
5. Remember, all I can evaluate is what you submit, so please consider using notes to explain what you are writing in further detail. Bullets are great and you can use these but then provide more detail in the notes.
6. Graphics - Please include graphics/charts/graphs as this is evaluated in the rubric (quality of the presentation).
7. References - For all references, you need citations. For all citations, you need references. They must match. All must be formatted using APA requirements. Please review the Quick Reference Guide that was posted in the announcements.
Please never hesitate to email me with any questions. If you need further clarification about feedback or if you do not agree with any of the feedback, please contact me. My door is always open.
Assignment 1
Positioning Statement and Motto
Use the provided information, as well as your own research, to assess one (1) of the stated brands (Tesla, SmoothieKing, Suave, or Nintendo) by completing the questions below with an ORIGINAL response to each. At the end of the worksheet, be sure to develop a new ORIGINAL positioning statement and motto for the brand you selected. Submit the completed template in the Week 4 assignment submission link.
Name:
Professor’s Name:
Course Title:
Date:
Company/Brand Selected (Tesla, SmoothieKing, Suave or Nintendo):
1. Target Customers/Users
Who are the target customers for the company/brand? Make sure you tell why you selected each item that you did. (NOTE: DO NO ...
This document discusses statistical models and inferential statistics. It defines statistical modeling as using mathematical tools and statistical conclusions to understand real-life situations. There are three main types of statistical models: parametric models which have known parameters; nonparametric models which have flexible parameters; and semi-parametric models which are a blend of the two. Inferential statistics are used to draw conclusions about populations based on samples, while descriptive statistics describe sample characteristics. Common inferential statistics techniques include hypothesis testing, regression analysis, z-tests, t-tests, f-tests, and confidence intervals.
This document provides information about getting fully solved assignments for MBA students. It details contact information for an assignment help service via email or phone call, and provides a sample assignment question document. The sample assignment covers topics in statistics, including probability, sampling, hypothesis testing, analysis of variance, and index numbers, and includes 6 questions with sub-questions and evaluation criteria. Students are instructed to answer all questions, with approximately 400-word answers for 10-mark questions.
This document discusses using multiple regression analysis to predict real estate sale prices. Several independent variables are considered as predictors, including floor height, distance from elevator, ocean view, whether it is an end unit, and whether furniture is included. The analysis finds some variables like ocean view and floor height are statistically significant in predicting sale price, while others like the interaction between distance from elevator and ocean view are also important. The regression model provides insight into how real estate businesses can focus their resources based on which factors most influence prices.
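The mechanics behind such a regression can be illustrated with the one-predictor case. Below is a plain-Python ordinary-least-squares sketch; the floor-height and price numbers are invented for illustration, and the real analysis of course used multiple predictors at once:

```python
def least_squares(x, y):
    """Fit y = intercept + slope * x by ordinary least squares."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Made-up example: floor height (floors) vs. sale price ($000s)
floors = [2, 5, 8, 11]
prices = [210, 240, 270, 300]
b0, b1 = least_squares(floors, prices)
print(b0, b1)  # 190.0 10.0 -> each extra floor adds $10k in this toy data
```

Statistical significance of each coefficient (as reported for ocean view and floor height in the study) comes from the coefficient's standard error and a t-test, which software such as SPSS or Excel's Analysis ToolPak reports automatically.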
This document discusses the importance of statistics in research and the proper treatment of data. It notes that statistics are the backbone of research and help organize data in tables and graphs to guide meaningful interpretations. The document outlines the data analysis process and different levels of measurement for variables. It provides a matrix for statistical treatment of different types of data and describes common statistical operations like measures of central tendency, variance, correlation, and statistical tests. Dangers of misusing statistics are also discussed.
This document provides an overview of a presentation on statistical hypothesis testing using the t-test. It discusses what a t-test is, how to perform a t-test, and provides an example of a t-test comparing spelling test scores of two groups that received different teaching strategies. The document outlines the six steps for conducting statistical hypothesis testing using a t-test: 1) stating the hypotheses, 2) choosing the significance level, 3) determining the critical values, 4) calculating the test statistic, 5) comparing the test statistic to the critical values, and 6) writing a conclusion.
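The six steps can be traced with a small worked example. The helper below computes a pooled two-sample t statistic; the spelling-score data are invented, and the critical value 2.776 (df = 4, alpha = .05, two-tailed) is the standard table value for this toy sample size:

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Independent-samples t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

strategy_a = [5, 6, 7]   # made-up spelling scores, teaching strategy A
strategy_b = [1, 2, 3]   # made-up spelling scores, teaching strategy B

t = pooled_t(strategy_a, strategy_b)
critical = 2.776          # t critical value, df = 4, alpha = .05, two-tailed

print(round(t, 2))        # 4.9
print(abs(t) > critical)  # True -> reject the null hypothesis
```

Steps 1-3 (hypotheses, significance level, critical value) happen before the calculation; steps 4-6 are the computation, the comparison, and the written conclusion.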
Assignment 2: Tests of Significance
Throughout this assignment you will review mock studies. You will need to follow the directions outlined in each section using SPSS and decide whether there is significance between the variables. You will need to list the five steps of hypothesis testing (as covered in the lesson for Week 6) to show how every question should be formatted. You will complete all of the problems. Be sure to cut and paste the appropriate test result boxes from SPSS under each problem and explain what you will do with your research hypotheses. All calculations should come from your SPSS output. You will need to submit the SPSS output file to get credit for this assignment. This file will save as a .spv file and must be a single file. In other words, you are not allowed to submit more than one output file for this assignment.
The five steps of hypothesis testing when using SPSS are as follows:
1. State your research hypothesis (H1) and null hypothesis (H0).
2. Identify your confidence interval (.05 or .01).
3. Conduct your analysis using SPSS.
4. Look for the valid score for comparison. This score is usually under ‘Sig 2-tail’ or ‘Sig. 2’. We will call this “p”.
5. Compare the two and apply the following rule:
a. If “p” is < or = the confidence interval, then you reject the null.
Be sure to explain to the reader what this means with regard to your study. (Ex: will you recommend counseling services?)
* Be sure that your answers are clearly distinguishable. Perhaps you bold your font or use a different color.
ASSIGNMENT 2 (200-WORD MINIMUM)
1. They allow us to see if our relationship is "statistically significant". (Remember that this only shows us that there is or is not a relationship but does NOT show us if it is big, small, or in-between.)
2. It lets us know if our findings can be generalized to the population from which our sample was selected and which it represents.
This week you will decide which test of significance you will use for your project. For this class your choices for tests will include one of the following:
· Chi-square
· t Test
· ANOVA
We will be using a process for hypothesis testing which outlines five steps researchers can follow to complete this process:
1. Write your research hypothesis (H1) and your null hypothesis (H0).
2. Identify and record your significance level (alpha). This is usually .05 (95% confidence) or .01 (99% confidence).
3. Complete the test using SPSS.
4. Identify the number under Sig. (2-tailed). This value will be represented by "p".
5. Compare the numbers in steps 2 and 4 and apply the following rule:
a. If p is less than or equal to alpha, then you reject the null hypothesis.
Determine what to do with your null hypothesis and explain this to your reader. Be sure to go beyond the phrase "reject or fail to reject the null" and explain how that decision impacts your research and best describes the relationship between variables.
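For illustration only, the same five steps can be walked through with SciPy's independent-samples t test (the group scores below are invented; in this course the analysis itself must come from SPSS):

```python
# Illustrative only: the five-step process carried out in Python with SciPy
# instead of SPSS. The data are made-up scores for two hypothetical groups.
from scipy import stats

group_a = [23, 25, 28, 30, 31, 27, 26]   # e.g., received counseling
group_b = [20, 22, 19, 24, 21, 23, 18]   # e.g., no counseling

alpha = 0.05                              # step 2: significance level
t_stat, p_value = stats.ttest_ind(group_a, group_b)  # steps 3-4

# Step 5: compare p to alpha and state the decision
if p_value <= alpha:
    print(f"p = {p_value:.4f} <= {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} > {alpha}: fail to reject the null hypothesis")
```

The interpretation sentence you write for the reader would then go beyond the decision itself, as described above.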
TEST QUESTIONS (FULL ANSWERS REQUIRED)
Q1
Make up and discuss research examples corresponding to the various ...
Statistical analysis involves investigating trends, patterns, and relationships using quantitative data. It requires careful planning from the start, including specifying hypotheses and designing the study. After collecting sample data, descriptive statistics summarize and organize the data, while inferential statistics are used to test hypotheses and make estimates about populations. Key steps in statistical analysis include planning hypotheses and research design, collecting a sufficient sample, summarizing data with measures of central tendency and variability, and testing hypotheses or estimating parameters with techniques like regression, comparison tests, and confidence intervals. The results must be interpreted carefully in terms of statistical significance, effect sizes, and potential decision errors.
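As a minimal sketch of the descriptive-statistics step described above, central tendency and variability can be computed with Python's standard statistics module (the scores are invented for illustration):

```python
# Invented sample scores, used only to illustrate measures of
# central tendency and variability.
import statistics

scores = [12, 15, 15, 17, 18, 20, 22, 22, 22, 35]

mean = statistics.mean(scores)        # central tendency
median = statistics.median(scores)
mode = statistics.mode(scores)
stdev = statistics.stdev(scores)      # variability (sample standard deviation)

print(f"mean={mean:.1f}, median={median}, mode={mode}, sd={stdev:.1f}")
```

Note how the single high score (35) pulls the mean above the median, the kind of skew a histogram makes visible at a glance.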
Similar to Histograms and Descriptive Statistics Scoring GuideCRITERIANON.docx (12)
httpswww.azed.govoelaselpsUse this to see the English Lang.docxpooleavelina
https://www.azed.gov/oelas/elps/
Use this to see the English Language Proficiency Standards of Arizona-Pick a grade level
https://cms.azed.gov/home/GetDocumentFile?id=54de1d88aadebe14a87070f0
http://www.corestandards.org/ELA-Literacy/introduction/how-to-read-the-standards/
how to read standards
Week 04
Acquisition and Customer Lifetime Value (CLV)
https://www.smh.com.au/politics/federal/nbn-customers-face-higher-prices-or-poorer-internet-connection-audit-warns-20190813-p52go7.html
Customer Relationship Management?
CRM is the process of carefully managing detailed information about individual
customers and all customer touch points to maximize customer loyalty.
Now closely associated with data warehousing and mining
Relationship
Relationship
Identifying good customers: RFM Model
Recency
Frequency
Monetary Value
Time/purchase occasions since the last purchase
Number of purchase occasions since first purchase
Amount spent since the first purchase
R
F
M
Total RFM Score: R Score + F score + M Score
CASE: Database for BookBinders Book Club
Predict response to a mailing for the book, Art History of Florence, based on the
following variables accumulated in the database and the responses to a test mailing:
Gender
Amount purchased
Months since first purchase
Months since last purchase
Frequency of purchase
Past purchases of art books
Past purchases of children’s books
Past purchases of cook books
Past purchases of DIY books
Past purchases of youth books
Recency
Frequency
Monetary
Example: RFM Model Scoring Criteria
R
Months from last
purchase
13-max 10-12 7-9 3-6 0-2
Score 5pts 10 15 20 25
F
Frequency > 30 21-30 16-20 11-15 0-10
Score 25pts 20 15 10 5
M
Amount
purchased
> 400 301-400 201-300 101- 200 100
Score 50 45 30 15 10
Implement using Nested If statements in Excel
Decile Classification
• Standard Assessment Method
• Apply the results of approach and
calculate the “score” of each individual
• Order the customers based on “score”
from the highest to the lowest
• Divide into deciles
• Calculate profits per deciles
Customer 1 Score 1.00
Customer 2 Score 0.99
….
Customer 230 Score 0.92
Customer 2300 Score 0.00
Decile1
Decile10
…
..
…
..
Output for Bookbinders club
Decile Score RFM No. of Mailings Cost of mailing RFM Units sold RFM Profit
10 17.6% 5000 $3,250 783 $4,733
20 34.8% 10000 $6,500 1,543 $9,243
30 46.1% 15000 $9,750 2,043 $11,093
40 53.4% 20000 $13,000 2,370 $11,170
50 65.2% 25000 $16,250 2,891 $13,241
60 77.9% 30000 $19,500 3,457 $15,757
70 83.3% 35000 $22,750 3,696 $14,946
80 91.7% 40000 $26,000 4,065 $15,465
90 97.5% 45000 $29,250 4,326 $14,876
100 100.0% 50000 $32,500 4,435 $12,735
Note: Market Potential = 4435 units and margin = $10.20
Leaky bucket
New customer
acquisition
Purchase increase by
current customers
Purchase decrease by
current customers
Lost customers
Lost customers
Credit Card Rewards Program ...
The 30 June 2019 local elections in Albania took place in a context of deep political polarization and crisis. The main opposition parties boycotted the elections and called on voters to abstain. As a result, many mayoral races were uncontested. The elections suffered from a lack of trust in the impartiality of the election administration due to its unbalanced composition. While voting and counting were carried out efficiently on election day, the broader process failed to provide voters with a genuine choice between political alternatives. The elections did not resolve the underlying political disputes and the country remained in a state of political uncertainty.
httpfmx.sagepub.comField Methods DOI 10.117715258.docxpooleavelina
http://fmx.sagepub.com
Field Methods
DOI: 10.1177/1525822X04269550
2005; 17; 30 Field Methods
Don A. Dillman and Leah Melani Christian
Survey Mode as a Source of Instability in Responses across Surveys
http://fmx.sagepub.com/cgi/content/abstract/17/1/30
The online version of this article can be found at:
Published by:
http://www.sagepublications.com
can be found at:Field Methods Additional services and information for
http://fmx.sagepub.com/cgi/alerts Email Alerts:
http://fmx.sagepub.com/subscriptions Subscriptions:
http://www.sagepub.com/journalsReprints.navReprints:
http://www.sagepub.com/journalsPermissions.navPermissions:
http://fmx.sagepub.com/cgi/content/refs/17/1/30 Citations
at SAGE Publications on September 9, 2009 http://fmx.sagepub.comDownloaded from
http://fmx.sagepub.com/cgi/alerts
http://fmx.sagepub.com/subscriptions
http://www.sagepub.com/journalsReprints.nav
http://www.sagepub.com/journalsPermissions.nav
http://fmx.sagepub.com/cgi/content/refs/17/1/30
http://fmx.sagepub.com
10.1177/1525822X04269550FIELD METHODSDillman, Christian / SURVEY MODE AS SOURCE OF INSTABILITY
Survey Mode as a Source of Instability
in Responses across Surveys
DON A. DILLMAN
LEAH MELANI CHRISTIAN
Washington State University
Changes in survey mode for conducting panel surveys may contribute significantly to
survey error. This article explores the causes and consequences of such changes in
survey mode. The authors describe how and why the choice of survey mode often
causes changes to be made to the wording of questions, as well as the reasons that
identically worded questions often produce different answers when administered
through different modes. The authors provide evidence that answers may change as a
result of different visual layouts for otherwise identical questions and suggest ways
to keep measurement the same despite changes in survey mode.
Keywords: survey mode; questionnaire; panel survey; measurement; survey error
Most panel studies require measurement of the same variables at different
times. Often, participants are asked questions, several days, weeks, months,
or years apart to measure change in some characteristics of interest to the
investigation. These characteristics might include political attitudes, satis-
faction with a health care provider, frequency of a behavior, ownership of
financial resources, or level of educational attainment. Whatever the charac-
teristic of interest, it is important that the question used to ascertain it perform
the same across multiple data collections.
In addition, declining survey response rates, particularly for telephone
surveys, have encouraged researchers to use multiple modes of data collec-
tion during the administration of a single cross-sectional survey. Encouraged
by the availability of more survey modes than in the past and evidence that a
change in modes produces higher response rates (Dillman 2002), surveyors
This is a revision of a paper presented at t ...
https://iexaminer.org/fake-news-personal-responsibility-must-trump-intellectual-laziness/
Fake news: Personal responsibility must trump intellectual laziness
By Matt Chan January 4, 2017
Where do you get your news? That question has become incredibly important given the results of our Presidential Election. How many times have you heard, “I read a news story on Facebook and …” The problem: Facebook is not a news service; it’s a “social media” site whose purpose is to connect like-minded friends and family, to provide you with social connections, and online entertainment.
For Asian Americans social media provides an important and useful way of connecting socially and in some cases politically, but there is a downside. The downside is how social media actually works. These sites employ elaborate algorithms to track and analyze your posts, likes, and dislikes to provide you with a custom experience unique to you. The truth is you are being marketed to, not informed. What looks like news, is not really news, it’s personal validation. All in an attempt to keep you on the site longer, to click a few more things, to make you feel good about what you’re reading. It makes it seem like most people agree with you because you’re only fed information and stories that validate your worldview.
On the other hand, real news is hard work. Its fact-based information presented by people who have checked, researched, and documented what they are presenting as the truth. Real news can be verified.
“Fake News” is, well, fake, often times entirely made-up or containing a hint of truth. Social media was largely responsible for pushing “fake news” stories that were entirely made up to drive clicks on websites. These clicks in turn generated money for the people promoting the stories. The more outrageous the story, the more clicks, the more revenue. When you factor in the algorithms that feed you what you like, you can clearly see the more “fake news” you consume on social media, the more is pushed your way. There’s an abundance of pseudo news sites that merely re-post and curate existing stories, adding their bias to validate their audience’s beliefs, no matter how crazy or mainstream. It is curated solely for you. Now factor in that nearly 44% of Americans obtain some or most of their news from social media and you have a very toxic mix.
The mainstream news media has also fallen into this validation trap. You have one news network that solely reflects the right wing, others that take the view of the left-center leaning, and what is lost are the facts and context, the balance we need to evaluate, learn, and understand the world. People seeking fact-based journalism lose, because the more extreme the media becomes to entice consumers with provocative headlines and click-bait to earn more money, the less their news is fact-based and becomes more opinion driven.
There was a time when fact-based reporting was required of broadcast news. It was called “The Fairness Doctrin ...
http1500cms.comBECAUSE THIS FORM IS USED BY VARIOUS .docxpooleavelina
http://1500cms.com/
BECAUSE THIS FORM IS USED BY VARIOUS GOVERNMENT AND PRIVATE HEALTH PROGRAMS, SEE SEPARATE INSTRUCTIONS ISSUED BY
APPLICABLE PROGRAMS.
NOTICE: Any person who knowingly files a statement of claim containing any misrepresentation or any false, incomplete or misleading information may
be guilty of a criminal act punishable under law and may be subject to civil penalties.
REFERS TO GOVERNMENT PROGRAMS ONLY
MEDICARE AND CHAMPUS PAYMENTS: A patient’s signature requests that payment be made and authorizes release of any information necessary to process
the claim and certifies that the information provided in Blocks 1 through 12 is true, accurate and complete. In the case of a Medicare claim, the patient’s signature
authorizes any entity to release to Medicare medical and nonmedical information, including employment status, and whether the person has employer group health
insurance, liability, no-fault, worker’s compensation or other insurance which is responsible to pay for the services for which the Medicare claim is made. See 42
CFR 411.24(a). If item 9 is completed, the patient’s signature authorizes release of the information to the health plan or agency shown. In Medicare assigned or
CHAMPUS participation cases, the physician agrees to accept the charge determination of the Medicare carrier or CHAMPUS fiscal intermediary as the full charge,
and the patient is responsible only for the deductible, coinsurance and noncovered services. Coinsurance and the deductible are based upon the charge
determination of the Medicare carrier or CHAMPUS fiscal intermediary if this is less than the charge submitted. CHAMPUS is not a health insurance program but
makes payment for health benefits provided through certain affiliations with the Uniformed Services. Information on the patient’s sponsor should be provided in those
items captioned in “Insured”; i.e., items 1a, 4, 6, 7, 9, and 11.
BLACK LUNG AND FECA CLAIMS
The provider agrees to accept the amount paid by the Government as payment in full. See Black Lung and FECA instructions regarding required procedure and
diagnosis coding systems.
SIGNATURE OF PHYSICIAN OR SUPPLIER (MEDICARE, CHAMPUS, FECA AND BLACK LUNG)
I certify that the services shown on this form were medically indicated and necessary for the health of the patient and were personally furnished by me or were furnished
incident to my professional service by my employee under my immediate personal supervision, except as otherwise expressly permitted by Medicare or CHAMPUS
regulations.
For services to be considered as “incident” to a physician’s professional service, 1) they must be rendered under the physician’s immediate personal supervision
by his/her employee, 2) they must be an integral, although incidental part of a covered physician’s service, 3) they must be of kinds commonly furnished in physician’s
offices, and 4) the services of nonphysicians must be included on the physician’s bills.
For CHA ...
https://www.medicalnewstoday.com/articles/323444.php
https://ascopubs.org/doi/full/10.1200/JCO.2008.16.0333
https://journals.lww.com/co-hematology/Abstract/2007/03000/Influence_of_new_molecular_prognostic_markers_in.5.aspx
Influence of new molecular prognostic markers in patients with karyotypically normal acute myeloid leukemia: recent advances
Mrózek, Krzysztofa; Döhner, Hartmutb; Bloomfield, Clara Da
Current Opinion in Hematology: March 2007 - Volume 14 - Issue 2 - p 106–114
doi: 10.1097/MOH.0b013e32801684c7
Myeloid disease
Purpose of review Molecular study of cytogenetically normal acute myeloid leukemia is among the most active areas of leukemia research. Despite having the same normal karyotype, adults with de-novo cytogenetically normal acute myeloid leukemia who constitute the largest cytogenetic group of acute myeloid leukemia, are very diverse with respect to acquired gene mutations and gene expression changes. These genetic alterations affect clinical outcome and may assist in selection of proper treatment. Herein we critically summarize recent clinically relevant molecular genetic studies of cytogenetically normal acute myeloid leukemia.
Recent findings NPM1 gene mutations causing aberrant cytoplasmic localization of nucleophosmin have been demonstrated to be the most frequent submicroscopic alterations in cytogenetically normal acute myeloid leukemia and to confer improved prognosis, especially in patients without a concomitant FLT3 gene internal tandem duplication. Overexpressed BAALC, ERG and MN1 genes and expression of breast cancer resistance protein have been shown to confer poor prognosis. A gene-expression signature previously suggested to separate cytogenetically normal acute myeloid leukemia patients into prognostic subgroups has been validated on a different microarray platform, although gene-expression signature-based classifiers predicting outcome for individual patients with greater accuracy are still needed.
Summary The discovery of new prognostic markers has increased our understanding of leukemogenesis and may lead to improved prognostication and generation of novel risk-adapted therapies.
http://www.bloodjournal.org/content/127/1/53?sso-checked=true
An update of current treatments for adult acute myeloid leukemia
Hervé Dombret and Claude Gardin
Abstract
Recent advances in acute myeloid leukemia (AML) biology and its genetic landscape should ultimately lead to more subset-specific AML therapies, ideally tailored to each patient's disease. Although a growing number of distinct AML subsets have been increasingly characterized, patient management has remained disappointingly uniform. If one excludes acute promyelocytic leukemia, current AML management still relies largely on intensive chemotherapy and allogeneic hematopoietic stem cell transplantation (HSCT), at least in younger patients who can tolerate such intensive treatments. Nevertheless, progress has been made, notably in terms of standard drug dose in ...
httpstheater.nytimes.com mem theater treview.htmlres=9902e6.docxpooleavelina
https://theater.nytimes.com/ mem/ theater/ treview.html?res=9902e6db1639f931a25753c1a962948260
THEATER: WILSON'S 'MA RAINEY'S' OPENS
By FRANK RICH
Published: October 12, 1984, Friday
LATE in Act I of ''Ma Rainey's Black Bottom,'' a somber, aging band trombonist (Joe Seneca) tilts his head heavenward to sing the blues. The setting is a dilapidated Chicago recording studio of 1927, and the song sounds as old as time. ''If I had my way,'' goes the lyric, ''I would tear this old building down.''
Once the play has ended, that lyric has almost become a prophecy. In ''Ma Rainey's Black Bottom,'' the writer August Wilson sends the entire history of black America crashing down upon our heads. This play is a searing inside account of what white racism does to its victims - and it floats on the same authentic artistry as the blues music it celebrates. Harrowing as ''Ma Rainey's'' can be, it is also funny, salty, carnal and lyrical. Like his real-life heroine, the legendary singer Gertrude (Ma) Rainey, Mr. Wilson articulates a legacy of unspeakable agony and rage in a spellbinding voice.
The play is Mr. Wilson's first to arrive in New York, and it reached here, via the Yale Repertory Theater, under the sensitive hand of the man who was born to direct it, Lloyd Richards. On Broadway, Mr. Richards has honed ''Ma Rainey's'' to its finest form. What's more, the director brings us an exciting young actor - Charles S. Dutton - along with his extraordinary dramatist. One wonders if the electricity at the Cort is the same that audiences felt when Mr. Richards, Lorraine Hansberry and Sidney Poitier stormed into Broadway with ''A Raisin in the Sun'' a quarter-century ago.
As ''Ma Rainey's'' shares its director and Chicago setting with ''Raisin,'' so it builds on Hansberry's themes: Mr. Wilson's characters want to make it in white America. And, to a degree, they have. Ma Rainey (1886-1939) was among the first black singers to get a recording contract - albeit with a white company's ''race'' division. Mr. Wilson gives us Ma (Theresa Merritt) at the height of her fame. A mountain of glitter and feathers, she has become a despotic, temperamental star, complete with a retinue of flunkies, a fancy car and a kept young lesbian lover.
The evening's framework is a Paramount-label recording session that actually happened, but whose details and supporting players have been invented by the author. As the action swings between the studio and the band's warm-up room - designed by Charles Henry McClennahan as if they might be the festering last- chance saloon of ''The Iceman Cometh'' - Ma and her four accompanying musicians overcome various mishaps to record ''Ma Rainey's Black Bottom'' and other songs. During the delays, the band members smoke reefers, joke around and reminisce about past gigs on a well-traveled road stretching through whorehouses and church socials from New Orleans to Fat Back, Ark.
The musicians' speeches are like improvised band solos - variously fiz ...
https://fitsmallbusiness.com/employee-compensation-plan/
The puzzle of motivation | Dan Pink [Video file]. Retrieved from https://www.youtube.com/watch?v=rrkrvAUbU9Y
Refining the total rewards package through employee input at MillerCoors [Video file]. Retrieved from https://www.youtube.com/watch?v=_I7nv0B4_NU&feature=youtu.be
How to design an employee compensation plan [SlideShare slides]. Retrieved from http://www.slideshare.net/FitSmallBusiness/how-to-design-a-compensation-plan-dave?ref=http://fitsmallbusiness.com/how-to-pay-employees/
Compensation strategies [Video file]. Retrieved from https://youtu.be/U2wjvBigs7w
· Expectations for Power Point Presentations in Units IV and V
I would like to provide information about what needs to be included in presentations. Please review the rubric prior to submitting any assignment. If you don't know where to find this, please contact me.
1. You need a title slide.
2. You need an overview of the presentation slide (slide after the title slide). This is how you would organize a presentation if you were presenting it at work.
3. You need a summary slide (before the reference slide); same reason as above.
4. Please do not forget to cite on slides where you are writing about something related to what you have read. Please consider each slide a paragraph. You can cite on the slides or in the notes. If you do not cite, you will not get credit for the slide.
- Direct quotes should not be used in this presentation as they are not analysis.
5. Remember, all I can evaluate is what you submit, so please consider using notes to explain what you are writing in further detail. Bullets are great and you can use these but then provide more detail in the notes.
6. Graphics - Please include graphics/charts/graphs as this is evaluated in the rubric (quality of the presentation).
7. References - For all references, you need citations. For all citations, you need references. They must match. All must be formatted using APA requirements. Please review the Quick Reference Guide that was posted in the announcements.
Please never hesitate to email me with any questions. If you need further clarification about feedback or if you do not agree with any of the feedback, please contact me. My door is always open.
Assignment 1
Positioning Statement and Motto
Use the provided information, as well as your own research, to assess one (1) of the stated brands (Tesla, SmoothieKing, Suave, or Nintendo) by completing the questions below with an ORIGINAL response to each. At the end of the worksheet, be sure to develop a new ORIGINAL positioning statement and motto for the brand you selected. Submit the completed template in the Week 4 assignment submission link.
Name:
Professor’s Name:
Course Title:
Date:
Company/Brand Selected (Tesla, SmoothieKing, Suave or Nintendo):
1. Target Customers/Users
Who are the target customers for the company/brand? Make sure you tell why you selected each item that you did. (NOTE: DO NO ...
This document provides instructions for students completing a research paper for an introductory radiography course. It outlines requirements for the paper, including length of 3 pages, use of 3 scholarly sources from 2008-present, and APA formatting. Key topics that must be addressed are introduced, including the chosen research topic, importance of the topic, and evidence of research through in-text citations on every page and a reference list. Formatting guidelines specify use of a cover page, introduction, body, and summary. The instructions emphasize accurately citing all sources to avoid plagiarism. Students are encouraged to visit the campus writing center for assistance meeting the standards.
https://www.worldbank.org/en/country/vietnam/overview
-------------- Context ----------------
Vietnam’s development over the past 30 years has been remarkable. Economic and political reforms under Đổi Mới, launched in 1986, have spurred rapid economic growth, transforming what was then one of the world’s poorest nations into a lower middle-income country. Between 2002 and 2018, more than 45 million people were lifted out of poverty. Poverty rates declined sharply from over 70% to below 6% (US$3.2/day PPP), and GDP per capita increased by 2.5 times, standing over US$2,500 in 2018.
In the medium-term, Vietnam’s economic outlook is positive, despite signs of cyclical moderation in growth. After peaking at 7.1% in 2018, real GDP growth in 2019 is projected to slightly decelerate in 2019, led by weaker external demand and continued tightening of credit and fiscal policies. Real GDP growth is projected to remain robust at around 6.5% in 2020 and 2021. Annual headline inflation has been stable for the seven consecutive years – at single digits, trending towards 4% and below in recent years. The external balance remains under control and should continue to be financed by strong FDI inflows which reached almost US$18 billion in 2018 – accounting for almost 24% of total investment in the economy.
Vietnam is experiencing rapid demographic and social change. Its population reached 97 million in 2018 (up from about 60 million in 1986) and is expected to expand to 120 million before moderating around 2050. Today, 70% of the population is under 35 years of age, with a life expectancy of 76 years, the highest among countries in the region at similar income levels. But the population is rapidly aging. And an emerging middle class, currently accounting for 13% of the population, is expected to reach 26% by 2026.
Vietnam ranks 48 out of 157 countries on the human capital index (HCI), second in ASEAN behind Singapore. A Vietnamese child born today will be 67% as productive when she grows up as she could be if she enjoyed complete education and full health. Vietnam’s HCI is highest among middle-income countries, but there are some disparities within the country, especially for ethnic minorities. There would also be a need to upgrade the skill of the workforce to create productive jobs at a large scale in the future.
Over the last thirty years, the provision of basic services has significantly improved. Access of households to modern infrastructure services has increased dramatically. As of 2016, 99% of the population used electricity as their main source of lighting, up from 14 % in 1993. Access to clean water in rural areas has also improved, up from 17% in 1993 to 70% in 2016, while that figure for urban areas is above 95%.
Vietnam performs well on general education. Coverage and learning outcomes are high and equitably achieved in primary schools — evidenced by remarkably high scores in the Program for International Student Assessment (PISA) in 2012 and 2015, ...
This website gives a detailed inside look at quantum physics, an evolving field nowadays with many upcoming changes that are going to leave the world in shock. There has been a lot of confusion around this topic lately, so people are encouraged to visit this website to learn more about the field and explore the horizons yet to come.
Definition
Quantum mechanics is the branch of physics relating to the very small.
It results in what may appear to be some very strange conclusions about the physical world. At the scale of atoms and electrons, many of the equations of classical mechanics, which describe how things move at everyday sizes and speeds, cease to be useful. In classical mechanics, objects exist in a specific place at a specific time. In quantum mechanics, however, objects instead exist in a haze of probability; they have a certain chance of being at point A, another chance of being at point B, and so on.

Three revolutionary principles
Quantum mechanics (QM) developed over many decades, beginning as a set of controversial mathematical explanations of experiments that the math of classical mechanics could not explain. It began at the turn of the twentieth century, around the same time that Albert Einstein published his theory of relativity, a separate mathematical revolution in physics that describes the motion of things at high speeds. Unlike relativity, however, the origins of QM cannot be attributed to any one scientist. Rather, multiple scientists contributed to a foundation of three revolutionary principles that gradually gained acceptance and experimental verification between 1900 and 1930. They are:
Quantized properties:
Certain properties, such as position, speed, and color, can sometimes occur only in specific, set amounts, much like a dial that "clicks" from number to number. This challenged a fundamental assumption of classical mechanics, which said that such properties should exist on a smooth, continuous spectrum. To describe the idea that some properties "clicked" like a dial with specific settings, scientists coined the word "quantized."
Particles of light:
Light can sometimes behave as a particle. This was initially met with harsh criticism, as it contradicted 200 years of experiments showing that light behaved as a wave, much like ripples on the surface of a calm lake. Light behaves similarly in that it bounces off walls and bends around corners, and in that the crests and troughs of the wave can add or cancel. Added wave crests result in brighter light, while waves that cancel produce darkness. A light source can be thought of ...
https://www.huffpost.com/entry/online-dating-vs-offline_b_4037867
For your initial post, provide a sentence to share which article you are referring to so that you can best communicate with your peers. Include a link to your selection.
· Explain how the argument contains or avoids bias.
i. Provide specific examples to support your explanation.
ii. What assumptions does it make?
· Discuss the credibility of the overall argument.
i. Were the resources the argument was built upon credible?
ii. Does the credibility support or undermine the article’s claims in any important ways?
In response to your peers, provide an additional resource to support or refute the argument your peer makes. Do you agree with their claims of credibility? Are there any other possible biases not identified?
Response #1
Allysa Tantala posted Sep 22, 2019 10:17 PM
The article that I am looking at is Online Dating Vs. Offline Dating: Pros and Cons. It was written by Julie Spira, an online dating expert, bestselling author, and CEO of Cyber-Dating Expert. The name of the article is spot on in describing what it is about. The author goes through the pros and cons of dating online and offline in today’s day and age. The author avoids bias because she looks at both options in both their positive and negative attributes. She comes at the issues from both angles, and I believe she does a very good job of remaining unbiased. She states that “if you're serious about meeting someone special, you must include a combination of both online and offline dating in your routine” (Spira, 2013, par. 18). She’s stating that both options have their pros and cons and that a combination of both is really needed to find someone. The only bias I could see anyone pointing out would be that she is a woman, so you do not get the male perspective on these things. That being said, I one hundred percent think she covers all of the questions people may have about online and offline dating in today’s world. The only assumption being made here is that the reader wants to be out in the dating world and needs to know what is best. But the title of the article is pretty self-explanatory, so if someone did not want to know these things, they would not have to waste their time reading it all, because they could tell what it would be about from the title.
The resource that she used was herself, and like I stated above, she is an online dating expert, bestselling author, and CEO of Cyber-Dating Expert; so she is more than qualified to give her perspective on these issues. I find her to be credible and thought provoking. Her credibility supports everything the article says and makes the reader feel like they are being told the truth by someone who completely understands all of the pros and cons.
Resource:
Spira, J. (2013, December 3). Online Dating Vs. Offline Dating: Pros and Cons. Retrieved from https://www.huffpost.com/entry/online-dating-vs-offline_b_4037867
Response #2
Jennifer Caforio posted Se ...
https://www.vitalsource.com/products/comparative-criminal-justice-systems-harry-r-dammer-jay-s-v9781285630779
THE ASSIGNMENT IS BASED ON CHAPTER 1 (ONE)
Defining the Problem

Rigina Cochran
MPA/593
August 19, 2019
Peter Reeves

Defining the Problem
The health care system in Colorado is a composition of medical professionals providing services such as diagnosis, treatment, and preventive measures for mental illness and injuries ("Healthcare policy in Colorado - Ballotpedia," 2019). Health care policy involves the establishment and implementation of legislation and other regulations that the state uses to manage its health care system effectively. Further, this sector includes other participants, such as insurance and health information technology. The cost citizens pay for medical care and their access to quality care influence the overall health care providers in Colorado. Hence the need for the creation and implementation of laws that help the state maintain efficiency in Colorado's health sector.
Problem Statement
The declining standards of medical care within the United States have caused significant concern in the world. Due to these rising concerns, various policies have been implemented, leading to mixed reactions among the different states. Some of the active policies implemented, including Medicaid and Medicare, offer a long-term solution to this problem. After acquiring state control, the Republicans dismissed the idea of expanding and creating medical insurance for Medicaid in Colorado. Sustaining the structure of the health care payroll calls for deductions from employees and employers, which may lead to loss of jobs and an increased burden of expenditure (Garcia, 2019).
Identify the Methodology
The main objective of this policy plan is to investigate the role of legislation in the management of the health care sector in the United States. To achieve in-depth exploration, this paper uses a combination of qualitative and quantitative methods of data collection, addressing both practical and theoretical aspects of the research. Based on the answers that the policy requires, a survey was chosen as the research design. This method involves collecting and analyzing data from a few people who represent the principal group within health care. However, the survey method faces some challenges, such as the attitudes and perceptions of health workers, leading to the delimitation of the study. The target population for the study includes the nurses within the health sector in Colorado. The selection of participants involved the use of stratified random sampling.
Identify your Stakeholders
The major stakeholders in the creation and implementation of the policy plan include the legislatures, local government, patients, and other private parties such as the insurance companies. Collectively, these bodies are involved in the makin ...
Avoidant/Restrictive Food Intake Disorder (ARFID) is a feeding disorder characterized by avoidance of food due to sensory characteristics, fear of aversive consequences, or lack of interest in eating. This results in insufficient calorie or nutrient intake leading to issues like weight loss, nutritional deficiencies, or interference with functioning. Treatments that have shown promise for ARFID include family-based treatment involving parents supporting exposure to new foods, cognitive-behavioral therapy with elements like food exposure and relaxation training, and hospital-based refeeding programs, some of which utilize tube feeding for severe cases. However, more research is still needed, as existing studies on treating ARFID are limited and no single approach has been proven
https://www.youtube.com/watch?time_continue=59&v=Bh_oEYX1zNM&feature=emb_logo
BA 325 Pivot Table Assignment Answer Sheet
Name:
Before you do anything, fill out your name on the assignment and save your file as BA325 Firstname Lastname (use your actual name).
The table has all of the questions from the DuPont Assignment. Fill in your answers to the questions in the corresponding cell in the Answer column. Below the table there is a spot for the Screen Clippings from both the Practice Assignment, and the DuPont Assignment.
After you have filled out all of the answers and Screen Clippings submit the file to the Assignments folder in D2L.
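The kind of aggregation an Excel PivotTable performs can also be sketched in Python with pandas; the companies, years, and values below are hypothetical stand-ins for the DuPont workbook, which has its own columns and data:

```python
import pandas as pd

# Hypothetical mini-sample; the real workbook has more companies and columns
df = pd.DataFrame({
    "Company": ["A", "A", "B", "B"],
    "Year": [2014, 2015, 2014, 2015],
    "Net Income": [10.0, 12.0, -3.0, 5.0],
})

# Sum of Net Income by Year, like a PivotTable with Year in the Rows area
pivot = pd.pivot_table(df, values="Net Income", index="Year", aggfunc="sum")
print(pivot)
```

Swapping `aggfunc="sum"` for `"mean"` mirrors questions that ask for averages, such as Average Profit Margin by industry.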
Q Number | Question | Answer
Q1 | How much was American Airlines’ Net Revenues in 2013? |
Q2 | What was the Return on Equity for Apple in 2015? |
Q3 | Which company had the highest Net Income and in which year? What was the value? |
Q4 | Which company had the lowest Net Income and in which year? What was the value? |
Q5 | How many unique companies in your sample had Net Losses exceeding one billion dollars? Which companies, and what years? |
Q6 | What was the Sum of the Net Income for all companies in the sample for 2015? |
Q7 | Which company had the highest total Net Income over the three-year period? What was the value? |
Q8 | Which company had the lowest total Net Income over the three-year period? What was the value? |
Q9 | Which industry had the highest Average Profit Margin over the three-year period? What was the value? |
Q10 | In which year was the Average Profit Margin the highest for the entire sample? What was the value? |
Q11 | For how many companies do you have Profit Margin ratio data in 2013? |
Q12 | For what Industry do you have the most Profit Margin ratio data in the sample? What was the value? For that Industry, what year was the highest? What was the value? |
Q13 | Which Industry has the highest Average Asset Turnover over the three-year period? What was the value? |
Q14 | Which of the remaining Industries has the highest Asset Turnover in 2014? What was the value? |
Q15 | Which Industry has the highest Average Financial Leverage over the three-year period? What was the value? |
Q16 | Which Industry has the lowest Average Financial Leverage that does not include negative numbers in any year? What was the value? |
Q17 | What is the Average Financial Leverage for the Transportation Industry in 2013? (Note: The answer is odd. You will have to use Data Cleaning to resolve the issue.) |
Q18 | Which Industry has the highest Average Return on Equity over the three-year period, and which company is the highest within that Industry? What are the values? |
Q19 | Which two companies in the Public Utilities Industry have the highest Average Return on Equity during the period? What are the values? |
Q20 | Which Industry had the largest decrease in Average Return on Equity between 2013 and 2014? What was the value? |
Q21 | Which Industry had the largest increase in Average Return on Equity between 2014 and 2015? What was the value? |
Q22 | Bonus Question 1: How many industrie ...
Histograms and Descriptive Statistics Scoring Guide
CRITERIA
NON-PERFORMANCE
BASIC
PROFICIENT
DISTINGUISHED
Apply the appropriate SPSS procedures for creating histograms
to generate relevant output.
Does not provide SPSS output.
Provides SPSS output with errors.
Applies the appropriate SPSS procedures for creating
histograms to generate relevant output.
Analyzes the histogram output, demonstrating insight and
understanding of relevant data.
Interpret histogram results, including concepts of skew,
kurtosis, outliers, symmetry, and modality.
Does not provide an interpretation of histogram results.
Provides an interpretation of histogram results.
Interprets histogram results, including concepts of skew,
kurtosis, outliers, symmetry, and modality.
Evaluates histogram results, including concepts of skew,
kurtosis, outliers, symmetry, and modality.
Analyze the strengths and limitations of examining a
distribution of scores with a histogram.
Does not identify the strengths and limitations of examining a
distribution of scores with a histogram.
Identifies the strengths and limitations of examining a
distribution of scores with a histogram.
Analyzes the strengths and limitations of examining a
distribution of scores with a histogram.
Evaluates the strengths and limitations of examining a
distribution of scores with a histogram. Demonstrates insight
and understanding of relevant data.
Apply the appropriate SPSS procedure for generating
descriptive statistics to generate relevant output.
Does not provide SPSS output.
Includes some, but not all, of the required output. Numerous
errors in SPSS output.
Applies the appropriate SPSS procedure for generating
descriptive statistics to generate relevant output.
Applies the appropriate SPSS procedure for generating
descriptive statistics to generate relevant output. Includes all
relevant output; no irrelevant output is included. No errors in
SPSS output.
Analyze meaningful versus meaningless variables reported in
descriptive statistics.
Does not identify meaningful versus meaningless variables
reported in descriptive statistics.
Identifies meaningful versus meaningless variables reported in
descriptive statistics.
Analyzes meaningful versus meaningless variables reported in
descriptive statistics.
Evaluates meaningful versus meaningless variables reported in
descriptive statistics.
Interpret descriptive statistics for meaningful variables.
Does not identify meaningful variables.
Identifies meaningful variables.
Interprets descriptive statistics for meaningful variables.
Evaluates descriptive statistics for meaningful variables.
Apply the appropriate SPSS procedures for creating z scores
and descriptive statistics to generate relevant output.
Does not provide SPSS output.
Provides SPSS output with errors.
Applies the appropriate SPSS procedures for creating z scores
and descriptive statistics to generate relevant output.
Analyzes the z scores and descriptive statistics output,
demonstrating insight and understanding of relevant data.
Analyze the relevant data from the computation, interpretation,
and application of z scores.
Does not identify the relevant data or generate output.
Identifies the relevant data and generates output.
Analyzes the relevant data from the computation, interpretation,
and application of z scores.
Evaluates the relevant data from the computation,
interpretation, and application of z scores. Justifies the
meaningfulness of selected variables.
Analyze real-world application of Type I and Type II errors,
and the research decisions that influence the relative risk of
each.
Does not describe a real-world application of Type I and Type
II errors and the research decisions that influence the relative
risk of each.
Describes, but does not analyze, a real-world application of
Type I and Type II errors and the research decisions that
influence the relative risk of each.
Analyzes a real-world application of Type I and Type II errors
and the research decisions that influence the relative risk of
each.
Evaluates real-world application of Type I and Type II errors
and the research decisions that influence the relative risk of
each.
Apply the logic of null hypothesis testing to cases.
Does not apply the logic for null hypothesis testing.
Inconsistently applies the logic for null hypothesis testing.
Applies the logic of null hypothesis testing to cases.
Analyzes the logic of null hypothesis testing. Demonstrates
insight and understanding of relevant data to either reject or not
reject the null hypothesis.
Communicate in a manner that is scholarly, professional, and
consistent with expectations for members of the identified field
of study.
Does not communicate in a manner that is scholarly,
professional, and consistent with the expectations for members
in the identified field of study.
Inconsistently communicates in a manner that is scholarly,
professional, and consistent with the expectations for members
in the identified field of study.
Communicates in a manner that is scholarly, professional, and
consistent with the expectations for members in the identified
field of study.
Communicates in a manner that is professional, scholarly, and
consistent with expectations for members of the identified field
of study. Adheres to APA guidelines, and work is appropriate
for publication.
Running head: Z SCORE ASSESSMENT ANSWER TEMPLATE

z-Score Assessment Answer Template
Student Name
Capella University
Unit 4 Assignment 1 Answer Template
The following assignment includes three sections consisting of:
1. z scores in SPSS.
2. Case studies of Type I and Type II errors.
3. Case studies of null hypothesis testing.
Additional notes:
· Answer in complete sentences.
· Follow APA rules for scholarly writing.
· Include a reference list if necessary.
· Save your answers and upload this template to the assignment
area for grading.
Section 1: z Scores in SPSS
A z score is typically analyzed when population mean (µ) and
population standard deviation (σ) are known. However, in SPSS,
we can still calculate z scores with the grades.sav data using the
sample mean (M) and sample standard deviation (s). To do this,
open grades.sav in SPSS. On the Analyze menu, point to
Descriptive Statistics, and then click Descriptives…
You will be calculating and interpreting z scores for the total
variable. In the Descriptives dialog box, move the total variable
into the Variable(s) box. Select the Save standardized values as
variables option and click OK.
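As a cross-check outside SPSS, the same standardization can be sketched in Python with pandas; the scores below are hypothetical stand-ins, not values from grades.sav:

```python
import pandas as pd

# Hypothetical stand-in for the 'total' variable in grades.sav
df = pd.DataFrame({"total": [95, 102, 88, 110, 99, 87, 105]})

# Sample mean (M) and sample standard deviation (s); ddof=1 matches SPSS
M = df["total"].mean()
s = df["total"].std(ddof=1)

# Mirror "Save standardized values as variables": one z score per case
df["Ztotal"] = (df["total"] - M) / s

print(df)
print(df["Ztotal"].mean())       # effectively 0 (up to rounding)
print(df["Ztotal"].std(ddof=1))  # 1 (up to rounding)
```

Because every case is shifted by M and divided by s, the standardized variable always has a mean of 0 and a standard deviation of 1, which is the pattern Question 3 asks you to verify for Ztotal.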
SPSS provides descriptive statistics for total in the Output
window. SPSS also creates a new variable in the far right
column, labeled Ztotal, in the Data Editor area. Ztotal provides
a z score for each case on the total variable. You are now
prepared to answer the following Section 1 questions.

Question 1
What is the sample mean (M) and sample standard deviation (s)
for total? You will use these values in Question 2 below.
[Answer here in complete sentences. Also insert the output
from SPSS here. Replace this prompt and the prompts below,
using as much space as necessary to answer questions.]
Question 2
A z score for this sample is calculated as [(X – M) ÷ s]. Locate
Case #53’s unstandardized total score (X) in the Data Editor. In
the formula below, replace X, M, s, and ? to show how the z
score in Ztotal is derived for Case #53.
(X – M) ÷ s = ?

Question 3
Run Descriptives… on Ztotal. What are the mean and standard
deviation of Ztotal? (Hint: “0E7” in SPSS is scientific notation
for 0). Are the mean and standard deviation what you would
expect? Justify your answer.
[Answer here in complete sentences. Also place the SPSS output
here.]
Question 4
Case number 6 has a Ztotal score of 1.22. What does a z value
of 1.22 represent?
[Answer here in complete sentences.]
Question 5
Identify the case with the lowest z score. Refer to Z Scores in
Suggested Resources. Interpret the percentile rank of this z
score rounded to whole numbers.
[Answer here in complete sentences.]
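Converting a z score to a percentile rank can be sketched with SciPy's standard normal CDF; the z values below are illustrative (1.22 is the value mentioned in Question 4), not answers derived from grades.sav:

```python
from scipy.stats import norm

def percentile_rank(z):
    # Percentage of the standard normal distribution falling below z,
    # rounded to a whole number as the questions request
    return round(norm.cdf(z) * 100)

print(percentile_rank(1.22))   # 89
print(percentile_rank(-2.00))  # 2
```

A table of z scores from the Suggested Resources gives the same areas; the CDF simply looks up the area to the left of z.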
Question 6
Identify the case with the highest z score. Refer to Z Scores in
Suggested Resources. Interpret the percentile rank of this z
score rounded to whole numbers.
[Answer here in complete sentences.]

Section 2: Case Studies of Type I and Type II Errors
Question 7
A jury must determine the guilt of a criminal defendant (not
guilty, guilty). Identify how the jury would make a correct
decision. Analyze how the jury would commit a Type I error
versus a Type II error.
[Answer here in complete sentences.]

Question 8
An I/O psychologist asks employees to complete surveys
measuring job satisfaction and organizational citizenship
behavior. She intends to measure the strength of association
between these two variables. The researcher is concerned that
she will commit a Type I error. What research decision
influences the magnitude of risk of a Type I error in her study?
[Answer here in complete sentences]
Question 9
A clinical psychologist is studying the efficacy of a new drug
medication for depression. The study includes a placebo group
(no medication) versus a treatment group (new medication). He
then measures the differences in depressive symptoms across
the two groups.
What would a Type I error represent within the context of his
study? How can he reduce the risk of committing a Type I
error? How does this decision affect the risk of committing a
Type II error?
[Answer here in complete sentences.]
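The link between the alpha level and Type I error risk can be illustrated with a small simulation. This is a minimal sketch, assuming two groups drawn from the same population (so the null is true) and a two-tailed normal-approximation criterion of roughly alpha = .05; it is not part of the assignment's required output:

```python
import random
import statistics

random.seed(42)

def welch_t(a, b):
    # Welch's t statistic for two independent samples
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

reps, n, rejections = 2000, 50, 0
for _ in range(reps):
    # Both groups come from the same normal population: the null is true,
    # so any "significant" difference is a false alarm (Type I error)
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(welch_t(a, b)) > 1.96:  # two-tailed criterion, roughly alpha = .05
        rejections += 1

print(rejections / reps)  # hovers near .05, the Type I error rate
```

Lowering the critical threshold to alpha = .01 would shrink this false-alarm rate, but, as Question 9 hints, it would also make real effects harder to detect, raising the Type II error risk.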
Section 3: Case Studies of Null Hypothesis Testing

Question 10
You are running a series of statistical tests in SPSS using the
standard criterion for rejecting a null hypothesis. You obtain the
following p values.
Test 1 calculates group differences with a p value = .07.
Test 2 calculates the strength of association between two
variables with a p value = .50.
Test 3 calculates group differences with a p value = .001.
For each test below, state whether or not you reject the null
hypothesis. For each test, also explain what your decision
implies in terms of group differences (Test 1 and Test 3) and in
terms of the strength of association between two variables (Test
2).
Test 1 (group differences) =
Test 2 (strength of association) =
Test 3 (group differences) =
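The standard decision rule being applied here can be sketched as follows; the study names and p values in this snippet are hypothetical, deliberately different from the three tests above:

```python
# A minimal sketch of the standard criterion: reject the null hypothesis
# only when the obtained p value falls below alpha = .05
alpha = 0.05
p_values = {"Study A": 0.03, "Study B": 0.20}  # hypothetical p values

decisions = {
    name: ("reject H0" if p < alpha else "fail to reject H0")
    for name, p in p_values.items()
}

for name, decision in decisions.items():
    print(f"{name}: {decision}")
```

Note that failing to reject the null is not the same as proving it true; it only means the evidence did not clear the alpha threshold.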
Question 11
A researcher calculates a statistical test and obtains a p value of
.86. He decides to reject the null hypothesis. Is this decision
correct, or has he committed a Type I or Type II error? Explain
your answer.
[Answer here in complete sentences]
Question 12
You are proposing a research study that you would like to
conduct while attending Capella University. During the
proposal, a committee member asks you to explain in your own
words what you meant by saying “p less than .05.” Provide an
explanation.
[Answer here in complete sentences]

References
Provide references if necessary. This concludes Unit 4
Assignment 1. Save your answers and upload this template to
the assignment area.
Warner, R. M. (2013). Applied statistics: From bivariate
through multivariate techniques (2nd ed.). Thousand Oaks, CA:
Sage.
Print Copy/Export Output Instructions
SPSS output can be selectively copied and pasted into Word by
using the Copy command:
1. Click on the SPSS output in the Viewer window.
2. Right-click for options.
3. Click the Copy command.
4. Paste the output into a Microsoft Word document.
The Copy command will preserve the formatting of the SPSS
tables and charts when pasting into Microsoft Word.
An alternative method is to use the Export command:
1. Click on the SPSS output in the Viewer window.
2. Right-click for options.
3. Click the Export command.
4. Save the file as Word/RTF (.doc) to your computer.
5. Open the .doc file.
Data Set Instructions
The grades.sav file is a sample SPSS data set. The fictional data
represent a teacher’s recording of student demographics and
performance on quizzes and a final exam across three sections
of the course. Each section consists of about 35 students (N =
105).

Software Installation
Make sure that IBM SPSS Statistics Standard GradPack is fully
licensed, installed on your computer, and running properly. It is
important that you have either the Standard or Premium version
of SPSS that includes the full range of statistics. Proper
software installation is required in order to complete your first
SPSS data assignment in Assessment 1.
Next, click grades.sav in the Assessment 1 Resources to
download the file to your computer.
· You will use grades.sav throughout the course.
The definitions of variables in the grades.sav data set are found
in the Assessment 1 Context. Understanding these variable
definitions is necessary for interpreting SPSS output.
In Assessment 1, you will define values and scales of
measurement for all variables in your grades.sav file.
Verify the values and scales of measurement assigned in the
grades.sav file using information in the Data Set on page 2 of
this document.
Data Set
There are 21 variables in grades.sav. Open your grades.sav file
and go to the Variable View tab. Make sure you have the
following values and scales of measurement assigned.
SPSS variable | Definition | Values | Scale of measurement
id | Student identification number | | Nominal
lastname | Student last name | | Nominal
firstname | Student first name | | Nominal
gender | Student gender | 1 = female; 2 = male | Nominal
ethnicity | Student ethnicity | 1 = Native; 2 = Asian; 3 = Black; 4 = White; 5 = Hispanic | Nominal
year | Class rank | 1 = freshman; 2 = sophomore; 3 = junior; 4 = senior | Scale
lowup | Lower or upper division | 1 = lower; 2 = upper | Ordinal
section | Class section | | Nominal
gpa | Previous grade point average | | Scale
extcr | Did extra credit project? | 1 = no; 2 = yes | Nominal
review | Attended review sessions? | 1 = no; 2 = yes | Nominal
quiz1 | Quiz 1: number of correct answers | | Scale
quiz2 | Quiz 2: number of correct answers | | Scale
quiz3 | Quiz 3: number of correct answers | | Scale
quiz4 | Quiz 4: number of correct answers | | Scale
quiz5 | Quiz 5: number of correct answers | | Scale
final | Final exam: number of correct answers | | Scale
total | Total number of points earned | | Scale
percent | Final percent | | Scale
grade | Final grade | | Nominal
passfail | Passed or failed the course? | | Nominal
Assessment 1 Context

Transitioning from Descriptive Statistics to Inferential Statistics
In this assessment, we begin the transition from descriptive
statistics to inferential statistics, which include correlation,
t-tests, and analysis of variance (ANOVA). This context
document includes information on key concepts related to
descriptive statistics, as well as concepts related to probability
and the logic of null hypothesis testing (NHT).

Scales of Measurement
To develop an understanding of statistical analysis, it is first
important to understand the raw materials used in the activity.
Statistical methods are methods of analyzing data; for the most
part, that means statistics provide various ways of answering
specific questions about data. To understand statistics, then, we
must first develop a basic vocabulary for describing data and a
system of names for the different categories and kinds of data.
It serves us well to back up and make sure the fundamental
units of statistical data are understood.
In statistical analysis we make use of a concept called variables.
A variable is an abstract concept of a placeholder or a reserved
space. For instance, we may have a variable named GENDER.
Gender can have two possible values, male or female. The
concept of a variable is often easiest to understand in a concrete
sense by likening data to a typical table like those seen in
textbooks, or like a spreadsheet table found on computers. It is
a series of rows and columns which cross over each other. A
column in a table may be arranged such that the gender
identities of a group of people are recorded. We may have a list
of names in each of the rows of the table in the first column,
then a second column in the table which has the letters M or F
for each name. The title, or heading, of this column might read
"gender." The concept of a variable is like the column heading
where the gender is recorded. The column is the reserved space
for gender data, called a variable, and the column heading is the
name of the variable. Notice that the values of gender—male
and female—vary among different people in the rows. This is
why the entire column is called a variable. The values of the
variable vary among the rows.
Given this concept, be sure you understand that male and female
are NOT variables. The variable corresponds to the name of the
column where the information is recorded in the table. In the
case of this example, the name of the variable would probably
be GENDER—not male or female. Make sure you understand
this distinction so you do not use the term variable incorrectly,
as it will cause a great deal of confusion when trying to
communicate with others about statistics. In the case of the data
we are discussing here, it would make no sense, and would only
confuse your audience, to say something like "the male
variable," or that you are working with two variables named
male and female.
Consider that in the table where our list of people's names were
recorded along with their gender values, we now have two
columns. The first column contains names, the second column
contains the gender value for each person. Suppose there was a
third column in the table. In that column, we may decide to
write down each person's height in inches. At the top of the
column, we would place the title HEIGHT, and each row in the
table would have a number which was the height of the person
designated in the first column of each row. Notice that HEIGHT
is another variable. Notice it is a fundamentally different kind
of variable from GENDER, because it is a number which
corresponds to how tall the person is, while GENDER only has
two qualitative values—male and female. This points to another
important idea about data. There are several different types of
data, and you must be able to use a system of names which have
been developed to distinguish between different types of
variables. Understanding these categories of data is critical to
launching your effort to understand statistics: if you do not
learn this system before beginning your study of statistical
analysis, you will be lost far more often than you are successful.
It is worth spending the time and effort required to understand
the concepts we are about to discuss before you speed ahead to
your introduction to statistics. Skimming the concept of levels
of measurement without understanding it will leave you
struggling with every assignment that follows.
There are two ways to understand what is important about the
different types of variables. The first is the most extensive and
formal, and the second is a rough approximation that will
usually serve your purposes.
First, an important concept in understanding descriptive
statistics is the four levels of measurement—nominal, ordinal,
interval, and ratio (Warner, 2013). These are sometimes called
"scales of measurement," "levels of data," or simply "data
levels." These scales of measurement, or kinds of variables, are
concepts which are needed to begin your study, because each
technique of analysis to which you are introduced is designed
for only specified kinds of variables, or levels of measurement.
In order to understand what kind of variables to use for the
analysis, and which kinds of variables are involved in different
parts of the analysis, you must be able to recognize the level of
measurement of a variable when you see its definition. There
are two important concepts involved in determining the level of
measurement for a variable. It is critical to know both the
characteristics of the variable's values (such as male or female,
or height in inches), but also what it is the values are being used
to measure or designate in the research project where the
variable is being used.
Consider that last statement again. We must know (1) the kinds
of values that are written in the column where the variable is
recorded, and (2) what those values correspond to in the people
they are being used to describe. Failure to consider both ideas
will result in misunderstanding levels of measurement. It is
instructive to begin by simply describing the four levels, and
then move on to some examples and explanations. First, please
remember that the four levels are a hierarchy. That is, they are
in order from lowest to highest, and each successive level has
all the qualities of the levels before it, plus some new added
category of information.
· Nominal data refers to numbers (or letters) arbitrarily assigned
to represent group membership, such as gender (male = 1;
female = 2). Nominal data is useful in comparing groups, but
meaningless in terms of the measures of central tendency
and dispersion reviewed below. It does not matter whether the
values of the nominal variable are numbers or letters. If the
values are numbers, they are mathematically meaningless. That
is, 2 is not "more" than 1.
· Ordinal data represents ranked data, such as coming in first,
second, or third in a marathon. However, ordinal data does not
tell us how much of a difference there is between
measurements. The first-place and second-place finishers could
finish 1 second apart, whereas the third-place finisher arrives 2
minutes later. Ordinal data lacks equal intervals, which again
prevents most mathematical interpretations, such as adding or
averaging the values.
· Interval data refers to data with equal intervals between data
points. This is the first level of measurement where the values
can have general mathematical meaning. An example is degrees
measured in Fahrenheit. The drawback of interval data is the
lack of a "true zero" value (in Fahrenheit, water freezes at 32
degrees, and 0 does not mean "no heat"). The most serious
consequence of lacking a true zero is that one cannot reason
using ratios: 4 is not necessarily twice as large as 2, and 5 is
not necessarily half as much as 10.
· Ratio data do have a true zero, such as heart rate, where "0"
represents a heart that is not beating. This level allows full
mathematical interpretation of the values, including ratios.
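Although SPSS handles these distinctions for you through each variable's measurement setting, the difference between levels can be sketched in a few lines of Python using hypothetical data (the gender codes and heart rates below are illustrative, not from grades.sav):

```python
import statistics

# Hypothetical nominal codes for gender (1 = female, 2 = male):
gender = [1, 2, 2, 1, 2]

# The mode is meaningful for nominal data:
statistics.mode(gender)      # 2, the most frequent code

# The mean of nominal codes is computable but meaningless:
statistics.mean(gender)      # 1.6 -- "1.6" does not describe any gender

# Ratio data such as heart rate supports the full range of operations:
heart_rate = [62, 72, 74, 74, 118]
statistics.mean(heart_rate)  # 80 beats per minute, a meaningful average
```

The point is not that the software refuses the calculation; it will happily average nominal codes. The researcher must know which results are interpretable.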
These four scales of measurement are routinely reviewed in
introductory statistics textbooks as the "classic" way of
differentiating measurements. However, the boundaries between
the measurement scales are fuzzy; for example, is intelligence
quotient (IQ) measured on the ordinal or interval scale?
Recently, researchers have argued for a simpler dichotomy in
terms of selecting an appropriate statistic. Most of the time,
being able to classify a variable into one of the two following
categories will serve the purposes needed:
· Categorical versus quantitative measures.
· A categorical variable is a nominal variable. It simply
categorizes things according to group membership (for example,
apple = 1, banana = 2, grape = 3).
· A quantitative measure represents a difference in magnitude of
something, such as a continuum of "low to high" statistics
anxiety. In contrast to categorical variables designated by
arbitrary values, a quantitative measure allows for a variety of
arithmetic operations, including =, <, >, +, −, ×, and ÷.
Arithmetic operations generate a variety of descriptive statistics
discussed next.
Note that categorical variables generally correspond to the
nominal and ordinal levels of measurement in the previous
system, and quantitative variables typically correspond to the
interval and ratio levels of measurement. Quantitative variables,
or variables which are at the interval or ratio level of
measurement, are designated as scale variables in SPSS
software.
In order to determine the level of measurement of a variable,
one must consider the nature of the values as well as what those
values represent, or what the variable is measuring. For
instance, the same type of variable may have two levels of
measurement if the two versions are measuring different things
in a research project. The level of measurement refers to the
construct in the research project which is being measured—not
the values of the variable itself. A most interpretable example
would be a distinction between two different variables which
are exactly the same, except their operational definition in the
research project in which they are used is different. Consider a
score on a teacher-made test, where the score consists of the
number of correct answers out of 100 questions. The variable
can be defined in two ways:
1. An index which corresponds to the amount of knowledge the
test taker has in the topic area covered by the test.
2. The number of grade points credit earned on the test by the
test taker.
Notice that every person will have the same values on each of
these two variables. It is tempting to say there are no
differences. A closer look reveals these two variables are at two
different levels of measurement. First, when defined by the first
definition above—as the amount of knowledge—we know very
little about the meaning of the numbers involved. We do not
know if each question has the same amount of knowledge in it.
Also, we certainly cannot say that a score of zero means the
person has "no knowledge." Notice that this restricts the
variable to the ordinal level of measurement, because we cannot
even be sure that the necessary qualities for interval level
measurement are met (equal distances between points in terms
of amount of knowledge). We cannot say that a person that
scores 10 has twice as much knowledge as a person who scores
5, and we cannot say that the difference in knowledge between
two people who score 5 and 6 is the same as the difference
between two people who score 1 and 2. On the other hand, when
the measurement is defined by the second definition above, the
level of measurement is ratio. Clearly, all the requirements for
ratio level data are met. Zero means no grade points, and a
person who scores 6 has twice as many grade points as a person
who scores 3. The point of the preceding paragraph is that the
level of measurement of a variable depends on what it is being
used for—what is being measured in the research project.
Measures of Central Tendency and Dispersion
Descriptive statistics summarize a set of scores in terms of
central tendency (for example, mean, median, mode) and
dispersion (for example, range, variance, standard deviation).
As an example, consider a psychologist who measures 5
participants' heart rates in beats per minute: 62, 72, 74, 74, and
118.
· The simplest measure of central tendency is the mode. It is the
most frequent score within a distribution of scores (for example,
two scores of hr = 74). Technically, in a distribution of scores,
you can have two or more modes. An advantage of the mode is
that it can be applied to categorical data. It is also not sensitive
to extreme scores. This measure is suitable for all levels of
measurement.
The median is the geometric center of a distribution because of
how it is calculated. All scores are arranged in ascending order.
The score in the middle is the median. In the five heart rates
above, the middle score is a 74. If you have an even number of
scores, the average of the two middle scores is used. The
median also has the advantage of not being sensitive to extreme
scores. This measure is only suitable for data which are at the
ordinal level of measurement or above.
The mean is probably what most people consider to be an
average score. In the example above, the mean heart rate is
(62+72+74+74+118) ÷ 5 = 80. Although the mean is more
sensitive to extreme scores (for example, 118) relative to the
mode and median, it can be more stable across samples, and it is
the best estimate of the population mean. It is also used in many
of the inferential statistics studied in this course, such as t-tests
and analysis of variance (ANOVA). This measure is suitable
only for interval and ratio level variables (quantitative
measures). It relies on math (sums and quotients) which is not
valid for ordinal measures.
· In contrast to measures of central tendency, measures of
dispersion summarize how far apart data are spread on a
distribution of scores. The range is a basic measure of dispersion
quantifying the distance between the lowest score and the
highest score in a distribution (for example, 118 – 62 = 56). A
deviance represents the difference between an individual score
and the mean. For example, the deviance for the first heart rate
score (62) is 62 – 80, or -18. By calculating the deviance for
each score above from a mean of 80, we arrive at -18, -8, -6, -6,
and +38. Summing all of the deviances equals 0, which is not a
very informative measure of dispersion. An alternative measure
of dispersion is the interquartile range (IQR), as well as the
semi-interquartile range (sIQR). If the values are placed in
order from lowest to highest, each value can be assigned a rank,
with the lowest rank being 1 which corresponds to the lowest
value in the data set. The highest value of the variable will have
a rank equal to the number of values in the data set. The ranks
can be used to identify two important scores in the data set.
First, the 25th percentile is the score at or below which 25% of
the scores fall. There is also a 75th percentile, the score
above which 25% of the sample scores. The IQR is the
difference between the scores at the 75th and 25th percentiles,
and the sIQR is half of that value. These measures are suitable
for variables at the ordinal level of measurement and above.
· A somewhat more informative measure of dispersion is sum of
squares (SS), which you will see again in the study of analysis
of variance (ANOVA). To get around the problem of summing
to zero, the sum of squares involves calculating the square of
each deviation and then summing those squares. In the example
above, SS = [(-18)² + (-8)² + (-6)² + (-6)² + (+38)²] = [(324) +
(64) + (36) + (36) + (1444)] = 1904. The problem with SS is
that it increases as data points increase (Field, 2009), and it still
is not a very informative measure of dispersion. This measure is
suitable only for interval and ratio levels of measurement.
· This problem is solved by next calculating the variance (s²),
which is the average distance between the mean and a particular
score (squared). Instead of dividing SS by 5 for the example
above, we divide by the degrees of freedom, N – 1, or 4. The
variance is therefore SS ÷ (N – 1), or 1904 ÷ 4 = 476. The
problem with interpreting variance is that it is the average
distance of "squared units" from the mean. What is, for
example, a "squared" heart rate score? This measure is suitable
only for interval and ratio levels of measurement.
· The final step is calculating the standard deviation (s), which
is simply calculated as the square root of the variance, or in our
example, √476 = 21.82. The standard deviation represents the
average deviation of scores from the mean. In other words, the
average distance of heart rate scores to the mean is 21.82 beats
per minute. If the extreme score of 118 is replaced with a score
closer to the mean, such as 90, then s = 9.35. Thus, small
standard deviations (relative to the mean) represent a small
amount of dispersion; large standard deviations (relative to the
mean) represent a large amount of dispersion (Field, 2009). The
standard deviation is an important component of the normal
distribution. This measure is suitable only for interval and ratio
levels of measurement.
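Although you will generate these statistics in SPSS, every step above can be reproduced with Python's standard statistics module. A minimal sketch using the five heart rates from the example (note that Python's default quantile rule, used for the IQR here, may differ slightly from SPSS's interpolation method):

```python
import statistics

hr = [62, 72, 74, 74, 118]          # heart rates in beats per minute

mode = statistics.mode(hr)          # 74, the most frequent score
median = statistics.median(hr)      # 74, the middle score when sorted
mean = statistics.mean(hr)          # 80
value_range = max(hr) - min(hr)     # 118 - 62 = 56

# Deviances sum to zero, so we square them before summing:
deviances = [x - mean for x in hr]  # [-18, -8, -6, -6, 38]
ss = sum(d ** 2 for d in deviances) # sum of squares = 1904

variance = ss / (len(hr) - 1)       # SS / (N - 1) = 476
sd = variance ** 0.5                # sqrt(476) ≈ 21.82

# Quartiles and IQR (Python's default "exclusive" interpolation):
q1, q2, q3 = statistics.quantiles(hr, n=4)
iqr = q3 - q1
siqr = iqr / 2
```

The built-in statistics.variance and statistics.stdev functions return the same sample-based (N – 1) values computed by hand above.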
Notice that the various methods of expressing central tendency
and dispersion are suitable for different levels of measurement.
A brief summary of the types of measures used with different
levels of measurement is shown in the table below:

Level of measurement | Type | Central tendency | Dispersion
Nominal | Categorical | Mode | k*
Ordinal | Categorical | Median | sIQR
Interval or Ratio | Quantitative | Mean | Standard deviation

*Note: The number of categories or groups (often designated as
k) is the primary expression of variation for nominal level data.

Visual Inspection of a Distribution of Scores
An assumption of the statistical tests that you will study in this
course is that the scores for a dependent variable, like a range
of heart rate scores, are normal (or approximately normal) in
shape. The assumption is first checked by examining a
histogram of the distribution. This method is meaningful only
for quantitative variables—interval or ratio levels of
measurement. It makes no sense to create histograms of nominal
or ordinal (categorical) variables.
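A histogram simply counts how many scores fall into each value or interval and draws a bar for each count. As a rough text-based illustration (hypothetical quiz scores, not SPSS output):

```python
from collections import Counter

# Hypothetical quiz scores (number of correct answers):
scores = [4, 5, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 9]

counts = Counter(scores)            # frequency of each score
for value in sorted(counts):
    # One '#' per student who earned that score:
    print(f"{value:2d} | {'#' * counts[value]}")
```

Turned on its side, this is exactly what an SPSS histogram displays: a roughly symmetric, unimodal shape with its peak at 7.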
Departures from normality and symmetry are assessed in terms
of skew and kurtosis. Skewness is the tilt, or extent to which a
distribution deviates from symmetry around the mean. A
distribution that is positively skewed has a longer tail extending
to the right (that is, the "positive" side of the distribution). A
distribution that is negatively skewed has a longer tail
extending to the left (that is, the "negative" side of the
distribution). In contrast to skewness, kurtosis is defined as the
peakedness of a distribution of scores.
The use of these terms is not limited to your description of a
distribution following a visual inspection. They are included in
your list of descriptive statistics and should be included when
analyzing your distribution of scores. Skew and kurtosis values
near zero indicate a shape that is symmetric or close to
normal, respectively. Values of -1 to +1 are considered ideal,
whereas values ranging from -2 to +2 are considered acceptable
for psychometric purposes.

Outliers
Outliers are defined as extreme scores on either the left or right
tail of a distribution, and they can influence the overall shape of
that distribution. There are a variety of methods for identifying
and adjusting for outliers. Outliers can be detected by
calculating z-scores or by inspection of a box plot. Once an
outlier is detected, the researcher must determine how to handle
it. The outlier may represent a data entry error that should be
corrected. Or the outlier may be a valid extreme score, which can
be left alone, deleted, or transformed. Whatever decision
is made regarding an outlier, the researcher must be transparent
and justify his or her decision.
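The z-score approach to outlier detection mentioned above can be sketched in a few lines. This illustrative example uses hypothetical heart-rate data and the common |z| > 1.96 cutoff; other cutoffs (such as |z| > 3) are also used in practice:

```python
import statistics

# Hypothetical heart rates, with one suspiciously extreme score:
data = [62, 65, 68, 70, 72, 74, 74, 75, 78, 118]

mean = statistics.mean(data)   # 75.6
sd = statistics.stdev(data)    # sample standard deviation

# Flag scores whose standardized distance from the mean is extreme:
outliers = [x for x in data if abs((x - mean) / sd) > 1.96]
print(outliers)                # [118]
```

Flagging a score is only the first step; as noted above, the researcher must still decide whether it is a data entry error or a valid extreme value, and justify how it is handled.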
Probability and Hypothesis Testing
The interpretation of inferential statistics requires an
understanding of probability and the logic of null hypothesis
testing (NHT). The logic and interpretative skills addressed in
this assessment are vital to your success in interpreting data in
this course as well as to your success in advanced courses in
statistics.
It is useful to first consider some general ideas about hypothesis
testing to be sure terminology is clear and thoroughly
understood. Inaccuracies in vocabulary can present barriers
which will follow you through to the advanced concepts and
spoil your learning experience, so it is best to clear up common
errors before beginning to study the more complex ideas. An
hypothesis is simply a precise statement or prediction of a fact
we think is true. The first thing you should note is that we are
using two words here: hypothesis and hypotheses. The only
difference is the “is” and “es” in the last two letters. This is
simply the distinction between singular and plural. We may
examine a single hypothesis, or we may speak of many different
hypotheses. The first term rhymes with “this” and the second
rhymes with “these.” In fundamental statistics (and even in
most advanced methods), we encounter hypotheses of two types.
Confusion among these types is a common barrier in learning
statistics, so it will be valuable for you to study these ideas
carefully in order to provide an advance organizer for material
to come. The two types of hypothesis tests we will study are
tests related to (1) group differences, and
tests related to (2) association between variables. Each of
these types of hypothesis tests has two mutually exclusive and
conflicting statistical hypotheses, called the null hypothesis and
the alternative hypothesis. In each of these two general types of
hypothesis tests, the purpose of the statistical test is to show
that the null hypothesis has a low probability of being true. The
reason this is done is usually to support the alternative
hypothesis, because if the null hypothesis is false, the
alternative hypothesis must be true.Group Differences
An hypothesis of group differences is related to the question of
whether or not two or more groups have the same central
tendency (such as equal means). There are two possible
hypotheses in this category: The groups have equal means or the
groups do not have equal means. The first hypothesis, that the
means are equal, is called the null hypothesis. The hypothesis
that the groups do not have equal means is the alternative
hypothesis. This is true regardless of what the researcher is
interested in (whether the researcher wants the groups to be
equal or not). It is related to the mathematical structure of the
tests, and it is a constant in the statistical analysis. The purpose
of the statistical test is to demonstrate that the null hypothesis
has a very small probability of being true. If the null hypothesis
is not true, then the alternative hypothesis must be
true.

Association Between Variables
The second type of hypothesis test asks whether or not two variables are
associated (related or correlated). When two variables are
associated, scores or values are analyzed in pairs. The
hypothesis is related to the idea that the values of the first
variable somehow predict the values of the second. The
relationship between the two variables is usually expressed by
some type of correlation coefficient. It is usually an index
which varies between -1 and +1. If the relationship between the
two variables is computed to be zero, then there is no
association between them. The value on the first variable has
nothing to do with the value on the second. If the correlation
coefficient is positive (between 0 and +1), it means those cases
with high values on the first variable tend to have high values
on the second. If the correlation coefficient is negative
(between -1 and 0), then it means high values on the first
variable tend to match up with low values on the second. The
magnitude or size of the correlation coefficient (its distance
from zero and closeness to -1 or +1) is the strength of the
relationship between the two variables. If the strength of the
relationship is zero, then there is no relationship – the value of
the first variable has nothing to do with the second.
Consider a table of data where students are listed in rows, and
variables are listed in columns, like a teacher’s gradebook.
There may be two variables, such as total number of absences in
one column and final test scores in another. Each case (each
student) has a value on both of these variables. We might expect
those students who have high numbers on absences would tend
to have lower numbers on the final exam. Note we would expect
the correlation between absences and exam scores to be
negative—those students who have many absences will tend to
have low exam scores. Statistical hypotheses are generally
related to questions of whether there is a relationship between
the two variables or not. In this type of statistical test, there are
two hypotheses: (1) the correlation between the two variables is
zero (the null hypothesis), or (2) the correlation between the two
variables is not zero (the alternative hypothesis). This is true
regardless of what the researcher is interested in (regardless of
whether the researcher wants the variables to be related, or thinks
they are related, or not). The purpose of the statistical test is to
demonstrate that the null hypothesis has a very small
probability of being true. If the null hypothesis is not true, then
the alternative hypothesis must be true.

The Standard Normal Distribution and z-Scores
A student receives an intelligence quotient (IQ) score of 115 on
a standardized intelligence test. What is his or her percentile
rank? To calculate the percentile rank, you must understand the
logic and application of the standard normal distribution. The
advantage of the standard normal distribution is that the
proportional area under the curve is constant. This constancy
allows for the calculation of a percentile rank of an individual
score X for a given distribution of scores.
When a population mean (µ) and population standard deviation
(σ) are known, such as a distribution of scores for a
standardized intelligence test, the standard normal distribution
determines the percentile rank of a given X-score (for example,
50th percentile on IQ). About two-thirds of standard scores
(68.26%) fall within +/- 1 population standard deviation from
the population mean. Approximately 95 percent of standard
scores fall within +/- 2 population standard deviations.
Approximately 99 percent of standard scores fall within +/- 3
population standard deviations. Knowing this, we can begin the
process of determining a percentile rank from an individual
score.
Consider a standardized intelligence quotient (IQ) test where µ
= 100 and σ = 15. A standard score, or z-score, is calculated
with individual X scores rescaled to µ = 0 and σ = 1. The
formula for a z-score is [( X - µ) ÷ σ]. For example, an IQ score
of X = 100 would be rescaled to z = 0.00 [(100 – 100) ÷ 15 = 0].
A z-score of 0 means that the X-score is 0 standard deviations
above or below the mean. An IQ score of X = 85 would be
rescaled to z = -1.00 [(85 – 100) ÷ 15 = -1.00]. A z-score of -1
represents an IQ score 1 standard deviation below the mean. An
IQ score of 130 would be rescaled to z = 2.00 [(130 – 100) ÷ 15
= 2.00]. A z-score of 2 represents an IQ score 2 standard
deviations above the mean. In short, a negative z-score falls to
the left of the population mean, whereas a positive z-score falls
to the right of the population mean.
Once a given z-score is calculated for a given X-score, its
percentile rank can be determined. The proportion of area under
the curve can also be calculated.
An important z-score is +/- 1.96: 95 percent of scores fall
between z = -1.96 and z = +1.96, whereas 2.5 percent fall below
-1.96 and 2.5 percent fall above +1.96 (2.5% + 2.5% = 5%). A z score
beyond these cutoffs is typically considered to be "extreme"
(Warner, 2013). In addition, we will see below that most
inferential statistics set "statistical significance" to obtained
probability values of less than 5 percent.
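The z-score and percentile calculations above can be checked with Python's standard statistics.NormalDist, which evaluates the area under the normal curve (SPSS and published z-tables give the same values):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # standardized IQ distribution

z = (115 - 100) / 15                # z-score for an IQ of 115
print(z)                            # 1.0

# Percentile rank = proportion of the curve at or below the score:
percentile = iq.cdf(115) * 100
print(round(percentile, 2))         # ≈ 84.13, i.e., the 84th percentile

# The 1.96 cutoff: about 95% of scores fall between z = -1.96 and +1.96.
middle = NormalDist().cdf(1.96) - NormalDist().cdf(-1.96)
print(round(middle, 3))             # ≈ 0.95
```

So the student with an IQ of 115 from the opening question scores at roughly the 84th percentile, one standard deviation above the mean.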
Hypothesis Testing
Probability is crucial for hypothesis testing. In hypothesis
testing, you want to know the likelihood that your results
occurred by chance. As was discussed in the introduction to
hypothesis testing, we are generally concerned with the
probability that a null hypothesis is true (recall that a null
hypothesis is an hypothesis that two group means are equal, or
that two variables have no relationship). Whenever two groups
have equal means in a population, taking random samples of
those two groups and computing their means, then examining
the difference between those means, will rarely result in the
means having exactly zero difference. Because the samples are
selected randomly and do not contain all members of the
population, there will tend to be some small or negligible
difference in means for almost any two samples. These
differences are due to chance alone in the random selection
process. In the same way, whenever two variables are
completely unrelated in a population, random selection of a
small sample will rarely result in the two variables having
exactly zero correlation in those samples.
There will be some small amount of correlation between the two
variables in most random samples by chance alone. No matter
how unlikely, there is always the possibility that your results
have occurred by chance, even if that probability is less than 1
in 20 (5%). However, you are likely to feel more confident in
your inferences if the probability that your results occurred by
chance is less than 5 percent compared to, say, 50 percent. Most
psychologists find it reasonable to designate less than a 5
percent chance as a cutoff point for determining statistical
significance. This cutoff point is referred to as the alpha level.
An alpha level is set to determine when a researcher will reject
or fail to reject a null hypothesis (discussed next). The alpha
level is set before data are analyzed to avoid "fishing" for
statistical significance. In high-stakes research (for example,
testing a new cancer drug), researchers may want to be even
more conservative in designating an alpha level, such as less
than 1 in 100 (1%) that the results are due to chance.

Null and Alternative Hypotheses
The null hypothesis (H0) states that there is no difference
between a group mean and a given population parameter, such as a
population mean of 100 on IQ, or between two group means.
Imagine that we ask two groups of students to
complete a standardized IQ test, and then we calculate the mean
IQ score for each group. We observe that the mean IQ for Group
A is 100 (MA = 100), whereas the mean IQ for Group B is 115
(MB = 115). Is a mean difference of 15 IQ points statistically
significant or just due to chance? The null hypothesis predicts
that H0: MA = MB. That is, the null hypothesis predicts "no"
difference between groups. Remember that "null" also means
"zero," so we could also state the null hypothesis as H0: MA –
MB = 0. When comparing groups, in general, the null
hypothesis predicts that group means will not differ. When
testing the strength of a relationship between two variables,
such as the correlation between IQ scores and grade point
average (GPA), in general, the null hypothesis is that the
relationship (expressed as a single correlation coefficient)
between Variable x and Variable y is zero.
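The point that unrelated variables still show some sample correlation can also be simulated. In this hypothetical sketch (seed and sample size are arbitrary), x and y are generated completely independently, so their true correlation is zero, yet the observed coefficient is not:

```python
# Hypothetical sketch: two variables generated independently, so the
# population correlation is exactly zero. A small random sample will
# still show some nonzero sample correlation by chance alone.
import numpy as np

rng = np.random.default_rng(7)  # arbitrary seed
x = rng.normal(size=20)  # e.g., IQ-like scores
y = rng.normal(size=20)  # e.g., GPA-like scores, unrelated to x

r = np.corrcoef(x, y)[0, 1]  # Pearson correlation coefficient
print(f"Sample correlation between unrelated variables: {r:.3f}")
```

A null hypothesis test asks whether an observed correlation like this one is larger than such chance fluctuations can plausibly explain.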
By contrast, the alternative hypothesis (H1) does predict a
difference between two groups, or in the case of relationships,
that two variables are significantly related. An alternative
hypothesis can be directional (for example, H1: Group A has a
higher mean score than Group B) or nondirectional (H1: Group
A and Group B will differ). In hypothesis testing, you either
reject or fail to reject the null hypothesis. Note that this is not
stating, "accept the null hypothesis as true." By default, if you
reject the null hypothesis, you accept the alternative hypothesis
as true (because the hypotheses are mutually exclusive).
However, if you do not reject the null hypothesis, you cannot
accept the null hypothesis as true. You have simply failed to
find statistical justification to reject the null hypothesis. The
null hypothesis was assumed from the start, not supported by
evidence; the test was conducted to show that it was false and
thereby support the claim that the alternative hypothesis was
true.

Type I and Type II Errors
If you commit a Type I error, this means that you have
incorrectly rejected a true null hypothesis. You have incorrectly
concluded that there is a significant difference between groups,
or a significant relationship, where no such difference or
relationship actually exists. Type I errors have real-world
significance, such as concluding that an expensive new cancer
drug works when actually it does not work, costing money and
potentially endangering lives. Keep in mind that you will
probably never know whether the null hypothesis is "true" or
not, as we can only determine that our data fail to reject it.
If you commit a Type II error, this means that you have NOT
rejected a false null hypothesis when you should have rejected
it. You have incorrectly concluded that no differences or no
relationships exist when they actually do exist. Type II errors
also have real-world significance, such as concluding that a new
cancer drug does not work when it actually does work and could
save lives.
Your alpha level will affect the likelihood of making a Type I
or a Type II error. If your alpha level is small (for example, .01,
less than 1 in 100 chance), you are less likely to reject the null
hypothesis. So, you are less likely to commit a Type I error.
However, you are more likely to commit a Type II error.
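The alpha/error trade-off can be checked with a Monte Carlo simulation. This sketch is illustrative only (the sample sizes, effect size, and trial count are assumptions, not from the text): when the null is true, the rejection rate approximates alpha; when a real effect exists, lowering alpha reduces power and thus raises the Type II error rate.

```python
# Hypothetical Monte Carlo sketch of the alpha trade-off: with a true
# null, the fraction of rejections approximates alpha (the Type I error
# rate); with a real effect, a smaller alpha yields lower power, i.e.,
# more Type II errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n = 2000, 30  # arbitrary simulation settings

def rejection_rate(mean_b, alpha):
    """Fraction of trials in which H0 (equal means) is rejected."""
    rejections = 0
    for _ in range(n_trials):
        a = rng.normal(100, 15, n)
        b = rng.normal(mean_b, 15, n)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    return rejections / n_trials

# Null true (both population means are 100): rejection rate ~ alpha
t1_05 = rejection_rate(100, 0.05)
t1_01 = rejection_rate(100, 0.01)
# Null false (means differ by 10): rejection rate is the power;
# 1 minus this rate is the Type II error rate
power_05 = rejection_rate(110, 0.05)
power_01 = rejection_rate(110, 0.01)

print(f"Type I rate at alpha=.05: {t1_05:.3f}")
print(f"Type I rate at alpha=.01: {t1_01:.3f}")
print(f"Power at alpha=.05: {power_05:.3f} (Type II rate {1 - power_05:.3f})")
print(f"Power at alpha=.01: {power_01:.3f} (Type II rate {1 - power_01:.3f})")
```

The simulated Type I rates sit near .05 and .01 respectively, while the smaller alpha visibly lowers power, mirroring the trade-off described above.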
You can decrease the chances of committing a Type II error by
increasing the alpha level (for example, .10, less than 1 in 10
chance). However, you are now more likely to commit a Type I
error. Because the chances of committing Type I and Type II
errors are inversely related, you will have to decide which
type of error is more serious. You need to assess the risk
associated with each type of error. Your research questions will
help in this decision. In standard psychological research, alpha
level is set to .05 (that is, a 1 in 20 chance of committing a
Type I error). An alpha level of .05 is used throughout the
remainder of this course.

Probability Values and the Null Hypothesis
The statistic used to determine whether or not to reject a null
hypothesis is referred to as the calculated probability value or p
value, denoted p. When you run an inferential statistic in SPSS,
it will provide you with a p value for that statistic (SPSS labels
p values as “Sig.”). If the test statistic has a probability value of
less than 1 in 20 (.05), we can say "p < .05, the null hypothesis
is rejected." Keep in mind in the coming weeks that we are
looking for values less than .05 to reject the null hypothesis.
This may seem counterintuitive at first, because usually we
assume that "bigger is better." In the case of null hypothesis
testing, the opposite is the case—if we expect to reject a null
hypothesis, remember that, for p values, "smaller is better." Any
p value less than .05, such as .02, .01, or .001, means that we
reject the null hypothesis. Any p value greater than .05, such as
.15, .33, or .78, means that we do not reject the null hypothesis.
Make sure you understand this point, as it is a common area of
confusion among statistics learners.
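The "smaller is better" decision rule can be written out directly. The sketch below uses simulated data and Python's scipy in place of SPSS (an assumption for illustration; SPSS would report the same p value in its "Sig." column), applying the alpha = .05 cutoff to an independent-samples t-test:

```python
# Hypothetical sketch of the p-value decision rule: run an independent-
# samples t-test on simulated IQ scores and compare p to alpha = .05.
# SPSS reports this same p value in its "Sig." column.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(100, 15, 40)  # simulated IQ scores, mean 100
group_b = rng.normal(115, 15, 40)  # simulated IQ scores, mean 115

t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05
if p_value < alpha:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"
print(f"p = {p_value:.4f}: {decision}")
```

Note that the code never "accepts" the null hypothesis; the only two outcomes are rejecting it or failing to reject it, exactly as described above.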
Based on your understanding of the null hypothesis, the
alternative hypothesis, the alpha level, and the p value, you can
begin to make statements about your research results. If your
results fall within the rejection region, you can claim that they
are "statistically significant," and you reject the null hypothesis.
In other words, you will conclude that groups do differ in some
way or that two variables are significantly related. If the results
do not fall within the rejection region, you cannot make this
claim. Your data fail to reject the null hypothesis. In other
words, you cannot conclude that the groups differ in some way,
or that the two variables are related.
This assessment covers the terminology and concepts behind
hypothesis testing, which prepares you for the remaining
assessments in this course. The statistical tests include:
· Correlations.
· t-Tests.
· Analysis of Variance.