The document provides an overview of key statistical concepts including variance, standard deviation, the normal distribution, frequency distributions, data matrices, hypothesis testing, and point and interval estimation. Variance and standard deviation are measures of how dispersed data points are around the mean. The normal distribution is symmetric and bell-shaped. Hypothesis testing involves specifying a null hypothesis, alternative hypothesis, test statistic, decision rule, and critical region to determine whether to reject the null hypothesis. Point and interval estimation aims to estimate population parameters from samples and provide confidence intervals.
2. Variance
Variance is a measure of the dispersion of data points about the mean for interval- and ratio-level data.
Variance is a fundamental concept that social scientists seek to explain in the dependent variable.
4. Standard Deviation
Standard deviation is a measure of the dispersion of data points about the mean for interval- and ratio-level data.
Like the mean, standard deviation is sensitive to extreme values.
Standard deviation is calculated as the square root of the variance.
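The two definitions above can be sketched in a few lines of Python using the standard library; the exam scores are invented for illustration:

```python
import statistics

# Hypothetical interval-level data: exam scores for a small class
scores = [70, 74, 78, 82, 96]

mean = statistics.mean(scores)      # 80.0
var = statistics.pvariance(scores)  # population variance: mean squared deviation, 80.0
sd = statistics.pstdev(scores)      # standard deviation = square root of the variance

print(mean, var, sd)
```

Note that `statistics.variance` and `statistics.stdev` (without the `p` prefix) compute the sample versions, which divide by n − 1 instead of n.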
7. Normal Distribution
The bulk of observations lie in the center, where there is a single peak.
In a normal distribution half (50 percent) of the observations lie above the mean and half lie below it.
The mean, median and mode have the same statistical values.
Fewer and fewer observations fall in the tails.
The spread of the distribution is symmetric.
8. Normal Distribution
Mathematical theory tells us what percentage of observations lie within one (68%), two (95%) or three (99.7%) standard deviations of the mean.
If data are not perfectly normally distributed, the percentages are only approximations.
Many naturally occurring variables do have nearly normal distributions.
Some can be transformed toward normality using logarithms.
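The 68/95/99.7 figures quoted above can be checked directly with the standard normal distribution in Python's standard library:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1
for k in (1, 2, 3):
    share = z.cdf(k) - z.cdf(-k)  # proportion within k standard deviations of the mean
    print(f"within {k} sd: {share:.3f}")
# within 1 sd: 0.683
# within 2 sd: 0.954
# within 3 sd: 0.997
```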
12. Example
Calculate the ID and IQV for a former PS 372 class's grades using the following frequencies and proportions:

Grade  Freq.  Prop.
A      4      .12
B      7      .21
C      4      .12
D      7      .21
E      12     .34
13. Index of Diversity
ID = 1 − (p_a² + p_b² + p_c² + p_d² + p_e²)
ID = 1 − (.12² + .21² + .12² + .21² + .34²)
ID = 1 − (.0144 + .0441 + .0144 + .0441 + .1156)
ID = 1 − (.2326)
ID = .7674
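The calculation above generalizes to any set of category proportions; a minimal Python sketch using the proportions from the example slide:

```python
# Index of Diversity: ID = 1 - sum of squared category proportions
props = {"A": .12, "B": .21, "C": .12, "D": .21, "E": .34}

ID = 1 - sum(p ** 2 for p in props.values())
print(round(ID, 4))  # 0.7674
```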
17. Data Matrix
A data matrix is an array of rows and columns that stores the values of a set of variables for all the cases in a data set.
This is frequently referred to as a dataset.
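The rows-as-cases, columns-as-variables layout can be sketched with plain Python lists; the cases and variable names here are invented for illustration:

```python
# A tiny data matrix: each row is a case, each column a variable.
columns = ["id", "age", "grade"]
data = [
    [1, 19, "A"],
    [2, 22, "C"],
    [3, 20, "B"],
]

# Pull one variable (a column) out of the matrix:
ages = [row[columns.index("age")] for row in data]
print(ages)  # [19, 22, 20]
```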
21. Properties of Good Graphs
A good graph should answer several of the following questions (JRM 384):
1. Where does the center of the distribution lie?
2. How spread out or bunched up are the observations?
3. Does it have a single peak or more than one?
4. Approximately what proportion of observations lie in the ends of the distribution?
22. Properties of Good Graphs
5. Do observations tend to pile up at one end of the measurement scale, with relatively few observations at the other end?
6. Are there values that, compared with most, seem very large or very small?
7. How does one distribution compare to another in terms of shape, spread, and central tendency?
8. Do values of one variable seem related to another variable?
29. Population
A population refers to any well-defined set of objects such as people, countries, states, organizations, and so on. The term doesn't simply mean the population of the United States or some other geographical area.
30. Population
A sample is a subset of the population.
Samples are drawn in some known manner, and each case is chosen independently of the others.
From here on out, when the book uses the term sample, random sample, or simple random sample, it is referring to the same concept: a sample chosen at random.
31. Populations
Parameters are numerical features of a population.
A sample statistic is an estimator that corresponds to a population parameter of interest and is used to estimate the population value.
Ȳ (Y-bar) is the sample mean; μ (mu) is the population mean.
A caret or circumflex over a symbol (a "hat") marks a sample estimator of the corresponding population parameter.
32. Two Kinds of Inference
Hypothesis Testing
Point and interval estimation
33. Hypothesis Testing
Many claims can be translated into specific statements about a population that can be confirmed or disconfirmed with the aid of probability theory.
Ex: There is no ideological difference between the voting patterns of Republican and Democratic justices on the U.S. Supreme Court.
34. Point and Interval Estimation
The goal here is to estimate unknown population parameters from samples and to surround those estimates with confidence intervals. Confidence intervals indicate the estimates' reliability or precision.
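A minimal sketch of interval estimation for a proportion, using the normal approximation; the sample counts are invented for illustration:

```python
import math

# Invented sample: 57 of 100 respondents prefer one candidate
n, successes = 100, 57
p_hat = successes / n                    # point estimate of the population proportion

se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
margin = 1.96 * se                       # 1.96 covers ~95% under the normal curve

print(f"95% CI: {p_hat:.2f} +/- {margin:.3f}")
```

A larger sample shrinks the standard error, and with it the margin of error, which is why bigger samples yield more precise interval estimates.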
35. Hypothesis Testing
Start with a specific verbal claim or proposition.
Ex: The chances of getting heads or tails when flipping a coin are roughly the same.
Ex: The chances of the United States electing a Republican or a Democratic president are roughly the same.
37. Hypothesis Testing
Next, the researcher constructs a null hypothesis.
A null hypothesis is a statement that a population parameter equals a specific value.
38. Hypothesis Testing
Following up on the coin example, the null hypothesis would state that the parameter equals .5.
Stated more formally: H0: P = .5
where P stands for the probability that the coin will come up heads when tossed.
H0 is typically used to denote a null hypothesis.
39. Hypothesis Testing
Next, specify an alternative hypothesis.
An alternative hypothesis is a statement about the value or values of a population parameter, proposed as an alternative to the null hypothesis.
An alternative hypothesis can merely state that the population parameter does not equal the null value, or that it is greater than or less than the null value.
40. Hypothesis Testing
Suppose you believe the coin is unfair, but have no intuition about whether it is too prone to come up heads or tails.
Stated formally, the alternative hypothesis is: HA: P ≠ .5
41. Hypothesis Testing
Perhaps you believe the coin is more likely to come up heads than tails. You would formulate the following alternative hypothesis: HA: P > .5
Conversely, if you believe the coin is less likely to come up heads than tails, you would formulate the alternative hypothesis in the opposite direction: HA: P < .5
42. Hypothesis Testing
After specifying the null and alternative hypotheses, identify the sample estimator that corresponds to the parameter in question.
The estimate must come from the sample data, which in this case are generated by flipping the coin.
43. Hypothesis Testing
Next, determine how the sample statistic is distributed in repeated random samples. That is, specify the sampling distribution of the estimator.
For example, what are the chances of getting 10 heads in 10 flips (p = 1.0)? What about 9 heads in 10 flips (p = .9)? Or 8 heads in 10 flips (p = .8)?
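Under the null hypothesis of a fair coin, those chances follow the binomial distribution, which can be sketched directly in Python:

```python
from math import comb

# Probability of exactly k heads in n fair-coin flips under H0: P = .5
def prob_heads(k, n=10):
    return comb(n, k) * 0.5 ** n

print(prob_heads(10))  # 10 heads: 1/1024, about .001
print(prob_heads(9))   # 9 heads: 10/1024, about .010
print(prob_heads(8))   # 8 heads: 45/1024, about .044
```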
45. Hypothesis Testing
Make a decision rule based on some criterion of probability or likelihood.
In the social sciences, a result that occurs with a probability of .05 (that is, 1 chance in 20) is considered unusual and consequently is grounds for rejecting a null hypothesis.
Other thresholds (.01, .001) are also common.
Make the decision rule before collecting data.
46. Hypothesis Testing
In light of the decision rule, define a critical region. The critical region consists of those outcomes so unlikely to occur that one has cause to reject the null hypothesis should they occur.
So there are areas of "rejection" (critical regions) and nonrejection.
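For the 10-flip coin example with a two-tailed .05 rule, the critical region can be found by accumulating binomial tail probabilities from the outside in; a sketch (suitable only for small n, as the loop assumes the two tails never meet):

```python
from math import comb

n, alpha = 10, 0.05
pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]  # binomial probabilities under H0

# Grow the two-tailed rejection region from the extremes inward while
# its total probability stays at or below alpha.
critical = set()
lo, hi, tail = 0, n, 0.0
while tail + pmf[lo] + pmf[hi] <= alpha:
    tail += pmf[lo] + pmf[hi]
    critical |= {lo, hi}
    lo, hi = lo + 1, hi - 1

print(sorted(critical))  # [0, 1, 9, 10]
```

So with 10 flips, only 0, 1, 9, or 10 heads (total probability 22/1024, about .0215) are rare enough to reject the null hypothesis at the .05 level; 8 heads is not, since adding it would push the tail probability past .05.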
48. Hypothesis Testing
Collect a random sample and calculate the sample estimator.
Calculate the observed test statistic. A test statistic converts the sample result into a number that can be compared with the critical values specified by your decision rule.
Examine the observed test statistic to see whether it falls in the critical region.
Make a practical or theoretical interpretation of the findings.
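Putting the steps above together for the coin example: the observed sample here (9 heads in 10 flips) is invented for illustration, and the decision is made by comparing a two-sided p-value to the .05 threshold rather than consulting tabled critical values:

```python
from math import comb

n, heads = 10, 9  # invented sample: 9 heads in 10 flips

def prob(k):
    return comb(n, k) * 0.5 ** n  # binomial probability under H0: P = .5

# Two-sided p-value: total probability of results at least as extreme
# as the one observed, measured as distance from the expected 5 heads.
extreme = abs(heads - n / 2)
p_value = sum(prob(k) for k in range(n + 1) if abs(k - n / 2) >= extreme)

reject = p_value < 0.05
print(round(p_value, 4), "reject H0" if reject else "fail to reject H0")
# 0.0215 reject H0
```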