This document discusses key concepts in sampling theory and measurement in research studies. It defines important sampling terms such as population, sampling criteria, sampling methods, sampling error, and bias. It also covers levels of measurement, reliability, validity, and measurement strategies such as physiological measures, observations, interviews, questionnaires, and scales. Finally, it provides an overview of statistical analysis techniques, including descriptive statistics, inferential statistics, the normal curve, and common tests such as t-tests, ANOVA, and regression analysis.
These slides can help the audience learn about different sampling methods and why they matter, and can assist researchers in selecting the appropriate method for the research to be conducted.
TOPIC OUTLINE: 1. The Normal Curve
a. Definition/Description
b. Area Under Normal Curve
2. Standard Scores
a. Z-Scores
b. T-Scores
c. Other Standard Scores
Karl Friedrich Gauss:
one of the scientists who developed the concept of the normal curve.
Normal Curve:
a continuous probability distribution in statistics.
Karl Pearson:
first to refer to the curve as the "Normal Curve".
Asymptotic:
approaching the x-axis but never touching it.
Symmetric:
made up of exactly similar parts facing each other.
STANDARD SCORES
A standard score is a raw score that has been converted from one scale to another.
Z-Scores
Called a zero plus-or-minus one scale (mean 0, SD 1); scores can be positive or negative.
T-Scores
None of the scores is negative; it can be called a 50 plus-or-minus ten scale (mean set to 50, SD set to 10).
Stanine: Standard Nine
(STAndard NINE) is a method of scaling test scores on a nine-point standard scale with a mean of five and a standard deviation of two.
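To make the scale conversions concrete, here is a minimal Python sketch (not from the slides; the raw scores are hypothetical) that converts raw scores to z-scores, T-scores, and stanines using the definitions above.

```python
# Illustrative sketch: converting hypothetical raw test scores to standard scores.
from statistics import mean, stdev

raw_scores = [62, 70, 75, 81, 88, 90, 95]   # hypothetical data
m, sd = mean(raw_scores), stdev(raw_scores)

def z_score(x):
    """Zero plus-or-minus one scale: mean 0, SD 1."""
    return (x - m) / sd

def t_score(x):
    """50 plus-or-minus ten scale: mean 50, SD 10."""
    return 50 + 10 * z_score(x)

def stanine(x):
    """Nine-point scale with mean 5 and SD 2, clipped to 1-9."""
    return max(1, min(9, round(2 * z_score(x) + 5)))

for x in raw_scores:
    print(f"raw={x:3d}  z={z_score(x):+.2f}  T={t_score(x):5.1f}  stanine={stanine(x)}")
```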
Chapter 8: Sampling
Sampling involves decisions about who or what will be tested, observed, or interviewed in your study (Morse, 2007)
Key questions to address:
Who should and should not be included?
How many should be included?
Probability
Probability is the likelihood that an event or a condition will occur
You can express probability in terms of the chance the event will occur or in percentages
Levels of Significance
A level of significance is the size of a difference that will be accepted as too large to be attributed to chance
These levels are set by the researcher at the outset of a study
Probability Samples
Probability samples are formed to ensure that each subject has an equal chance of being included, so that an unbiased sample can be obtained
Probability Samples
A sampling design explains how the subjects are chosen and should include:
Number of subjects
How they will be assessed, screened, and selected
Inclusion and exclusion criteria
Probability Samples
Random selection is accomplished by:
Identifying all possible participants
Giving every potential participant an equal chance of being selected
Probability Samples
Variations of random sampling include (sketched briefly after the list):
Stratified: randomly select from each stratum
Cluster: sample groups rather than individuals
Multistage: sample from multiple sets of clusters
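As a rough illustration of these selection schemes, the following Python sketch draws simple random, stratified, and cluster samples from a hypothetical sampling frame; the subject IDs, strata, and clinic clusters are invented for the example.

```python
# Hedged sketch of three random-selection variations on a hypothetical frame.
import random

frame = [f"subject_{i:02d}" for i in range(1, 13)]           # all possible participants
strata = {"male": frame[:6], "female": frame[6:]}             # hypothetical strata
clusters = {"clinic_A": frame[:4], "clinic_B": frame[4:8], "clinic_C": frame[8:]}

# Simple random sampling: every subject has an equal chance of selection.
simple = random.sample(frame, k=4)

# Stratified sampling: randomly select from each stratum.
stratified = [s for group in strata.values() for s in random.sample(group, k=2)]

# Cluster sampling: sample whole groups rather than individuals.
chosen_cluster = random.choice(list(clusters))
cluster_sample = clusters[chosen_cluster]

print(simple, stratified, chosen_cluster, cluster_sample, sep="\n")
```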
Nonprobability Sampling
Reasons why researchers use nonprobability samples are:
Limited resources for developing an accurate sampling frame or purchasing lists of potential subjects
Information needed to identify all potential subjects is not available
Nonprobability Sampling
Reasons why researchers use nonprobability samples are:
Limited number of subjects
Subjects are difficult to find or difficult to persuade to participate in study
Subjects do not complete the study (experimental mortality)
Nonprobability Sampling
Types of nonprobability samples include:
Quota sampling: select a specified number of participants from each group
Convenience sampling: enroll those who are available
Snowball network or referral sampling: begin with known individuals and ask them to refer others who meet selection criteria
Tracking and Reporting Sample Development
To improve the reporting of randomized controlled trials (RCTs), the Consolidated Standards of Reporting Trials (CONSORT) guidelines were developed
They include a flow diagram that can be used for tracking sample development
CONSORT Flow Diagram
Source: Altman, D. G., Schulz, K. F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., Gøtzsche, P. C., & Lang, T. (2001). The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134(8), 663-694.
Example of Flowchart
Source: Buchbinder, R., Osborne, R. H., Ebeling, P. R., Wark, J. D., Mitchell, P. M., Wriedt, C., Graves, S. D., Staples, M. P., & Murphy, B. (2009). A randomized trial of vertebroplasty for painful osteoporotic vertebral fractures. The New England Journal of Medicine, 361 ...
Statistics: What you Need to Know
Introduction
Often, when people begin a statistics course, they worry about doing advanced mathematics or their math phobias kick in. It is important to understand that statistics, as addressed in this course, is not a math course at all. The only math you will do is addition, subtraction, multiplication, and division. In these days of computer capability, you generally don't even have to do that much, since Excel is set up to do basic statistics for you. The key for the student in this course is to understand the various types of statistics, what their requirements are, what they do, and how you can use and interpret the results. Referring back to the basic components of a valid research study, which statistic a researcher uses depends on several things:
The research question itself
The sample size
The type of data you have collected
The type of statistic called for by the design
All quantitative studies require a data set. Qualitative studies may use a data set or may use observations with no numerical data at all. For the purposes of the next modules, our focus will be on quantitative studies.
Types of Statistics
There are several types of statistics available to the researcher. Descriptive statistics provide a basic description of the data set. This includes the measures of central tendency: means, medians, and modes, and the measures of dispersion, including variances and standard deviations. Descriptive statistics also include the sample size, or "N", and the frequency with which each data point occurs in the data set.
Inferential statistics allow the researcher to make predictions, estimations, and generalizations about the data set, the sample, and the population from which the sample was drawn. They allow you to draw inferences, generalizations, and possibilities regarding the relationship between the independent variable and the dependent variable to indicate how those inferences answer the research question. Researchers can make predictions and estimations about how the results will fit the overall population. Statistics can also be described in terms of the types of data they can analyze. Non-parametric statistics can be used with nominal or ordinal data, while parametric statistics can be used with interval and ratio data types.
Types of Data
There are four types of data that a researcher may collect.
Nominal Data Sets
The Nominal data set includes simple classifications of data into categories which are all of equal weight and value. Examples of categories that are equal to each other include gender (male, female), state of birth (Arizona, Wyoming, etc.), membership in a group (yes, no). Each of these categories is equivalent to the other, without value judgments.
Ordinal Data Sets
Ordinal data sets also have data classified into categories, but these categories have some form of order or ranking attached, often reflecting some sort of value judgment.
2. Sampling Theory Concepts
Population
Target Population
Accessible Population
Elements of a Population
Sampling Criteria
3. Sampling Criteria
Characteristics essential for inclusion or exclusion of members in the target population, for example:
Between the ages of 18 and 45
Ability to speak English
Dx of diabetes within the last month, or
No Hx of chronic illness
4. Sampling Theory Concepts
Sampling Plans or Methods
Sampling Error
Random Variation
Systematic Variation
5. Sampling Error
Random Variation
The expected difference in values that occurs when different subjects from the same sample are examined. The difference is random because some values will be higher and others lower than the average population values.
6. Sampling Error
Systematic Variation (Bias)
The consequence of selecting subjects whose measurement values differ in some specific way from those of the population. These values do not vary randomly around the population mean.
8. Sampling Theory Concepts
Sample Mortality
Subject Acceptance Rate: the percentage of individuals consenting to be subjects
Representativeness
9. Representativeness
Needs to evaluate:
the setting
characteristics of the subjects: age, gender, ethnicity, income, education
the distribution of values measured in the study
12. Sample Size
Factors influencing sample size
Effect size
Type of study conducted
Number of variables studied
Measurement sensitivity
Data analysis techniques
13. Power Analysis
Standard power of 0.8
Level of significance: alpha = .05, .01, .001
Effect size: .2 small; .5 medium; .8 large
Sample size (a computational sketch follows)
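The slide's conventions (power 0.8, alpha .05, and small/medium/large effect sizes) can be tied together with an a priori power analysis. A minimal sketch, assuming the statsmodels package is available, solves for the sample size per group of a two-sided independent-samples t-test:

```python
# Sketch of an a priori power analysis using the conventions on this slide.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # medium effect (Cohen's d)
                                    alpha=0.05,        # level of significance
                                    power=0.80,        # standard power
                                    alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")   # ~63.8 per group
```

With a medium effect size of 0.5 this works out to roughly 64 subjects per group; a larger effect size or a one-tailed alternative reduces the required n.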
14. Example Sample
A convenience sample of 55 adults scheduled for first-time elective CABG surgery without cardiac catheterization, who had not had other major surgery within the previous year, and who were not health professionals met the study criteria and were randomly assigned to one of two instruction conditions...
15. Example Sample
Based on a formulation of 80% power, a medium critical effect size of 0.40 for each of the dependent variables, and a significance level of .05 for one-tailed t-tests of means, a sample size of 40 was deemed sufficient to test the study hypotheses...
16. Example Sample
The study included a convenience sample of 32 post-op lung cancer patients. A power analysis was conducted to determine sample size. A minimum of 27 subjects was necessary to achieve a statistical power of 0.8 and a medium (0.5) effect size at the 0.05 level of significance....The subjects were 25 men and 7 women with an age range from 18-58 years (mean = 32.74)....
17. Critiquing the Sample
Were the sample criteria identified?
Was the sampling method identified?
Were the characteristics of the sample described?
18. Critiquing the Sample
Was the sample size identified?
Was the percent of subjects consenting to participate indicated?
Was the sample mortality identified?
Was the sample size adequate?
22. Levels of Measurement
Nominal: data categorized, but with no order or zero (e.g., gender coded as numbers)
Ordinal: categories with order, but intervals are not necessarily equal and there is no zero (e.g., pain ratings)
Interval: equal intervals, but no true zero (e.g., temperature scales)
Ratio: equal intervals with a true zero; these are real numbers, for things such as weight, volume, and length (see the sketch below)
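A brief, hypothetical sketch of how the level of measurement constrains the summaries you can compute (counts for nominal, medians for ordinal, means for interval and ratio data); the variables and values below are invented:

```python
# Rough sketch: which summaries are meaningful at each level of measurement.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["F", "M", "F", "F", "M"],          # nominal
    "pain":      [1, 3, 2, 4, 2],                    # ordinal (ranked, unequal intervals)
    "temp_c":    [36.8, 37.2, 38.5, 36.9, 37.0],     # interval (no true zero)
    "weight_kg": [61.0, 84.5, 72.3, 55.8, 90.1],     # ratio (true zero)
})

print(df["gender"].value_counts())                      # mode/frequencies only
print(df["pain"].median())                              # median is defensible for ordinal data
print(df["temp_c"].mean())                              # means for interval data
print(df["weight_kg"].mean(), df["weight_kg"].std())    # full arithmetic for ratio data
```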
26. Age
How old are you?
25-34
35-44
45-54
55 or older
What LOM?
27. Income
1 = under $35,000
2 = $35,000-$50,000
3 = $50,000-$100,000
LOM?
28. What is reliability?
Reliability is concerned with how consistently the measurement technique measures the concept of interest.
29. Types of Reliability
Stability is concerned with the consistency of repeated measures, or test-retest reliability.
30. Types of Reliability
Equivalence is focused on comparing two versions of the same instrument (alternate forms reliability) or two observers (interrater reliability) measuring the same event.
31. Types of Reliability
Homogeneity addresses the correlation of various items within the instrument, or internal consistency; determined by split-half reliability or Cronbach's alpha coefficient.
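A minimal sketch of internal consistency, computing Cronbach's alpha by hand for a hypothetical four-item scale; the item responses are invented and the formula used is the standard k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
# Cronbach's alpha for a hypothetical 4-item scale answered by 6 respondents.
import numpy as np

items = np.array([   # rows = respondents, columns = items (made-up Likert responses)
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)          # variance of each item
total_var = items.sum(axis=1).var(ddof=1)      # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```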
47. Tailedness
One-tailed test at the .05 level of significance: the entire 0.05 rejection region lies in a single tail.
Two-tailed test at the .05 level of significance: the rejection region is split, with 0.025 in each tail.
[Figure: normal curves shading the regions that are significantly different from the mean in one tail versus both tails]
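The tail areas above correspond to different critical values. A short sketch, assuming SciPy is available, shows the critical z-values for a one-tailed versus a two-tailed test at the .05 level:

```python
# One-tailed vs. two-tailed critical z-values at the .05 level of significance.
from scipy.stats import norm

one_tailed_cut = norm.ppf(0.95)     # all 0.05 in one tail, ~1.645
two_tailed_cut = norm.ppf(0.975)    # 0.025 in each tail, ~1.96

print(f"One-tailed critical z (alpha=.05): {one_tailed_cut:.3f}")
print(f"Two-tailed critical z (alpha=.05): ±{two_tailed_cut:.3f}")
```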
48. Process for Quantitative Data Analysis
• Preparation of the Data for Analysis
• Description of the Sample
• Testing the Reliability of the Instruments for the Present Sample
• Testing Comparability of Design Groups
• Exploratory Analysis of Data
• Confirmatory Analyses Guided by Objectives, Questions, or Hypotheses
• Post Hoc Analyses
49. Cleaning Data
Examine data
Cross-check every piece of data with the original data
If the file is too large, randomly check for accuracy
Correct all errors
Search for values outside the appropriate range of values for that variable (a brief sketch follows)
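A small pandas sketch of that last step, flagging values outside the plausible range for a variable; the data, column names, and the 18-90 age range are hypothetical:

```python
# Flag out-of-range values so they can be cross-checked against the original data.
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3, 4], "age": [34, 29, 210, 41]})   # 210 is a data-entry error

out_of_range = df[~df["age"].between(18, 90)]   # study-specific plausible range
print(out_of_range)                             # rows to verify and correct
```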
50. Missing Data
Identify all missing data points
Obtain missing data if at all possible
Determine the number of subjects with data missing on a particular variable
Make a judgement: are there enough subjects with data on the variable to warrant using it in statistical analyses?
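A companion sketch for counting missing data points per variable and per subject, again on invented data:

```python
# Count missing data points by variable and by subject.
import numpy as np
import pandas as pd

df = pd.DataFrame({"pain": [3, np.nan, 5, 2], "anxiety": [np.nan, np.nan, 4, 3]})

print(df.isna().sum())          # number of subjects missing each variable
print(df.isna().sum(axis=1))    # number of variables missing for each subject
```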
51. Transforming Data
Transforming skewed data so that it is linear (required by many statistics):
Squaring each value
Calculating the square root of each value
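A brief sketch of both transformations applied to a hypothetical right-skewed variable, with skewness checked before and after (SciPy assumed available):

```python
# Check skewness before and after the two transformations named above.
import numpy as np
from scipy.stats import skew

x = np.array([1, 2, 2, 3, 3, 4, 5, 9, 15, 40], dtype=float)   # right-skewed values

print(f"raw skew:         {skew(x):.2f}")
print(f"square-root skew: {skew(np.sqrt(x)):.2f}")   # pulls in the right tail
print(f"squared skew:     {skew(x ** 2):.2f}")       # squaring is suited to left-skewed data
```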
52. Calculating Variables
Involves using values from two or more variables in your data set to calculate values for a new variable to add to the data set:
Summing scale values to obtain a total score
Calculating weight by height values to get a value for Body Mass Index
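Both examples can be expressed as derived columns; the following pandas sketch (invented item scores, weights, and heights) sums scale items into a total score and computes BMI as weight in kilograms divided by height in metres squared:

```python
# Derive a total scale score and Body Mass Index as new variables.
import pandas as pd

df = pd.DataFrame({
    "item1": [3, 4, 2], "item2": [4, 5, 3], "item3": [2, 4, 3],    # hypothetical scale items
    "weight_kg": [70.0, 82.5, 59.0], "height_m": [1.75, 1.80, 1.62],
})

df["total_score"] = df[["item1", "item2", "item3"]].sum(axis=1)
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2
print(df[["total_score", "bmi"]])
```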
53. Statistical Tools
Used to allow easy calculation of statistics
Computer-based tools allow rapid analysis, but are sometimes too easy
You must still know what each type of test is for and how to use it
Don't fall into the trap of using a test just because it is easy to do now
Many papers are appearing with questionable tests just because a computer program allows the calculation
54. Statistics Exercises
Stat Trek: http://stattrek.com/
Tutorial for exercises
Understand the rationale for the selection of each test type.
Be prepared to use each test if asked, and know the major advantages of each main test.
Miller Text (Chapter 21, Fifth Edition, pgs 753-792)
The material is very thorough.
Many little-used tests are described.
Read for an idea of why other tests are available.
Don't get bogged down in the details.
55. Descriptive Statistics
Describes basic features of a data group.
Basis of almost all quantitative data analysis.
Does not try to reach conclusions (inferences), only describes.
Provides us with an easier way to see and quickly interpret data.
56. Descriptive Statistics
Data Types
Based on types of measurement
Measurement scales can show magnitude, intervals, a zero point, and direction
Equal intervals are necessary if one plans any statistical analysis of the data
Interval scales possess equal intervals and a magnitude
Ratio scales show equal intervals, magnitude, and a zero point
Ordinal scales show only magnitude, not equal intervals or a zero point
Nominal data is non-numeric (not orderable), whereas ordinal data is numeric and can be ordered, but is not based on a continuous scale of equal intervals
57. Descriptive Statistics
The goal is to be able to summarize the data in a way that is easy to understand
May be described numerically or graphically
Describe features of the distribution
Examples include distribution shape (skewed, normal (bell-shaped), modal, etc.), scale, order, and location
58. Descriptive Statistics
Location Statistics
How the data "falls"
Examples would be statistics of central tendency:
Mean: the average of numerical data, Σx / n
Median: the midpoint of data values; the value where 50% of data values are above and 50% are below (if the number of data points is even, the middle two values are averaged)
Mode: the most frequent data value; may be multi-modal if there is an identical number of maximum data values
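A quick sketch of the three location statistics using only Python's standard library and a small invented data set with an even number of values:

```python
# Mean, median, and mode(s) for a small hypothetical data set.
from statistics import mean, median, multimode

data = [2, 3, 3, 5, 7, 8, 8, 10]

print(mean(data))       # sum(x) / n
print(median(data))     # middle two values averaged because n is even
print(multimode(data))  # all most-frequent values, e.g. [3, 8] when multi-modal
```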
59. Descriptive Statistics
Location Statistics
Data outliers may need to be accounted for and possibly eliminated
This can be done by trimming or weighting the mean to effectively eliminate the effect from outliers
60. Descriptive Statistics
Count Statistics
One of the simplest means of expressing an idea
Works for ordinal and nominal data
61. Descriptive Statistics
Statistics of Scale
Measure how much dispersal there is in a data set (variability)
Example statistics include sample range, variance, standard deviation (the square root of the variance), and SEM (SD / square root of N)
Outliers can influence variance and standard deviation greatly, so try to avoid their use if there are lots of outliers that cannot be weighted out
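A short sketch computing the scale statistics named above (range, sample variance, standard deviation, and SEM) for an invented sample:

```python
# Dispersion statistics for a hypothetical sample.
import numpy as np

x = np.array([12.0, 15.0, 14.0, 10.0, 18.0, 16.0])

rng = x.max() - x.min()
var = x.var(ddof=1)             # sample variance
sd = x.std(ddof=1)              # standard deviation = sqrt(variance)
sem = sd / np.sqrt(len(x))      # standard error of the mean = SD / sqrt(N)

print(f"range={rng}, variance={var:.2f}, SD={sd:.2f}, SEM={sem:.2f}")
```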
62. Descriptive Statistics
Distribution Shape Statistics
Determine how far from "normal" the distribution of data is, based on normal (Gaussian) distribution shapes
Skewness measures how "tailed" the data distribution is (positive to the right, negative to the left)
Kurtosis measures whether the "tail" is heavy or light
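A minimal sketch of both shape statistics, assuming SciPy is available; the data are invented and right-skewed:

```python
# Skewness and kurtosis of a hypothetical distribution.
import numpy as np
from scipy.stats import skew, kurtosis

x = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 12], dtype=float)

print(f"skewness: {skew(x):+.2f}")       # positive = longer right tail
print(f"kurtosis: {kurtosis(x):+.2f}")   # excess kurtosis; positive = heavy tails
```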
63. Inferential Statistics
Attempts to come to conclusions about a data set that are not exactly stated by the data (inferred)
Many tests use probability to help determine if the data point to a likely conclusion
Often used to compare two groups of data to see if they are 'statistically different'
Often used to decide whether or not a conclusion one is trying to reach from the data set is reliable (within statistical probability)
64. Inferential Statistics
The simplest form is the comparison of average data between two data sets to see if they are different
Student's t-test is often used to compare differences between 2 groups
Usually one control group and one experimental group
There should be only one altered variable in the experimental group
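A small sketch of this simplest case, an independent-samples t-test between a hypothetical control group and an experimental group (SciPy assumed available):

```python
# Independent-samples t-test comparing two hypothetical groups.
from scipy.stats import ttest_ind

control      = [72, 75, 71, 78, 74, 73, 76]
experimental = [79, 82, 77, 85, 80, 78, 83]

t_stat, p_value = ttest_ind(control, experimental)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < .05 suggests a statistically significant difference
```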
65. Inferential Statistics
The most common inferential statistical tests belong to the General Linear Model family
Data are fit to an equation with which a wide variety of research outcomes can be described
Problems with these types of analysis tools usually come from the wrong choice of the equation used
Errors from using the wrong equation can result in the data conclusions being biased one way or the other, leading to wrongly accepting or rejecting the null hypothesis
66. Inferential Statistics
Common General Linear Model tests include:
Student's t-test
Analysis of variance (ANOVA)
Analysis of covariance (ANCOVA)
Regression analysis
Multivariate factor analysis
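As a rough illustration of two members of this family, the sketch below runs a one-way ANOVA across three invented groups and fits a simple linear regression to an invented dose-response series (SciPy assumed available):

```python
# One-way ANOVA and simple linear regression on hypothetical data.
from scipy.stats import f_oneway, linregress

group_a = [5.1, 4.8, 5.5, 5.0]
group_b = [6.2, 6.0, 5.8, 6.4]
group_c = [4.5, 4.9, 4.2, 4.7]

f_stat, p_anova = f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

dose     = [1, 2, 3, 4, 5, 6]                   # independent variable
response = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]      # dependent variable
fit = linregress(dose, response)
print(f"Regression: slope = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.3f}")
```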
67. Inferential Statistics
The type of research design used also determines the type of testing which can be done:
Experimental analysis
Usually involves comparison of one or more groups against a control, and thus t-tests or ANOVA are the most commonly used
Quasi-experimental analysis
Typically lacks a control group, and thus the random assignment that is usually used to allocate individuals to groups
These types of analyses are much more complex, to compensate for the lack of random assignment