The document discusses the concepts of reliability, validity, and utility in research. It defines reliability as providing consistent results, validity as measuring what is intended, and utility as being practical to implement. The document then examines various methods for establishing reliability, such as test-retest reliability and internal consistency. It also explores different aspects of validity like content validity, criterion validity, and construct validity. Finally, it notes factors that determine the utility or practicality of a measurement tool, such as administration time and costs.
A presentation on validity and reliability assessment of questionnaires in research, including the types of validity and reliability and the steps for achieving them.
Reliability refers to how consistent and stable your research results are, and how well they can be replicated by other researchers. Validity refers to how well your research measures what it intends to measure, and how accurately it reflects the reality of the phenomenon you are studying.
4. [Figure] Reports produced on Day 1, Day 2 and Day 3: a report that is the same every time illustrates RELIABILITY; a report that is the same as what was asked illustrates VALIDITY. Reports that differ every time, or differ from what was asked, show a lack of these qualities.
5. What do reliability, validity and utility mean?
• The same information is obtained every time the same situation arises - RELIABILITY
• The information is what is wanted - VALIDITY
• It is practically possible to obtain the information - UTILITY
• These qualities help us trust a person or a machine
• They facilitate outcomes such as making friends, hiring employees, getting service deals for equipment, etc.
7. Gareis and Grant (2008), Teacher-made Assessments: How to Connect Curriculum, Instruction, and Student Learning, p. 33; taken from https://reliablerubrics.com/2015/03/18/what-is-reliability-and-validity/
8. What are reliability, validity and utility?
• Reliability is related to accuracy and consistency
• Validity is related to success in measuring what is intended
• If a tool is valid, it is also reliable; BUT if a tool is reliable, it is not necessarily valid
• A tool may be reliable and valid, but it should also be practically possible to use it
• When an experiment, test, or measuring procedure is reliable and valid, results from replicated studies can support claims of generalisation of findings and contribute to research-based evidence
10. Sources of ‘error’ in measurement
• ‘Error’ in measurement means a variation from the true reading
• ‘Errors’ in measurement can arise for different reasons:
• The sampling of items - the type of items, their relation to the construct and its aspects, and the number of items
• How the tool is used
• How participants respond - guessing, marking answers incorrectly, skipping questions by mistake, misinterpreting test instructions
These sources give rise to two types of error - random error and systematic error - described on the next slide.
11. Any measurement can have two types of error (variations in repeated readings):
• Random error - caused by chance (e.g., the responder is distracted); addressed by checking RELIABILITY
• Systematic error - caused by a specific reason (e.g., the responder already has experience in the construct being tested); addressed by checking VALIDITY
12. How to test reliability? Test the measurement tool for:
• Stability - is the tool giving the same result on repeated measurements?
• Internal consistency - are the items in the tool measuring the same construct?
• Equivalence - when two people administer the tool, will the results be the same? If we have two versions of the tool, will they give equivalent results?
13. Stability
• Question asked: is the instrument or data collection (measurement) tool able to give the same results on repeated administrations?
• The method: test-retest reliability
• The instrument is administered twice (about 15 days apart), and the correlation coefficient between the two sets of readings is used as the reliability coefficient
• Advantage: shows consistency across time
• Limitations: can be affected by memory if the duration between tests is short, and by maturation if the duration between tests is long
14. E.g., test-retest correlation for two sets of scores of many college students on the Rosenberg Self-Esteem Scale. Pearson’s r for these data is +.95.
https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/
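The test-retest procedure can be sketched in a few lines of Python. The scores below are invented for illustration (they are not the Rosenberg data behind the +.95 figure):

```python
# Test-retest reliability: correlate two administrations of the same
# instrument, given to the same respondents about 15 days apart.
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [22, 25, 18, 30, 27, 20, 24, 29]  # first administration
time2 = [23, 24, 17, 31, 26, 21, 25, 28]  # second, ~15 days later

print(f"test-retest reliability coefficient: {pearson_r(time1, time2):.2f}")
```

A coefficient near 1 indicates that respondents keep roughly the same rank order across the two administrations, which is what stability demands.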
15. Internal Consistency
• Question asked: are the items in the tool measuring the same construct or concept, or parts of the same concept?
• The methods: split-half reliability, Cronbach’s alpha (coefficient alpha), and the Kuder-Richardson Formula 20 (KR-20)
• For split-half reliability, the items are split into two halves, and the correlation coefficient between the two sets of readings from the two halves is calculated as the split-half reliability coefficient
• Advantage: not affected by memory or maturation effects
• Limitation: does not consider fluctuations across time
16. E.g., split-half correlation between the odd- and even-numbered scores of many college students on the Rosenberg Self-Esteem Scale. Pearson’s r for these data is +.88.
Find the split-half correlations for all possible combinations of halves and take their mean; conceptually, that mean is Cronbach’s alpha.
https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/
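Both internal-consistency measures can be computed directly. The sketch below uses made-up scores for a hypothetical 4-item scale answered by six respondents, applies the standard Spearman-Brown correction to the split-half correlation (since each half is only half the test length), and computes Cronbach's alpha from item and total-score variances:

```python
# Internal consistency for a hypothetical 4-item scale, six respondents.
scores = [          # rows = respondents, columns = items
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)  # population variance

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / ((variance(x) * variance(y)) ** 0.5 * n)

# Split-half: correlate odd-item and even-item subtotals, then apply the
# Spearman-Brown correction for the halved test length.
odd  = [row[0] + row[2] for row in scores]
even = [row[1] + row[3] for row in scores]
r_half = pearson_r(odd, even)
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = len(scores[0])
totals = [sum(row) for row in scores]
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
alpha = k / (k - 1) * (1 - sum(item_vars) / variance(totals))

print(f"split-half (Spearman-Brown corrected): {split_half:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")
```

With real data one would normally use a statistics package, but the arithmetic is no more than this.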
17. Equivalence
Questions asked:
• When two people administer the tool, will the results be the same?
• If we have two versions of the tool, will they give equivalent results?
The methods: inter-rater reliability and alternate-form test reliability.
The statistics used include Kendall’s tau, intra-class correlations, and Rasch’s item-response model.
Bannigan & Watson, 2009; Drost, 2011
18. How to make tests more reliable?
• Write items clearly
• Make test instructions easily understood
• Train raters effectively by making the scoring rules clear
• Add more items
• Aim for a reliability coefficient of 0.7 to 0.8; when the situation demands, a coefficient of 0.9 can be sought
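"Add more items" works because lengthening a test raises its reliability in a predictable way, given by the Spearman-Brown prophecy formula from classical test theory (not stated on the slide, but standard):

```python
# Spearman-Brown prophecy formula: predicted reliability of a test whose
# length is multiplied by a factor n, given current reliability r.
def spearman_brown(r, n):
    return n * r / (1 + (n - 1) * r)

# Doubling a test whose reliability is 0.6 lifts it into the 0.7-0.8 band:
print(round(spearman_brown(0.6, 2), 2))  # 0.75
```

The formula assumes the added items behave like the existing ones; in practice, padding a test with poor items can lower reliability instead.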
20. Most attributes to be measured in social-science fields such as education are construct variables. Examples: happiness, intelligence, anxiety, academic achievement, fear, personality, etc.
Construct variables are variables or attributes that are abstract in nature; because they cannot be universally defined, they are not understood in the same way by everyone. Construct variables therefore have to be operationally defined based on theory about them.
22. What are the types of validity?
• Content validity, at two levels: face validity and content validity
• Criterion validity: concurrent validity and predictive validity
• Construct validity: convergent validity, divergent validity, factorial validity, and discriminant validity
• Statistical conclusion validity
23. Content validity at two levels (Bannigan & Watson, 2009)
Face validity
• The test tool is checked by subject experts or researchers to see if the items are reasonable and relevant
• The focus is to confirm the subject’s acceptance of the test
• Informal
Content validity
• A critical review by an expert panel checks whether the content of the test tool matches all aspects of the construct, for clarity and completeness
• Comparison with the literature, or both of the above
• Formal
• The content validity index (CVI) or content validity ratio (CVR) can be used
24. Calculation of the content validity index
• Calculate the item-wise content validity index (I-CVI): the number of experts who rate the item ‘very relevant’ divided by the total number of experts
• This measure is calculated for each item on the tool
• Items with I-CVI above 0.79 are taken as relevant
• Items with I-CVI between 0.70 and 0.79 are taken as needing revision
• Items with I-CVI below 0.70 are eliminated
(Rodrigues, Adachi and Beattie, 2017)
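A minimal sketch of the I-CVI calculation, applying the cut-offs above. The item names and expert ratings are hypothetical; following common CVI practice, ratings of 3 or 4 on a 4-point relevance scale are counted as relevant:

```python
# I-CVI per item: share of experts rating the item relevant
# (3 or 4 on a 4-point relevance scale). Four hypothetical experts.
ratings = {                     # item -> one rating per expert
    "item_1": [4, 4, 3, 4],
    "item_2": [3, 4, 3, 2],
    "item_3": [2, 3, 1, 2],
}

def i_cvi(item_ratings, threshold=3):
    relevant = sum(1 for r in item_ratings if r >= threshold)
    return relevant / len(item_ratings)

for item, rs in ratings.items():
    cvi = i_cvi(rs)
    if cvi > 0.79:              # cut-offs from Rodrigues et al., 2017
        verdict = "relevant"
    elif cvi >= 0.70:
        verdict = "needs revision"
    else:
        verdict = "eliminated"
    print(f"{item}: I-CVI = {cvi:.2f} -> {verdict}")
```

Here item_1 (4/4 experts) is retained, item_2 (3/4 = 0.75) is flagged for revision, and item_3 (1/4 = 0.25) is eliminated.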
25. Criterion validity: comparison to established ‘criteria’
Concurrent validity
• The test tool is compared to an already established ‘criterion’ by conducting both tests at the same time
• The procedure is applied to a scale under development
• The correlation of each question with the criterion score is used to refine the questionnaire
• E.g., a culture-relevant short test for self-esteem is created and compared to an existing self-esteem tool
Predictive validity
• The test tool is compared to an already established ‘criterion’ by conducting the test tool at one time and the criterion tool at a future time
• The procedure is used to check whether the test can predict performance in the construct
• The correlation between test scores and targeted outcomes is used
• E.g., a TET test score is used to predict how well teachers perform in their classes
26. Construct validity: correlation between the test tool and the construct under investigation
• It is an indirect approach
• Relevant when the scale or test tool has been developed on the assumption of a particular theory
• Starts with defining the topic or construct to be assessed, including:
• Hypotheses about correlations with other instruments
• Which respondents would score low or high
• Other findings that can be predicted from the scores
27. Construct validity
• Convergent validity: how similar is this scale to other scales measuring the same or related concepts?
• Divergent validity: how different is this scale from scales measuring different concepts?
• Factorial validity: what are the factors that are exactly covered by the items on the scale? Uses factor analysis
• Discriminant validity: can the scale discriminate among people with differing values on the construct? Uses discriminant analysis
Factor analysis comes in two forms:
• Exploratory factor analysis - many variables; find the variables that relate to the construct
• Confirmatory factor analysis - the variables and their relations are known; use the data to test the hypothesised relations
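A rough numeric illustration of convergent versus divergent validity (all scores invented): a new scale should correlate strongly with an established scale for the same construct, and weakly with a measure of an unrelated construct.

```python
# Convergent vs divergent validity check with hypothetical data:
# a new self-esteem scale against an established self-esteem scale
# (related construct) and against height (unrelated construct).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

new_scale   = [12, 18, 25, 9, 22, 15]
established = [14, 19, 27, 8, 24, 16]         # same construct
height_cm   = [174, 170, 165, 164, 169, 166]  # unrelated construct

convergent = pearson_r(new_scale, established)  # should be high
divergent  = pearson_r(new_scale, height_cm)    # should be near zero
print(f"convergent r = {convergent:.2f}, divergent r = {divergent:.2f}")
```

In a real validation study these correlations would come from administering all the instruments to the same sample and would be reported alongside their significance tests.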
30. Statistical conclusion validity
• Question asked: is the inference drawn about the relationship tested trustworthy and dependable?
• Threats to statistical conclusion validity are:
• Low statistical power
• Violation of assumptions
• Reliability of measures
• Reliability of treatment
• Random irrelevancies of the experimental setting
• Random heterogeneity of respondents
(Drost, 2011, p. 115)
32. Utility or practicality of a test: can my test tool or scale actually be used in the field?
• Time to administer
• Ease of administering
• Easy language
• Does not cause boredom in respondents, which can increase error
• Needs to be tested in different settings as part of reliability
• The test should be cost-effective
33. Summary
• Reliability and validity are important checks to ensure the trustworthiness of research findings
• Reliability addresses random errors in measurement
• Validity addresses systematic errors in measurement
• Utility addresses the practicability of using the measurement tool
• Reliability measures take the form of correlation coefficients
• Validity measures take the form of comments, reviews, and correlation coefficients
• Correlation coefficients, factor analysis and discriminant analysis can be obtained using statistical software such as SPSS, PSPP and JASP
34. References
• Reliability and Validity of Measurement. Research Methods in Psychology. https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/
• Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105-123. https://www3.nd.edu/~ggoertz/sgameth/Drost2011.pdf
• Bannigan, K., & Watson, R. (2009). Reliability and validity in a nutshell. Journal of Clinical Nursing, 18(23), 3237-3243. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1365-2702.2009.02939.x
• Rodrigues, I. B., Adachi, J. D., Beattie, K. A., et al. (2017). Development and validation of a new tool to measure the facilitators, barriers and preferences to exercise in people with osteoporosis. BMC Musculoskeletal Disorders, 18, 540. https://doi.org/10.1186/s12891-017-1914-5