251109 rm-c.s.-assessing measurement quality in quantitative studies
Published in: Technology, Business
Transcript

  • 1. Assessing Measurement Quality in Quantitative Studies. Presented by: Mrs. Christy Simpson, Professor, Maternity Nursing
  • 2. Definitions:
    - Quantitative data: information collected in quantified (numeric) form.
    - Quantitative research: the investigation of phenomena that lend themselves to precise measurement and quantification, often involving a rigorous and controlled design.
    - Quantitative analysis: the manipulation of numeric data through statistical procedures for the purpose of describing phenomena or assessing the magnitude and reliability of relationships among them.
  • 3. Measurement:
    - Quantitative studies derive data through the measurement of variables.
    - Measurement involves the assignment of numbers to represent the amount of an attribute present in an object or person, using a specified set of rules.
    Principles of measurement:
    - Classical measurement theory, e.g., for psychosocial constructs such as depression or social support.
    - Alternative measurement theory (item response theory), e.g., for cognitive constructs such as achievement or ability.
  • 4. Advantages of measurement:
    - Measurement is a language of communication.
    - Numbers are less vague than words and therefore communicate information more precisely, e.g., "80 kg" is more precise than "obese".
  • 5. Errors of measurement:
    - Instruments that are not perfectly accurate yield measurements containing some error.
    - Within classical measurement theory, any observed (obtained) score can be decomposed conceptually into two parts:
      a) A true component
      b) An error component
    Obtained score = true score ± error
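The decomposition of an obtained score into a true component plus error can be illustrated with a small simulation; this is only a sketch, and the true score (25) and error standard deviation (3) are invented values for a hypothetical anxiety scale:

```python
import random

random.seed(1)  # make the simulated draws reproducible

def observed_score(true_score, error_sd):
    """Obtained score = true score + random measurement error."""
    return true_score + random.gauss(0, error_sd)

# One hypothetical respondent with a true anxiety score of 25,
# measured five times with an imperfect instrument (error SD = 3):
scores = [observed_score(25, 3) for _ in range(5)]
print(scores)
```

The observed scores scatter around the true score; the smaller the error component, the tighter the scatter, which is exactly what the reliability indices on the following slides quantify.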
  • 6. Many factors contribute to errors of measurement. Some are random (variable); others are systematic and represent bias. The most common influences on measurement error are:
    1. Situational contaminants: scores can be affected by the conditions under which they are produced, e.g., a participant's awareness of an observer's presence (reactivity). Other environmental factors include temperature, lighting, etc.
  • 7. 2. Transitory personal factors: a person's score can be influenced by such temporary personal states as fatigue, hunger, anxiety, or mood.
    3. Response-set biases: relatively enduring characteristics of respondents can interfere with accurate measurement, e.g., social desirability, acquiescence.
    4. Administration variations: alterations in the method of collecting data from one person to the next.
  • 8. Errors cont'd:
    5. Instrument clarity: if the directions for obtaining measures are poorly understood, scores may be affected by misunderstanding, e.g., a self-report instrument may be interpreted differently by different respondents.
    6. Item sampling: errors can be introduced by the sampling of items used in the measure.
    7. Instrument format: technical characteristics of an instrument, e.g., open-ended questions yield different information than closed-ended ones.
  • 9. Criteria to assess the quality of a quantitative instrument:
    Reliability: an instrument's reliability is the consistency with which it measures the target attribute. The less variation an instrument produces in repeated measurements, the higher its reliability. The three key aspects of reliability are stability, internal consistency, and equivalence.
  • 10. Stability: the stability of an instrument is the extent to which similar results are obtained on two separate occasions. Assessments of an instrument's stability involve procedures that evaluate test–retest reliability, e.g., administer the same measure to a sample twice and then compare the scores by computing a reliability coefficient, an index of the magnitude of the test's reliability. The statistic used is the correlation coefficient.
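The test–retest procedure can be sketched in Python; the scores below are invented for a hypothetical anxiety scale administered to six participants two weeks apart:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical anxiety scores, first and second administration
test1 = [12, 18, 25, 9, 30, 21]
test2 = [14, 17, 26, 10, 28, 22]
print(round(pearson_r(test1, test2), 2))
```

A coefficient near +1.00, as here, indicates a stable measure; a coefficient near .00 would indicate that retest scores bear little relation to the first administration.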
  • 11. How to read a correlation coefficient. The possible values for a correlation coefficient range from −1.00 through .00 to +1.00. Two relationships:
    Positive relationship: increases in one variable are associated with increases in the other; a perfect positive relationship yields a value of +1.00. e.g., scores on an anxiety scale administered twice, two weeks apart.
  • 12. Negative relationship: when two variables are inversely related, increases in one variable are associated with decreases in the second variable; a perfect negative relationship yields a value of −1.00. e.g., the more fatigued respondents are, the lower their test scores.
    The higher the coefficient, the more stable the measure. The reliability coefficient is higher for short-term retests than for long-term retests.
  • 13. Internal consistency: scales designed to measure an attribute ideally are composed of items that measure that attribute and nothing else. An instrument may be said to be internally consistent, or homogeneous, to the extent that its items measure the same trait, e.g., a depression scale. The most widely used method for evaluating internal consistency is coefficient alpha (Cronbach's alpha); values normally range from .00 to +1.00.
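Cronbach's alpha can be computed directly from item scores using α = k/(k−1) · (1 − Σ item variances / total-score variance); the data below are invented for a hypothetical 3-item depression scale answered by five respondents:

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Coefficient alpha; items is a list of per-item score lists."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]        # each person's total score
    item_var = sum(variance(it) for it in items)        # sum of item variances
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item depression scale, five respondents
items = [
    [3, 4, 2, 5, 1],   # item 1 scores
    [3, 5, 2, 4, 1],   # item 2 scores
    [2, 4, 3, 5, 2],   # item 3 scores
]
print(round(cronbach_alpha(items), 2))
```

A high alpha, as here, indicates that the items vary together and so appear to tap the same trait.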
  • 14. Equivalence: the degree to which two or more independent observers or coders agree about the scoring on an instrument. Inter-rater reliability can be assessed; when ratings are dichotomous, the proportion of agreements is calculated as:
    number of agreements / (number of agreements + number of disagreements)
    The statistic used is Cohen's kappa, which adjusts for chance agreement; a multirater kappa is used when there are more than two raters.
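The agreement proportion and Cohen's kappa can be sketched as follows; the two raters' dichotomous codes are invented for illustration:

```python
def agreement_stats(r1, r2):
    """Proportion agreement and Cohen's kappa for dichotomous (0/1) ratings."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    p1, p2 = sum(r1) / n, sum(r2) / n              # each rater's rate of '1' codes
    pe = p1 * p2 + (1 - p1) * (1 - p2)             # agreement expected by chance
    kappa = (po - pe) / (1 - pe)                   # chance-corrected agreement
    return po, kappa

# Hypothetical codes from two observers for ten observed behaviors
rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
po, kappa = agreement_stats(rater1, rater2)
print(po, round(kappa, 2))
```

Note that kappa is lower than the raw agreement proportion because some agreement would occur by chance alone.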
  • 15. Factors affecting reliability:
    - More items tapping the same concept should be added.
    - Items that have no discriminating power should be removed.
  • 16. Validity: the degree to which an instrument measures what it is supposed to measure. A measuring device that is unreliable cannot possibly be valid. Validation efforts should be viewed as evidence-gathering enterprises: the more evidence gathered, using various methods to assess validity, the stronger the inference.
  • 17. Types of validity:
    1. Face validity: refers to whether the instrument looks as though it is measuring the appropriate construct. It is established by consulting experts and persons with the condition of interest.
    2. Content validity: concerns the degree to which an instrument has an appropriate sample of items for the construct being measured and adequately covers the construct domain. Content validity is relevant for both affective and cognitive measures.
  • 18. Content validity cont'd: content validity is necessarily based on judgment; there are no fully objective methods to ensure it. Use a panel of substantive experts to evaluate and document the content validity of new instruments, with validation by a minimum of three experts.
  • 19. Calculate the content validity index (CVI): experts rate items on a 4-point scale of relevance; the item-level CVI (I-CVI) is computed as the number of experts giving a rating of 3 or 4, divided by the total number of experts. An I-CVI of .80 is considered an acceptable value. A scale-level CVI (S-CVI) can also be computed.
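The I-CVI computation described above can be sketched directly; the expert ratings are invented for one hypothetical item:

```python
def item_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4 on the 4-point relevance scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

# Hypothetical relevance ratings from five experts for one item:
# four experts rate it relevant (3 or 4), one rates it a 2
ratings = [4, 3, 4, 2, 4]
print(item_cvi(ratings))  # 0.8, the minimum acceptable value
```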
  • 20. 3. Concurrent validity: refers to a measurement device's ability to vary directly with a measure of the same construct or inversely with a measure of an opposite construct. It allows you to show that your test is valid by comparing it with an already-validated test.
  • 21. 4. Criterion-related validity: determines the relationship between an instrument and an external criterion. The instrument is said to be valid if its scores correlate highly with scores on the criterion. Two types of criterion-related validity:
    a) Predictive validity: refers to the adequacy of an instrument in differentiating between people's performance on some future criterion, e.g., high school grades as a predictor of nursing school performance.
    b) Concurrent validity (see above): the instrument's correspondence with a criterion measured at the same point in time.
  • 22. 5. Construct validity: a key criterion for assessing the quality of a study. Sometimes also called factorial validity, it has to do with the logic of the items that comprise measures of abstract concepts. The key construct validity questions are:
    - What is this instrument really measuring?
    - Does it adequately measure the abstract concept of interest?
  • 23. Construct validity cont'd: a good construct has a theoretical basis, which is translated through clear operational definitions involving measurable indicators. Construct validation involves logical analysis and hypothesis testing.
  • 24. Methods of assessing construct validity:
    1. Known-groups technique: the instrument is administered to groups hypothesized to differ on the critical attribute because of some known characteristic, e.g., anxiety among primigravid vs. multigravid women in labour.
    2. Hypothesized relationships: testing hypothesized relationships, often on the basis of theory, e.g., smoking → cancer.
  • 25. 3. Convergent and discriminant validity: an important construct-validation tool is the multitrait–multimethod matrix (MTMM) method, which involves convergence and discriminability.
    - Convergence is evidence that different methods of measuring a construct yield similar results, e.g., self-report, observation, etc.
    - Discriminability is the ability to differentiate the construct from other, similar constructs, e.g., psychological vs. physical problems (HIV).
  • 26. 4. Factor analysis: a method for identifying clusters of related variables, that is, the dimensions underlying a central construct. It is a statistical procedure for identifying unitary clusters of items, e.g., assessing nursing students' confidence in caring for mentally ill patients.
  • 27. Criteria for screening and diagnostic instruments: sensitivity and specificity.
    - Sensitivity is the instrument's ability to identify a case correctly (its rate of yielding true positives): true positives divided by all real positives, e.g., smokers with high cotinine / all real smokers.
    - Specificity is the instrument's ability to identify non-cases correctly (its rate of yielding true negatives): true negatives divided by all real negatives, e.g., teenagers who reported that they did not smoke and had negative cotinine / all real non-smokers.
  • 28. Urinary cotinine level vs. self-reported smoking:

    Self-reported smoking | Positive cotinine      | Negative cotinine      | Total
    Yes, smoked           | A (true positive) = 20 | B (false positive) = 10 | A+B = 30
    No, did not smoke     | C (false negative) = 20 | D (true negative) = 50 | C+D = 70
    Total                 | A+C = 40               | B+D = 60               | 100

    Sensitivity = A/(A+C) = .50
    Specificity = D/(B+D) = .83
    Positive predictive value = A/(A+B) = .67
    Negative predictive value = D/(C+D) = .71
    Likelihood ratio, positive (LR+) = sensitivity/(1 − specificity) ≈ 3.0
    Likelihood ratio, negative (LR−) = (1 − sensitivity)/specificity = .60
    The likelihood ratio summarizes the relationship between sensitivity and specificity in a single number.
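The indices on this slide can be recomputed from the four cell counts; a minimal sketch using the cotinine table's own values:

```python
def screening_indices(a, b, c, d):
    """Screening indices from a 2x2 table:
    a = true positives, b = false positives,
    c = false negatives, d = true negatives."""
    sens = a / (a + c)
    spec = d / (b + d)
    return {
        "sensitivity": sens,            # true-positive rate
        "specificity": spec,            # true-negative rate
        "ppv": a / (a + b),             # positive predictive value
        "npv": d / (c + d),             # negative predictive value
        "lr_plus": sens / (1 - spec),   # likelihood ratio, positive
        "lr_minus": (1 - sens) / spec,  # likelihood ratio, negative
    }

# Counts from the cotinine example: A=20, B=10, C=20, D=50
idx = screening_indices(20, 10, 20, 50)
print(idx)
```

Using the unrounded specificity (50/60), LR+ works out to exactly 3.0; the slide's 2.99 reflects rounding of intermediate values.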
  • 29. Other criteria to assess quantitative measures:
    Efficiency: one aspect of efficiency is the number of items on the instrument. Long instruments tend to be more reliable than shorter ones. The Spearman–Brown formula estimates how reliable the scale would be with fewer items. There are six further criteria for checking quality, all related to reliability and validity.
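The Spearman–Brown prophecy formula, ρ' = k·ρ / (1 + (k − 1)·ρ), where k is the factor by which the scale's length changes, can be sketched as follows; the 20-item scale and its .90 reliability are invented values for illustration:

```python
def spearman_brown(r, length_factor):
    """Predicted reliability when scale length is multiplied by length_factor."""
    return length_factor * r / (1 + (length_factor - 1) * r)

# A hypothetical 20-item scale with reliability .90, shortened
# to 10 items (length_factor = 0.5):
print(round(spearman_brown(0.90, 0.5), 2))
```

The predicted reliability of the shortened scale (.82) is lower than the full scale's .90, illustrating the trade-off between efficiency and reliability.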
  • 30. Six criteria:
    1. Comprehensibility: subjects and researchers should be able to comprehend the behaviors required to secure accurate and valid measures.
    2. Precision: an instrument should discriminate between people with different amounts of an attribute as precisely as possible.
    3. Speededness: researchers should allow adequate time to obtain complete measurements without rushing the measuring process.
  • 31. Criteria cont'd:
    4. Range: the instrument should be capable of achieving a meaningful measure from the smallest expected value of the variable to the largest.
    5. Linearity: a researcher normally strives to construct measures that are equally accurate and sensitive over the entire range of values.
    6. Reactivity: the instrument should avoid affecting the attribute being measured.
  • 32. Conclusion:
    - Quantitative research studies are common and relatively straightforward to conduct and analyze.
    - The quality of the instrument must be assessed; reliability and validity are the main qualities.
    - Measure carefully to make study findings more relevant for nursing and midwifery practice.
