Concepts and Measurement


Published in: Education

  1. 1. Concepts <ul><li>creating explanations </li></ul><ul><li>building concepts </li></ul><ul><li>measurement </li></ul>
  2. 2. creating explanations <ul><li>Independent Variable </li></ul><ul><li>Explanatory Variable </li></ul><ul><li>Driving Force </li></ul>Dependent Variable / Outcome; X + Y → Z
  3. 3. units <ul><li>Unit of analysis: the level at which the dependent variable is measured; the case </li></ul><ul><ul><li>Ex: Nation (proportion Christian) </li></ul></ul><ul><li>Unit of observation: the unit on which data are gathered </li></ul><ul><ul><li>Ex: Individual within the nation (data on that individual’s belief) </li></ul></ul>
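The distinction can be sketched in code. This is a minimal, hypothetical example (nations "A" and "B" and a binary `is_christian` indicator are invented for illustration) in which individual-level observations are aggregated into one nation-level case:

```python
from collections import defaultdict

# Hypothetical individual-level observations: (nation, is_christian).
# The unit of observation is the individual; the unit of analysis is
# the nation, whose dependent variable is the proportion Christian.
observations = [
    ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals = defaultdict(lambda: [0, 0])  # nation -> [christians, respondents]
for nation, is_christian in observations:
    totals[nation][0] += is_christian
    totals[nation][1] += 1

# One case per nation: the unit of analysis.
proportion_christian = {n: round(c / t, 2) for n, (c, t) in totals.items()}
print(proportion_christian)  # {'A': 0.67, 'B': 0.25}
```

The dependent variable exists only at the nation level, even though every data point was gathered from individuals.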
  4. 4. what is a concept? <ul><li>Gerring: “an alignment among three intertwined components: the term (a linguistic label comprised of one or a few words), the phenomena to be defined (the referents, extension, or denotation of a concept), and the properties or attributes that define those phenomena (the definition, intension, or connotation of a concept)” (39) </li></ul>
  5. 5. How do we build (background/systematized) concepts? <ul><li>a process of adjustment to maximize fulfillment of the criteria for a good concept (Gerring; see table) </li></ul><ul><li>“successful definition involves the identification of attributes that provide necessary and sufficient conditions for locating examples of the term” </li></ul><ul><li>avoid homonymy, synonymy, and instability </li></ul><ul><li>strategies </li></ul><ul><ul><li>choose a definition from a classic work </li></ul></ul><ul><ul><li>find a causal-explanatory understanding of a term (the term defined by what explains it or what it explains) </li></ul></ul><ul><ul><li>trace the intellectual history of the concept (genealogy is a version of this) </li></ul></ul><ul><ul><li>focus on specific definitional attributes, grouping attributes that are similar </li></ul></ul>
  6. 6. “ideology”
  7. 8. from concept to indicator
  8. 9. Measurement: sensitive, valid, reliable <ul><li>Sensitivity: extent to which cases are homogeneous within each value of your variable; precision in measures </li></ul><ul><ul><li>Keep the variable as sensitive as possible </li></ul></ul><ul><ul><li>But keep in mind the limits of your measurement method </li></ul></ul>There is a sensitivity-reliability trade-off in self-reports with rating scales. People can generally distinguish up to about seven scale positions; offering more than that introduces greater error.
  9. 10. measurement levels <ul><li>nominal: variation in kind or type; categories </li></ul><ul><ul><li>Example: married, living with partner, never married, separated, divorced, widowed </li></ul></ul><ul><li>ordinal: variation in degree along a continuum; rank order only </li></ul><ul><ul><li>Example: support for Obama (strongly favor, favor, neutral, oppose, strongly oppose) </li></ul></ul><ul><li>interval: variation in degree along a continuum; equal intervals between positions, but the zero point is arbitrary </li></ul><ul><ul><li>Example: year in which an event occurred </li></ul></ul><ul><li>ratio: variation in degree along a continuum; numbers signify absolute position on the continuum and the number zero is meaningful </li></ul><ul><ul><li>Example: number of battlefield deaths; age of a person </li></ul></ul>
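A short sketch of how data at each level might be coded, using invented values; the point is which comparisons each level licenses:

```python
# Hypothetical values for one case, coded at each measurement level.
marital_status = "married"   # nominal: categories with no order
obama_support = 4            # ordinal: 1 = strongly oppose ... 5 = strongly favor
event_year = 1980            # interval: differences meaningful, zero arbitrary
battle_deaths = 120          # ratio: zero means "none", so ratios are meaningful

assert marital_status != "divorced"   # nominal licenses equality tests only
assert obama_support > 3              # ordinal adds rank order (not distance)
years_elapsed = 2010 - event_year     # interval adds meaningful differences
assert years_elapsed == 30
assert battle_deaths / 60 == 2        # ratio adds meaningful ratios
# By contrast, "year 2000 is twice year 1000" is NOT meaningful,
# because the zero point of the calendar is arbitrary.
```

Each level inherits the operations of the levels above it and adds one more.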
  10. 11. Measurement: sensitive, valid, reliable <ul><li>Valid: extent to which what you measure is what you say you measure </li></ul><ul><ul><li>Face Validity: plausible on its “face” </li></ul></ul><ul><ul><li>Content Validity: extent to which all components of a systematized concept are measured in the indicator; matching a list of attributes </li></ul></ul><ul><ul><li>Criterion-related Validity: extent to which an indicator matches criteria; predictive, concurrent </li></ul></ul><ul><ul><li>Construct Validity: extent to which what you measure behaves as it should within a system of related concepts; an attribute of a measure/indicator </li></ul></ul><ul><ul><li>Convergent/Divergent Tests </li></ul></ul><ul><ul><li>Internal Validity: extent to which a research design yields strong evidence of causality; an attribute of the research design </li></ul></ul><ul><ul><li>External Validity: extent to which a research design yields findings that generalize; an attribute of the research design </li></ul></ul>
  11. 12. Measurement: Construct Validity <ul><li>Construct Validity: extent to which what you measure behaves as it should within a system of related concepts; an attribute of a measure/indicator </li></ul><ul><ul><li>Nomological Net </li></ul></ul><ul><ul><ul><li>does not have to be causal, could be correlational </li></ul></ul></ul><ul><ul><ul><li>should be completely non-controversial, established in literature or common sense </li></ul></ul></ul><ul><ul><ul><li>example: GRE scores and first-year grad school GPA </li></ul></ul></ul>
  12. 13. Measurement: convergence and divergence <ul><li>Convergent: alternative measures of the same concept should be strongly correlated </li></ul><ul><li>Divergent: measures of different concepts should not correlate highly with each other </li></ul>
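The two tests can be illustrated with simulated data; the trait, noise levels, and sample size below are all assumptions for the sketch, not from the slides. Two noisy measures of the same underlying trait should correlate strongly (convergent), while a measure of an unrelated concept should not (divergent):

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
trait = [random.gauss(0, 1) for _ in range(500)]       # the underlying concept
measure_a = [t + random.gauss(0, 0.5) for t in trait]  # two alternative measures
measure_b = [t + random.gauss(0, 0.5) for t in trait]  # of the same concept
unrelated = [random.gauss(0, 1) for _ in range(500)]   # measure of a different concept

print(pearson(measure_a, measure_b))  # high: convergent evidence
print(pearson(measure_a, unrelated))  # near zero: divergent evidence
```

A measure that passes both tests behaves the way a valid indicator of the concept should.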
  13. 14. Measurement: sensitive, valid, reliable <ul><li>Reliable: extent to which a measure is free from random error; measures are repeatable, consistent, dependable </li></ul><ul><li>Assessments: </li></ul><ul><ul><li>test-retest </li></ul></ul><ul><ul><li>interobserver; intercoder </li></ul></ul><ul><ul><li>split-half </li></ul></ul><ul><ul><li>Cronbach’s Alpha </li></ul></ul>
  14. 15. True score theory & reliability <ul><li>The error term has two components: </li></ul><ul><li>Random Error: the more reliable the measure, the less the random error </li></ul><ul><li>Systematic Error: the more valid the measure, the less the systematic error </li></ul>Observed score = True ability + Random error X = T + e
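The relation X = T + e can be simulated to show one standard way reliability is expressed: the share of observed-score variance that is true-score variance. The means, variances, and sample size below are arbitrary assumptions for the sketch:

```python
import random
import statistics

random.seed(1)
true_scores = [random.gauss(50, 10) for _ in range(2000)]  # T: variance = 100
observed = [t + random.gauss(0, 5) for t in true_scores]   # X = T + e, var(e) = 25

# Reliability = true-score variance / observed-score variance.
# More random error -> larger var(e) -> lower reliability.
reliability = statistics.pvariance(true_scores) / statistics.pvariance(observed)
print(round(reliability, 2))  # theoretical value: 100 / (100 + 25) = 0.8
```

Shrinking the random error toward zero drives the ratio toward 1, which is the sense in which a more reliable measure has less random error.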
  15. 16. Assessing reliability <ul><li>test-retest: about consistency of response to a treatment; assumes the characteristic being measured is stable over time </li></ul><ul><li>interobserver; intercoder: two or more people observe or code; can be done as a test on a random subset of cases </li></ul><ul><li>alternative-form: measure the same attribute more than once using two different measures of the same concept (e.g. to measure liberalism, use two different sets of questions on the same respondents at two different times) </li></ul><ul><li>split-half: two measures of the same concept applied at the same time (e.g. survey of political opinions; of ten questions related to liberalism, take two different sets of questions as different measures of liberalism) </li></ul><ul><li>Cronbach’s Alpha: the average of all split-half coefficients across all possible combinations; a statistical technique... </li></ul>
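Cronbach's Alpha has a standard closed form, alpha = k/(k−1) × (1 − Σ item variances / total-score variance), which a short sketch can compute. The Likert responses below are invented for illustration:

```python
import statistics

# Hypothetical responses to four Likert items meant to tap one concept
# (rows = respondents, columns = items).
items = [
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 5, 4, 4],
    [2, 1, 2, 2],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])
    columns = list(zip(*rows))
    item_vars = sum(statistics.variance(col) for col in columns)
    total_var = statistics.variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

print(round(cronbach_alpha(items), 2))  # ≈ 0.96: items hang together well
```

When the items all track the same underlying concept, total-score variance dwarfs the sum of item variances and alpha approaches 1; unrelated items push it toward 0.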
  16. 17. Elkins example <ul><li>graded v. dichotomous measures of democracy </li></ul>
  17. 18. Concept & Measurement Exercise <ul><li>describe each level for your concept </li></ul><ul><li>identify a scale (nominal, ordinal, etc.) </li></ul><ul><li>identify strategies for considering the reliability and validity of your measure </li></ul><ul><li>is your concept embedded in any particular ontological or epistemological understanding of the world? </li></ul>