


Validity is an important concept in the study of psychological tests. This presentation discusses the different types of validity, with easy-to-comprehend examples.



  1. /aleemashraf1/ VALIDITY
     Validity: (Accuracy)
     Reliability alone is not enough to prove the usefulness of a test; a test must be both valid and reliable. Consider the following example: a weighing scale consistently tells you that your weight is 50 kg, but due to an internal fault this reading is incorrect. The readings are consistent (reliable) but not accurate (valid). Validity is defined as "the extent to which the test measures what it is designed to measure". There are a number of ways to establish the validity of a test: construct validity, content validity, criterion validity and face validity.
  2. VALIDITY
     1) Construct Validity: (Concept/theory identification)
     • Construct: A psychological construct is an abstract attitude, ability, skill or set of characteristics defined by a theory.
     • Examples of constructs: anxiety, depression, intelligence, honesty, language proficiency, etc.
     • Construct validity tries to answer the question, "does the test accurately measure the construct (theory) behind the test, and not something else?"
     • The scores on the test should conform to the hypotheses formulated in the construct.
     • The hypotheses can then be accepted or rejected on the basis of test scores.
     • Example 1: Alfred Binet hypothesized that intelligence increases with age. For his test to have construct validity, the Binet scale of intelligence should, on average, show a difference between the scores of young preschool children and those of higher-grade children.
     • Example 2: For a Piagetian scale to be construct valid, it must predict scores according to the hypotheses formulated by Jean Piaget in his theory of cognitive development. For instance, Piaget says children learn certain skills at certain ages (e.g., object permanence is learned at the end of the sensorimotor stage). On average, the Piagetian scale should conform to this hypothesis.
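Binet's age hypothesis can be checked numerically: if the construct is valid, mean scores should rise across age groups. A minimal sketch, using invented scores purely for illustration:

```python
# Hypothetical check of the construct hypothesis "intelligence increases
# with age". The scores per age group are invented for illustration.
scores_by_age = {
    4:  [12, 14, 11, 13],  # preschool children
    8:  [25, 27, 24, 26],  # primary-grade children
    12: [38, 40, 37, 39],  # higher-grade children
}

# Mean score per age group.
means = {age: sum(s) / len(s) for age, s in scores_by_age.items()}

# The construct hypothesis is supported (on this toy data) if the
# group means increase strictly with age.
ages = sorted(means)
increases_with_age = all(means[a] < means[b] for a, b in zip(ages, ages[1:]))

print(means)                                            # {4: 12.5, 8: 25.5, 12: 38.5}
print("supports construct hypothesis:", increases_with_age)  # True
```

On real data one would compare group means with a statistical test rather than a strict inequality, but the logic (scores must behave as the theory predicts) is the same.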
  3. VALIDITY
     • If a hypothesis is proven wrong, it gives the researcher a new dimension from which to think about the construct.
     • Construct validity of a test can be proven in two ways:
     a. Convergent Validity: (converge means the point where two lines meet)
     • The construct validity of a new test is supported when its scores correlate with an existing test designed to measure the same construct.
     • Example: A new test of intelligence should show a strong correlation with other tests of intelligence, or with similar constructs such as span of attention. This is called convergent validity because the two tests converge, just as two rivers converge into one.
     b. Discriminant (Divergent) Validity: (diverge means to split, to differentiate, to be separate)
     • Divergent validity shows that the test is unrelated (divergent, different) to constructs that are different but may be confused with the one being measured.
     • Example: A test designed to measure depression should not correlate too highly with anxiety (anxiety and depression are similar but distinct constructs). If the scores on average show a low correlation between the two constructs, the test is said to have discriminant validity.
     • In other words, a valid test of depression discriminates between the two similar but distinct constructs, i.e. anxiety and depression.
     • Campbell and Fiske suggested a method called the multitrait-multimethod matrix to show the convergent and divergent validity of a test.
     • To show convergent validity, we would expect our two measures of depression to correlate highly with each
  4. VALIDITY
     other (same trait measured by different methods). To show discriminant validity, we would expect our true-false measure of depression not to correlate significantly with a true-false measure of anxiety (different traits measured by the same method).
     2) Criterion Validity: (Prediction validity)
     • Criterion validity shows the extent to which a test score correlates with (conforms to) a concrete, real-world, observable behavior or trait. There are two ways to show that relationship:
     a. Concurrent Validity: (concurrent means at the same time, simultaneously)
     • When a test is matched with other observable behavior (the criterion) at around the same time as the test administration, this is called concurrent validity.
     • Example: If someone scores high on a depression scale, does he really show the signs and symptoms of depression?
     • In this example the new depression scale is being validated against a well-established observable criterion (the signs and symptoms of depression) at the same time as the test.
     • Criterion validity is usually established to eliminate the need for a behavioral examination, which can be time consuming.
     b. Predictive Validity:
     • Predictive validity refers to the degree to which a test can predict future performance.
     • It is particularly applicable to achievement and personnel tests.
     • Example: The SAT (pre-entry) test is said to have good predictive validity if students' GPA at university later correlates with their SAT scores.
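The correlational checks above (convergent, discriminant, predictive) all come down to computing a correlation coefficient between two sets of scores. A minimal sketch with a plain Pearson correlation; all score values are invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores for six test takers.
new_depression = [10, 14, 9, 18, 12, 16]   # the new depression scale
old_depression = [11, 15, 10, 17, 12, 15]  # an established depression scale
anxiety        = [12, 13, 11, 12, 14, 11]  # a different construct

# Convergent: same construct, different tests -> should be high.
print("convergent r:", round(pearson(new_depression, old_depression), 2))   # 0.98

# Discriminant: different constructs -> should be near zero.
print("discriminant r:", round(pearson(new_depression, anxiety), 2))        # -0.01

# Predictive: test score vs. a later criterion (invented SAT/GPA pairs).
sat = [1100, 1250, 980, 1400, 1180, 1320]
gpa = [3.0, 3.4, 2.6, 3.8, 3.1, 3.5]
print("predictive r:", round(pearson(sat, gpa), 2))                         # 0.99
```

A full multitrait-multimethod matrix would tabulate such correlations for every trait-method pair, but each cell is computed exactly as above.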
  5. VALIDITY
     • Or an aptitude test has good predictive validity if it later correlates with job performance. (The criterion here is the future success/performance of the student and the employee, respectively.)
     Criterion contamination:
     • If knowledge of the test scores influences the later ratings on the criterion measure, the condition is called criterion contamination.
     • Example: An employer is influenced by an employee's low score on an aptitude test and gives him a low performance rating.
     • Or a teacher is influenced by a student's high SAT score and gives him/her high grades.
     3) Content Validity: (Content relevance, content coverage)
     • Content validity examines whether the test covers the entire domain of the syllabus for which it is designed.
     • It is particularly applicable to achievement and personnel tests.
     • Here we check whether the items included in an academic test represent the full syllabus required to master a course (relevance) and, if so, whether they do so in the right proportion (coverage).
     • Example: If five chapters of a subject were covered in class, the test is content valid only if it covers them all, with equal coverage given to each chapter.
     • Or a general arithmetic reasoning test can only be content valid if it covers the full domain of arithmetic: addition, subtraction, multiplication, division, etc.
     • Items on achievement and personnel tests are included by consensus among subject specialists, teachers and employers, respectively.
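The relevance-and-coverage check for the five-chapter example can be sketched as a simple tally; the chapter label assigned to each test item is invented for illustration:

```python
# Toy content-coverage check for an exam on a five-chapter course.
# Each entry is the (invented) chapter a test item was drawn from.
item_chapters = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
chapters = range(1, 6)

# Items per chapter, and each chapter's share of the whole test.
counts = {ch: item_chapters.count(ch) for ch in chapters}
proportions = {ch: n / len(item_chapters) for ch, n in counts.items()}

# Relevance: every taught chapter is represented at all.
covers_all = all(counts[ch] > 0 for ch in chapters)
# Coverage: the chapters are represented in roughly equal proportion.
balanced = max(counts.values()) - min(counts.values()) <= 1

print("coverage:", proportions)
print("content valid (relevance + coverage):", covers_all and balanced)  # True
```

In practice the target proportions come from the consensus of subject specialists (a test blueprint) rather than a strict equal split, but the bookkeeping is the same.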
  6. VALIDITY
     4) Face Validity: (Appearance of a test to the ordinary test taker)
     • Face validity refers to whether a test looks valid (relevant) to the ordinary test taker.
     Face validity is important for reasons such as:
     • Test takers take more interest in a test that looks relevant to them.
     • Test users buy a test because it looks valid on the face of it.
     Face validity also has disadvantages for certain tests:
     • An integrity test may not yield accurate results, because items that obviously measure honesty are easy to fake.
     • Tests with low face validity, such as the Rorschach Inkblot test and the Human Figure Drawing (HFD) test, usually also have low reliability.