This document discusses the validity of using audience response systems, also known as clickers, for assessments in the classroom. It outlines how clickers can be used for taking attendance, enhancing instruction, and assessing learning through placement, formative, and summative assessments. The document also considers evidence of content, criterion, and construct validity when using clicker assessments and discusses how clicker assessments can positively impact student performance, motivation, attitudes, learning, and study habits.
Typically, audience response systems are used in classrooms to check attendance, enhance peer instruction, and informally and formally assess student learning (Fies & Marshall, 2008). This literature review focuses on the use of audience response systems as a tool for assessments and how the validity of the assessment results may be affected. Additionally, teacher concerns and challenges regarding audience response systems are discussed. In the literature reviewed, audience response systems were used for all three forms of assessment:
Audience response systems, commonly known as “clickers,” are wireless, handheld devices individuals use to record responses to multiple-choice and true/false questions. These responses are simultaneously transmitted to a receiver, which is connected to a computer containing audience-response system software that analyzes and stores the data received (Figure 1). Audience response systems have been used in higher education classrooms for approximately ten years and are becoming more popular in secondary and elementary classes as technological tools used to help make classroom learning more effective and efficient for both students and instructors (Fies & Marshall, 2008).
According to Gronlund and Waugh (2009), considerations in determining the validity of assessment results include (1) content-related evidence; (2) criterion-related evidence; (3) construct-related evidence; and (4) consequences of using the assessment.

Content-related evidence

When creating an assessment, the same item development process is used regardless of delivery method. With audience response systems, the assessment itself remains the same; only the delivery differs. Therefore, using an audience response system to deliver an assessment does not change its content. If the sample of tasks created is representative of the domain, there is evidence of content validity regardless of how the assessment is delivered.

Criterion-related evidence

Using the assessment in either predictive or concurrent studies would allow for inference of criterion-related evidence of validity (Gronlund & Waugh, 2009). None of the articles reviewed, however, compared assessments for predictive or concurrent validity.
Construct-related evidence

As mentioned earlier, evidence of construct validity can be affected by characteristics of the test such as scoring, format, and directions. According to the articles reviewed, audience response systems improve these characteristics, making assessments more convenient, effective, and time-saving.