1. Fire and Emergency Services Instructor
9th Edition
Chapter 13 — Test Item Construction
2. Learning Objective 1
‣ Describe common considerations for test instruments.
3. Common Considerations for All Tests
Test items must always be based on specific learning
objectives; Level II Instructors must consider
‣ Test formatting and item arrangement
‣ Test item level of cognition and difficulty
‣ Test instructions and time requirements
‣ Testing bias
6. Test Formatting and Item Arrangement
‣ Provide space for students to write their name and the date on either the test sheet or a separate answer sheet
‣ Provide a title or label at the top of the first page
‣ Number all tests and label different versions of the test — This will help with score reporting and test security
‣ Number all pages of the test — Students will be able to budget their time more wisely if they can see the length of the test
7. Test Formatting and Item Arrangement
‣ Provide clear instructions at the beginning of the test and at the beginning of each section that uses a different type of test item (such as multiple-choice, matching, true-false, or fill-in-the-blank)
‣ Provide a sample test item, along with a sample answer, to show students how to respond to each item
‣ Number all items consecutively
‣ Single-space each test item, but double-space between items
8. Test Formatting and Item Arrangement
‣ State the point value of each test item, for example
‣ Multiple-choice: 1 point each
‣ Short-answer: 2 points each
‣ Use commonly understood terms
‣ For example, do not use abbreviations unless they
are placed in parentheses following the common
term
9. Test Formatting and Item Arrangement
‣ Test items must be arranged in a logical sequence and can be grouped into two categories
‣ Learning domain outcome
‣ Knowledge
‣ Comprehension
‣ Application
‣ Type of test item
‣ Multiple-choice
‣ Matching
‣ Short-answer
10. Test Formatting and Item Arrangement
‣ Instructors must make sure that the wording of one
test item does not reveal the answer to another test
item
‣ On computer-adaptive tests, such as those used in
EMS, sequencing may be organized so that students
can only progress to a more difficult question after
correctly answering a simpler one
12. Test Item Level of Cognition and
Difficulty
‣ Test items should evaluate the student’s ability at the
level within the taxonomy that corresponds to the
learning objective being evaluated; a variety of levels
can exist within a course
‣ The actual determination of test difficulty is the
responsibility of a Level III Instructor
‣ The instructor compiles scored tests and evaluates students’ performance on individual questions
‣ The difficulty level of each question is then saved
for future testing use and evaluation
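The difficulty saved for each question is often expressed as an item difficulty index: the proportion of students who answered the item correctly. The sketch below uses that common psychometric convention; the formula is not prescribed by the manual.

```python
def item_difficulty(responses):
    """Item difficulty index: the fraction of students who answered
    the item correctly (True = correct, False = incorrect)."""
    if not responses:
        raise ValueError("no responses recorded")
    return sum(responses) / len(responses)

# Example: 8 of 10 students answered the item correctly
print(item_difficulty([True] * 8 + [False] * 2))  # 0.8
```

Values near 1.0 flag items almost everyone got right; values near 0.0 flag items that may be too difficult or miskeyed.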
13. Test Instructions
‣ Purpose of the test
‣ Method and means for recording answers
‣ Recommendation whether to guess when undecided
on an answer (in some cases, incorrect answers are
penalized more than not answering the question)
‣ Amount of time available to complete the test
16. Time Requirements for Answering
Certain Types of Questions
‣ Instructors can use these estimates to calculate how
much time students will need to complete the test
‣ Instructors may take the test themselves, or ask another professional member of the organization to take it, to see how much time is needed
‣ Tests should be an appropriate length to address
the learning objectives that the test is intended to
evaluate
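The calculation described above is simple arithmetic: multiply the count of each item type by a per-item time estimate and sum. The minutes-per-item figures below are illustrative placeholders, not values from the manual.

```python
# Hypothetical per-item time estimates in minutes (illustrative only)
MINUTES_PER_ITEM = {
    "multiple_choice": 1.0,
    "true_false": 0.5,
    "short_answer": 2.0,
    "essay": 15.0,
}

def estimated_minutes(item_counts):
    """Total estimated minutes for a test given counts per item type."""
    return sum(MINUTES_PER_ITEM[kind] * n for kind, n in item_counts.items())

print(estimated_minutes({"multiple_choice": 20, "true_false": 10,
                         "short_answer": 5}))  # 35.0
```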
17. Time Requirements for Answering
Certain Types of Questions
‣ When time is a restrictive factor, tests can emphasize
the most critical learning objectives and include a
sampling of less important objectives; this method of
test construction is called sampling
‣ The plan must be documented to prove that necessary
components were addressed
‣ The most critical objectives must be tested in each version of the test, while the less critical objectives are included on a rotating basis
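The rotation described above can be sketched as follows; the objective names and the rotation scheme are hypothetical, intended only to show every version covering all critical objectives while less critical ones rotate.

```python
from itertools import cycle

def build_versions(critical, less_critical, n_versions, per_version):
    """Each test version covers every critical objective; less critical
    objectives rotate so all of them appear across the versions."""
    rotation = cycle(less_critical)
    return [critical + [next(rotation) for _ in range(per_version)]
            for _ in range(n_versions)]

# Hypothetical objectives for illustration
versions = build_versions(
    critical=["don PPE", "operate pump"],
    less_critical=["hose rolls", "knots", "ladder care", "radio codes"],
    n_versions=2, per_version=2)
# Every version includes both critical objectives; the four less
# critical objectives are split across the two versions.
```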
18. Testing Bias
‣ Test items and testing instruments should not favor or
penalize any particular group of students
‣ Ensuring that test questions very closely reflect the
materials being tested is the best way to avoid bias in
testing materials
‣ When students recognize that test items closely
resemble the information that they have studied, they
are more likely to perform confidently on tests
regardless of their gender, cultural, ethnic, or regional
backgrounds
19. Testing Bias
‣ In the fire and emergency services, bias is generally
limited to use of regional jargon and differences in
terminology
‣ Example: local governments in the U.S. and
Canada may be referred to as
‣ Counties or parishes
‣ Jurisdictions in legal terms
20. Testing Bias
‣ Similarly, some fire apparatus may be referred to as
a tanker or a tender depending on geographical
region or the differences between departments
‣ The terminology on the test should reflect the
terminology of the students and the materials from
which they studied or received training
21. Learning Objective 2
‣ Discuss various types of evaluation instruments used in fire and emergency service training.
22. Student Evaluation Instruments
‣ Deciding how to evaluate students can be a confusing and difficult topic for the Level II Instructor
‣ The instructor needs to establish the appropriate evaluation instrument to meet the parameters of the curriculum and course being presented
‣ In addition, once the instructor determines the type of instrument to use, they must then design the instrument to be fair and unbiased
‣ This process takes time, and new Level II Instructors often underestimate the time commitment
25. Written Tests: Objective
‣ An objective test item is a question for which there is
only one correct answer
‣ Judgment of the instructor or evaluator is not
relevant and has no effect on assessment
‣ Objective items measure cognitive learning, but
typically only at the lower levels of remembering and
understanding
26. Written Tests: Objective
‣ Properly constructed objective test items can also be
used to measure higher levels of cognitive learning
such as evaluation or creation
‣ Three main types of objective questions
‣ Multiple-choice
‣ True or false
‣ Matching
27. Written Tests: Subjective
‣ A subjective test item has no single correct answer;
the evaluator’s judgment may therefore affect
assessment
‣ Subjective items are an effective way of measuring
higher cognitive levels
‣ They allow students the freedom to organize,
analyze, revise, redesign, or evaluate a problem
28. Written Tests: Subjective
‣ The strength of a student’s response to these items
depends on a variety of factors, such as
‣ How well they communicate their ideas
‣ Personal opinions of the evaluator
29. Written Tests: Subjective
‣ There are three main types of subjective test items
‣ Short-answer or completion
‣ Essay
‣ Interpretive exercise
30. Written Tests: Multiple Choice
A multiple-choice test item consists of either a
question or an incomplete statement, called the stem,
plus a list of several possible responses, which are
referred to as choices or alternatives
31. Written Tests: Multiple Choice
‣ Students are tasked to read the stem and select the
correct response from the list of alternatives
‣ The correct choice is known as the answer and the
remaining choices are called distractors
‣ Distractors are used to discriminate between students
who understand the subject matter well and those
who are uncertain of the correct answer
‣ Distractors are not meant to trick, confuse, or mislead
students
32. Written Tests: Multiple Choice Guidelines
‣ Write the stem in the form of a direct question or an incomplete sentence that measures only one learning objective
‣ Write a clear, brief stem that contains most of the wording for the test item
‣ This helps to avoid placing repeated words in the alternatives
‣ Write positive questions as much as possible; be consistent in labeling negative words if and when negative statements are used
33. Written Tests: Multiple Choice Guidelines
‣ Provide at least three plausible, attractive distractors
‣ Phrase the choices so that they are parallel and grammatically consistent with the stem
‣ Place correct answers in varied positions among the A, B, C, and D choices
‣ Place each choice on a separate, indented line, and in a single column
‣ Begin responses with capital letters when the stem is a complete question
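The guideline to vary the position of the correct answer can be handled by shuffling the alternatives when the test is assembled. A minimal sketch; the function name and the sample item are invented for illustration.

```python
import random

def shuffle_choices(answer, distractors, rng=random):
    """Shuffle the correct answer in among the distractors and report
    which letter position (A-D) the answer landed in."""
    choices = [answer] + list(distractors)
    rng.shuffle(choices)
    return choices, "ABCD"[choices.index(answer)]

# Hypothetical item: correct answer plus three plausible distractors
choices, key_letter = shuffle_choices("tender", ["tanker", "engine", "ladder"])
```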
34. Written Tests: Multiple Choice Guidelines
‣ Begin responses with lowercase letters when the stem is an incomplete sentence
‣ Do not include choices that are obviously wrong or intended to be humorous
‣ Make sure that stems and alternatives do not give students grammatical clues as to the correct response
‣ Make all alternatives close to the same length
35. Written Tests: Multiple Choice Guidelines
‣ Avoid using the phrases
‣ All of the above
‣ None of the above
‣ Do not test trivial ideas or information
‣ Use correct grammar and punctuation
36. Written Tests: Multiple Choice
Disadvantages
‣ They are not well suited to measuring certain
cognitive skills, such as organizing and presenting
ideas; essay tests are more effective for this purpose
‣ Depending on the test writer’s skill, this type of test
may not include different difficulty-level test items
that measure a variety of cognitive learning levels
‣ Creating appropriate and plausible distractors for
each stem can require significant time and thought
‣ Students who do not know the material may still be
able to guess the correct answer
37. Written Tests: True-False
‣ The true-false test item is a single statement that the student must determine to be either true or false
‣ It is difficult to construct a statement that is completely true or completely false
38. Written Tests: True-False
‣ True statements should be based on facts
‣ False statements should be based on common
misconceptions of the facts
‣ In addition to the traditional true-false test items,
there are also modified true-false test items
‣ Modified true-false items ask the student to explain
why an item is false or to rewrite the item to make it
true
39. Written Tests: True-False
‣ One limitation of true-false questions
‣ Students tend to remember the false items on the
test as being true, known as the negative
suggestion effect
‣ Instructors should review the correct answers to
true-false questions with students after scoring the
test to help combat this effect
40. Written Tests: True-False Guidelines
‣ Write the words True and False at the left margin if
students must mark their answers on the test paper
‣ On computer-scored answer sheets
‣ True may be assigned to A
‣ False may be assigned to B
‣ Provide clear instructions so that students know how
to respond to each statement
41. Written Tests: True-False Guidelines
‣ Create enough test items to provide reliable results
‣ For reliability purposes, more true-false items are needed than would be used for multiple-choice items
‣ A large number of test items minimizes the
possibility of guessing the correct answers
‣ Distribute true and false items randomly
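The random distribution of true and false items can be generated programmatically. A sketch that builds a roughly even, randomly ordered answer key; the even balance is an assumption for illustration, not a rule from the manual.

```python
import random

def randomized_key(n_items, rng=random):
    """Answer key with a roughly even mix of True and False,
    shuffled so no positional pattern gives the answers away."""
    key = [True] * (n_items // 2) + [False] * (n_items - n_items // 2)
    rng.shuffle(key)
    return key

key = randomized_key(20)
```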
42. Written Tests: True-False Guidelines
‣ Avoid determiners (words that indicate a specific
answer) that provide unwarranted clues
‣ Words such as usually, generally, often, or
sometimes are most likely to appear in true
statements
‣ The words never, all, always, or none are more
likely to be found in false statements
‣ Avoid creating items that could trick or mislead
students into making a mistake
43. Written Tests: True-False Guidelines
‣ Ensure only one correct answer is possible
‣ Avoid double-negative test items; they are very
confusing to students and do not accurately measure
knowledge
‣ Avoid using personal pronouns such as "you"
‣ Do not use test items that test trivia or obscure facts
‣ Develop test items that require students to think
about what they have learned, rather than merely
remember it
44. Written Tests: True-False Guidelines
‣ Avoid unusually long or short test items, because the
length may be a clue; true items are often longer
than false items, because they include a justification
‣ Create brief, simply stated test items that deal with a
single concept; avoid lengthy, complex items that
address more than one concept
‣ Avoid quoting information directly from the textbook
45. Written Tests: Matching
‣ Matching test items consist of two parallel columns of words, phrases, images, or a combination of these
‣ In the most common example, students must match a word from the left column with its definition from the right column
46. Written Tests: Matching
The content of a matching test item must consist of similar material, items, or information
48. Written Tests: Matching Guidelines
‣ Keep each group of prompts and its list of responses together on a single page
‣ Separate matching sections into sets of five problems
and responses when using computer or mechanically
scored answer sheets
‣ Consider preparing one more response than there are
prompts; the extra response requires more precise
knowledge and prevents students from finding an
answer by eliminating all the other possible answers
49. Written Tests: Matching Guidelines
‣ Number the problem statements; place an answer line to
the left of each number unless a separate answer sheet is
used
‣ Use letters for each response
‣ Arrange problem statements and responses into two
columns
‣ Problem statements on the left side of the page
‣ Responses on the right
‣ Columns may be titled with appropriate headings, such as
Tools and Uses, or Symptoms and Treatments
51. NOTE
‣ Instructors should be advised that
matching test items may be more
effectively and efficiently written as a
series of multiple-choice questions.
52. Written Tests: Short-answer/Completion
‣ A short-answer item is a question for which students must write a correct response
‣ To do so, they must recall previously learned information, apply relevant principles, or understand methods or procedures
‣ Short-answer items are often subjective
53. Written Tests:
Short-answer/Completion
‣ A completion item should be objective
‣ This type of test item is a statement in which key
words are replaced with an underlined blank space
that students are tasked to fill in
55. Written Tests:
Short-answer/Completion Guidelines
‣ On completion test items, create short, direct
statements for which only one answer is possible
‣ Avoid long statements with a string of blanks to fill
‣ Start with a direct question and change it to an
incomplete statement
‣ Make sure that the desired response is a key point in
the lesson
‣ Arrange the statement with the blanks at or near the
end of the sentence
56. Written Tests:
Short-answer/Completion Guidelines
‣ Avoid statements that call for answers with more than
one word, phrase, or number
‣ Eliminate unnecessary clues, such as answer blanks that
vary in length or the use of the words "a" or "an"
preceding the blank
‣ Write a rubric or detailed answer sheet so that the
scorer understands the full extent of possible,
acceptable answers to the questions
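A rubric listing all acceptable answers, as recommended above, can be applied mechanically when scoring completion items. A sketch; the rubric entry is hypothetical and echoes the tanker/tender regional terminology discussed earlier in the chapter.

```python
def score_completion(response, accepted):
    """Mark a completion answer correct if it matches any rubric entry,
    ignoring case and surrounding whitespace."""
    return response.strip().lower() in {a.lower() for a in accepted}

# Hypothetical rubric entry honoring regional terminology
accepted = ["tanker", "tender"]
print(score_completion("Tender ", accepted))  # True
print(score_completion("pumper", accepted))   # False
```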
57. Written Tests: Essay
‣ Like short-answer test items, essays are subjective
‣ Students must construct an in-depth answer on a topic or question related to a key aspect of the course material
58. Written Tests: Essay
‣ The strength of this item type is that it tests the students’ higher-level cognitive processes
‣ Students are expected to demonstrate the ability to analyze
a topic, create a solution to a problem, or evaluate a
system or process
‣ Essay tests eliminate guessing, because students must
know the material thoroughly in order to write an effective
essay
‣ Creative students often prefer this type of test item
because it allows them a forum to express their perspective
of a topic
59. Written Tests: Essay Disadvantages
‣ Essays are time-consuming for students to complete and
instructors to score
‣ Differences in students’ writing ability, penmanship,
spelling, and grammar may affect an instructor’s ability
to easily score the test
‣ Students who have difficulty writing or write slowly will
be at a disadvantage, especially in a timed test
60. Written Tests: Essay Guidelines
‣ Choose essay topics that reflect key aspects of the
course material
‣ Create a rubric that establishes clear scoring guidelines
‣ For each essay question, provide clear instructions that define how students should respond, how much time they should spend responding, and how many pages or paragraphs each response should be
‣ Provide sufficient time for students to respond to all
questions
61. Written Tests: Interpretive Exercises
‣ The interpretive exercise is another subjective test item that measures higher-level cognitive processes
‣ An exercise consists of introductory material, typically
numerical data, a graph, or a paragraph of text,
followed by a series of test items
‣ Students read the text or look at the illustrations,
then answer the test items, which may be any of the
types described in this chapter
62. Written Tests: Interpretive Exercises
Rules
‣ Make sure that all introductory material relates to key
learning objectives, and is as concise as possible
‣ Apply relevant guidelines for effective item
construction for each test item
‣ Use test items that require the same type of
performance that is listed in the test specifications
for the various learning objectives
‣ Create original introductory material unfamiliar to
students
63. Written Tests: Interpretive Exercises
Rules
‣ Ensure that the introductory material does not give
away the answer to any of the test items
‣ Require students to use the introductory material in order to answer the test items
‣ Provide enough test items, using a variety of item
types, to effectively measure students’ understanding
of the material
64. Oral Tests
Open or closed questions
‣ When the purpose of the test is to determine
knowledge, the questions should be closed, requiring
only a single brief answer
‣ When the purpose is to determine how a student responds under pressure, the questions should judge both accuracy and presentation; in this case, the questions should be open, permitting longer answers that may lead to further questions
65. Oral Tests
‣ Oral tests can be very stressful for students
‣ Instructors should provide a relaxed,
comfortable atmosphere for the presentation
of oral tests
67. Oral Tests
‣ Oral tests are highly subjective, especially when the
questions may be answered a number of ways
‣ To reduce evaluator bias, test developers should
provide a scoring rubric that lists all possible correct
answers
‣ An oral test is the most valid and reliable way to test a student’s ability to verbally communicate ideas, concepts, and processes; it may be the best measure of a student’s judgment and thought processes
70. Performance (Skills) Tests
‣ Assessment is based on either a speed standard such
as timed performance, a quality standard such as
minimum acceptable performance, or both
‣ Performance tests require students to demonstrate
psychomotor proficiency after appropriate practice or
drill sessions
‣ Tests must take place under controlled conditions so
instructors can make reliable, valid judgments about
student performance
71. Performance (Skills) Tests:
Guidelines
‣ Specify performance objectives to be measured
‣ Select rating factors on which the test will be judged
‣ Provide written instructions that clearly explain the
test situation
‣ Confirm a new performance test with other
instructors or previous students before administering
it to students
72. Performance (Skills) Tests:
Guidelines
‣ Use more than one test evaluator
‣ Follow established procedures when administering
the test
‣ Make a score distribution chart after tests have been
administered and graded
‣ Rotate team members to every position for team
evaluation ratings
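The score distribution chart mentioned above can be tallied with a simple bin count. A sketch; the 10-point bin width is an arbitrary choice for illustration.

```python
from collections import Counter

def score_distribution(scores, bin_width=10):
    """Tally test scores into bins (60-69, 70-79, ...) for a
    score distribution chart."""
    return Counter((score // bin_width) * bin_width for score in scores)

dist = score_distribution([72, 78, 85, 91, 68, 88])
# Two scores in the 70s, two in the 80s, one each in the 60s and 90s
```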
83. Determining Test Purpose and Classification: Planning Considerations
The test is designed to
‣ Determine readiness for instruction or placement in the appropriate instructional level (prescriptive or placement test)
‣ Measure improved progress or identify learning problems that are hampering progress (formative or progress test)
‣ Rate terminal performance (summative or comprehensive test)
84. Determining Test Purpose and Classification: Planning Considerations
‣ Whether the test measures technical knowledge retention and recall in the cognitive domain (written or oral tests)
‣ Whether it measures manipulative skills in the psychomotor domain (performance or skill tests)
‣ Whether it measures behavioral changes in attitude, values, or beliefs in the affective domain (written or oral tests)