The document provides guidance on writing effective multiple choice questions for assessing students in information technology education. It discusses using multiple choice questions to test higher levels of cognition beyond just knowledge, such as comprehension, application, and analysis. It also covers best practices for writing high quality multiple choice questions, including using clear grammar, an appropriate number of options, plausible distractors, and testing the desired level of cognition. The goal is to provide IT teachers with information to help them construct multiple choice questions that maintain the integrity of their assessments.
Using MCQ for Effective IT Education (Woodford) - Ripudaman Singh
This document provides guidance on using multiple choice questions effectively in information technology education. It discusses challenges such as larger class sizes and trimester study periods that have led to increased use of multiple choice assessments. The document outlines factors to consider when writing high-quality multiple choice questions, such as grammar, number of options, order of questions and answers. It also addresses criticisms that multiple choice questions only test recall and provides examples of questions that can assess higher-order thinking skills based on Bloom's taxonomy, such as comprehension, application and analysis. The overall aim is to help IT educators construct multiple choice tests that maintain the integrity and quality of assessment.
Dear Friend,
The ACC GUIDE, prepared by our experts in accordance with the guidelines of the new-pattern ACC Entrance Exam, is attached here to support your understanding and planning of your exam preparation. Our systematic approach will help you build confidence and be fully ready to face the ACC examination.
Using Knowledge Building Forums in EFL Classrooms (FIETxs 2019) - ARGET URV
1) The document describes a study that examined the impact of using Knowledge Building forums on the development of English language skills for Spanish students.
2) Sixty-seven Spanish students participated in the study, engaging with Knowledge Building forums and completing pre- and post-tests of their English abilities.
3) The results showed that collaborative writing in the forums significantly improved students' English writing skills and comprehension, but did not necessarily improve their vocabulary or specific grammar skills.
This mark scheme provides guidance for examiners marking an English Literature exam assessing students' analysis and comparison of poems. It includes 6 mark bands with descriptors of the skills and understanding demonstrated by responses scoring in each band. For each question, indicative content lists concepts and literary techniques that strong answers may discuss for each assessment objective. The document explains how examiners should apply the mark scheme and annotate responses to arrive at a final mark.
The chapter discusses qualitative research methods including focus groups, depth interviews, and projective techniques. Focus groups involve interviewing groups of 6-12 people to explore views in a group setting. Depth interviews use open-ended questions to understand motivations and attitudes. Projective techniques indirectly explore subconscious motivations through activities like word associations. Online methods can reduce costs but lack control over environment.
This study examines how anxiety, as a variable, affects students’ ability to comprehend English texts, evaluated through performance on three test types: 1) multiple-choice tests (MC), 2) fill-in-the-blank tests (FTB), and 3) true/false tests (TF). The research used an ex post facto method, with responses collected from eleventh-grade high school students in Jakarta. Respondents worked on tests covering similar material, though at different times. Students’ anxiety was measured using the behavioral and cognitive characteristics listed in a questionnaire given to respondents along with the English test. The results showed that students’ anxiety levels did differ across the various English text comprehension tests. Performance on the comprehension tests was generally influenced by the anxiety variable, although the effect was small. Teachers are recommended to use multiple modes of evaluation when measuring students’ English text comprehension.
This document provides an overview of matrix reasoning tests and strategies for solving Raven's Progressive Matrices. It discusses:
- Matrix reasoning tests measure fluid intelligence and are used widely as non-verbal IQ tests. Raven's Matrices are among the most well-known types.
- Raven's Matrices come in three levels - Standard, Coloured, and Advanced - with increasing difficulty. They involve identifying missing elements that complete patterns.
- Strategies for solving Raven's Matrices involve learning the five basic rule types problems can involve, either alone or combined, such as constant rows or quantitative progressions.
- Training working memory capacity alongside learning strategies can help maximize performance on matrix reasoning tests.
The document describes plans to revise instruction on adding fractions with unlike denominators based on formative assessment results. Key revisions include:
1) Providing more opportunities for students to model and manipulate fractions into simplest form using concrete models and drawings before using the abstract process.
2) Giving targeted small group instruction to help students visualize how finding the greatest common factor simplifies a fraction.
3) Analyzing pre- and post-test results and practice problems to identify weaknesses and tailor remedial instruction accordingly.
This document provides the mark scheme for Section A of the English Literature exam on poetry. It includes 6 mark bands with increasing levels of achievement. For each band, it lists the skills and understanding candidates may demonstrate in their response. It also provides sample exam questions and indicative content that responses could include, summarising the key details and skills assessed in the exam.
NCCE 2013 - The Smarter Balanced System for Improving Teaching and Learning - Karen F
This document discusses the Smarter Balanced system for improving teaching and learning through computer-based assessments. It outlines the system's minimum technology requirements, technology planning process, and data available to member states on technology readiness. It also provides a link to sample assessment items and discusses what is changing in the mathematics and English/language arts assessments to better align with the Common Core State Standards.
This document contains a specimen exam paper for the Higher Modern Studies exam in Scotland. It is divided into three sections worth a total of 52 marks. Section 1 is on democracy in Scotland and the UK and is worth 20 marks. Candidates must choose one question from three options examining factors influencing voting behavior and implications of leaving the EU. Section 2 covers social issues in the UK, worth 20 marks, and includes questions on causes of inequality and crime. Section 3 addresses international issues worth 12 marks, examining world powers and issues. For each section, candidates must choose one question from the options provided. The document provides the structure, requirements, and marking guidelines for the exam.
Educational Psychology Developing Learners 8th Edition Ormrod Test Bank - kynep
Practical Language Testing, Glenn Fulcher - translatoran
Practical Language Testing
Glenn Fulcher
Specifications for testing and teaching: a sample detailed specification for a reading test.
In this section we present an example of an architecture for a reading test. This includes the test framework that presents the test purpose, the target test takers, the criterion domain and the rationale for the test content. The architecture is annotated with explanations in text boxes. This is a detailed test specification. The complexities of coding in test specifications of this kind are usually necessary in the design and assembly of high-stakes tests where it is essential to achieve parallel forms. There are problems with this type of specification for use in classroom assessment, which we deal with in Section 4 below.
The document provides an examination report for the 2010 Business English Certificate (BEC) Vantage exam. It includes:
1) An overview of exam grading and candidate performance on each paper.
2) Details on the format and content of the Reading paper, which contains 5 parts testing different reading skills.
3) Comments on candidate performance for each part of the Reading paper, identifying questions some candidates found more challenging.
The document provides information about Western Academy's GRE test preparation program. It discusses the new GRE test structure, the academy's 10-stage program for achieving a high GRE score, and strategies for each GRE section. The program includes classroom coaching, 20 online practice tests, assistance with university admissions and scholarships, and help with visa and travel processes. The goal is to improve students' speed and accuracy in order to score highly on the revised GRE general test.
This summary provides an overview of two units on language testing and assessment:
Unit B4 discusses test specification design, including elements that should be included like samples and guiding language. It also examines the distinction between prompt and response attributes.
Unit C4 focuses on how test specifications evolve over time. It provides examples of a role-play assessment and how the scoring criteria and use of written plans changed between versions as instructors debated and tested the requirements. The unit proposes applying a reverse engineering workshop to critically analyze existing test tasks and improve testing practices.
The document provides the mark scheme for an English Literature exam assessing poetry across time. It outlines the assessment objectives, mark bands, and evaluation criteria used to assess students' responses. Responses will be judged on their ability to critically respond to texts, analyze language and literary techniques, make comparisons between poems, and relate poems to their contexts. The mark scheme provides a template to evaluate responses based on the level of analysis, interpretation, and comparison demonstrated.
This document summarizes the key activities and discussions from a workshop on developing VSTEP listening test items in Hanoi, Vietnam in December 2015. It describes four main activities: 1) familiarizing participants with CEFR listening descriptors, 2) defining the test construct, 3) reviewing test specifications, and 4) practicing item writing techniques. It also covers preparing listening texts, working with native speakers, and a hands-on practice session where participants wrote test items using short base texts. Throughout, it emphasizes expert approaches and best practices for writing high-quality listening test items.
This document presents a method called Joint Sentiment and Topic Detection (JST) that can simultaneously detect sentiment and topic from text without requiring labeled training data. JST extends the Latent Dirichlet Allocation (LDA) topic model by adding an additional sentiment layer. It assumes words are generated from a joint distribution conditioned on both a sentiment label and topic. The document evaluates JST on movie reviews and product reviews using domain independent sentiment lexicons as prior information. Experimental results show JST can accurately classify sentiment at the document level and detect topics for different domains.
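The generative assumption described above, that each word is drawn from a distribution conditioned jointly on a sentiment label and a topic, can be illustrated with a toy sketch. This is not the paper's implementation (which uses Gibbs sampling over an LDA extension); the sentiment labels, topics, and word lists here are all hypothetical.

```python
# Toy illustration of JST's generative story: sample a sentiment label,
# then a topic, then a word conditioned on the (sentiment, topic) pair.
# All distributions below are made up for illustration.
import random

random.seed(0)

SENTIMENTS = ["pos", "neg"]
TOPICS = ["film", "camera"]

# Hypothetical word distributions phi indexed by (sentiment, topic);
# uniform over each small word list for simplicity.
PHI = {
    ("pos", "film"): ["great", "plot"],
    ("neg", "film"): ["boring", "plot"],
    ("pos", "camera"): ["sharp", "lens"],
    ("neg", "camera"): ["blurry", "lens"],
}

def generate_word():
    """Draw one (sentiment, topic, word) triple per the JST-style story."""
    s = random.choice(SENTIMENTS)   # sentiment label for this word
    z = random.choice(TOPICS)       # topic, drawn given the sentiment
    w = random.choice(PHI[(s, z)])  # word from the joint-conditioned dist
    return s, z, w

for _ in range(3):
    print(generate_word())
```

Inference in the real model runs this story in reverse, estimating which sentiment/topic pairs best explain the observed words.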
UGC NET Model paper (paper-1) Questions with AnswersRaja Adapa
This document contains 45 multiple choice questions related to education. The questions cover topics such as research methodology, teaching-learning processes, statistics, logic, and educational technology. For each question there are 4 answer options labelled A, B, C, and D. The correct answer is to be chosen from among these 4 options.
This document provides an overview of content analysis as a research technique. It defines content analysis as objective, systematic analysis and categorization of communication content. The workshop covers the procedures of content analysis, including design, unitizing, sampling, coding, drawing inferences and validation. Examples of using content analysis to analyze text data from interviews are presented, showing coding categories, frequency calculations and correlations. Content analysis is described as a useful technique for making inferences from qualitative data in an objective and replicable manner.
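The coding and frequency-calculation steps mentioned above can be sketched in a few lines. This is a minimal illustration, not the workshop's procedure; the codebook categories and keywords are invented for the example, and real content analysis would also involve unitizing decisions, multiple coders, and reliability checks.

```python
# Minimal sketch of a content-analysis coding pass: unitize text into
# words, then count occurrences per coding category. The codebook
# below is hypothetical, purely for illustration.
import re
from collections import Counter

CODEBOOK = {
    "assessment": ["test", "exam", "marking"],
    "anxiety": ["anxious", "stress", "worry"],
}

def code_text(text, codebook=CODEBOOK):
    """Return a frequency count of coding-category hits in the text."""
    words = re.findall(r"[a-z']+", text.lower())  # unitize: one word = one unit
    counts = Counter()
    for category, keywords in codebook.items():
        counts[category] = sum(words.count(k) for k in keywords)
    return counts

counts = code_text("The exam caused stress; worry rose before the test.")
print(counts)  # assessment: 2 (exam, test); anxiety: 2 (stress, worry)
```

Frequencies produced this way can then feed the inference step, for example correlating category counts across interview transcripts.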
This chapter discusses fieldwork and data collection in marketing research. It covers the selection, training, supervision, validation and evaluation of field workers who collect data. Field workers are selected based on qualifications like communication skills, appearance and experience. They undergo training in techniques for initial contact, asking questions, probing, recording answers and terminating interviews. Supervision involves quality control, sampling control and preventing cheating. Fieldwork is validated by re-contacting respondents. Evaluation looks at costs, response rates and data quality.
This document provides guidance on writing effective multiple choice questions. It discusses the advantages and disadvantages of multiple choice questions, guidelines for constructing item stems and alternatives, and examples of questions at different cognitive levels. The intended learning outcomes are to explain the strengths and weaknesses of multiple choice exams, evaluate existing multiple choice items, and create effective multiple choice items that measure various learning levels. Participants are engaged in revision activities to practice applying the guidelines.
Writing good multiple choice test questions - englishonecfl
Multiple choice tests can effectively and reliably assess student learning when questions are carefully constructed. An effective multiple choice question consists of a clear problem or stem and answer options that include one correct response and plausible distractors. Distractors should represent common student errors to best test understanding. Question structure and wording should avoid inadvertently signaling the right answer. Higher-order thinking can be assessed by requiring students to apply, analyze, or evaluate concepts in their response.
The document provides guidance on writing effective multiple choice test questions. It discusses characteristics of good test questions such as being clear, concise, independent of each other, and measuring learning objectives. The document outlines best practices for constructing question stems and response options, including making sure there is only one right answer, responses are parallel in structure, and don't provide clues to the right answer. It also discusses using multiple choice questions to test higher-order thinking by focusing on application, analysis, and evaluation in the question and responses.
The document provides guidelines for writing test items or questions. It defines key terms related to test development such as item, item writing, item pool, test, and task. It also describes different item formats such as dichotomous, polytomous, checklists, and Likert scales. For multiple choice items, it explains the components of the stem, lead-in statement, answer options, correct answer, and distractors. The document outlines prerequisites for item writing and provides guidelines for writing clear, unambiguous items that avoid trick questions and guessing. It suggests using Bloom's Taxonomy to develop items testing different cognitive levels and provides examples of terms that can be used to frame item questions.
The document provides guidelines for writing effective test items. It defines key terms related to item writing such as item, item writing, item pool, and test. It also describes different item formats including dichotomous, polytomous, checklists, and Likert scales. The document outlines best practices for writing multiple choice, true-false, matching, short answer, and oral examination items. It emphasizes the importance of clarity, avoiding trick questions, using a variety of question types and cognitive levels, and carefully constructing item stems, options, and distractors. Adhering to these guidelines helps ensure items are valid and reliable measures of student learning.
The document provides guidance on writing effective test questions. It discusses assessing different cognitive levels, using a variety of question types, and matching question types to learning objectives. Specific tips are given for creating multiple choice, true/false, and instructor-marked questions to effectively evaluate learner progress and course effectiveness. Practical considerations like available time and technology are also addressed.
Multiple choice questions can assess different levels of knowledge from simple recall to interpretation and problem solving. They provide flexibility through variations like correct answer, best answer, and interpretive exercises using stimulus materials. Analysis of multiple choice questions focuses on scoring models to determine student achievement and item analysis to evaluate how well questions functioned.
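The item-analysis step mentioned above is commonly summarised by two classical statistics: item difficulty (the proportion answering correctly) and an upper-lower discrimination index. A minimal sketch, assuming a small 0/1 response matrix; the function names, data, and 27% grouping fraction are conventional choices, not taken from the source.

```python
# Classical item analysis on a tiny illustrative dataset:
# rows = students, columns = items, 1 = correct, 0 = incorrect.

def item_difficulty(responses, item):
    """Difficulty index p: proportion of students answering the item correctly."""
    scores = [row[item] for row in responses]
    return sum(scores) / len(scores)

def item_discrimination(responses, item, group_frac=0.27):
    """Upper-lower discrimination index D: p(top group) - p(bottom group),
    where groups are the top/bottom fraction by total score."""
    totals = [(sum(row), row[item]) for row in responses]
    totals.sort(key=lambda t: t[0])
    n = max(1, int(len(totals) * group_frac))
    low = sum(score for _, score in totals[:n]) / n
    high = sum(score for _, score in totals[-n:]) / n
    return high - low

students = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(item_difficulty(students, 0))      # 0.6
print(item_discrimination(students, 0))  # 1.0
```

A very easy or very hard item (p near 1.0 or 0.0) and a low or negative D both suggest the question did not function well and should be revised.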
The document provides guidance on writing effective test questions. It discusses assessing different cognitive levels, using a variety of question types, and matching question types to learning objectives. Specific tips are given for creating multiple choice, true/false, and instructor-marked questions to effectively evaluate learner progress and course effectiveness. Practical considerations like available time and technology are also addressed.
Multiple choice questions can assess different levels of knowledge from simple recall to interpretation and problem solving. They provide flexibility through variations like correct answer, best answer, and interpretive exercises using stimulus materials. Analysis of multiple choice questions focuses on scoring models to determine student achievement and item analysis to evaluate how well questions functioned.
Constructing fair tests that give teachers accurate information about students' learning is important. A table of specification helps organize test planning and content validity by determining what content will be covered. Rubrics can also help with validity when used appropriately. Multiple choice tests can be valid for assessing certain cognitive levels like knowledge and comprehension, but other assessment types may better measure skills and higher-level thinking. Teachers should consider cognitive level and learning objectives when choosing assessments.
This document provides specifications for a reading test designed to assess Grade 4 ESL students in Baghdad, Iraq. It outlines the purpose of the test as measuring students' reading comprehension performance based on the curriculum from the previous semester. It describes the test takers as Grade 4 ESL students and notes the test is designed to be accessible for their level. It discusses the test design, including using multiple choice, true/false, and matching questions to assess reading comprehension. It provides details on how to construct different question types and how the test will be scored.
Here are the key learning outcomes this exercise aims to assess:
- Students will develop an understanding of historical events from different perspectives by taking on roles in a Civil War battle reenactment.
- Students will learn about and experience aspects of daily life for Union and Confederate soldiers through activities like marching, setting up camp, cooking, and scouting.
- Students will gain procedural knowledge about the tactics and strategies employed in a historically representative Civil War battle through direct participation in a reenactment.
2. Design 5-10 multiple choice or true/false questions to assess student learning related to the reenactment experience.
This document provides guidance on constructing objective test items for different formats, including short answer, true/false, matching, and multiple choice. It discusses the characteristics, uses, advantages, limitations, and suggestions for writing each type of item to effectively measure student learning outcomes. Multiple choice items can measure both simple and complex outcomes like knowledge, understanding, and application. While objective tests are limited in scope, the multiple choice format in particular allows for flexibility in content and reliable, structured assessment when written carefully according to the guidelines.
This document provides guidance on designing and administering questionnaires. It discusses key aspects of questionnaire design including question types, question wording, layout, piloting the questionnaire, distribution, non-response follow up, and data analysis. The document emphasizes that questionnaires require careful planning and design to ensure clear, unbiased questions and a user-friendly format in order to obtain high-quality responses. Piloting the questionnaire is also highlighted as an important step to identify and address any issues before full distribution.
1. The document outlines a daily lesson log for a Grade 12 Practical Research 2 class. It details the objectives, content, procedures, and assessment for lessons on inquiry and research held from August 29-31, 2023.
2. The lessons cover defining and differentiating inquiry from research, qualitative from quantitative research, and experimental from non-experimental research. Activities include class discussions, group work, quizzes and developing research questions.
3. The goal is for students to understand the characteristics, strengths, weaknesses and kinds of quantitative research by the end of the sessions. Assessment of learning is conducted through formative quizzes to check comprehension of concepts.
Descriptive statistics research survey analysis (part 2)Ghivitha Kalimuthu
The document discusses surveys and how to design effective surveys. It provides information on different types of surveys, including interviews and questionnaires. It also discusses important characteristics of good survey design such as relevance, accuracy, and developing clear questions. Examples of different types of survey questions are given, such as yes/no, multiple choice, rating scales. The document emphasizes that surveys are commonly used in education research to obtain information, investigate topics, and ask opinions.
Descriptive statistics research survey analysis (part 2)Ghivitha Kalimuthu
The document discusses surveys and how to design effective surveys. It provides information on what surveys are, their purpose, and common forms like interviews and questionnaires. It also discusses key aspects of writing good surveys like relevance, accuracy, and developing focused questions. Different types of survey questions are explained like yes/no, multiple choice, rating scales. Examples of surveys from various domains are also provided. Guidelines are given for writing effective survey items and common mistakes to avoid. Finally, it discusses how to interpret and present survey results in tables and numbers to draw meaningful insights.
The document discusses various types of assessment questions that can be used for computer-aided assessment (CAA) and their suitability for different levels of Bloom's Taxonomy. It describes the advantages and disadvantages of true/false, matching, multiple choice, short answer, calculation, essay, "problem based", simulation, and performance questions. The document recommends that any CAA system developed by the university should be easy to use, supported, standards-based, secure, and scalable to meet the needs of assessing higher-order thinking skills.
The document provides guidance on constructing effective multiple-choice tests. It discusses the strengths and limitations of multiple-choice tests, and guidelines for writing test items. It emphasizes writing clear stems and alternatives that assess different cognitive levels. Distractors should be plausible but incorrect. The summary effectively captures the key topics and purpose of the document in a concise manner.
This document provides guidelines for writing effective essay questions to assess student learning. It defines essay questions and outlines the main types: restricted response and extended response. Guidelines are given for constructing clear questions that assess higher-order thinking and provide criteria for grading. Both advantages and disadvantages of essay questions are discussed. Overall, the document advocates that essay questions can effectively evaluate students' reasoning and analytical abilities when guidelines are followed to create valid, reliable and fair assessment.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
Natural birth techniques - Mrs.Akanksha Trivedi Rama University
Woodford
Using multiple choice questions effectively in Information Technology education
Karyn Woodford and Peter Bancroft
School of Software Engineering and Data Communications
Queensland University of Technology
As academics are confronted with problems such as larger classes and the introduction of a
trimester year of study, it has become increasingly necessary to search for alternative forms of
assessment. This is certainly the case in Information Technology (IT), where more lecturers are
using multiple choice questions as a matter of expediency and in some instances, the quality of the
assessment is being neglected. This paper provides guidance for IT lecturers who wish to write
effective tests containing good multiple choice questions. Some of the points raised are founded in
the long history of research into this form of assessment but IT lecturers are, in general, unlikely to
be familiar with many of the matters discussed. The paper also considers the major criticism of
multiple choice questions (that they do not test anything more than just straight recall of facts) and
examines ways of overcoming this misconception. It is our aim to raise awareness of these issues
in IT education, but teachers in other disciplines may also find the material useful.
Keywords: large classes, IT, multiple choice tests, multiple choice questions
Introduction
Each year growing numbers of students are enrolling in tertiary studies and teachers are facing increased
time pressures in setting and marking assessment items. In Information Technology, even with recent
downturns in enrolments, teachers of first year units are still faced with classes in excess of 250 students.
In other disciplines, alternative testing techniques such as multiple choice questions have long been used
to help alleviate the problem and there is an awareness of the extensive body of research in the area. With
IT lecturers now making widespread use of multiple choice questions (Lister, 2001; Carter, Ala-Mutka,
Fuller, Dick, English, Fone & Sheard, 2003), this paper will provide some guidelines to assist in the
construction of well written questions. In addition to the belief that multiple choice questions are often not
well constructed (Paxton, 2000), this form of assessment still faces criticism due to the belief that it does
not test anything deeper than a superficial memorising of facts. We contend that it is possible to construct
multiple choice questions that are able to test higher levels of cognition. The following diagram
represents the levels within the cognitive domain as identified by Bloom (1956). The simple recall of
facts is at the lowest level, increasing to the evaluation skills at the top.
[Pyramid diagram, bottom to top: Knowledge, Comprehension, Application, Analysis, Synthesis,
Evaluation; the upper levels (Application and above) are labelled 'Higher Cognition']
Figure 1: Bloom’s levels of cognition
In the paper we address the problem of how multiple choice questions can test more than just knowledge
of a subject. Specifically we discuss the comprehension, application and analysis levels of cognition, and
give examples of multiple choice questions to test students at these levels.
Firstly we review the terminology used to describe multiple choice questions and suggest methods for
measuring their effectiveness. We then discuss a range of factors that should be considered when
composing questions, such as:
• Is the grammar and wording of a question correct?
• How many alternative answers should be used?
• Should negatives and double negatives be used in questions?
• Is it valid to have multiple correct answers?
• How should the questions be ordered (by topic or randomly)?
• Should the alternative answers be listed in any particular order?
• Should ‘All of the above’ or ‘None of the above’ be used?
• Are the distracters plausible?
The contribution of this paper is that it will provide guidance for IT teachers who want to set multiple
choice questions while maintaining the integrity of their assessment.
Writing effective multiple choice questions
The parts of a multiple choice question
Common terminology (Isaacs 1994) for describing the separate parts of a multiple choice question is
illustrated in the following example:
stem:     The complexity of insertion sort is:
options:  (a) O(n)        - distracter
          (b) O(n^2)      - key
          (c) O(log n)    - distracter
          (d) O(2^n)      - distracter
Figure 2: The parts of a multiple choice question
A single multiple choice question, such as the one above, is known as an item. The stem is the text that
states the question, in this case ‘The complexity of insertion sort is’. The possible answers (correct
answer plus incorrect answers) are called options. The correct answer (in this case b) is called the key,
whilst the incorrect answers (a, c and d) are called distracters.
What is an effective question?
A simple measure of the effectiveness of a question is provided by the distribution of student responses
amongst the options. If too many students select the correct answer, then perhaps the distracters are not
convincing. If very few students answer correctly, then the question may not be clear or a deliberately
misleading distracter may have been used. The proportion of students answering a question correctly is
called its facility. Whilst there are no hard and fast rules about an item’s facility, it may be appropriate to
have a range somewhere between 0.4 and 0.6 when the goal of the test is to rank students in order. A
facility of 0.8 or higher may be more appropriate for tests which are primarily formative (Isaacs, 1994).
Another measure of a question’s effectiveness is whether the question tests the desired level of cognition
(as described in Figure 1 above).
Limitations of multiple choice questions
The traditional style of multiple choice questions - a simple stem question with a key and distracters - has
its limitations. A student may select the correct answer by knowing that answer is correct or by
eliminating all of the other options. While this may initially seem acceptable, it does not necessarily test
the students’ full knowledge of the subject - knowing one option is correct does not guarantee they know
that the others are incorrect. Similarly, working out the correct answer by a process of elimination does
not demonstrate that the student necessarily knows the solution - if faced with that single answer in a true
/ false environment, they may not have known that it was correct. This limitation may be overcome by
using a different style of multiple choice question (see later).
Factors affecting the validity of multiple choice questions
When writing good multiple choice questions there are several factors to consider - some relate to the
actual question whilst some relate to the options (key and distracters).
Correct grammar and wording
The use of incorrect grammar in the stem of a question can often allow students to exclude an option
immediately. Consider the following question.
The code fragment ‘char * p’ is a way of declaring a
(a) pointer to a char.
(b) array of strings.
(c) pointer to a char or an array of strings.
A test-wise student may identify option (b) as incorrect because it starts with a vowel while the stem ends
with ‘a’ and not ‘an’. To avoid cueing a student in this manner, the options should include the article:
The code fragment ‘char * p’ is a way of declaring
(a) a pointer to a char.
(b) an array of strings.
(c) a pointer to a char or an array of strings.
There are several other grammatical considerations (Wilson & Coyle, 1991):
• ensuring the stem and options are worded in the same tense;
• avoiding adding qualifying words or phrases to the key (a test-wise student will often identify a
longer, more precise answer as the correct option); and
• using similar wording in all options, particularly making sure that the key does not sound like it is
directly from a text book.
Number of options
The number of options is one of the most fiercely debated issues amongst supporters of the multiple
choice question. Strong arguments have been made for 3, 4 and 5 options. Those who argue for 5 option
tests believe that 3 or even 4 option tests increase the probability of a student guessing the correct answer
to an unacceptably high level. Those who argue for 3 option tests claim that their tests can be as effective
as a 4 or 5 option test, as the additional distracters are likely to be less believable. The arguments for 3
option and 4 or 5 option tests are considered below, along with a brief discussion on removing non-
functioning items. Once the number of desired options is decided, it is advisable to use this number of
options for every item in the examination to reduce the possibility of careless mistakes.
Three options
A well written multiple choice question with three options (one key and two distracters) can be at least as
effective as a question with four options. According to Haladyna and Downing (1993) roughly two thirds
of all multiple choice questions have just one or two effectively performing distracters. In their study they
found that the percentage of questions with three effectively performing distracters ranged from 1.1% to
8.4%, and that in a 200 item test, where the questions had 5 options, there was not one question with four
effectively performing distracters.
The argument for three options therefore is that the time taken to write a third and possibly a fourth
distracter (to make a 4 or 5 option test) is not time well spent when those distracters will most likely be
ineffective. In Sidick and Barrett (1994) it is suggested that if it takes 5 minutes to construct each
distracter, removing the need for a third and fourth distracter will save ten minutes per question. Over
100 questions, this will
save more than 16 hours of work. Supporters of 4 or 5 option tests would argue that any time saved would
be negated by a decrease in test reliability and validity. Bruno and Dirkzwager (1995) find that, although
reliability and validity are improved by increasing the number of alternatives per item, the improvement
is only marginal for more than three alternatives.
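The time saving claimed above is easy to check, taking the 5 minutes per distracter figure from Sidick and Barrett (1994):

```python
MINUTES_PER_DISTRACTER = 5   # Sidick and Barrett's (1994) estimate
distracters_saved = 2        # dropping the third and fourth distracters
questions = 100

minutes_saved = MINUTES_PER_DISTRACTER * distracters_saved * questions
print(minutes_saved, "minutes =", minutes_saved / 60, "hours")  # 1000 minutes, ~16.7 hours
```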
Four or five options
The most significant argument against three option multiple choice tests is that the chance of guessing the
correct answer is 33%, as compared to 25% for 4 option and 20% for 5 option exams. It is argued that if
effective distracters can be written, the overall benefit of the lower chance of guessing outweighs the
extra time to construct more options. However, if a distracter is non-functioning (if less than 5% of
students choose it) then that distracter is probably so implausible that it appeals only to those making
random guesses (Haladyna & Downing 1993).
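Both the guessing probability and the 5% non-functioning criterion are simple to operationalise when analysing test results. A sketch with hypothetical response data:

```python
from collections import Counter

def guessing_chance(num_options):
    """Probability of a blind guess being correct: 1/3, 1/4, 1/5, ..."""
    return 1 / num_options

def non_functioning_distracters(responses, options, key, threshold=0.05):
    """Distracters chosen by fewer than `threshold` of students
    (Haladyna & Downing's 5% criterion)."""
    counts = Counter(responses)
    n = len(responses)
    return [o for o in options if o != key and counts[o] / n < threshold]

# Hypothetical responses from 100 students: distracter 'd' attracts only 2%.
responses = ["b"] * 60 + ["a"] * 30 + ["c"] * 8 + ["d"] * 2
print(non_functioning_distracters(responses, ["a", "b", "c", "d"], key="b"))  # ['d']
```

Here 'd' falls below the 5% threshold and is a candidate for removal or rewriting.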
Removing non-functioning options
Removing a non-functioning distracter (i.e. an infrequently selected one) can improve the effectiveness of
the test. In Cizek and O’Day (1994) a study of 32 multiple choice questions on two different papers was
undertaken. One paper had 5 option items, whilst the other paper contained 4 option items, formed by
removing a non-functioning option from each identical 5 option item. The study concluded that when a
non-functioning option was removed, the result was a slight, non-significant increase in item difficulty,
and that the test with 4 option items was just as reliable when compared to the 5 option item test.
‘Not’ and the use of double negatives
Asking a student to select which option is not consistent with the stem can be an effective test of their
understanding of material but teachers should be very careful when using ‘not’ to ensure that it is very
obvious to the student. A student who is reading too quickly may miss the ‘not’ keyword and therefore
the entire meaning of the question. It is suggested that when ‘not’ is used, it should be made to stand out,
with formatting such as bold, italics or capitals.
Whilst the use of ‘not’ can be very effective, teachers should avoid the use of double negatives in their
questions, as it makes the question and options much more difficult to interpret and understand.
Multiple correct answers
As discussed previously, multiple choice questions have some limitations - specifically that a student may
be able to deduce a correct answer, without fully understanding the material. Having multiple correct
answers helps eliminate this issue but it is generally agreed that multiple choice questions with more than
one key are not an effective means of assessment.
A hybrid of the multiple answer and the conservative formats can be achieved, by listing the ‘answers’
then giving possible combinations of correct answers, as in the following example:
Which of the following statements initialises x to be a pointer?
(i) int *x = NULL; (ii) int x[] = {1,2,3}; (iii) char *x = "itb421";
(a) (i) only
(b) (i) and (ii) only
(c) (i), (ii) and (iii)
(d) (i) and (iii) only
In this format the student has to know the correct combination of answers. There is still a possibility that
if they know one of the answers is incorrect then this may exclude one (or more) options, but by applying
this hybrid format, a more thorough test of their knowledge is achieved.
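The mechanics of the hybrid format can be made explicit: each lettered option denotes a set of statements, and the key is the option whose set matches the statements that are actually correct. In the sketch below, the choice of correct statements reflects our reading of standard C semantics ((i) and (iii) initialise pointers, while (ii) declares an array); the paper itself does not state the key:

```python
# Each option in the hybrid format names a combination of statements.
options = {
    "a": {"i"},
    "b": {"i", "ii"},
    "c": {"i", "ii", "iii"},
    "d": {"i", "iii"},
}

# Assumption (our reading of C, not stated in the paper):
# (i) and (iii) initialise pointers; (ii) declares an array, not a pointer.
correct_statements = {"i", "iii"}

key = next(label for label, combo in options.items() if combo == correct_statements)

def score(answer):
    """Full credit only for the exact combination - partial knowledge earns nothing."""
    return 1 if answer == key else 0
```

Because credit requires the exact combination, a student who knows only that (i) is correct cannot score by elimination alone.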
Order of questions
At issue here is whether questions should be in the same order as the material was taught, or scrambled.
In Geiger and Simons (1994), the results of studies indicate that the ordering of questions makes no
difference to the time taken to complete the examination, or to the results but it may have an effect on
student attitude. The authors suggest that the reason why question ordering does not have much impact is
that most students seem to employ their own form of scrambling, answering the questions they are
confident with, and going back to others later.
Order of options
It is recommended that options be arranged in some logical pattern – however, patterns among the keys
within a multiple choice test should be avoided (for example, having a repeating ABCD sequence). To
ensure that there is no pattern to the keys, it might be advantageous to apply some kind of constraint on
the options (for example, put them in alphabetical order) (Wilson & Coyle, 1991).
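One way to apply such a constraint mechanically is to sort each item's option texts and relabel them, so the key's position is determined by the text itself rather than by any pattern the setter might fall into. A brief, illustrative sketch:

```python
def arrange_alphabetically(option_texts, key_text):
    """Sort option texts, relabel them a, b, c, ..., and report the new key label."""
    ordered = sorted(option_texts)
    labelled = {chr(ord("a") + i): text for i, text in enumerate(ordered)}
    new_key = next(label for label, text in labelled.items() if text == key_text)
    return labelled, new_key

# The Figure 2 options, rearranged so the key position carries no pattern.
texts = ["O(n^2)", "O(n)", "O(2^n)", "O(log n)"]
labelled, key = arrange_alphabetically(texts, "O(n^2)")
```

Applied across a whole test, this removes any repeating key sequence without the setter having to track key positions by hand.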
Use of all of the above and none of the above
The option ‘all of the above’ should be used very cautiously, if not completely avoided. Students who are
able to identify two alternatives as correct without knowing that other options are correct will be able to
deduce that ‘all of the above’ is the answer. In a 3 option test this will not unfairly advantage the student
but in a 4 or 5 option test a student may be able to deduce that the answer is ‘all of the above’ without
knowing that one or even two options are correct. Alternatively, students can eliminate ‘all of the above’
by observing that any one alternative is wrong (Hansen & Dexter, 1997). An additional argument against
the use of ‘all of the above’ is that for it to be correct, there must be multiple correct answers which we
have already argued against.
The use of ‘none of the above’ is more widely accepted as an effective option. It can make the question
more difficult and less discriminating, and unlike ‘all of the above’, there is no way for a student to
indirectly deduce the answer. For example, in a 4 option test, knowing that two answers are incorrect will
not highlight ‘none of the above’ as the answer, as the student must be able to eliminate all answers to
select ‘none of the above’ as the correct option.
A study by Knowles and Welch (1992) found that using ‘none of the above’ as an option does not result in
items of lesser quality than items that do not use it.
Writing plausible distracters
An important consideration in writing multiple choice questions is that the distracters are plausible.
Poorly written distracters could easily cue a student to the correct answer. For example, if a question
asked a student,
Given this undirected graph, what would be the result of
a depth first iterative traversal starting at node E?
(a) EABCFDG
(b) EDBFCG
(c) EDBGFCA
(d) EADBCFG
(e) EGDCFBA
certain distracters would be ineffective - a distracter that did not include every node would be clearly
wrong (option b). Most students would also realise that the second node in a traversal would usually be
one close to the starting node, so an option that jumps suddenly to the other ‘end’ of the graph (option e)
may also be easily discarded.
When writing distracters for this question, a teacher should consider the types of mistakes associated with
a poor understanding of the algorithm and attempt to offer distracters that include these errors.
Additionally, an option containing the answer to a similar type of question could be a good distracter - for
example, in this traversal question a distracter could contain the correct result for a depth first recursive
traversal (option a) or a breadth first traversal (option d). Only a student who knows the correct algorithm
and is able to apply it to the graph will be able to determine which of the plausible options (a, c or d) is
the actual key.
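The graph for the question above appears in a figure that is not reproduced here, but the principle can still be illustrated. The Python sketch below uses a hypothetical seven-node adjacency list (our own illustration, not the paper's graph) to show how a correct breadth-first order makes a plausible distracter for a depth-first question: both traversals visit every node and both begin at a neighbour of E, yet they produce different orders.

```python
from collections import deque

# Hypothetical undirected graph (adjacency lists); NOT the graph from
# the paper's figure, which is not reproduced here.
ADJ = {
    'A': ['B', 'E'], 'B': ['A', 'D', 'F'], 'C': ['D', 'F'],
    'D': ['B', 'C', 'E'], 'E': ['A', 'D', 'G'], 'F': ['B', 'C'],
    'G': ['E'],
}

def dfs_iterative(adj, start):
    """Depth-first traversal with an explicit stack; neighbours are
    pushed in reverse alphabetical order so they pop alphabetically."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        for nb in reversed(sorted(adj[node])):
            if nb not in visited:
                stack.append(nb)
    return ''.join(order)

def bfs(adj, start):
    """Breadth-first traversal; its correct answer is a plausible
    distracter for a depth-first question (and vice versa)."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in sorted(adj[node]):
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return ''.join(order)

print(dfs_iterative(ADJ, 'E'))  # EABDCFG for this hypothetical graph
print(bfs(ADJ, 'E'))            # EADGBCF for this hypothetical graph
```

Only a student who can actually execute the requested algorithm on the graph can tell these orders apart, which is exactly the discrimination a well-written distracter should provide.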
Testing more than just recall
The main advantage of multiple choice tests is obvious - they result in a significant reduction of marking
for teachers. One of the greatest criticisms of using this type of questioning is that it only tests facts that
students can learn by rote. An extension of this argument is the contention that whilst multiple choice
questions may be useful for formative assessment and perhaps even mid-semester examinations, they
have no place in examinations where the student should be tested on more than just their ability to recall
facts. We believe that well written multiple choice questions can test up to the sub-synthesis levels of
cognition, that is, knowledge, comprehension, application and analysis. It should be noted that whilst we
are arguing in favour of using multiple choice questions to test more than just recall, there is always a
place for testing knowledge, including fundamental facts that every student of a subject should know.
Testing comprehension
To test students at the comprehension level, we should present questions that require them to understand
information, translate knowledge into a new context, interpret facts and predict consequences. In IT, we
could ask students to interpret code or predict the result of a particular change to a data structure, for
example, as in the following question:
A minimum heap functions almost identically to the maximum heap studied in class – the
only difference being that a minimum heap requires that the item in each node is smaller
than the items in its children. Given this information, what method(s) would need to be
amended to change our implementation to a minimum heap?
(a) insert( ) and delete( )
(b) siftUp( ) and siftDown( )
(c) buildHeap( )
(d) none of the above
This question tests that the student understands the implementation of the maximum heap, and also asks
them to translate some pre-existing knowledge into the new context of a minimum heap.
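The intended key is (b): only the sift methods compare items, so only they must change. A minimal Python sketch (our own illustration, not the implementation studied in the paper's class) makes this concrete by isolating the comparison used by the sift methods, while insert( ) and delete( ) remain identical for both heap variants:

```python
class Heap:
    """Binary heap where min/max behaviour is isolated in one comparison."""
    def __init__(self, is_min=False):
        self.items = []
        # The ONLY difference between a min and a max heap: which of two
        # items should sit closer to the root.
        self.before = (lambda a, b: a < b) if is_min else (lambda a, b: a > b)

    def insert(self, item):            # identical for min and max heaps
        self.items.append(item)
        self._sift_up(len(self.items) - 1)

    def delete(self):                  # remove and return the root; also identical
        root = self.items[0]
        last = self.items.pop()
        if self.items:
            self.items[0] = last
            self._sift_down(0)
        return root

    def _sift_up(self, i):             # the comparison direction lives here...
        while i > 0:
            parent = (i - 1) // 2
            if not self.before(self.items[i], self.items[parent]):
                break
            self.items[i], self.items[parent] = self.items[parent], self.items[i]
            i = parent

    def _sift_down(self, i):           # ...and here
        n = len(self.items)
        while True:
            best, left, right = i, 2 * i + 1, 2 * i + 2
            if left < n and self.before(self.items[left], self.items[best]):
                best = left
            if right < n and self.before(self.items[right], self.items[best]):
                best = right
            if best == i:
                break
            self.items[i], self.items[best] = self.items[best], self.items[i]
            i = best
```

Inserting 5, 3 and 8 and then deleting the root returns 8 from a max heap but 3 from a min heap; a buildHeap method that calls _sift_down would likewise be unchanged, which is why option (c) is a plausible distracter.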
Testing application
The application level requires solving problems by applying acquired knowledge, facts, techniques and
rules. To test a student’s application of knowledge in a subject, they could be asked, for example, to apply
a known algorithm to some data.
In Computer Science subjects there are many opportunities to test at the application level, for example
asking the student to apply:
• searching and sorting algorithms,
• ADT-specific algorithms (eg AVL-Tree rotations, Hash Table insertions)
• other algorithms (eg Dijkstra)
The question below tests application of knowledge by asking the student to apply a known algorithm.
Consider the given AVL Tree. What kind of rotation would be needed to rebalance this tree
if the value ‘H’ was inserted?
(a) no rotation required
(b) RL
(c) LR
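The AVL tree for this question is given in a figure that is not reproduced here, so the answer cannot be shown, but the decision the student must make can be sketched. The following Python helper (our own illustration, not code from the paper) encodes the standard classification rule, taking the balance factor as left-subtree height minus right-subtree height; note that some texts use the opposite sign convention:

```python
def rotation_for(balance, child_balance):
    """Classify the AVL rotation needed at an unbalanced node.

    balance:        height(left) - height(right) at the node
    child_balance:  the same quantity at its taller child
    """
    if balance > 1:        # left-heavy
        # left child also left-heavy (or even) -> single right rotation
        return 'LL' if child_balance >= 0 else 'LR'
    if balance < -1:       # right-heavy
        # right child also right-heavy (or even) -> single left rotation
        return 'RR' if child_balance <= 0 else 'RL'
    return 'no rotation required'

print(rotation_for(-2, 1))   # RL: right-heavy node, left-heavy right child
print(rotation_for(2, -1))   # LR: left-heavy node, right-heavy left child
print(rotation_for(1, 0))    # no rotation required
```

A student answering the question above must carry out the insertion, recompute the balance factors, and then apply exactly this rule, which is application rather than recall.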
Testing analysis
Analysis requires the examination of information, breaking it into parts by identifying motives or causes;
identifying patterns; making inferences; finding the underlying structure and identifying relationships.
Asking a student to analyse the effect of some code on a given data structure, or to identify patterns in the
way an ADT processes information, is a good way to test their ability to analyse. Asking these questions
in a multiple choice format can be very difficult. If you asked a student “What effect does the above code
have on our DataSet?” the distracters may give themselves away – the student may easily be able to see
that the code is not doing what the distracter claims.
There are several alternatives to this approach. For example, asking the student whether the code will
have the desired effect may allow the writing of more plausible distracters, or alternatively, asking them
to analyse some code and then make a comparison with some known code. For example:
Consider the code below which could be used to find the largest element in our sorted,
singly linked list called SD2LinkedList. This code would fit one of the processing patterns
that we studied in class. Which of the following methods fits the same pattern as this new
code?
(a) union
(b) hasElement
(c) exclude
(d) isSubsetOf
This question not only tests the student’s ability to analyse the new code, but also their knowledge of
existing code and their ability to compare the way in which the given code processes data compared to
that existing code.
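The SD2LinkedList class and the methods listed in the options belong to the paper's subject material and are not reproduced here, but the "processing pattern" idea can be sketched. Two common patterns in linked-list code are a full traversal, where the cursor must run to the end of the list (finding the largest element of an ascending sorted list is one such case), and an early-exit search, which stops as soon as a condition is met (a membership test such as hasElement). A hypothetical Python sketch of the contrast:

```python
class Node:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def build(values):
    """Build a singly linked list from an iterable; returns the head."""
    head = None
    for v in reversed(list(values)):
        head = Node(v, head)
    return head

def largest(head):
    """Full-traversal pattern: in an ascending sorted list the largest
    element is the last one, so the cursor must walk to the end."""
    cursor = head
    while cursor.next is not None:
        cursor = cursor.next
    return cursor.data

def has_element(head, target):
    """Early-exit pattern: stop at the first match, or as soon as the
    sorted list has passed where the target would be."""
    cursor = head
    while cursor is not None and cursor.data <= target:
        if cursor.data == target:
            return True
        cursor = cursor.next
    return False
```

A question in the style above asks the student to recognise which of these patterns a new piece of code follows, which requires analysing its loop structure rather than recalling a fact.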
Another method of testing a student's higher cognitive skills is through the use of linked sequential
questions, which allow the examiner to build on a concept. An example of this method would be to ask a
number of questions, each of which makes a small change to a piece of code, and to ask what effect that
change would have on the functioning of the program. The student could be required to use the outcome of
each question to answer the subsequent question. Using this technique, care needs to be taken to avoid
unfairly penalising the student through accumulated or sequential errors.
Conclusion and future work
In this paper, we have attempted to raise the awareness of Information Technology teachers to the vast
amount of research that has been undertaken into writing multiple choice questions. We have discussed
the terminology used to describe multiple choice questions and their limitations, as well as a range of
factors that should be considered when composing questions.
Further, we have described how multiple choice questions can be used to test more than straight recall of
facts. We gave specific examples which test students’ comprehension of knowledge and their ability to
apply and analyse that knowledge and we suggest that sequentially dependent questions also facilitate
testing of higher cognition. Being able to set good questions which test higher cognition allows teachers
to use multiple choice questions in end-of-semester summative tests with confidence, not just as a
convenience for low-valued mid-semester tests and formative assessment.
In other related work, the authors are implementing a web-based multiple choice management system. A
stand-alone prototype of this system (Rhodes, Bower & Bancroft, 2004) is currently in use, while the web-based
system will allow further features, including concurrent access and automatic generation of paper-based
examinations.
References
Bloom, B.S. (Ed.) (1956). Taxonomy of educational objectives. Handbook 1: The cognitive domain. New
York: McKay.
Bruno, J.E. & Dirkzwager, A. (1995). Determining the optimal number of alternatives to a multiple-choice
test item: An information theoretic. Educational & Psychological Measurement, 55(6), 959-966.