Test Maker
Xaurum Online Testing Tool
Usage Rights

© All Rights Reserved

Test Maker Presentation Transcript

  • 1. Test Maker home page. Language selector.
  • 2. Create a new test.
  • 3. Open existing test, two ways:
    • links to the last opened tests
    • list of all tests (sortable list)
  • 4. Currently opened test: quick info. Note all the new tabs which appear after the test is opened.
  • 5. Design: the 3 panels
    1. Object selection: list of object types you can add to the test
       • Questions (8 different types of question)
       • Design items
       • Questions from existing tests or library sections
    2. Test tree: the current hierarchical list of objects already in the test
    3. Object designer: the designer tool of an object
    Currently selected item in the tree.
  • 6. Design: Object selection
    1. New object selection panel (left). A simple click on an object inserts an instance of that item in the Test tree, just after the currently selected item, or at the end of the tree if there is no selection.
    Currently selected item in the tree. New object inserted after it.
  • 7. Design: Test tree
    3. Test tree panel (right). Allows removing items from the test and reordering the items in the tree.
  • 8. Design: Object designer
    2. Object designer panel (middle). Tool to edit the properties and functionalities of the object. Each type of question has its own designer. Note the three tabs:
    • Design view: the designer
    • Student view: a raw representation of what the student will see
    • Answer view: the correct answer
  • 9. Design: Object designer
    Design view: the designer.
    Student view: a raw representation of what the student will see. Notice the numbering and disposition, for example.
    Answer view: the correct answers.
  • 10. Question: Multiple Choices (MC)
    Points: maximum score obtained when the answer(s) is (are) correct
    Difficulty: arbitrary question difficulty indicator
    Numbering: numbering type displayed before the option buttons
    Disposition: vertical, horizontal, dropdown
    Randomize: if checked, the proposed options are shuffled
    Hints: hint to display for the question (see the « Show Hints » option in the « Presentation » attributes)
    Feedbacks: displayed after submission (see the « Show Feedbacks » option in the « Presentation » attributes)
    Question text: don't forget it!
    Image: to illustrate the question
  • 11. Question: Multiple Choices (MC) (continued)
    Add as many options as needed and select the correct one.
  • 12. Question: Multi-Select (MS)
    Scoring: two ways of scoring the answers
    1. All or nothing: the points are awarded only if all and only the correct options are checked by the student; otherwise the student gets 0 points.
    2. Weighted: each correctly checked option receives partial credit; an equal measure of partial credit is deducted for each incorrect answer.
    (partial credit = Points / #C, where #C is the total number of expected correct answers)
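The two Multi-Select scoring modes can be sketched in a few lines of Python. This is an illustrative reconstruction of the rule described on the slide, not Test Maker's actual code; in particular, flooring the weighted score at 0 is an assumption (the slide does not say whether deductions can produce a negative score).

```python
def score_multi_select_weighted(points, correct, checked):
    """Weighted MS scoring: each correctly checked option earns
    points / #C; each incorrectly checked option deducts the same amount.
    Flooring at 0 is an assumption, not stated on the slide."""
    partial = points / len(correct)      # partial credit per expected answer
    hits = len(checked & correct)        # correctly checked options
    misses = len(checked - correct)      # incorrectly checked options
    return max(0.0, (hits - misses) * partial)

def score_multi_select_all_or_nothing(points, correct, checked):
    """All or nothing: full points only if exactly the correct set is checked."""
    return points if checked == correct else 0
```

For example, with 4 points and correct options {A, B}, checking only A would yield 2 points under the weighted rule, while the all-or-nothing rule would give 0.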
  • 13. Question: Multi-Select (MS) (continued)
    Add as many options as needed and select the correct ones.
  • 14. Question: True/False (TF)
    Enter the text displayed for both options and select the correct one.
  • 15. Question: Ordering (ORD)
    Scoring: two ways of scoring the answers
    1. All or nothing: the points are awarded only if all the items are correctly sorted; otherwise the student receives 0 points.
    2. Weighted: for each item in the correct position, the student receives partial credit.
    (partial credit of item i = Points * Wi / Wtotal, where Wi is the weight of item i and Wtotal is the sum of the weights of all items)
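The weighted Ordering formula above can be sketched as follows. Function and parameter names are illustrative; the same Wi / Wtotal rule also applies to the Matching (MAT) and Fill In the Blanks (FIB) weighted modes described later.

```python
def score_ordering_weighted(points, correct_order, student_order, weights=None):
    """Sum points * Wi / Wtotal over the items the student placed
    in their correct position (default weight 1 per item)."""
    if weights is None:
        weights = [1] * len(correct_order)
    w_total = sum(weights)
    return sum(
        points * w / w_total
        for want, got, w in zip(correct_order, student_order, weights)
        if want == got                   # item is in its correct position
    )
```

For a 6-point question with three equally weighted items, a student who places only the first item correctly would receive 6 * 1/3 = 2 points.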
  • 16. Question: Ordering (ORD) (continued)
    Add as many items as needed. Specify the correct position of each item. Optionally, set a relative weight for the item (default = 1).
    Test interface for an Ordering question.
  • 17. Question: Matching (MAT)
    Scoring: two ways of scoring the answers
    1. All or nothing: the points are awarded only if all the items are correctly matched; otherwise the student receives 0 points.
    2. Weighted: for each correct match, the student receives partial credit.
    (partial credit of item i = Points * Wi / Wtotal, where Wi is the weight of item i and Wtotal is the sum of the weights of all items)
  • 18. Question: Matching (MAT) (continued)
    Add as many matching pairs as needed. Optionally, assign a relative weight to the matching pair (default = 1).
    To increase difficulty, you can add so-called « distractors », which are non-associated items (all distractors must be added in the same list).
    Test interface for a Matching question.
  • 19. Question: Free Text (FT)
    A Free Text question requires manual grading.
    Instructor notes: information displayed to the instructor during the manual grading process.
  • 20. Question: Short Answers (SA)
    # required answers: number of expected answers
    Scoring: two ways of scoring the answers
    1. All or nothing: the points are awarded only if all the expected answers are entered; otherwise the student receives 0 points.
    2. Weighted: for each correct answer entered, the student receives partial credit.
    (partial credit = Points / #C, where #C is the total number of expected answers)
  • 21. Question: Short Answers (SA) (continued)
    Add as many correct answers as needed (at least # required answers). Indicate whether the answer is case sensitive (default: no). Enter any alternate (variant) answers with their relative weight.
  • 22. Question: Fill In the Blanks (FIB)
    Scoring: two ways of scoring the answers
    1. All or nothing: the points are awarded only if all the answers are entered; otherwise the student receives 0 points.
    2. Weighted: for each correct answer entered, the student receives partial credit.
    (partial credit of answer i = Points * Wi / Wtotal, where Wi is the weight of answer i and Wtotal is the sum of the weights of all answers)
  • 23. Question: Fill In the Blanks (FIB) (continued)
    Add as many text blocks and blank blocks as needed. Set a relative weight for the answer (default = 1). Indicate whether the answer is case sensitive (default: no). Enter any alternate (variant) answers with their relative weight.
    Test interface for a Fill In the Blanks question.
  • 24. Design item: Text block
    Allows adding additional text and images. Test interface.
  • 25. Summary: a quick view of the test items
    Jump to the object designer.
  • 26. Answers sheet: a « paper » view of the test and answers
    PDF export.
  • 27. Preview: see what the student will get
    From this screen, start a test attempt in preview mode. You may ask to track this « Preview » attempt in order to see it in the tracking and statistics reports.
    Note: as soon as attempts are tracked, modifications to the questions in the test are restricted to minor changes. But « Preview » attempts can be deleted (see the following point) in order to be able to update the test. Tracked « Preview » attempts can be deleted.
  • 28. Preview: what the student will get
  • 29. Preview: help
  • 30. Settings: default attributes of the test used during « publication »
  • 31. Settings: General
    Name: used to easily identify the test inside TestManager; also used during publication.
    Description: additional description for internal use.
    Starting page: design of the starting page of the test.
  • 32. Settings: Presentation
    Questions per page: best practice is 1 per page (0 means all the questions on the same page).
    Order of questions: Preset = the order set at design time.
    Show submit button on each page: or only when the last page has been reached.
    Show Hints: (see next slide).
    Show Feedbacks: feedbacks will be displayed after submission if « Show corrections » is checked (see the Submission tab).
  • 33. Settings: Grading
    Grading is the process of giving an overall grade to the test after its submission. It is based on the scores obtained for each question and uses one of the 4 available grading schemes. Grading can be automatic or manual.
    Grading at submit: the test is automatically graded after submission. If not checked, grading must be done manually (for example, if some questions require manual scoring, the test is best graded manually).
    Grading scheme: select the grading scheme to apply and the grade/value/percent to pass, depending on the scheme.
  • 34. Settings: Grading (continued)
    Numeric: numeric scale (min and max values); value to pass. The global score of the test is transformed according to the scale to determine the pass status.
    Pass/Fail: the names of the two grades can be changed; percentage to pass.
    Letters A-E: the limits can be changed; grade to pass.
    Custom: like the previous scheme but with custom-defined grades (maximum ten); grade to pass.
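The Numeric scheme above can be illustrated with a short sketch. The linear rescaling is an assumption for illustration; the slides only say the global score "is transformed according to the scale".

```python
def numeric_grade(score, max_score, scale_min, scale_max, value_to_pass):
    """Rescale the global test score onto the [scale_min, scale_max] grading
    scale (linear mapping assumed) and compare it to the value to pass."""
    grade = scale_min + (score / max_score) * (scale_max - scale_min)
    return grade, grade >= value_to_pass
```

For example, a score of 15 out of 20 on a 0-10 scale with 5 as the value to pass would give a grade of 7.5 and a passed status.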
  • 35. Settings: Attempts
    Attempts allowed: maximum number of attempts allowed for the student; default = 1.
    (Note: CC publications allow only one attempt per Test event. LP publications allow multiple attempts per Training Plan, but only with the overall grading « Best attempt ».)
    Delay before new attempt: in day(s) or hour(s).
    Overall grading: if more than 1 attempt is allowed, how to compute the overall grade.
    Warning: not all methods are necessarily available for some publication types, since they could lower an already attributed grade.
  • 36. Settings: Restrictions
    Backward allowed: whether backward navigation is allowed in the test.
    Maximum Time: maximum time (minutes) allowed to complete the test. The student is warned when the maximum time has elapsed.
    Bonus Time: when the maximum time has elapsed, the bonus time allows the student to terminate and submit the test before the definitive end.
    Late submission: sets how late submission (after Max + Bonus time) is handled.
    - Allow normal submission: the student may submit the test normally; it is only flagged as « late »
    - Auto submission: the test is automatically submitted, terminated, and flagged as « late »
    - Enforce limit: the test is automatically terminated, not submitted
    Start and End date & time: not applicable in CC publications.
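The three late-submission policies above can be summarized in a small decision sketch. The policy identifiers and return strings are hypothetical; only the three behaviors come from the slide.

```python
def handle_late_submission(policy, elapsed, max_time, bonus_time):
    """Decide the outcome once elapsed time (minutes) is compared
    against Maximum Time + Bonus Time."""
    if elapsed <= max_time + bonus_time:
        return "normal submission"               # still within the time budget
    if policy == "allow_normal":                 # Allow normal submission
        return "submitted, flagged late"
    if policy == "auto_submit":                  # Auto submission
        return "auto-submitted, flagged late"
    if policy == "enforce_limit":                # Enforce limit
        return "terminated, not submitted"
    raise ValueError(f"unknown policy: {policy}")
```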
  • 37. Settings: Submission
    What is shown to the student after test submission.
  • 38. Publication
    To allow students to enter a Test, it must be published. A Publication defines settings for the particular environment where the test will run. These attributes are inherited from the Test settings and can be overridden.
    Moreover, during publication some actions are performed automatically to prepare the specific environment (for example, Test Categories are created or updated in the CC environment, or Catalog entries are created in the LP environment).
  • 39. Publication: CC environment
  • 40. Publication: CC environment
    The publication triggered the creation of a Test Category in the CC environment. A test event can now be created with this category.
  • 41. Publication: LP environment
    Only one LP publication can be created for a test. Specify the language for which the corresponding LP courses must be generated and set their names. After that, no new LP publication can be created for the test.
  • 42. Publication: LP environment
    The publication triggered the creation of AICC courses in the LP catalog.
  • 43. Tracking: all attempts, even those not submitted
  • 44. Score distribution
    Score distribution: number of students aggregated by score (in %) for submitted attempts.
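The aggregation behind this report can be sketched as a simple histogram. The 10% bucket width is an assumption for illustration; the slide does not specify how scores are grouped.

```python
from collections import Counter

def score_distribution(scores_pct, bucket=10):
    """Count submitted-attempt scores (0-100%) per bucket; a score of 100
    is folded into the top bucket so the range stays [0, 100]."""
    return Counter(min(s // bucket * bucket, 100 - bucket) for s in scores_pct)
```

For example, scores of 95, 100, 42, and 45 would produce two attempts in the 90-100 bucket and two in the 40-49 bucket.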
  • 45. Question results
    Question results: how the students answered each question (for submitted attempts).
  • 46. Score answers & Grade attempt
    Grade: screen to select Test attempts. Displays the most important information about grading.
    Jump to modify the score of questions in an attempt.
  • 47. Score answers & Grade attempt
    Typically, a Free Text question requires manual scoring. The « Notes for instructor » appear here to help score the answer. Updating the score of an answer automatically (re)grades the attempt.
  • 48. Score answers & Grade attempt
    For a publication where the « Grade at submit » attribute is not checked, the « Grade attempt » icon appears for an attempt after submission. The attempt can be graded by updating the score of any question in it (see previous slide) or, without modifying the scores, by clicking this icon.
  • 49. Score answers & Grade attempt
    For a CC publication: the « blue » line displays the grade and status stored in the CC environment. For a publication with the « Grade at submit » attribute checked, the grade and status are automatically sent to the CC environment when the test is submitted. If an attempt is (re)graded manually, the new/updated grade and status must/can be sent to the CC environment here.
  • 50. Score answers & Grade attempt
    For an LP publication: the « blue » line displays the score and status stored in the LP environment. For a publication with the « Grade at submit » attribute checked, the grade and status are automatically sent to the LP environment when the test is submitted. If an attempt is (re)graded manually, the new/updated score and status must/can be sent to the LP environment here.
  • 51. Delete & Copy test
    A test can be deleted, with all its questions, if no results have been collected for it (including tracked « Preview » attempts).
    A test can be copied, with all its questions, to create a new one. All questions are duplicated and can be maintained independently.
  • 52. Questions Library
    Test Maker allows creating a repository (library) of questions, organized in sections.
    A library section can be used as a random section in a test: a fixed number of questions will be randomly picked from the library section at runtime. A section already used as a random section in a test can no longer be deleted.
    Questions from the library can also be individually copied into a test.
    Click here to add a new section in the library; give it a name and populate it with questions.
  • 53. Questions Library (continued)
    Questions of a library section already used in attempts can no longer be deleted.
    Questions of a library section can be « deactivated »: they will no longer be used at runtime. This is useful when a question has already been used in attempts but is now obsolete.
  • 54. Random section
    Click here to add a random section in a test:
    • give it a name
    • select the Library section to attach
    • set the number of questions to pick randomly at runtime
    • set the points attributed to each question
  • 55. Random section (continued)
    Summary of a test with a random section. Answers sheet of a test with a random section.
  • 56. Update of a test: restrictions
    As soon as results have been collected for a test, some updates are no longer possible.
  • 57. Update of a test: restrictions (continued)
    Still possible: any update which does not affect the structure and scoring of the questions.
    • Update of the question texts
    • Update of the option texts in MC, MS, and T/F questions (but no option can be added or removed)
    • Update of the item texts in ORD questions (but no item can be added or removed)
    • Update of the choice and match texts in MAT questions (but no item or distractor can be added or removed)
    • Update of the answer texts in SA and FIB questions (but no answer can be added or removed)
    • Display aspects (disposition, numbering, randomize)
    • …
    No longer possible: any update which affects the structure and scoring of the questions.
    • Adding or removing a question
    • Adding or removing options, possible answers, items, etc., depending on the question type
    • Changing the type of a question
    • Changing the points attributed to a question
    • Changing the scoring mode (« Weighted » or « All or nothing ») of a question
    • Changing the correct options, the correct order, the correct matching, etc., depending on the question type
    • …
  • 58. Update of a test: restrictions (continued)
    Best practice
    1. Create a pilot version.
    2. Test the pilot version with tracked or untracked « Preview » attempts inside Test Maker.
    3. Run « Preview » attempts outside Test Maker by mailing the test to a pilot target group.
    4. Tracked preview attempts can always be deleted; the test is then fully updatable once again.
    5. Repeat the previous steps until everything looks good.
    6. Publish the Test (V1).
    7. Even after real results have been collected in the LP or CC environment, restricted updates are still possible, as seen previously.
    8. If major structural changes become necessary, create a new version of the Test, starting if necessary from a copy of the obsolete one.