Tools of Research

Published in: Education, Technology
1. Tools of Research

  2. A technique for obtaining information from subjects: a series of questions asked to individuals to obtain statistically useful information about a given topic.
  3.  Relatively economical  Has the same questions for all subjects  Can ensure anonymity
  4. Requirements:
 Determine the type of information
 Limit responses
 Figure out who, what, when, and where you're going to distribute your questionnaire
 Straightforward and clear
 Quantity
 State the introduction and purpose
 Simple, short, and on a single page
 Double-check
 Professional, easy to read, and understandable
  5. A. Decision about question content
B. Decision about question wording
C. Decision about form of response to the question
D. Decision about the place of the question in the sequence
  6. Types:
 Closed-ended: respondents' answers are limited to a fixed set of responses.
 Open-ended: no answer options/predefined categories are suggested.
 Contingency: a question that is answered only if the respondent gives a particular response to a previous question.
 Matrix: identical response options are assigned to multiple questions.
  7.  Provide for marking a yes or no, a short response, or checking an item from a list of suggested responses.
Example: Why did you choose your graduate work at this University? Rank:
a. Advice of a friend.
b. Scholarship aid.
c. Reputation of the University.
d. Other.
  8.  Completely unstructured: a free response in the respondent's own words, with no clues given.
Example: Why did you choose your graduate work at this University?
Answer: _______________________________________________
  9. Differences in characteristics
Closed-ended:
a. Highly structured
b. Generates frequencies of response
c. Amenable to statistical treatment and analysis
d. More focused
e. To the point
Open-ended:
a. Unstructured
b. Suited for complex issues
c. Respondents can explore their thoughts more fully
d. Not limited to pre-set categories
e. Can lead to irrelevant and redundant information
  10. Factual (demographic): who is interviewed and their background/experiences.
Behavioural: what they do, or did in the past.
Attitudinal: attitudes, opinions, beliefs, interests, and values.
  11. 1. Significant topic
2. Single-source questions
3. Short, simple, and understandable
4. Attractive in appearance, neatly arranged, and clearly duplicated
5. Clear and complete directions, simple and clear questions, and a single idea or concept per question
6. Good psychological order, proceeding from general to more specific responses
7. Objective questions
8. Easy to tabulate and interpret
  12. Validity and reliability of questionnaires
 The question of content validity is: "Do the items sample a significant aspect of the purpose of the investigation?"
 Use clearly defined meanings in questionnaires.
 Get help from colleagues or experts in the field of the questions.
 A panel of experts can rate the instrument to determine how effectively it samples significant aspects of its purpose.
 Estimate the predictive validity of a questionnaire by a follow-up observation of respondent behavior at the present time or at some time in the future.
  13.  An information form that attempts to measure the attitude or belief of an individual. How people feel, or what they believe, is their attitude. But it is difficult, if not impossible, to describe and measure attitude directly. Researchers must depend upon what people say their beliefs and feelings are. This is the area of opinion. Through the use of questions, or by getting people's expressed reactions to statements, a sample of their opinions is obtained.
  14. 1. Thurstone technique (equal-appearing interval scales)
2. Likert technique (summated rating scale)
  15.  The list of statements is given to the subjects, who are asked to check the statements with which they agree. The median value of the statements that they check establishes their score, or quantifies their opinion.
  16. The second method, the Likert method of summated ratings, which can be carried out without a panel of judges, has yielded scores very similar to those obtained by the Thurstone method. The correlation coefficient between the two scales was reported as +.92 in one study. Since the Likert scale takes less time to construct, it offers an interesting possibility for the student of opinion research.
The Likert scaling technique assigns a scale value to each of the five responses. Thus, the instrument yields a total score for each respondent, and a discussion of each individual item, while possible, is not necessary.
  17. Scale value
a. Strongly agree 5
b. Agree 4
c. Undecided 3
d. Disagree 2
e. Strongly disagree 1
For statements opposing this point of view, the items are scored in the opposite order:
a. Strongly agree 1
b. Agree 2
c. Undecided 3
d. Disagree 4
e. Strongly disagree 5
  18. Example statements (each answered on the a–e scale):
1. Heaven doesn't exist as an actual place or location
2. God sometimes sets aside natural law, performing miracles
3. Hell does not exist
4. The devil exists as an actual person
5. God is a cosmic force, rather than an actual person
6. There is a final day of judgment for all who have lived on the earth
  19. If the opinionnaire consisted of 30 statements or items, the following score values would be revealing:
30 x 5 = 150  Most favorable response possible
30 x 3 = 90   A neutral attitude
30 x 1 = 30   Most unfavorable attitude
The score for any individual would fall between 30 and 150: above 90 if opinions tended to be favorable, below 90 if opinions tended to be unfavorable to the given point of view.
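The Likert scoring scheme above can be sketched in a few lines of Python (the deck names no language, and the function name and sample responses here are my own illustration, not from the slides):

```python
# Scale values from the slide: strongly agree = 5 ... strongly disagree = 1.
SCALE = {"a": 5, "b": 4, "c": 3, "d": 2, "e": 1}

def likert_score(responses, reversed_items):
    """Sum the scale values; items opposing the point of view score 6 - value."""
    total = 0
    for i, choice in enumerate(responses, start=1):
        value = SCALE[choice]
        if i in reversed_items:
            value = 6 - value  # flips 5<->1 and 4<->2, leaves 3 unchanged
        total += value
    return total

# A 30-item opinionnaire: scores run from 30 (all 1s) to 150 (all 5s),
# with 90 as the neutral midpoint, matching the 30 x 5 / 30 x 3 / 30 x 1 slide.
all_neutral = ["c"] * 30
print(likert_score(all_neutral, reversed_items=set()))  # 90
```

Marking every statement "undecided" yields exactly the neutral score of 90 from the slide.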
  20. Definition: a method of measuring a person's ability, knowledge, or performance in a given domain.
  21. 1. ACHIEVEMENT TEST: attempts to measure what an individual has learned, that is, his or her present level of performance. Most tests used in schools are achievement tests. Achievement test scores are used in placing, advancing, or retaining students at particular grade levels. Achievement test scores are also used to evaluate the influence of courses of study, teachers, teaching methods, etc.
  22. The proficiency test also measures what students have learned, but its aim is to determine whether this language ability corresponds to specific language requirements.
Some examples of proficiency tests:
1. Placement test: is the student proficient enough to enter an intermediate course, or is it better to place him or her in a basic course?
2. The reading-knowledge test for doctoral candidates. Example: is the student able to read professional literature in another language with a specific level of accuracy?
  23. The progress test measures how much the student has learned in a specific course of instruction. The tests that the classroom teacher prepares for administration at the end of a unit or the end of a semester are progress tests.
  24.  Multiple choice
 Arranging sentences
 Fill in the blank
 Matching
 True or false
 Short answer
  25. 1. Design each item to measure a specific objective
2. State both stem and options as simply and directly as possible
3. Make certain that the intended answer is clearly the only correct one
4. Use item indices to accept, discard, or revise items
  26. 1. To improve the quality of test items through the students' answers to each item.
2. To find out whether the test items will be accepted, discarded, or revised.
  27. How to analyze?
1. IF (item facility, or item difficulty)
2. ID (item discrimination, sometimes called item differentiation)
3. Distractor analysis
  28. IF (item facility): the extent to which an item is easy or difficult for the proposed group of test-takers. Brown (2000) defines item facility as the proportion of students who answered a particular item correctly. The proportion is expressed as the IF index.
  29. The formula used to get the item facility:

IF = (number of test-takers answering the item correctly) / (number of test-takers)

e.g. if you have an item on which 13 out of 20 test-takers respond correctly, your IF index is 13 / 20 = .65 (65%).
  30. The classification of IF items:
0.00–0.30  The item is difficult
0.31–0.70  The item is medium
0.71–1.00  The item is easy
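The IF formula and the classification bands above can be combined into a small Python sketch (function names are my own; the thresholds and the 13-of-20 example are from the slides):

```python
def item_facility(num_correct, num_test_takers):
    """IF = proportion of test-takers who answered the item correctly."""
    return num_correct / num_test_takers

def classify_if(if_index):
    """Map an IF index onto the slide's difficulty bands."""
    if if_index <= 0.30:
        return "difficult"
    elif if_index <= 0.70:
        return "medium"
    return "easy"

# Slide example: 13 out of 20 test-takers respond correctly.
if_index = item_facility(13, 20)
print(if_index, classify_if(if_index))  # 0.65 medium
```

An IF of .65 falls in the 0.31–0.70 band, so the item is of medium difficulty.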
  31. The extent to which an item differentiates between high- and low-ability test-takers. An item on which high-ability students and low-ability students score equally well would have poor ID, because it does not discriminate between the two groups.
  32. The formula for calculating ID is:

ID = (high group # correct - low group # correct) / (1/2 x total of your two comparison groups)
  33. Item #23                    # Correct   # Incorrect
High-ability Ss (top 10)          7           3
Low-ability Ss (bottom 10)        2           8
Using the ID formula, (7 - 2) / 10 = .50, you would find that this item has an ID of .50, a moderate level.
  34. 0.40–1.00  The item is accepted
0.30–0.39  The item is accepted but needs revising
0.20–0.29  The item must be revised
0.00–0.19  The item is discarded
(Safari, 2005: 27)
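The ID formula and the Safari (2005) bands above can likewise be sketched in Python (function names are my own; the item #23 figures are from the slide):

```python
def item_discrimination(high_correct, low_correct, group_total):
    """ID = (high # correct - low # correct) / (1/2 x total of both groups)."""
    return (high_correct - low_correct) / (0.5 * group_total)

def classify_id(id_index):
    """Bands from Safari (2005: 27), as quoted on the slide."""
    if id_index >= 0.40:
        return "accepted"
    if id_index >= 0.30:
        return "accepted but needs revising"
    if id_index >= 0.20:
        return "must be revised"
    return "discarded"

# Slide example, item #23: the top 10 got 7 correct, the bottom 10 got 2.
id_index = item_discrimination(7, 2, 20)
print(id_index, classify_id(id_index))  # 0.5 accepted
```

An ID of .50 clears the 0.40 threshold, so by Safari's bands the item is accepted.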
  35. The functions of ID are:
 To increase the quality of a test item through empirical data. Based on the ID index, each item will be found to be good, to need revision, or to be discarded.
 To find out how far each item can detect and distinguish the students' ability, that is, which students do or do not understand the material taught by the teachers.
  36.  Distractor analysis is one more important measure of a multiple-choice item's value in a test, and one that is related to item discrimination (ID).
 The pattern is obtained by calculating the number of test-takers who choose each option of a test item, or who do not choose any option. The distribution pattern will show whether or not the distractors work well.
  37. Example:
Choices               A   B   C   D   E
High-ability Ss (10)  0   1   7   0   2
Low-ability Ss (10)   3   5   2   0   0
Note: C is the correct response.
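A minimal sketch of reading such a distribution table in Python, using the example data above with C as the key. One common reading, assumed here, is that a distractor "works" when it attracts more low-ability than high-ability test-takers:

```python
# Option counts from the slide's example item (C is the correct response).
choices = {
    "high": {"A": 0, "B": 1, "C": 7, "D": 0, "E": 2},
    "low":  {"A": 3, "B": 5, "C": 2, "D": 0, "E": 0},
}

key = "C"
for option in "ABDE":  # examine the distractors only
    hi = choices["high"][option]
    lo = choices["low"][option]
    works = lo > hi  # drawn more by the low-ability group
    print(option, "works" if works else "does not work")
```

On this data, A and B work; D attracts nobody and E attracts more high-ability test-takers than low, so both would warrant revision.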
  38. The characteristics of good distractors are:
 distractors should be brief and as homogeneous as possible,
 distractors should be plausible,
 use at least 3 distractors to reduce the chance of guessing the correct answer,
 avoid distractors that provide clues,
 distractors should embody misconceptions, partly correct answers, and common errors of fact or reasoning.
  39. Clear instructions cover:  what to do   the appropriate response   the time allowance   how many items to attempt
Example: The test consists of 20 questions. Read each question carefully. In the box beside each answer, check the letter that indicates the correct answer for each question. 20 minutes are given to complete the test.
  40.  Timing will determine how ready, motivated, and capable a student might be.
 The best time is between 11 a.m. and 1 p.m.
 The time allowance should be told to the students.
 One should consider the level of difficulty in setting the time allowance.
  41.  Figure out the objectives and determine the weight of each item.
            Percent of     Possible total
            total grade    correct
Listening   30%            20 items @ 1.5 points each = 30
Grammar     40%            20 items @ 2 points each   = 40
Reading     30%            20 items @ 1.5 points each = 30
Total                      100
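The weighting table above can be sketched as a short Python check that the per-item points reproduce the section weights and sum to 100 (the section names and figures are from the slide; the data structure is my own):

```python
# Each section's item count and per-item points, as given in the table.
sections = {
    "Listening": {"items": 20, "points_each": 1.5},  # 30% of total grade
    "Grammar":   {"items": 20, "points_each": 2.0},  # 40% of total grade
    "Reading":   {"items": 20, "points_each": 1.5},  # 30% of total grade
}

total = 0.0
for name, s in sections.items():
    subtotal = s["items"] * s["points_each"]
    print(f"{name}: {subtotal:g}")  # section's possible total correct
    total += subtotal
print("Total:", int(total))  # 100
```

Choosing the per-item points this way means a raw score out of 100 already reflects the intended 30/40/30 weighting, with no rescaling step needed.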