Educational Research
Published in: Education, Technology

  • 1. Educational Research. Presented by Erlan Agusrijaya. Blog: 1
  • 2. The Purpose of Educational Research: to provide evidence to help people decide which opinions are correct, or at least more correct; and to help us develop better ways to think about the field of education. 2
  • 3. Exercise: Indicate, on a scale of 1-5, the extent to which you think research has demonstrated the truth of each statement (1 = clearly refuted by research, 2 = somewhat refuted by research, 3 = inconclusive, 4 = somewhat supported by research, 5 = clearly supported by research). 1. The more time beginning readers spend on phonics, the better readers they become. 2. Bilingual education for nonnative speakers impairs their academic proficiency. 3. Increased contact with handicapped people results in a more positive attitude toward them. 4. Boys are better in math; girls are better in languages. 3
  • 4. Exercise (continued): Indicate, on a scale of 1-5, the extent to which you think research has demonstrated the truth of each statement. 5. Requiring students who do not like one another to work together on a project results in an increase in their liking for each other. 6. Students who take moral education courses behave more ethically than students who do not take such courses. 7. The use of manipulatives in the elementary grades results in improved achievement in mathematics. 8. Behavior modification is an effective way of teaching skills to very slow learners. 4
  • 5. Exercise (continued): Indicate, on a scale of 1-5, the extent to which you think research has demonstrated the truth of each statement. 9. Classroom discussion of real-life sexual issues and problems results in increased promiscuity among teenagers. 10. Among children who become deaf before language has developed, those with hearing parents become better readers than those with deaf parents. 11. The more teachers know about a specific subject matter, the better they teach it. 5
  • 6. Forms of Educational Research: surveys, experiments, case studies, ethnographies. 6
  • 7. Statement 1 is rated at 3. Despite a great deal of research on the topic, this statement can be neither clearly supported nor refuted. It is clear that phonics instruction is an important ingredient; what is not clear is how much time should be devoted to it. 7
  • 8. Statement 2 is rated at 2. Evidence is unclear as to whether or not bilingual methods are superior to English-only instruction, but several studies indicate no impairment of academic skills. 8
  • 9. Statement 3 is rated at 2. Evidence indicates that a more positive attitude results only if the nature of the contact is structured beforehand. 9
  • 10. Statement 4 is rated at 3. There is a considerable amount of evidence that these gender differences exist, though the reasons are not clear. 10
  • 11. Statement 5 is rated at 3. The evidence here is quite clear that the outcome depends on whether the students involved see one another as necessary to achieving success. 11
  • 12. Statement 6 is rated at 3. There is relatively little research on ethical behavior. 12
  • 13. Statement 7 is rated at 4. The evidence is quite supportive of this method of teaching mathematics. 13
  • 14. Statement 8 is rated at 5. There is a great deal of evidence to support the statement. 14
  • 15. Statement 9 is rated at 3. Not much evidence exists, and the evidence that does exist is inconclusive. 15
  • 16. Statement 10 is rated at 1. The findings of many studies refute the statement. 16
  • 17. Statement 11 is rated at 3. The evidence is inconclusive despite the seemingly obvious fact that the teacher must know more than the students. 17
  • 18. Empirical Research vs. Nonempirical Research 18
  • 19. Basic Research vs. Applied Research. Basic: results apply to a great many people and situations; results are related to general theory or to a general field of knowledge; results need not have immediate or even clear implications for practice. Applied: results are applicable only to a specific group of people in a particular situation; results are not necessarily related to a broader field of knowledge; results must have immediate and clear implications for practice. 19
  • 20. Research Question 20
  • 21. Examples of Research Questions (with an appropriate methodology): Does client-centered therapy produce more satisfaction in clients than does traditional therapy? (experimental research) Are the descriptions of people in social studies textbooks biased? (content-analysis research) What goes on in an elementary school classroom during an average week? (ethnographic research) Do teachers behave differently toward students of different genders? (causal-comparative research) How can we predict which students might have trouble learning certain kinds of subject matter? (correlational research) How do parents feel about the school counseling program? (survey research) How can a principal improve faculty morale? (interview research) 21
  • 22. Exercise: Which research questions suggest relationships? 22
  • 23. Questions 1 and 2 do not suggest a relationship. Question 1 asks for no more than a description regarding the current usage of manipulative materials in a particular school district. Similarly, question 2 asks only for a survey of administrative opinions. Investigations of such questions may be extremely useful in their own right, but they do not extend our knowledge as to why such conditions exist. 23
  • 24. Questions 3 and 4 indicate a relationship. Question 3 seeks to investigate a possible relationship between eating disorders and sexual abuse. If a history of sexual abuse is related to eating disorders, this suggests (although it does not prove) that such abuse may be a cause of such disorders. It also suggests that counseling which addresses patient history may be helpful. Question 4 seeks to investigate a possible relationship between the type of language instruction and fluency in the language taught. If the language laboratory method is shown to be more effective than classroom instruction by individual teachers, this has clear implications for improving language learning. 24
  • 25. RELATIONSHIP and VARIABLE A variable is any characteristic that is not always the same—that is, any characteristic that varies. Examples of variables include gender, eye color, achievement, motivation, and running speed. 25
  • 26. Exercise: What are the variables in this research question? Answer: the variables are age and level of anxiety in mathematics courses. 26
  • 27. Quantitative vs. Qualitative Variables. Measured/quantitative variables exist in some degree rather than all or none; they are measured along a continuum from "less" to "more," with numbers assigned to different individuals or objects. An example would be height. Categorical/qualitative variables do not vary in degree, amount, or quantity, but are qualitatively different; e.g. eye color, gender, religious preference, occupation, position on a baseball team, and most kinds of "treatments" or "methods." 27
  • 28. Independent vs. Dependent Variables. Independent variables are those the investigator chooses to study (and often manipulate) in order to assess their possible effect(s) on one or more other variables. The dependent variable is the variable which the independent variable is presumed to affect; all outcome variables are dependent variables. 28
  • 29. Exercise: What are the independent andthe dependent variables in this question? 29
  • 31. Extraneous Variables and Constants. Extraneous variables are independent variables that have not been controlled. Constants are potential variables that are not allowed to change. 31
  • 32. Ethics and Research. Every researcher should consider: the protection of participants from harm, the ensuring of confidentiality of research data, and the question of knowingly deceiving research subjects. 32
  • 33. Hypotheses. A hypothesis is, simply put, a prediction of some sort regarding the possible outcomes of a study. A research question is often restated as a hypothesis. Question: "Do individuals who see themselves as socially attractive want their romantic partners also to be socially attractive?" Hypothesis: "Individuals who see themselves as socially attractive will want their romantic partners also to be (as judged by others) socially attractive." 33
  • 34. Directional vs. Nondirectional Hypotheses. A directional hypothesis is one that indicates the specific direction (e.g., higher, lower, more, less) that a researcher expects to emerge in a relationship. A nondirectional hypothesis does not make a specific prediction about what direction the outcome of a study will take. 34
  • 35. Reviewing the Literature: General References: the sources a researcher refers to first. Secondary Sources: publications in which authors describe the work of others. Primary Sources: publications in which investigators report the results of their studies. 35
  • 36. Steps Involved in a Literature Review: 1. Define the research problem as precisely as possible. 2. Skim through some relevant secondary sources. 3. Peruse one or two general reference works. 4. Formulate search terms (key words or phrases) that are pertinent to your research question. 5. Search the general references for relevant primary sources. 6. Read the relevant primary sources. 7. Take notes and summarize the key points in the sources. 36
  • 37. A Computer Search of the Literature: Define the problem as precisely as possible. Decide on the extent of the search. Decide on the database (e.g., ERIC). Select descriptors. Conduct the search. Broaden or narrow the search. Obtain a printout of the desired references. 37
  • 38. Writing Your Summaries. 1. Try to locate at least five recent primary sources that are pertinent to your topic. At least three of these should be research reports that present data of some kind (scores on a test, responses to a questionnaire, and so on). The other two may be the viewpoints or ideas of someone as expressed in an article (that is, merely an opinion piece that does not present data). 38
  • 39. Writing Your Summaries. 2. Limit your summary to approximately one-half page (200 words). 3. Be sure to describe what the author did and what the author's conclusions were. 4. If the reference you are summarizing pertains to a research study, you should briefly describe the method the researcher used. Be sure that you also note how the author arrived at his or her conclusions. 39
  • 40. An Example of a Summary. Walberg, H. J., and Thomas, S. C. 1972. An operational definition and validation in Great Britain and the United States. American Educational Research Journal, 9: 197-216. The purpose of this article is to describe the development of an observation scale and a teacher questionnaire for assessing the degree of "openness" of a given elementary school classroom. Items were written within each of eight "themes" obtained from available literature and reviewed by a panel of authorities. The resulting instruments were used in approximately 20 classrooms for each of three types: British open, American open, and American traditional. The classrooms were identified by reputation and personal knowledge. Approximately equal numbers of lower and middle socioeconomic-level classrooms were included. Results showed that overall assessments obtained with the two different instruments (observation scale and questionnaire) agreed quite highly. Differences between the open and traditional classrooms were much greater than those between socioeconomic levels or between countries. 40
  • 41. Subjects and Sampling 41
  • 42. Examples of populations: All of the high school principals in the United States. All of the elementary school counselors in the state of California. All of the students attending Central High School in Omaha, Nebraska, during the academic year 1987-1988. All of the students in Mrs. Brown's third-grade class at Wharton Elementary School. 42
  • 43. Examples of samples: A researcher is interested in studying the effects of diet on the attention span of third-grade students in a large city. There are 1,500 third graders attending the elementary schools in the city. The researcher selects 150 of these third graders, 30 each in five different schools, to study. A principal of an elementary school district wants to investigate the effectiveness of a new U.S. history textbook being used by some of the teachers in her district. Out of a total of 22 teachers who are using the text, she selects 6, comparing the achievement of students in the classes of these 6 teachers with those of another 6 teachers who are not using the text. 43
  • 44. Sampling Procedures. Probability sampling: simple random sampling, stratified random sampling, random cluster sampling, two-stage random sampling. Nonprobability sampling: convenience sampling, purposive sampling, systematic sampling. 44
  • 45. Simple Random Sampling (SRS). In SRS every member of the population has an equal and independent chance of being selected for the sample. Example: "We interviewed a sample of 41 mothers of eighth graders from one middle school. These mothers were randomly selected from a list of 129 mothers provided by the principal of the school." (Baker and Stevenson, 1986, p. 157) 45
  • 46. Simple Random. [Diagram: from a population of lettered elements, a simple random sample (D, Y, N, P, L, H) is drawn.] 46
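The selection procedure the slide illustrates can be sketched in a few lines of Python. This is an illustrative sketch, not from the slides; the population and sample sizes echo the Baker and Stevenson example, and the `mother_N` labels are hypothetical placeholders.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n elements so that every member of the population has an
    equal and independent chance of being selected."""
    rng = random.Random(seed)  # seeded so the draw is reproducible
    return rng.sample(population, n)

# A list of 129 mothers, from which 41 are randomly selected.
population = [f"mother_{i}" for i in range(129)]
sample = simple_random_sample(population, 41, seed=1)
```

Seeding the generator is only for reproducibility in a demonstration; in a real study the draw would not be seeded.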
  • 47. Stratified Random Sampling. Stratified sampling is a process whereby certain subgroups, or strata, are selected for the sample in the same proportion as they exist in the population. Example: "From a pool of all children who returned a parental permission form (more than 80% return rate), 24 first graders (10 girls, 14 boys; mean age, 6 years, 6 months) and 24 third graders (13 girls, 11 boys; mean age, 8 years, 8 months) were randomly selected." (Clements and Nastasi, 1988, p. 93) 47
  • 48. Stratified Random. [Diagram: a population divided into strata of 25%, 50%, and 25%; the sample is drawn so that it contains the same proportions.] 48
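Proportional allocation across strata, as in the 25%/50%/25% diagram, can be sketched as follows (an illustrative Python sketch; the stratum names and sizes are invented for the example, and simple rounding is used for the per-stratum counts):

```python
import random

def stratified_sample(strata, total_n, seed=None):
    """Sample from each stratum in the same proportion as it occurs in
    the population. strata maps a stratum name to its list of members."""
    rng = random.Random(seed)
    pop_size = sum(len(members) for members in strata.values())
    out = {}
    for name, members in strata.items():
        # Allocate this stratum's share of the total sample (rounded).
        k = round(total_n * len(members) / pop_size)
        out[name] = rng.sample(members, k)
    return out

# A 200-person population split 25% / 50% / 25%, sampled n = 20.
strata = {"low": list(range(50)),
          "mid": list(range(50, 150)),
          "high": list(range(150, 200))}
sample = stratified_sample(strata, 20, seed=2)
```

With these proportions the 20-person sample contains 5, 10, and 5 members from the three strata, mirroring the diagram.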
  • 49. Random Cluster Sampling. When it is not possible to select a sample of individuals from a population (for example, when a list of all members of the population of interest is not available), cluster sampling is used. It involves the random selection of naturally occurring groups or areas and then the selection of individual elements from the chosen groups or areas. 49
  • 50. Cluster Random. [Diagram: the population consists of naturally occurring clusters; whole clusters (QR, CD, EFG) are randomly selected as the sample.] 50
  • 51. Two-Stage Random Sampling It is often useful to combine cluster sampling with individual sampling. Rather than randomly selecting 200 students from a population of 3000 ninth graders located in 100 classes, the researcher might decide to select 25 classes randomly from the population of 100 classes and then randomly select 8 students from each class. 51
  • 52. Two-Stage Random. [Diagram: clusters (CD, LM, STU) are randomly selected first, then individuals (C, L, T) are sampled from within the chosen clusters.] 52
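The two-stage procedure described above (randomly select 25 classes out of 100, then 8 students from each class) can be sketched in Python. This is an illustrative sketch using made-up class and student labels, not data from the slides:

```python
import random

def two_stage_sample(clusters, n_clusters, n_per_cluster, seed=None):
    """Stage 1: randomly select whole clusters.
    Stage 2: randomly select individuals within each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return {c: rng.sample(clusters[c], n_per_cluster) for c in chosen}

# 100 classes of 30 ninth graders each, as in the slide's scenario.
classes = {f"class_{i}": [f"s{i}_{j}" for j in range(30)]
           for i in range(100)}
sample = two_stage_sample(classes, 25, 8, seed=3)
```

Selecting 8 students from each of 25 classes yields the 200-student sample the slide describes.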
  • 53. Convenience Sampling. A convenience sample is a group of individuals who (conveniently) are available for study. Example: "A high school counselor interviews all of the students who come to her for counseling about their career plans." 53
  • 54. Convenience. [Diagram: the sample (Q, Y, X, L, I) is simply the easily accessible portion of the population.] 54
  • 55. Purposive Sampling In purposive sampling the researcher selects particular elements from the population that will be representative or informative about the topic. Purposive sampling is different from convenience sampling in that the researcher does not simply study whoever is available, but uses his or her judgment to select the sample for a specific purpose. 55
  • 56. Purposive. [Diagram: particular elements (B, F, N, V, L) are deliberately chosen from the population for a specific purpose.] 56
  • 57. Example of Purposive Sampling. "Introductory psychology students (N=210) volunteered to take the Dogmatism Scale (Form E) for experimental credit. From the upper and lower quartiles on the Dogmatism Scale, 44 high and 44 low dogmatic subjects were selected for the experiment." (Rickards and Slife, 1987, pp. 636-637) 57
  • 58. Systematic Sampling In systematic sampling every nth element is selected from a list of all elements in the population. 58
  • 59. Systematic. [Diagram: from an alphabetical listing of the population (A through T), every fifth element (B, G, L, Q) is selected.] 59
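Every-nth selection is simple enough to state directly in code. The sketch below reproduces the slide's diagram: from a list of 20 elements (A through T), taking every fifth element starting at B yields B, G, L, Q. The helper name and the choice of starting point are illustrative:

```python
def systematic_sample(population, n, start=0):
    """Select every k-th element from an ordered list,
    where k = len(population) // n."""
    k = len(population) // n
    return [population[start + i * k] for i in range(n)]

roster = [chr(ord("A") + i) for i in range(20)]  # A .. T
sample = systematic_sample(roster, 4, start=1)   # every 5th, from B
```

In practice the starting point is usually chosen at random between 0 and k-1, which makes this a probability-adjacent procedure; with a fixed list order it remains nonprobability sampling, as the slide classifies it.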
  • 60. Measurement Measures are specific techniques or instruments used for measurements and generally refer to quantitative devices. These are often tests and questionnaires that provide objective and quantifiable data. Measurement is an essential component of quantitative research because it provides a standard format for recording observations, performance, or other responses of subjects and because it allows a quantitative summary of the results from many subjects. 60
  • 61. The Purpose of Measurement: to provide information about the variables that are being studied. In an experiment, the dependent variable is measured. In correlational research each variable is measured. In practice, the variable is defined by how it is measured (operational definition), not by how it is labeled or defined by the researcher. 61
  • 62. Instrument vs. Instrumentation An instrument is a device or procedure for systematically collecting information. Common types of instruments include tests, questionnaires, rating scales, checklists, and observation forms. Instrumentation refers not only to the instrument itself but also to the conditions under which it is used, when it is to be used, and by whom it is to be used. 62
  • 63. Validity. Validity refers to the extent to which an instrument gives us the information we want. Validity is a judgment of the appropriateness of a measure for the specific inferences or decisions that result from the scores generated by the measure. 63
  • 64. Types of Evidence for Judging Validity. Content-related evidence refers to the nature of the content included within the instrument, and the specifications the researcher used to formulate the content. Criterion-related evidence refers to the relationship between scores obtained using the instrument and scores obtained using one or more other instruments or measures (often called criteria). Construct-related evidence refers to the nature of the psychological construct or characteristic being measured by the instrument. 64
  • 65. Reliability 65
  • 66. Validity and Reliability Coefficients. A validity coefficient expresses the relationship which exists between scores of the same individuals on two different instruments. A reliability coefficient expresses a relationship between scores of the same individuals on the same instrument at two different times, or between two forms of the same instrument. 66
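Both coefficients are, computationally, correlation coefficients between two lists of scores from the same individuals. A minimal Pearson r sketch in Python (illustrative; the two score lists are invented to stand in for "same instrument at two different times"):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two score lists from the same individuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [10, 12, 8, 15, 9]   # same instrument, first administration
time2 = [11, 13, 8, 14, 10]  # same instrument, second administration
r = pearson_r(time1, time2)  # test-retest reliability coefficient
```

Feed it scores from two different instruments instead, and the same number is read as a validity coefficient.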
  • 67. Methods of Estimating Reliability. Requiring two administrations: the test-retest method, the equivalent-forms method. Requiring one administration: internal-consistency methods, namely split-half testing and the Kuder-Richardson approaches (KR20 and KR21). 67
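Of the single-administration methods, KR20 is straightforward to compute for right/wrong (0/1) items. The sketch below implements the standard KR20 formula, k/(k-1) * (1 - Σpq/σ²); the tiny answer matrix is invented for illustration:

```python
def kr20(item_scores):
    """Kuder-Richardson 20 for dichotomous (0/1) items.
    item_scores: one list per person, one 0/1 entry per item."""
    n_items = len(item_scores[0])
    n_people = len(item_scores)
    totals = [sum(person) for person in item_scores]
    mean_t = sum(totals) / n_people
    var_t = sum((t - mean_t) ** 2 for t in totals) / n_people
    pq = 0.0
    for i in range(n_items):
        p = sum(person[i] for person in item_scores) / n_people  # proportion correct
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var_t)

# 4 examinees, 3 items (hypothetical data).
answers = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
reliability = kr20(answers)
```

For this toy data set the coefficient works out to 0.75; real instruments would of course use far more items and examinees.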
  • 68. RESEARCH DESIGN. Nonexperimental research: descriptive studies; relationship studies (e.g. simple correlational studies and prediction studies); causal-comparative studies. Experimental research: weak experimental designs (the one-shot case study design, the one-group pretest-posttest design, the static-group comparison design); true experimental designs (the randomized posttest-only control group design, the randomized pretest-posttest control group design, the randomized Solomon four-group design); quasi-experimental designs (the matching-only posttest-only control group design, the matching-only pretest-posttest control group design). True experimental designs as in Suter (1998). 68
  • 69. Common Statistical Tests. The t test: to compare two means. The F test (ANOVA): to test two or more means. Test for r: to test the significance of a correlation coefficient. Chi-square test: to test for relationships involving frequency data in the form of tallies or percentages. 69
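The first of these, comparing two means, can be sketched without any statistics library. The sketch computes the pooled-variance independent-samples t statistic; the treatment and control scores are invented for illustration (in practice one would use a library such as SciPy and also obtain a p-value):

```python
from math import sqrt

def t_statistic(a, b):
    """Independent-samples t statistic (pooled variance) for two means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)   # sum of squares, group a
    ssb = sum((x - mb) ** 2 for x in b)   # sum of squares, group b
    sp2 = (ssa + ssb) / (na + nb - 2)     # pooled variance estimate
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

treatment = [85, 90, 88, 92, 86]  # hypothetical posttest scores
control = [80, 78, 83, 79, 81]
t = t_statistic(treatment, control)
```

The statistic would then be compared against a t distribution with na + nb - 2 degrees of freedom to judge significance.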
  • 70. Descriptive Studies. A descriptive study simply describes a phenomenon. Example: "Their initial attributions were primarily task attributions (46% to 58% said the words were easy). Their own effort was the next most common cause of their success (40% of the responses). When asked for a second response, the subjects evenly divided their answers among the four types of attributions." (Cauley and Murray, 1982, p. 476) 70
  • 71. Criteria for Evaluating Descriptive Studies: 1. Conclusions about relationships, causal or otherwise, should not be made. 2. Subjects and instrumentation should be well described. 3. Graphic presentations should not distort the results. (McMillan, 1992: 146) 71
  • 72. Relationship Studies. Relationship studies investigate the degree to which variations or differences in one variable are related to variations or differences in another variable. Examples: 1. Correlational studies indicate relationships by obtaining two scores from each subject. 2. A predictive study shows how one variable can predict what the value will be on a second variable at a later time. 72
  • 73. Example: Relationship Study of Differences Among Groups. "Advanced level students were more internally responsible for their intellectual-academic failures than general level students. Surprisingly, neither general nor advanced level students were more internally responsible for their intellectual-academic failures than the basic level students." (p. 320) (McMillan, 1992: 149) 73
  • 74. Example: Predictive Research. "Our final three hypotheses dealt with classroom environment factors… In elementary schools we find that where teachers perceive class size as manageable, the reported level of career dissatisfaction is lower than in elementary schools in which teachers perceive class size as less manageable…. In secondary schools, only the perceived absence of student learning problems… and the perceived absence of student behavior problems… emerged as predictors of teacher career dissatisfactions." (p. 72) (McMillan, 1992: 153) 74
  • 75. Criteria for Evaluating Correlational Studies: 1. Causation should not be inferred from correlation. 2. The reported correlation should not be higher or lower than the actual correlation. 3. Practical significance should not be confused with "statistical" significance. 4. The size of the correlation should be sufficient for the use of the results. 5. Prediction studies should report accuracy of prediction for new subjects. 6. Procedures for collecting data should be clearly indicated. (McMillan, 1992: 153-156) 75
  • 76. Using Surveys in Descriptive and Relationship Studies. In a survey, the researcher selects a group of respondents, collects information (by asking them a number of questions), and then analyzes the information to answer the research questions. In a cross-sectional survey, information is collected from one or more samples or populations at one point in time. In a longitudinal survey the same group of subjects is studied over a specified length of time. 76
  • 77. Causal-Comparative Study: Ex Post Facto Research. In ex post facto research the investigators decide whether one or more preexisting conditions have caused subsequent differences between subjects who experienced different types of conditions (the phrase ex post facto means "after the fact"). 77
  • 78. Ex post facto vs. experimental and correlational designs. Ex post facto designs have some similarities with both experimental and correlational designs. Like an experiment, there is typically a "treatment" and/or "comparison" group, and the results are analyzed with the same statistical procedures. Of course, in ex post facto research there is no manipulation of the independent variable because it has already occurred, but the comparison of group differences on the dependent variable is the same. Like correlational studies, there is no manipulation of the independent variable, so technically the study is nonexperimental. However, in a correlational study two or more measures are taken from each subject, whereas in ex post facto research each subject is measured only on the dependent variable. 78
  • 79. Causal-Comparative Study Correlational Research  Correlational research, like causal-comparative research, is an example of what is sometimes called associational research.  In associational research, the relationships among two or more variables are studied without any attempt to influence them.  In their simplest form, correlational studies investigate the possibility of relationships between only two variables, although investigations of more than two variables are common.  A correlational study describes the degree to which two or more quantitative variables are related, and it does so by use of a correlation coefficient. 79
  • 80. Similarities and Differences betweenCausal-Comparative and CorrelationalResearch  Similarities. Both causal-comparative and correlational studies are examples of associational research, that is, researchers who conduct them seek to explore relationships among variables. Both attempt to explain phenomena of interest. Both seek to identify variables that are worthy of later exploration through experimental research, and both often provide guidance for subsequent experimental studies. However, neither permits the manipulation of variables by the researcher. 80
  • 81. Similarities and Differences betweenCausal-Comparative and CorrelationalResearch  Differences. Causal-comparative studies typically compare two or more groups of subjects, while correlational studies require two (or more) scores on each variable for each subject. Correlational studies investigate two (or more) quantitative variables, whereas causal-comparative studies involve at least one categorical variable (group membership). Correlational studies analyze data using scatterplots and/or correlation coefficient, while causal-comparative studies compare averages or use crossbreak tables. 81
  • 82. Similarities and Differences betweenCausal-Comparative and ExperimentalResearch  Similarities. Both causal-comparative and experimental studies typically require at least one categorical variable (group membership). Both compare group performances (average scores) to determine relationships. Both typically compare separate group of subjects.  Differences. In experimental research, the independent variable is manipulated; in causal-comparative research, no manipulation takes place. Causal- comparative studies provide much weaker evidence for causation than do experimental studies. In experimental research, the researcher can sometimes assign subjects to treatment groups; in causal-comparative research, the groups are already formed—the researcher must locate them. In experimental studies, the researcher has much greater flexibility in formulating the structure of the design. 82
  • 83. Criteria for Evaluating Causal-ComparativeResearch The primary purpose of the research should be to investigate causal relationships when an experiment is not possible. The presumed causal condition should have already occurred. Potential extraneous variables should be recognized and considered. Differences between groups being compared should be controlled. Causal conclusions should be made with caution.(McMillan, 1992: 161-162) 83
  • 84. True Experimental Designs according to Suter (1998: 196-203): randomized posttest control group design, randomized pretest-posttest control group design, randomized matched control group design, randomized factorial design. 84
  • 85. Survey Research A common form of research involving researchers asking a number of questions about a particular topic or issue (often prepared in the form of a written questionnaire or ability test) to a large number of individuals (either by mail, by telephone, etc.). 85
  • 86. Survey Research. Cross-sectional: collects information from a sample that has been drawn from a predetermined population at just one point in time. Longitudinal: collects information at different points in time in order to study changes over time. 86
  • 87. Longitudinal Survey Research studies: changes in a subpopulation group identified by a common characteristic over time; changes in the same people over time; trends in the same population over time. 87
  • 88. Cross-sectional Survey Research: community needs assessment; program evaluation; national attitudes and practices; group comparisons. 88
  • 89. Weak Experimental Designs. These designs are referred to as "weak" because they do not have built-in controls for threats to internal validity. Any researcher who uses one of these designs has difficulty assessing the effectiveness of the independent variable. 89
  • 90. Weak Experimental Designs. 1. The one-shot case study: a single group is exposed to a treatment or event, and a dependent variable is subsequently observed (measured) in order to assess the effect of the treatment. Diagram: X O (X = treatment; O = observation of the dependent variable). 90
  • 91. Weak Experimental Designs. 2. The one-group pretest-posttest design: a single group is measured or observed, not only after being exposed to a treatment of some sort, but also before. Diagram: O X O (pretest, treatment, posttest). 91
  • 92. Weak Experimental Designs. 3. The static-group comparison design: two already existing, or intact, groups are used, and comparisons are made between groups receiving different treatments. Diagram: X1 O over X2 O, separated by a dashed line. (The dashed line indicates groups that are already formed, not randomly assigned; X1 and X2 are different treatments; the Os, placed vertically above one another, occur at the same time.) 92
  • 93. True Experimental Designs Subjects are randomly assigned to treatment groups for controlling the subject characteristics threat to internal validity. 93
  • 94. True Experimental Designs. 1. The randomized posttest-only control group design involves two groups, one of which receives the experimental treatment while the other does not. Treatment group: R X1 O. Control group: R X2 O. (R = random assignment; X1 = treatment; X2 = no treatment; O = test.) 94
  • 95. True Experimental Designs. 2. The randomized pretest-posttest control group design: both groups are measured twice; the first measurement serves as the pretest, the second as the posttest. Treatment group: R O X1 O. Control group: R O X2 O. 95
  • 96. True Experimental Designs. 3. The randomized Solomon four-group design involves random assignment of subjects to four groups, with two of the groups being pretested and two not. One of the pretested groups and one of the unpretested groups are exposed to the experimental treatment; all four groups are then posttested. Treatment group: R O X1 O. Control group: R O X2 O. Treatment group: R X1 O. Control group: R X2 O. 96
  • 97. True Experimental Designs. 4. The randomized matched control group design is similar to the randomized posttest control group design, but it is distinguished by the use of matching prior to random assignment. This design is used when the sample size is too small (perhaps less than 40 per group) to reasonably assure group comparability after random assignment. Subjects are first rank-ordered on a variable closely related to the posttest. Then one of the two highest, forming a matched pair, is randomly assigned to T or C, with the remaining one being assigned to the other group. The next highest matched pair is similarly assigned, and this continues until the lowest two matched subjects are assigned randomly. Treatment group: M R X1 O. Control group: M R X2 O. 97
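The rank-order-then-randomize procedure of the matched design lends itself to a short algorithmic sketch. This is illustrative Python, with invented subject labels and matching-variable scores; it assumes an even number of subjects:

```python
import random

def matched_random_assignment(subjects, scores, seed=None):
    """Rank subjects on a matching variable, pair adjacent ranks,
    then randomly split each matched pair between treatment and control."""
    rng = random.Random(seed)
    ranked = sorted(subjects, key=lambda s: scores[s], reverse=True)
    treatment, control = [], []
    for i in range(0, len(ranked), 2):
        pair = ranked[i:i + 2]          # the next matched pair
        rng.shuffle(pair)               # random assignment within the pair
        treatment.append(pair[0])
        control.append(pair[1])
    return treatment, control

# Hypothetical pretest-related scores for eight subjects.
scores = {"s1": 90, "s2": 85, "s3": 80, "s4": 75,
          "s5": 70, "s6": 65, "s7": 60, "s8": 55}
treatment, control = matched_random_assignment(list(scores), scores, seed=7)
```

Each pair contributes exactly one subject to each group, so the two groups are balanced on the matching variable while assignment within pairs stays random.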
  • 98. Quasi-Experimental Designs do not include the use of random assignment. Researchers who employ these designs rely instead on other techniques to control (or at least reduce) threats to internal validity. 98
  • 99. Quasi-Experimental Designs. A. The matching-only design: the researcher still matches the subjects in the experimental and control groups on certain variables, but he or she has no assurance that they are equivalent on others, since subjects are not randomly assigned to groups. The two groups are intact (they already existed before the intervention) and so are probably not comparable. [Illustration of the matched control group design.] 99
  • 100. Quasi-Experimental Designs1. The Matching Only Posttest-Only Control Group Design Treatment Group M X1 O Control Group M X2 O M = Matched 100
  • 101. Quasi-Experimental Designs2. The Matching Only Pretest-Posttest Control Group Design Treatment Group O M X1 O Control Group O M X2 O 101
  • 102. Quasi-Experimental Designs. B. Counterbalanced designs represent another technique for equating experimental and control groups. Each group is exposed to all treatments, however many there are, but in a different order; any number of treatments may be involved. Researchers determine the effectiveness of the various treatments simply by comparing the average scores for all groups on the posttest for each treatment. Example: a three-treatment counterbalanced design. Group one: X1 O X2 O X3 O. Group two: X2 O X3 O X1 O. Group three: X3 O X1 O X2 O. 102
  • 103. Quasi-Experimental DesignsC. Time-Series Designs: involves repeated measurements or observations over a period of time both before and after treatment.O1 O2 O3 O4 X O5 O6 O7 O8 103
  • 104. Quasi-Experimental Designs. D. Factorial designs extend the number of relationships that may be examined in an experimental study; a factorial design allows a researcher to study the interaction of an independent variable with one or more other variables, sometimes called moderator variables. Treatment group: R O X1 Y1 O. Control group: R O X2 Y1 O. Treatment group: R O X1 Y2 O. Control group: R O X2 Y2 O. 104
  • 105. Threats to Internal Validity Subject Characteristics Mortality Location Instrumentation Testing History Maturation Attitude of Subject Regression Implementation 105
  • 106. Suggested Readings. Butler, Christopher. 1985. Statistics in Linguistics. New York: Basil Blackwell. Fraenkel, Jack R. and Norman E. Wallen. 1990. How to Design and Evaluate Research in Education. New York: McGraw-Hill. McMillan, James H. 1992. Educational Research: Fundamentals for the Consumer. New York: HarperCollins. Suter, W. Newton. 1998. Primer of Educational Research. Boston: Allyn and Bacon. Singleton, Royce and Bruce Straits. 1999. Approaches to Social Research (3rd edition). Oxford: Oxford University Press. Wallen, Norman E. and Jack R. Fraenkel. 1991. Educational Research: A Guide to the Process. New York: McGraw-Hill. 106