Experimental design

Transcript

  • 1. Experimental Design, Andrew Rebeiro-Hargrave (PhD)
  • 2. Experimental Design
    • Experimental design is a planned interference in the natural order of events by the researcher
      – A selected condition or a change (treatment) is introduced
      – Measurements are planned to see the effect of any change in conditions
    • Experimental design is also the quest for inference about causes or relationships
      – Researchers want to make inferences about what produced, contributed to, or caused events and rule out alternative causes
    • Experimental design entails:
      – selecting or assigning subjects to experimental units
      – selecting or assigning units for specific treatments or conditions of the experiment (experimental manipulation)
      – specifying the order or arrangement of the treatment or treatments
      – specifying the sequence of observations or measurements to be taken
  • 3. Considerations in Design Selection
    • The selection of a specific type of design depends primarily on both the nature and the extent of the information we want to obtain
    • Experimental design is the task of extracting the exact information needed to solve the research problem
    • Two ways of checking potential designs:
      1. What questions will this design answer? We should specify the questions the design won't answer as well as the ones it will answer.
      2. What is the relative information gain/cost picture? The major point here is that the researcher must take a close look at the probable cost before selecting a design.
  • 4. Experimental Design Terminology
    • The treatment group in an experiment receives the specified treatment
    • The control group serves as a baseline against which to measure the effect of the full treatment on the treatment group
    • A variable refers to almost anything (purchasing power, employment, health, education, housing, gender, ...). There are only two kinds of stuff in the world for researchers: variables and constants
    • Extraneous variables (external to the experiment) are variables that may influence or affect the results of the treatment on the subject (e.g. a decline in external remittances with increasing poverty)
    • A variable of specific experimental interest is sometimes referred to as a factor
      – Factor is used when an experiment involves more than one variable (poverty variables)
      – Level refers to the degree or intensity of a factor (education, gender)
    • Randomness refers to the property of completely chance events that are not predictable. If events are truly random, examining past instances of occurrence should give the researcher no clues as to future occurrences
    • Random assignment of subjects to groups tends to spread out differences between subjects in unsystematic (random) ways, so that there is no tendency to give an edge to any group
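    To illustrate random assignment from the terminology above, here is a minimal Python sketch; the subject labels and the 50/50 split into two groups are assumptions made for illustration, not taken from the slides.

      import random

      subjects = [f"subject_{i}" for i in range(1, 21)]   # 20 hypothetical subjects
      random.shuffle(subjects)                            # put subjects in a chance order

      # Split the shuffled list evenly so neither group gets a systematic edge
      midpoint = len(subjects) // 2
      treatment_group = subjects[:midpoint]
      control_group = subjects[midpoint:]

      print("Treatment:", treatment_group)
      print("Control:  ", control_group)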
  • 5. Experimental Design Terminology
    • Ex post facto refers to causal inferences drawn "after the fact": the causal event of interest has already happened
    • Variance refers to the variability of any event. If one uses a fine enough measuring device, one can find differences between any two objects or events
    • The inside logic of an experiment is referred to as internal validity. Primarily, it asks the question: does it seem reasonable to assume that the treatment has really produced the measured effect?
    • External validity refers to the proposed interpretation of the results of the study. It asks the question: with what other groups could we reasonably expect to get the same results if we used the same treatment?
    • Blocks usually refers to categories of subjects within a treatment group (low-income block, middle-income block, ...)
    • Interaction refers to variables in the treatment which may interact with each other. It may make a difference whether a variable is used by itself, with another, or with different levels or degrees of another
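    To make the interaction idea above concrete, here is a minimal Python sketch with invented scores; the income blocks and the numbers are assumptions for illustration only. If the treatment effect differs across blocks, the treatment and the block factor interact.

      # Hypothetical post-treatment scores by (group, income block)
      scores = {
          ("treatment", "low income"):    [14, 15, 16],
          ("control",   "low income"):    [10, 11, 12],
          ("treatment", "middle income"): [21, 22, 23],
          ("control",   "middle income"): [20, 21, 22],
      }

      def mean(xs):
          return sum(xs) / len(xs)

      for block in ("low income", "middle income"):
          effect = mean(scores[("treatment", block)]) - mean(scores[("control", block)])
          print(f"{block}: treatment effect = {effect:.1f}")

      # Unequal effects across blocks (4.0 vs 1.0 here) point to a treatment x block interaction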
  • 6. Poverty Experimental Design
    [Diagram: poverty factors (income security, recurring income, increase in asset value) mapped to +/- variables such as education and housing]
  • 7. Village Business Experimental Design
    [Diagram: village business model mapped to income security, recurring income, and increase in asset value, each with +/- variables]
  • 8. Classes of Information
    There are six major classes of information with which an experimental designer must cope:
    1. post-treatment behavior or physical measurement
    2. pre-treatment behavior or physical measurement
    3. internal threats to validity
    4. comparable groups
    5. experiment errors
    6. relationship to treatment
  • 9. Post-Treatment Behavior
    Usually only immediate or short-range results are obtained. Five categories of post-treatment behavior or physical measurement can be identified:
    1. behavior or measurement immediately after treatment
    2. a comparison of post-treatment behavior between experimental and control groups
    3. a comparison of post-treatment behavior between experimental groups or blocks
    4. long-term effects with continuing treatment and periodic observations
    5. long-term effects without continuing treatment but with observation(s)
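    For category 2 above (comparing post-treatment behavior between the experimental and control groups), a common analysis is a two-sample t-test; the sketch below uses SciPy and invented measurements, so the numbers and variable names are assumptions.

      from scipy import stats

      # Hypothetical post-treatment measurements
      treatment_post = [23, 25, 21, 27, 24, 26]
      control_post   = [19, 20, 18, 22, 21, 19]

      t_stat, p_value = stats.ttest_ind(treatment_post, control_post)
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")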
  • 10. Pre-Treatment Behavior
    Information concerning pre-treatment behavior or condition requires an observation, test, or measurement administered before the experimental manipulation. Several classes of pre-treatment information can be acquired:
    1. behavior or measurement immediately before treatment
    2. comparing pre-treatment to post-treatment behavior or measurement
    3. a comparison of pre-treatment behavior or measurement between different pairs of subjects
    4. a comparison of the differences between pre-treatment and post-treatment behavior among groups of subjects
    5. the effect of the pre-treatment observation or measurement on subsequent behavior or measurement of the subject
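    For class 4 above (comparing pre-to-post differences among groups), one simple approach is to compute each subject's gain score and compare the gains between groups; the sketch below uses invented data, so the values are assumptions.

      from scipy import stats

      # Hypothetical (pre, post) measurements per subject
      treatment = [(10, 18), (12, 19), (11, 20), (13, 21)]
      control   = [(10, 12), (12, 13), (11, 13), (13, 15)]

      treatment_gains = [post - pre for pre, post in treatment]
      control_gains   = [post - pre for pre, post in control]

      # Compare the gains rather than the raw post-treatment scores
      t_stat, p_value = stats.ttest_ind(treatment_gains, control_gains)
      print("treatment gains:", treatment_gains)
      print("control gains:  ", control_gains)
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")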
  • 11. Internal Threats to Validity
    This class of information refers to some rival hypothesis that threatens clear interpretation of the experiment. Typically, the rival hypothesis asserts that something outside of the experiment proper produced the behavior or measurement of interest. Internal threats to validity typically include:
    1. the subjects exhibited the behavior because of some event other than the treatment
    2. some other drug or process caused the change
    3. the subject changed naturally (just improved)
    4. the subject had a massive change in attitude or emotion
    5. some other physical change occurred
    6. the subject could or would have performed the behavior, or would have exhibited the measurement, without the treatment
  • 12. Comparable Groups
    This class of information concerns whether subjects in the different units were about the same in relevant attributes before the treatment and during the treatment, except for the treatment condition itself. There are two types of comparability information:
    1. were the groups (either experimental or control) comparable before the treatment?
    2. did the groups receive a comparable degree of experiences during the time of the study (except for differences in treatment)?
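    One way to check the first question (were the groups comparable before the treatment?) is a standardized mean difference on a pre-treatment attribute; the sketch below uses hypothetical baseline incomes, so the data and the 0.1 rule of thumb are illustrative assumptions rather than part of the slides.

      from statistics import mean, stdev

      # Hypothetical baseline incomes (a pre-treatment attribute) for the two groups
      treatment_baseline = [410, 395, 420, 405, 415]
      control_baseline   = [400, 390, 425, 410, 405]

      pooled_sd = ((stdev(treatment_baseline) ** 2 + stdev(control_baseline) ** 2) / 2) ** 0.5
      smd = (mean(treatment_baseline) - mean(control_baseline)) / pooled_sd

      # An absolute SMD below roughly 0.1 is often read as acceptable balance
      print(f"standardized mean difference = {smd:.2f}")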
  • 13. Experimental Errors
    Experimental error refers to some unwanted side effect of the experiment itself which may be producing the effect rather than the treatment; a well-known example is the Hawthorne effect, in which subjects respond to the attention of being studied rather than to the treatment. Two strategies exist for dealing with the Hawthorne effect:
    1. provide a placebo treatment group which gets the attention but not the "real" treatment, and use blind and double-blind strategies as needed
    2. continue the treatment over a longer period of time; research shows that the Hawthorne effect tends to be short-lived
  • 14. Relationship to Treatment
    This class of information deals with the possible interaction of the treatment effects with: different kinds of subjects, other treatments, different factors within a complicated treatment, different degrees of intensity, repeated applications or continuation of the treatment, and different sequences or orders of the treatment or several treatments. Typically, information of this type is acquired from blocking, from factorial designs, and from various repeated-measures designs.
    1. did the treatment interact with subject characteristics so that subjects with different characteristics behaved or reacted differently?
    2. how does the treatment interact when combined with other sorts of treatment?
    3. does the treatment contain different factors which may operate differentially on the subjects?
    4. what is the effect of different levels or degrees of the treatment?
    5. what is the effect of different orders or sequences of various treatments?
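    Since the slide above points to factorial designs as a source of interaction information, here is a minimal Python sketch that enumerates the cells of a full factorial design; the factor names and levels are hypothetical.

      from itertools import product

      # Hypothetical factors and their levels
      factors = {
          "training":  ["none", "basic", "intensive"],
          "microloan": ["no", "yes"],
      }

      # Every combination of levels is one cell of the full factorial design
      cells = list(product(*factors.values()))
      for cell in cells:
          print(dict(zip(factors.keys(), cell)))

      print(f"{len(cells)} treatment combinations (3 x 2 factorial)")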
  • 15. Basic Experimental Designs
    Eleven commonly used experimental designs:
    1. One-Shot
    2. One-Group, Pre-Post
    3. Static Group
    4. Random Group
    5. Pre-Post Randomized Group
    6. Solomon Four-Group
    7. Randomized Block
    8. Factorial
    9. One-Shot Repeated Measures
    10. Randomized Groups Repeated Measures
    11. Latin Square
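    As a small illustration of the last design in the list, a Latin square can be built from cyclic shifts so that each treatment appears exactly once in every row and every column (useful for balancing treatment order); this is a minimal sketch with hypothetical treatment labels.

      treatments = ["A", "B", "C", "D"]   # hypothetical treatment labels
      n = len(treatments)

      # Cyclic shifts: each treatment appears once per row and once per column
      latin_square = [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

      for row in latin_square:
          print(" ".join(row))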
  • 16. Discussion Topics When Setting Up an Experimental Design
    An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment.
    1. How many factors does the design have, and are the levels of these factors fixed or random?
    2. Are control conditions needed, and what should they be?
    3. Manipulation checks: did the manipulation really work?
    4. What are the background variables?
    5. What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power?
    6. What is the relevance of interactions between factors?
    7. What is the influence of delayed effects of substantive factors on outcomes?
    8. How do response shifts affect self-report measures?
    9. How feasible is repeated administration of the same measurement instruments to the same units on different occasions, with a post-test and follow-up tests?
    10. What about using a proxy pretest?
    11. Are there lurking variables?
    12. Should the client/patient, researcher, or even the analyst of the data be blind to conditions?
    13. What is the feasibility of subsequent application of different conditions to the same units?
    14. How many of each control and noise factor should be taken into account?
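    For question 5 (sample size and power), here is a minimal sketch of the usual normal-approximation formula for comparing two group means; the effect size, significance level, and target power below are assumed values for illustration.

      from math import ceil
      from scipy.stats import norm

      effect_size = 0.5   # assumed standardized difference between groups (Cohen's d)
      alpha = 0.05        # two-sided significance level
      power = 0.80        # desired power

      z_alpha = norm.ppf(1 - alpha / 2)
      z_beta = norm.ppf(power)

      # Normal-approximation sample size per group for a two-sample comparison of means
      n_per_group = ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
      print(f"approximately {n_per_group} units per group")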
