Experimental research sd

chandraGC
    Presentation Transcript

      • Appropriateness of method of instruction
      • Effect of adopting instructional package on student learning
      • Effect of workshop class size on the acquisition of practical skills by students
      • Effect of a new procedure on the time taken by students to strip, clean and re-assemble a carburettor
      • Study the effect of intensity of light on visual acuity
      • Effectiveness of a particular medication in reducing body temperature
      • Design an experiment to examine the relative performance of 10 automobiles
    • EXPERIMENTAL RESEARCH
      • A systematic and logical method for answering the question, ‘if this is done under carefully controlled conditions, what will happen?’
      • Manipulation of certain stimuli, treatments, or environmental conditions and observation of how the condition or behaviour of the subject is affected or changed.
    • Concept of Experimental Research
      • By experiment we refer to that portion of research in which variables are manipulated and their effects upon other variables observed. (Campbell & Stanley, 1966).
      • It incorporates those types of research design in which a researcher seeks to establish cause and effect while maintaining control over the factors affecting the result of the experiment.
      • Procedure to test hypotheses by reaching valid conclusions about relationships between independent and dependent variables
    • Concept of Experimental Research
      • It involves the comparison of the effects of a particular treatment with that of a different treatment or of no treatment
      • Not always a treatment-versus-no-treatment comparison; varying types, amounts, or degrees of the experimental factor may be applied to a number of groups
    • Early Experimentation: the Law of the Single Variable (Mill, 1872)
      • If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur have every circumstance in common save one, that one occurring only in the former, the circumstance in which alone the two instances differ is the effect, or the cause, or an indispensable part of the cause of the phenomenon.
      • If 2 situations are alike in every respect and one element is added (or removed) to (from) one but not the other, any difference that develops may be attributed to the added (subtracted) element
      • The law of the single variable provided the basis for much early laboratory experimentation; Robert Boyle and J.A.C. Charles
    • Factorial Designs (R.A. Fisher)
      • The law of the single variable failed to provide a sound approach to experimentation in behavioural sciences
      • Fisher’s concept of random selection of subjects, random assignment of treatments, and ANOVA & ANCOVA made possible study of complex interactions through factorial designs.
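Fisher's analysis of variance partitions the total variation in scores into between-group and within-group components. A minimal one-way sketch with hypothetical posttest scores (factorial designs extend the same idea to multiple factors and their interactions):

```python
from statistics import mean

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    k = len(groups)                        # number of treatment groups
    n = sum(len(g) for g in groups)        # total number of observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares: variation of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: variation of scores around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Two instructional methods, hypothetical posttest scores
f = one_way_anova_f([[82, 85, 88, 84], [74, 71, 77, 70]])
```

A large F indicates that the variation between treatment groups is large relative to the chance variation within them.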
    • Purpose of Experimental Research
      • To help a researcher answer a research question, and
      • To control for possible rival hypotheses or extraneous variables (Huck et al, 1984)
      • To generalize the variable relationships so that they may be applied outside the laboratory to a wider population of interest
    • Characteristics of Experimental Research
      • Independent, Dependent and Extraneous Variables
        • Manipulation of one variable followed by an observation of the effects of this manipulation on a second variable
        • The variable to be manipulated - the independent/ experimental variable
        • The variable that is measured to determine the effect of the experimental treatment - the dependent variable
        • Other factors, called extraneous variables, affecting the experiment might also contribute to the change
      • The Experimental Setting
        • Many experiments take place in laboratory settings where control can be exercised over these extraneous variables
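The manipulate-then-observe logic above can be sketched as a tiny simulation with hypothetical numbers: light intensity is the manipulated independent variable, the measured score is the dependent variable, and random noise stands in for uncontrolled extraneous variables.

```python
import random
from statistics import mean

def run_trial(light_intensity, rng):
    """One simulated observation: the score (dependent variable) rises
    with light intensity (independent variable) plus random noise that
    plays the role of extraneous variables."""
    return 50 + 10 * light_intensity + rng.gauss(0, 2)

rng = random.Random(0)                       # fixed seed for reproducibility
low = [run_trial(1, rng) for _ in range(30)]   # low-intensity condition
high = [run_trial(3, rng) for _ in range(30)]  # high-intensity condition
gap = mean(high) - mean(low)                   # observed effect of manipulation
```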
    • Characteristics of Experimental Research
      • The Choice of Setting
        • Consider the nature and context of the problem being investigated and then choose the design and setting that is most appropriate
      • Establishing Cause-and-effect (through comparisons)
      • Multivariate Studies
        • Include all of those experiments in which there are either multiple independent variables (age, gender, qualification, institute, method of instruction), or multiple dependent variables (test score, attention span, time taken, skill acquisition, performance rating), or in some cases, both
    • Controlling Extraneous Variables
      • Removing the variable – selecting cases with uniform characteristics
      • Matching cases – selecting pairs or set of individuals with identical characteristics
      • Balancing cases – assigning subjects to groups in such a way that the means and the variances are as nearly equal as possible
      • Analysis of covariance – eliminates initial differences between the groups by statistical methods (using pretest mean scores as covariate)
      • Randomization
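Randomization, the last control listed above, can be sketched in a few lines: shuffling the subject pool before assignment distributes extraneous characteristics evenly across groups, on average. The subjects and group count here are hypothetical.

```python
import random

def randomize_to_groups(subjects, n_groups=2, seed=42):
    """Randomly assign subjects to groups so that extraneous
    characteristics are, on average, evenly distributed."""
    pool = list(subjects)
    rng = random.Random(seed)   # fixed seed only so the example is reproducible
    rng.shuffle(pool)
    # Deal the shuffled pool round-robin into n_groups groups
    return [pool[i::n_groups] for i in range(n_groups)]

groups = randomize_to_groups(range(1, 21))  # 20 hypothetical subjects
```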
    • Experimental Validity
      • Internal validity
        • The extent that the factors that have been manipulated actually have a genuine effect on the observed consequences in the experimental setting
        • Extraneous variables influence results of experiment in ways that are difficult to evaluate
        • Impossible to eliminate completely
        • Anticipate and take precautions through sound experiment design and execution
    • Experimental Validity
      • Threat to Internal Validity
        • Maturation
          • Subjects change over time: they may become bored, tired, or wiser, or be influenced by incidental learning or experiences
        • Testing
          • Pretesting at the beginning may produce a change
        • Unstable instrumentation
          • Changes in tool, calibration, observers/scorers
        • Statistical regression
          • Operates in pretest-posttest situations
        • Differential selection
        • Experimental mortality
    • Experimental Validity
      • External validity
        • The extent to which the variable relationships can be generalized to non-experimental situations – other settings, other treatment variables, other measurement variables, and other populations
        • Threat to External Validity
          • Contamination (a type of bias due to knowledge about subjects; outside raters)
          • Interference of prior treatment
          • Testing
          • Selection bias (subjects selected from a non-representative population)
    • Experimentation in an Educational Setting
      • True Experimental Designs
        • building in controls for the threats to ward off the effect of extraneous variables and incorrect manipulation of treatments
      • a) Two or more groups, so that there can be at least one treatment group and one control group
      • b) The random assignment of subjects to groups, so that equivalency of groups can be assumed
      • The posttest-only, equivalent-group design
      • Pretest-posttest equivalent-groups design
      • The Solomon four-group design - The population is randomly divided into four samples: two receive the experimental treatment and two receive no experimental manipulation; two groups take both a pretest and a posttest, and two take only a posttest. An improvement over the classical design because it controls for the effect of the pretest
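The Solomon assignment scheme described above can be sketched as follows, with hypothetical subjects; each group's label records which design elements (pretest, treatment, posttest) it receives.

```python
import random

def solomon_four_groups(subjects, seed=1):
    """Split subjects at random into the four Solomon groups and record
    which design elements each group receives."""
    pool = list(subjects)
    random.Random(seed).shuffle(pool)    # random assignment of subjects
    q = len(pool) // 4
    return {
        "G1: pretest + treatment + posttest": pool[0:q],
        "G2: pretest + posttest":             pool[q:2*q],
        "G3: treatment + posttest":           pool[2*q:3*q],
        "G4: posttest only":                  pool[3*q:4*q],
    }

plan = solomon_four_groups(range(40))  # 40 hypothetical subjects
```

Comparing G1 with G3 (and G2 with G4) separates the treatment effect from any effect of having taken the pretest.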
    • Experimentation in an Educational Setting
      • Quasi Experimental Designs
        • When true experimental designs are not feasible, i.e., when total control over the experiment is not possible
        • The researcher can control at least one, and possibly all, of:
        • Time when the observations are made
        • Time when the treatment is applied, and
        • Assignment of treatments to groups
      • The nonequivalent, pretest-posttest design
      • The equivalent-materials, single group, pretest-posttest design
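In the nonequivalent pretest-posttest design, intact groups are not randomly assigned, so the pretest is used to gauge initial nonequivalence and the analysis compares pretest-to-posttest gains. A minimal sketch with hypothetical scores for two intact classes:

```python
from statistics import mean

def mean_gain(pre, post):
    """Average pretest-to-posttest gain for one intact group."""
    return mean(b - a for a, b in zip(pre, post))

# Hypothetical intact classes: groups were NOT randomly assigned,
# so each subject's pretest score anchors their own gain.
treat_pre, treat_post = [55, 60, 52, 58], [70, 74, 66, 72]
ctrl_pre, ctrl_post = [54, 61, 50, 59], [58, 65, 54, 62]

# Estimated treatment effect: difference in mean gains
effect = mean_gain(treat_pre, treat_post) - mean_gain(ctrl_pre, ctrl_post)
```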
    • Experimentation in an Educational Setting
      • Pseudo Experimental Designs
        • Incorporate those designs where the researcher does not have the built-in control to say the treatment has had an effect.
        • In addition to the independent variable there may be several other plausible explanations as to why the dependent variable changed or remained the same
    • Step-by-step Procedure in the Effective Design of an Experimentation
      • Select Problem
      • Determining Dependent/Independent variables
      • Determining the number of levels of independent variables
      • Determining the possible combinations
      • Determining the number of observations
      • Redesign
      • Randomization
      • Meet ethical and legal requirements
      • Mathematical model
      • Data collection/reduction/verification
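The "possible combinations" step above amounts to enumerating every cell of the factorial design: the Cartesian product of the levels of each independent variable. A sketch with hypothetical factors and levels:

```python
from itertools import product

# Hypothetical independent variables and their levels
factors = {
    "method": ["lecture", "package"],       # 2 levels
    "class_size": ["small", "large"],       # 2 levels
    "duration": ["4wk", "8wk", "12wk"],     # 3 levels
}

# Every treatment combination (design cell): 2 x 2 x 3 = 12
combinations = list(product(*factors.values()))
```

The cell count grows multiplicatively with each added factor, which is why the number of levels is fixed before the number of observations is determined.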