Synopsis nine: research methods, experiments
Researchers relying on observational research methods such as surveys, textual analysis and ethnography sometimes become frustrated that their data may describe what is happening, but cannot show that one thing caused another. To establish a causal relationship between variables, researchers usually have to rely on experiments. The experimenter deliberately changes the independent variable, and then observes the result on a dependent variable. This is called manipulating variables, and it is intended to reveal a cause-and-effect relationship.
The biggest problem experimental researchers face is control: they need to control for all extraneous or "confounding" variables which might muddy the data and limit the usefulness of conclusions. To control these variables, researchers rely on several methods, including:
1. Eliminating or removing the variable. For instance, finding a sound-proof lab to eliminate distracting sounds that could affect results.
2. Holding the variable constant. For instance, distributing the effects of fatigue equally across subjects in the study, or pairing men and women to control for cross-sex influence on responses.
3. Matching, or grouping subjects based on age, education, profession, etc.
4. Blocking, or building the variable into the study as a factor in its own right.
5. Randomization. Randomization means selecting participants from a population, or assigning them to groups, by chance. Relying on the rules of chance, you ensure that the groups have a higher probability of being equal than unequal. The statistics of probability can then be used to quantify that likelihood in quantitative research.
6. Statistical control. A researcher may measure a nuisance variable and use analysis of covariance (the nuisance variable is then called a "covariate") to control for its effect on outcome scores.
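The randomization step (method 5 above) can be sketched in a few lines of Python. The function name, the even split into two groups, and the sample of twenty participants are illustrative assumptions, not part of any particular study:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them evenly into two groups.

    Relying on chance alone, each participant has an equal probability
    of landing in either group, so the groups tend toward equality on
    unmeasured variables as the sample grows.
    """
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental, control)

experimental, control = randomly_assign(range(20), seed=1)
print(len(experimental), len(control))  # 10 10
```

Passing a seed makes the assignment reproducible for auditing; in a live study you would omit it so the assignment is left entirely to chance.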
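A minimal plain-Python sketch of the idea behind statistical control (method 6): fit outcome = b0 + b1·group + b2·covariate by ordinary least squares, so that b1 estimates the treatment effect adjusted for the measured nuisance variable. The function name and the made-up data are illustrative assumptions:

```python
def fit_ancova(y, group, covariate):
    """Ordinary least squares for y = b0 + b1*group + b2*covariate.

    b1 is the treatment effect adjusted for the covariate -- the
    core idea behind analysis of covariance.
    """
    X = [[1.0, g, c] for g, c in zip(group, covariate)]
    # Normal equations: (X'X) b = X'y
    XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)]
           for r in range(3)]
    Xty = [sum(row[r] * yi for row, yi in zip(X, y)) for r in range(3)]
    # Solve the 3x3 system by Gauss-Jordan elimination
    M = [XtX[r] + [Xty[r]] for r in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[r][3] / M[r][r] for r in range(3)]

# Made-up data where the true relationship is y = 2 + 3*group + 0.5*cov
group = [0, 0, 0, 0, 1, 1, 1, 1]
covariate = [1, 2, 3, 4, 1, 2, 3, 4]
y = [2 + 3 * g + 0.5 * c for g, c in zip(group, covariate)]
b0, b1, b2 = fit_ancova(y, group, covariate)
print(round(b1, 3))  # adjusted treatment effect
```

In practice a researcher would reach for a statistics package rather than hand-rolled least squares; the point here is only that the covariate's effect is estimated and removed from the treatment comparison.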
Experiments fall into three areas depending on how well variables are controlled:
1. Pre-experiments.
2. Quasi-experiments.
3. Full or true experiments.
Pre-experiments don't control for variables. Most common is the one-shot case study, in which a researcher tries something with a non-randomized sample of people and then observes the results. Teachers who write about techniques they've tried in their classes commonly do pre-experiments. Generally, results from this sort of case study can't be generalized, but they can offer preliminary data and help predict the possible results of a true experiment.
Quasi-experiments may control for some variables, but not comprehensively. The one-group pretest/posttest design examines causation, but includes no control group or random sample. The static group comparison uses groups which may not be comparable, for instance a graduate-level class and a freshman-level class.
The most common full experiment is the pretest/posttest control group design. Here you have an experimental group and a control group, drawn from a randomized population, each given a pretest and a posttest. Or in experimental shorthand:
R-- O1 X1 O2
R-- O3 X0 O4
R is the randomization, O is observation, and X is "treatment" (manipulation of the variable). The top line is the experimental group; the bottom line is the control group, to which no "treatment" (X0) is given.
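A common way to analyze this design is to compare the average gain (posttest minus pretest) of the two groups. Using the conventional labels O1 through O4 for the four observation points, a sketch with invented scores:

```python
# Gain-score analysis for a pretest/posttest control group design.
# Each list is one observation point; the scores are made-up data.
o1 = [50, 55, 48, 60]   # experimental group, pretest
o2 = [62, 66, 59, 71]   # experimental group, posttest
o3 = [52, 49, 58, 61]   # control group, pretest
o4 = [53, 50, 57, 63]   # control group, posttest

def mean_gain(pre, post):
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Treatment effect: experimental gain minus control gain
effect = mean_gain(o1, o2) - mean_gain(o3, o4)
print(effect)  # 10.5
```

Subtracting the control group's gain strips out whatever change would have happened anyway (maturation, test sensitivity), which is exactly what the control group is for.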
To avoid having subjects become sensitive to the test, some researchers only use a posttest. Others control for test sensitivity using the Solomon Four Group design, though this is expensive and time-consuming.
Sometimes researchers want to set up an experiment using more than one independent variable; the variables are then called factors. This "factorial study" examines the interaction of the independent variables as well as the action of each on the dependent variables. Such a study can be mapped using a box of cells, called a "design diagram." Factorial studies can quickly become complicated, expensive and time-consuming, which explains why many such studies are set up by multiple researchers under a principal investigator ("P.I.").
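The design diagram is just the cross of the factor levels: every combination of levels is one cell, and each cell needs its own group of subjects. A small sketch, with invented factor names:

```python
from itertools import product

# Two independent variables (factors), each with two levels.
# The factor names are invented for illustration.
teaching_method = ["lecture", "discussion"]
time_of_day = ["morning", "evening"]

# Each combination of levels is one cell of the design diagram
cells = list(product(teaching_method, time_of_day))
for cell in cells:
    print(cell)
# A 2 x 2 factorial study therefore needs 4 groups, one per cell
```

Adding a third two-level factor doubles the cell count to 8, which is why factorial studies grow expensive so quickly.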
Often a pilot experiment is run before the full experiment is set up, to discover unexpected problems or confounding variables.