Shared Flashcard Set

Details

Title: Educational Research: Chapter 10
Description: Experimental Research
Total Cards: 31
Subject: Education
Level: Graduate
Created: 11/05/2011

Cards

Term

Purpose of experimental research

Definition

- The researcher manipulates at least one independent variable, controls other relevant variables, and observes the effect on one or more dependent variables.

- The only type of research that can test hypotheses to establish cause-effect relationships.

- Provides the most confidence that A causes B.

Term

How experimental differs from causal-comparative research

Definition

- Experimental research has both random selection and random assignment.

- Causal-comparative research has only random selection, not random assignment (the groups are preexisting).

Term

How experimental differs from correlational research

Definition

- A correlational study predicts a particular score for a particular individual.

- Experimental research is more "global" (e.g., if you use Approach X you will probably get different results than if you use Approach Y).

Term

Difference between “random selection” and “random assignment.”

Definition

Random selection is the process of randomly choosing the participants for the research study from the population.

- Random selection of subjects is the best way to ensure that the sample is representative of the population.

Random assignment is the process of randomly placing the selected participants into the different groups in the study.
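A minimal sketch of the distinction, assuming Python's standard library; the population of 100 students, the sample size of 30, and the group labels are illustrative only.

```python
import random

# Hypothetical population of 100 students (illustrative names only)
population = [f"student_{i}" for i in range(100)]

# Random SELECTION: draw a representative sample from the population
sample = random.sample(population, 30)

# Random ASSIGNMENT: split the selected sample into two groups by chance
random.shuffle(sample)
treatment_group = sample[:15]   # will receive the new treatment (X)
control_group = sample[15:]     # treated as usual

print(len(treatment_group), len(control_group))  # 15 15
```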

Term

Difference between “control group” and “comparison group”

Definition

Control group: the group that receives no treatment or is treated as usual (e.g., maintains the same method of instruction).

Comparison group: a group that receives a different (alternative) treatment; the groups are equated on all other variables that could influence performance on the dependent variable. (The group that receives the new treatment is the experimental group.)

Term
Control in experimental research
Definition

The effort to remove the influence of any variable, other than the independent variable (IV), that may affect subjects' (Ss') performance on the dependent variable (DV).

- Control in experimental research is best described as an attempt to limit threats to internal validity.

- The variables to control are subject variables and environmental variables.

(e.g., in a study of whether parents or students are more effective tutors, make sure both groups receive the same amount of time with a tutor)

- Uncontrolled extraneous variables that affect performance on the dependent variable are threats to the validity of the experiment.

Term

Two kinds of variables that need to be controlled in experimental research

Definition

- Subject/attribute variables: variables on which participants in the different groups may differ (e.g., gender, age, race).

- Environmental variables: variables in the setting that may cause unwanted differences between groups (e.g., time of day, years of experience, physical setting).

Term

Difference between "actively manipulated" and "assigned/attributed" variables

Definition

Actively manipulated variables: the independent variables that are manipulated by the experimenter.

(e.g., instructional method, time of day, behavior management procedures)

- You can assign, or randomly assign, people to groups.

- The researcher selects the treatments and decides which group will get which treatment.

Assigned/attributed variables (subject variables): characteristics intrinsic to the subjects that the researcher cannot assign.

- (e.g., gender, age, race, SES, creativity)

Term
Validity in Experimental Research
Definition

The degree to which the observed effects on the outcome are due to the independent variable (A caused B).

Internal validity: observed differences on the DV are a direct result of manipulation of the IV, not some other variable.

External validity: the extent to which results are generalizable to groups and environments outside of the experimental setting.
Term

Difference between “internal” and “external” threats to validity

Definition

Internal validity (within the study) focuses on the threats, or rival explanations, that influence the outcomes of an experimental study but are not due to the independent variable.

def: The control of extraneous variables to ensure that the treatment alone causes the effect.

- (e.g., a difference in the amount of tutoring time given by parent versus student tutors)

External validity focuses on the threats, or rival explanations, that prevent the results of a study from being generalized to other settings or groups (a synonym for external validity is generalizability).

def: The ability to generalize the results of a study.

- (e.g., results should be applicable to other groups of parent versus student tutors)

Term
 Threats to external validity
Definition

- Pretest-treatment interaction, multiple-treatment interference, specificity of variables, treatment diffusion, experimenter effects, reactive arrangements

o   Pretest-treatment interaction: taking the pretest sensitizes subjects to the treatment, so the results may generalize only to groups that have also been pretested.

o   Specificity of variables: the more unique and specific the variables, setting, and subjects of a study are, the less the results can be generalized.

o   Reactive arrangements: subjects know they are in an experiment, so they act differently.

o   Multiple-treatment interference (only a factor when subjects receive more than one treatment): carryover effects from earlier treatments create conditions so artificial that the results are hard to generalize.

Ecological validity refers to the contexts to which the results generalize.

Experimenter bias: an example of an active experimenter effect.

Treatment diffusion: participants in different treatment groups communicate with one another about the study and borrow elements of the other group's treatment, causing the treatments to overlap.

- If results cannot be replicated in other settings by other researchers, the study has low external, or ecological, validity.

Term
Threats to internal validity
Definition

History, maturation, testing, instrumentation, statistical regression, differential selection of participants, mortality, selection-maturation interaction

o   History: events other than the independent variable occur between the pretest and posttest and may have caused the effect. A way to control this threat is to establish a control group.

o   Maturation: subjects change over the course of the study (e.g., students get stronger, faster, and more agile), and these changes rather than the treatment may produce the effect. A way to control this threat is a control group.

o   Testing (practice effects): subjects do better on the posttest simply because they have already practiced the test. A way to control this threat is a control group.

o   Instrumentation: something is wrong with the measuring instrument (e.g., unequal validity of the pre- and posttest measures). A way to control this threat is a control group.

o   Statistical regression: occurs when extreme groups are used (e.g., gifted or remedial students); extreme scores tend to move toward the mean on retesting, so the lowest scorers usually score higher the second time regardless of the treatment. A way to control this threat is a control group.

o   Selection: the groups differ from one another beforehand because there was no random assignment. A way to control this threat is random assignment.

o   Mortality: loss of subjects who drop out of the study; the problem is not the loss of numbers as such but that those who drop out may be systematically different from those who stay, leaving unequal groups. Administering a pretest helps assess this threat.

Term

Five ways to control extraneous variables

Definition

1. Randomization: subjects are assigned at random (by chance) to groups, so the groups should be equivalent on participant variables such as gender, ability, or prior experience.

- The best way to control extraneous variables: random selection of participants and random assignment to groups.

2. Matching: a technique used for equating groups on one or more variables (e.g., random assignment of the members of matched pairs to the groups).

- Attempts to make the groups equivalent on the matched variable(s).

3. Comparing homogeneous groups or subgroups (e.g., only subjects with the same IQ).

- The cost of using homogeneous groups is a loss of external validity.

4. Participants as their own controls: a single group of participants is exposed to the multiple treatments, one at a time. Helpful because the same participants receive both treatments.

5. Analysis of covariance: a statistical method for equating randomly formed groups on one or more variables.

- ANCOVA is a statistical procedure that statistically controls for pretest differences between groups (see the sketch after this list).
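A minimal sketch of ANCOVA as described above, assuming the pandas and statsmodels libraries; the data frame and its columns (group, pretest, posttest) are invented for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: two groups with pretest and posttest scores
df = pd.DataFrame({
    "group":    ["treatment"] * 4 + ["control"] * 4,
    "pretest":  [52, 60, 55, 58, 50, 62, 54, 57],
    "posttest": [70, 78, 74, 76, 60, 72, 63, 66],
})

# ANCOVA as a linear model: posttest explained by group membership,
# with the pretest entered as a covariate to equate the groups statistically
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(model.summary())  # the C(group) coefficient is the adjusted treatment effect
```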

Term
Single-variable designs
Definition

Investigates one independent variable

Term
Pre-experimental group designs
Definition

do not do a very good job of controlling threats to validity and should be avoided.

- The weakest designs; not useful for most purposes except, perhaps, to provide a preliminary investigation of a problem.

Term
One-shot case study
Definition

involves a single group that is exposed to a treatment (X) and then posttested (O)

- Threats not controlled: history, maturation, mortality.

Term
One-group pretest-posttest design
Definition

 involves a single group that is pretested (O), exposed to treatment (X), and then tested again (O).

- Threats not controlled: history and maturation.

Term
Static-group comparison
Definition

Involves at least two nonrandomly formed groups; one receives a new or unusual treatment, and both groups are posttested.

- It is difficult to determine whether the treatment groups were equivalent to begin with.

Term
True experimental group designs
Definition

Designs that use random assignment of participants to groups.

- Control for nearly all threats to internal and external validity.

- Have one characteristic in common that the other designs do not have: random assignment of participants to groups.

- Provide a very high degree of control and are always to be preferred over pre-experimental and quasi-experimental designs.

Term
Solomon Four-Group Design
Definition

Involves random assignment of subjects to one of four groups. Two of the groups are pretested and two are not; one of the pretested groups and one of the unpretested groups receive the experimental treatment. All four groups are posttested.

- Considered the most complete experimental design (it controls for pretest-treatment interaction).

Term
Pretest-posttest control group design
Definition

 involves at least two groups, both of which are formed by random assignment. Both groups are administered a pretest, one group receives a new or unusual treatment, and both groups are posttested.

- A variation of this design seeks to control extraneous variables more closely by randomly assigning the members of matched pairs to the treatment groups.

Term
Posttest-only control group design
Definition

Same as the pretest-posttest control group design except there is no pretest. Participants are randomly assigned to at least two groups, one group is exposed to the treatment (independent variable), and both groups are posttested to determine the effectiveness of the treatment.

- A variation of this design is random assignment of matched pairs.
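A minimal simulation of this design's logic (random assignment, treatment, posttest comparison), assuming the numpy and scipy libraries; the group sizes, score distributions, and treatment effect are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# R: randomly assign 60 participants to two groups of 30
participants = np.arange(60)
rng.shuffle(participants)
treatment_ids, control_ids = participants[:30], participants[30:]

# X / O: simulate posttest scores; the treatment group gets a made-up boost
treatment_scores = rng.normal(loc=75, scale=10, size=30)   # received treatment X
control_scores = rng.normal(loc=70, scale=10, size=30)     # treated as usual

# O: compare the two posttests with an independent-samples t test
t, p = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t:.2f}, p = {p:.3f}")
```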

Term
Quasi-experimental group designs
Definition

 

When random assignment is not possible, a quasi-experimental design may provide adequate controls

- Do not control as well as true experimental designs but do a much better job than pre-experimental designs.

Term
Nonequivalent control group design
Definition

Is like the pretest-posttest control group design except that the nonequivalent control group design does not involve random assignment. If differences between the groups on any major variable are identified, analysis of covariance can be used to statistically equate the groups.

Term
Multiple time-series design
Definition

Is a variation of the (single-group) time-series design that involves adding a control group to the basic design. This variation eliminates all threats to internal validity.

Term
Counterbalanced design
Definition

all groups receive all treatments but in a different order, the number of groups equals the number of treatments, and groups are posttested after each treatment.  This design is usually employed when administration of a pretest is not possible

Term

Potential threats to validity in an experimental design

Definition

- Any uncontrolled extraneous variables that affect performance on the dependent variable are threats to the validity of an experiment.

- Threats to internal validity: history, maturation, testing, instrumentation, statistical regression, differential selection, mortality.

- Threats to external validity: pretest-treatment interaction, multiple-treatment interference, selection-treatment interaction, specificity of variables.

Term
Factorial design
Definition

Involves two or more independent variables, at least one of which is manipulated by the researcher. It is used to test whether the effects of an independent variable are generalizable across all levels of another variable or whether the effects are specific to particular levels (i.e., whether there is an interaction between the variables).

- The 2x2 is the simplest factorial design.

- Factorial designs rarely include more than three factors.

Term
Interaction
Definition
scores for levels of a first independent variable change depending on the levels of a second independent variable
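A minimal sketch of a 2x2 factorial analysis with an interaction term, assuming the pandas and statsmodels libraries; the factors (method, setting) and the scores are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative balanced 2x2 design: instructional method crossed with class setting
df = pd.DataFrame({
    "method":  ["lecture", "lecture", "discussion", "discussion"] * 4,
    "setting": ["morning", "afternoon"] * 8,
    "score":   [70, 72, 80, 65, 71, 74, 82, 63,
                69, 73, 79, 66, 72, 75, 81, 64],
})

# The method:setting part of the model tests whether the effect of method
# changes depending on the level of setting (an interaction)
model = smf.ols("score ~ C(method) * C(setting)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```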
Term

Difference between single variable design groups and factorial design groups

Definition

- A single-variable design involves one manipulated independent variable; a factorial design is any design that involves two or more independent variables, at least one of which is manipulated.

- Factorial designs can demonstrate relations that a single-variable design cannot. For example, a variable found not to be effective in a single-variable study may interact significantly with another variable.

- The interaction effect allows the researcher to see how the variables operate together.

Term

Symbols

 

X
O
R
Definition
X= treatment
O = test (pre-test or post-test)
R = random assignment of subjects (Ss) to groups