Term
| American leader in Case Studies |
|
Definition
|
|
Term
| APA published date and edition |
|
Definition
|
|
Term
Volume or issue? Names or initials for first names? Title capitalized? Journal?
Book Title and Chapter?
Online?
Book Publishing? |
|
Definition
volume (not issue); initials for first names; article title not capitalized; journal title italicized
book title italicized
online: add "Retrieved from http://..."
Book publishing: give the publishing city, publishing state: publisher. |
|
|
Term
| Online sources (Advantage) |
|
Definition
|
|
Term
| Online Sources (Disadvantage) |
|
Definition
|
|
Term
|
Definition
| keep the element safe, it is not always about the breakthroughs |
|
|
Term
| Risk/benefit ratio |
|
Definition
| subjective evaluation, by a person, of the risk relative to the benefits, both to the individual and to society |
|
|
Term
| Minimal risk |
|
Definition
| risk is not higher than that encountered in daily life or during performance of routine tasks (unless the benefits are extremely high and outweigh the risks) |
|
|
Term
| Informed consent |
|
Definition
| explicitly expressed willingness to participate in a research project, based on a clear understanding of the nature of the research (all factors that would affect someone's willingness to participate must be disclosed) |
|
|
Term
| Privacy |
|
Definition
| the right of an individual to decide how information about them is communicated to others |
|
|
Term
| Deception |
|
Definition
| intentionally withholding information (acceptable when a debriefing is given in which all of the details are explained, but a rationale must be provided to the IRB) |
|
|
Term
| Debriefing |
|
Definition
| bring the participants to an understanding of what was actually being studied and reveal all deceptions (participants must be informed that there will be a debriefing at the end [requirement to debrief]) |
|
|
Term
| Plagiarism |
|
Definition
| presentation of someone else's ideas without properly citing them; the worst form is claiming an idea to be the first of its kind (public knowledge does not have to be cited) |
|
|
Term
| IRB (Institutional Review Board) membership |
|
Definition
| committee members must be a variety of people with a variety of experience |
|
|
Term
|
Definition
| requires the study to be noninvasive and not use any demographics |
|
|
Term
| who gives consent for minors? |
|
Definition
| a parent or legal guardian gives consent; the minor gives assent |
|
|
Term
| 3 specific risks (and examples) |
|
Definition
1. Physical (broken arm) 2. Psychological (stress) 3. Social (embarrassment) |
|
|
Term
| Idiographic approach |
|
Definition
| study of individuals or small groups |
|
|
Term
| Case Studies (Advantages) |
|
Definition
1. More details about an individual 2. Qualitative data 3. Provides falsification (one example can challenge a scientific theory) 4. Generates new ideas and new hypotheses |
|
|
Term
| Case Studies (Disadvantages) |
|
Definition
1. Time consuming 2. Observer bias (too subjective, things could be interpreted differently) 3. Small sample size (low power) 4. Can't make causal inferences 5. Limited generalization |
|
|
Term
| Case Study Characteristics |
|
Definition
no means, no standard deviations; descriptive information only
self-report and archival data; must monitor as often as possible |
|
|
Term
| Threats to Internal Validity (definition) |
|
Definition
| extraneous variables that do not allow a clear cause-and-effect relationship to be determined |
|
|
Term
| Threats to Internal Validity (all 10) |
|
Definition
1. History 2. Maturation 3. Testing 4. Instrumentation 5. Regression 6. Subject Attrition 7. Selection 8. Contamination 9. Experimenter Expectancy 10. Novelty Effects |
|
|
Term
Threats to Internal Validity
History |
|
Definition
an event occurs during the study that is not related to the IV; common in repeated measures designs
control with an appropriate comparison group |
|
|
Term
Threats to Internal Validity
Maturation |
|
Definition
changes in the participant associated with the passage of time (e.g., getting older)
control by using a comparison group |
|
|
Term
Threats to Internal Validity
Testing |
|
Definition
practice effects (become better at a test after taking it once)
control with a control group |
|
|
Term
Threats to Internal Validity
Instrumentation |
|
Definition
Changes in the measurements: faulty instruments (mechanical) and the observer (experimenter effects, like using different observers)
control by training observers |
|
|
Term
Threats to Internal Validity
Regression |
|
Definition
an extreme score (outlier) tends to return toward the mean on retesting; a very high score is likely to be lower the next time
control by testing multiple times to see whether the extreme score occurred by chance |
|
|
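A minimal Python sketch of regression to the mean, using hypothetical scores (numpy assumed available): people selected for extreme scores on a first test score closer to the mean on retest.
    import numpy as np

    rng = np.random.default_rng(0)
    true_ability = rng.normal(100, 10, 10_000)          # stable "true" scores
    test1 = true_ability + rng.normal(0, 10, 10_000)    # observed score = true score + error
    test2 = true_ability + rng.normal(0, 10, 10_000)    # retest with new error

    extreme = test1 > np.percentile(test1, 95)          # select the top 5% on test 1
    print(round(test1[extreme].mean(), 1))              # far above 100
    print(round(test2[extreme].mean(), 1))              # closer to 100: regression to the mean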
Term
Threats to Internal Validity
Subject Attrition |
|
Definition
participants drop out of one group at a higher rate than another (usually because of the IV)
control by using random assignment and by recording why participants dropped out, then looking for characteristics those participants share |
|
|
Term
Threats to Internal Validity
Selection and Selection Additive Effects |
|
Definition
nonequivalent groups are inadvertently created, so the groups differ significantly before the treatment
control by using random assignment
Selection Additive Effects: when selection combines with other threats to internal validity (e.g., selection-maturation) |
|
|
Term
Threats to Internal Validity
Contamination |
|
Definition
participants in one group learn what another group did or what the research is studying (communication between groups)
control by asking participants not to discuss the study with others |
|
|
Term
Threats to Internal Validity
Experimenter Expectancy |
|
Definition
the experimenter wants participants to react in a certain way and therefore observes them as acting that way, or interprets their behavior as meaning what the experimenter wants it to mean
control by making the study blind or double-blind |
|
|
Term
| Novelty effects |
|
Definition
| a new experience increases a person's excitement and energy; the newness itself produces an emotional change |
|
|
Term
| Characteristics of a true experiment |
|
Definition
1. Intervention or treatment (IV) 2. High degree of control, including random assignment to conditions 3. Appropriate comparisons * placebo: good/bad? |
|
|
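A minimal Python sketch of random assignment to conditions, the control feature of a true experiment; the participant IDs are hypothetical.
    import random

    participants = list(range(1, 21))        # hypothetical participant IDs
    random.shuffle(participants)             # randomize the order
    treatment = participants[:10]            # first half -> treatment condition (IV present)
    control = participants[10:]              # second half -> comparison / placebo group
    print(sorted(treatment))
    print(sorted(control))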
Term
| Mode |
|
Definition
| number that appears the most often |
|
|
Term
| Standard deviation |
|
Definition
| the average amount the results deviate from the mean |
|
|
Term
| Effect size |
|
Definition
| indicates that the treatment and the results have a relationship and how strong that relationship is; the larger the effect size, the stronger the relationship between the treatment and the outcome |
|
|
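A minimal sketch, assuming Cohen's d as the effect-size measure, computing the standardized difference between two hypothetical group means in Python.
    import math
    import statistics

    treatment = [12, 15, 14, 16, 13, 17]     # hypothetical scores
    control   = [10, 11, 12,  9, 13, 11]

    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = math.sqrt((s1**2 + s2**2) / 2)   # pooled SD for equal group sizes
    d = (m1 - m2) / pooled_sd                    # larger d = stronger treatment effect
    print(round(d, 2))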
Term
| F-ratio |
|
Definition
| the variability between groups (due to experimental effect) divided by the variability within groups (due to chance or error) |
|
|
Term
| ANOVA (analysis of variance) |
|
Definition
| used with ANOVA and multiple groups; the differences between groups should be larger than the differences within groups (due to chance or error). F is the ratio of between-group to within-group variability, so if F is about 1, nothing more than chance variation occurred |
|
|
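A minimal sketch (hypothetical data; scipy assumed available) showing the F ratio as between-group variability relative to within-group variability.
    from scipy import stats

    group_a = [4, 5, 6, 5, 4]      # hypothetical scores for three conditions
    group_b = [7, 8, 6, 7, 8]
    group_c = [10, 9, 11, 10, 9]

    f_value, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(round(f_value, 2), round(p_value, 4))   # F much greater than 1 suggests more than chance variation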
Term
| Confidence interval |
|
Definition
| the percentage of confidence that the population mean falls between two numbers (the confidence interval gives the range of values and the probability that the parameter falls between those two numbers) |
|
|
Term
| Type I error |
|
Definition
| reject the null when it is true |
|
|
Term
| Type II error |
|
Definition
| fail to reject the null when it is false |
|
|
Term
| Advantages of a Case Study |
|
Definition
a. more details about the individual are made known because they often spend considerable time with a researcher b. qualitative data c. one example can challenge a scientific theory (provides a falsification to reject the null hypothesis) d. generates new ideas and new hypotheses |
|
|
Term
| Distinguish between a nomothetic and an idiographic approach to research. |
|
Definition
a. nomothetic: groups of people being researched b. idiographic: studies of an individual or small groups of people |
|
|
Term
| Disadvantages to case studies |
|
Definition
a. time consuming b. researcher bias c. small sample size (low power) d. can’t make causal inferences e. limited generalization |
|
|
Term
| Major limitation to drawing cause-and-effect conclusions from case studies? |
|
Definition
| because extraneous variables are not controlled for and several "treatments" may be applied simultaneously |
|
|
Term
| Distinguish between baseline and intervention stages of a single-subject experimental design. |
|
Definition
a. baseline: the researchers record the element’s behavior prior to any treatment (typically looking for the number of times the target behavior happens within a unit of time) b. intervention: treatment is introduced after the element’s behavior is relatively stable |
|
|
Term
| Under what conditions might a single-subject design be more appropriate than a multiple group design? |
|
Definition
| when entering a realm in which little information is known |
|
|
Term
| Why is ABAB design also called a reversal design? |
|
Definition
| ABAB refers to demonstrating behavior changes by alternating treatment and no-treatment conditions: an initial baseline stage (A) is followed by treatment (B), then a return to baseline (A), and then treatment again (B). Because treatment is removed during the second A stage and any improvement in behavior is likely to be reversed at this point, it is also called a reversal design |
|
|
Term
| General procedures and logic that are common to all the major forms of multiple-baseline designs. |
|
Definition
a. Procedures: take multiple baselines (e.g., at home, at work, and at school), then introduce the treatment in one situation while still observing all three; behavior in the treated situation should improve while the other two should not. Then introduce the treatment into a second situation, still observing all three, and finally introduce the treatment into the last situation b. Logic: demonstrates that behavior changes only when the treatment is introduced |
|
|
Term
| What methodological problems are specifically associated with multiple base-line designs? |
|
Definition
| excessive variability in the baseline makes it hard to know whether the behavior was already on the upswing or whether the change was due to the treatment |
|
|
Term
| What methodological problems must be addressed in all single-subject experimental designs? |
|
Definition
| limited external validity (don’t know if the effect can be generalized to other people) |
|
|
Term
| What evidence supports the external validity of single-subject experimental designs? |
|
Definition
a. the types of intervention are often potent ones and frequently produce dramatic results and sizable changes in behavior b. results are replicated across multiple individuals |
|
|
Term
| Case study |
|
Definition
| an intensive description and analysis of a single individual |
|
|
Term
| Nomothetic approach |
|
Definition
| approach to research that seeks to establish broad generalizations or laws that apply to large groups (populations) of individuals; the average or typical performance of the group is emphasized |
|
|
Term
| Idiographic approach |
|
Definition
| intensive study of an individual, with an emphasis on both individual uniqueness and lawfulness |
|
|
Term
| single subject experiment |
|
Definition
| a procedure that focuses on behavior change in one individual by systematically contrasting conditions within that individual while continuously monitoring behavior |
|
|
Term
| Baseline stage |
|
Definition
| first stage of a single-case experiment in which a record is made of the individual's behavior prior to any intervention |
|
|
Term
| ABAB design (reversal design) |
|
Definition
| A single-case experimental design in which an initial baseline stage (A) is followed by a treatment stage (B), a return to baseline (A), and then another treatment stage (B); the researcher observes whether behavior changes on introduction of the treatment, reverses when the treatment is withdrawn, and improves again when the treatment is reintroduced. |
|
|
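A minimal sketch (hypothetical frequency counts) of how ABAB phase data might be summarized: behavior should improve in each B phase and reverse toward baseline in the second A phase.
    from statistics import mean

    phases = {                        # hypothetical counts of the target behavior per session
        "A1 (baseline)":  [8, 9, 7, 8],
        "B1 (treatment)": [3, 2, 3, 2],
        "A2 (baseline)":  [7, 8, 8, 7],   # reversal: behavior returns toward baseline
        "B2 (treatment)": [2, 2, 1, 2],
    }

    for phase, counts in phases.items():
        print(phase, round(mean(counts), 1))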
Term
| Multiple-baseline design (across individuals, across behaviors, across situations) |
|
Definition
| A single-case experimental design in which the effect of a treatment is demonstrated by showing that behaviors in more than one baseline change as a consequence of the introduction of a treatment; multiple baselines are established for different individuals, for different behaviors in the same individual, or for the same individual in different situations |
|
|
Term
| threats to internal validity |
|
Definition
| possible causes of a phenomenon that must be controlled so a clear cause and effect inference can be made |
|
|
Term
| History |
|
Definition
| The occurrence of an event other than the treatment can threaten internal validity if it produces changes in the research participants’ behavior. |
|
|
Term
| Maturation |
|
Definition
| Change associated with the passage of time per se is called maturation. Changes participants undergo in an experiment that are due to maturation and not due to the treatment can threaten internal validity. |
|
|
Term
| Testing |
|
Definition
| Taking a test generally has an effect on subsequent testing. Testing can threaten internal validity if the effect of a treatment cannot be separated from the effect of testing. |
|
|
Term
| Instrumentation |
|
Definition
| Changes over time can take place not only in the participants of an experiment, but also in the instruments used to measure the participants’ performance. Changes due to instrumentation can threaten internal validity if they cannot be separated from the effect of the treatment. |
|
|
Term
| Statistical regression (regression to the mean) |
|
Definition
| Because some component of a test score is due to error (as opposed to true score), extreme scores on one test are likely to be closer to the mean on a second test, thus posing a threat to the validity of an experiment in which extreme groups are selected; the amount of this statistical regression is greater for less reliable tests. |
|
|
Term
| Subject attrition |
|
Definition
| A threat to internal validity occurs when participants are lost from an experiment, for example, when participants drop out of the research project. The loss of participants changes the nature of a group from that established prior to the introduction of the treatment—for example, by destroying the equivalence of groups that had been established through random assignment. |
|
|
Term
| Selection |
|
Definition
| Selection is a threat to internal validity when, from the outset of a study, differences exist between the kinds of individuals in one group and those in another group in the experiment. |
|
|
Term
| Contamination |
|
Definition
| occurs when there is a communication of information about the experiment between groups of participants |
|
|
Term
| confirming what the data reveal |
|
Definition
| In the third stage of data analysis, the researcher determines what the data tell us about behavior. Statistical techniques are used to counter arguments that the results are simply "due to chance." |
|
|
Term
| Stem-and-leaf display |
|
Definition
| A technique for visualizing both the general features of a data set and specific item information by treating leading digits as "stems" and trailing digits as "leaves." |
|
|
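A minimal Python sketch building a stem-and-leaf display for a small hypothetical data set (tens digits as stems, units digits as leaves).
    from collections import defaultdict

    scores = [23, 25, 27, 31, 31, 34, 38, 42, 45, 45, 47, 51]   # hypothetical scores

    stems = defaultdict(list)
    for score in sorted(scores):
        stems[score // 10].append(score % 10)    # leading digit = stem, trailing digit = leaf

    for stem in sorted(stems):
        print(f"{stem} | {' '.join(str(leaf) for leaf in stems[stem])}")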
Term
| Novelty effects |
|
Definition
| Threats to internal validity of a study that occur when people's behavior changes simply because an innovation (e.g., a treatment) produces excitement, energy, and enthusiasm; a Hawthorne effect is a special case of novelty effects. |
|
|
Term
| Quasi-experiments |
|
Definition
| Procedures that resemble those characteristics of true experiments, for example, that some type of intervention or treatment is used and a comparison is provided, but are lacking in the degree of control that is found in true experiments. |
|
|
Term
| nonequivalent control groups design |
|
Definition
| Quasi-experimental procedure in which a comparison is made between control and treatment groups that have been established on some basis other than through random assignment of participants to groups. |
|
|
Term
| simple interrupted time series design |
|
Definition
| Quasi-experimental procedure in which changes in a dependent variable are observed for some period of time both before and after a treatment is introduced. |
|
|
Term
| time series with nonequivalent control group design |
|
Definition
| Quasi-experimental procedure that improves on the validity of a simple time-series design by including a nonequivalent control group; both treatment and comparison groups are observed for a period of time both before and after the treatment. |
|
|
Term
| Program evaluation |
|
Definition
| Research that seeks to determine whether a change proposed by an institution, government agency, or another unit of society is needed and likely to have an effect as planned or, when implemented, to actually have an effect. |
|
|
Term
| Getting to know the data |
|
Definition
| In this first stage of data analysis, the researcher inspects the data for errors and outliers and generally becomes familiar with the general features of the data. |
|
|
Term
| Summarizing the data |
|
Definition
| In this second stage of data analysis, the researcher uses descriptive statistics and graphical displays to summarize the information in a data set. Trends and patterns in the data set are described. |
|
|
Term
| Mode |
|
Definition
| The score that appears most frequently in the distribution. |
|
|
Term
| Median |
|
Definition
| The middle point in a distribution, above which half the scores fall and below which half fall. |
|
|
Term
| Mean |
|
Definition
| The arithmetic mean, or average, is determined by dividing the sum of the scores by the number of scores contributing to that sum. The mean is the most commonly used measure of central tendency. |
|
|
Term
| Range |
|
Definition
| The difference between the highest and lowest number in a distribution. |
|
|
Term
| Standard deviation |
|
Definition
| The most commonly used measure of dispersion that indicates approximately how far, on the average, scores differ from the mean |
|
|
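A minimal Python sketch computing the measures of central tendency and dispersion defined above for a small hypothetical data set.
    import statistics

    scores = [2, 4, 4, 5, 7, 9, 11]                   # hypothetical data set

    print("mode:", statistics.mode(scores))           # most frequent score
    print("median:", statistics.median(scores))       # middle point of the distribution
    print("mean:", statistics.mean(scores))           # arithmetic average
    print("range:", max(scores) - min(scores))        # highest minus lowest
    print("SD:", round(statistics.stdev(scores), 2))  # average distance of scores from the mean (sample SD)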
Term
| standard error of the mean |
|
Definition
| The standard deviation of the sampling distribution of means. |
|
|
Term
| estimated standard error of the mean |
|
Definition
| An estimate of the true standard error obtained by dividing the sample standard deviation by the square root of the sample size. |
|
|
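A minimal sketch of the estimated standard error of the mean for a hypothetical sample: the sample standard deviation divided by the square root of the sample size.
    import math
    import statistics

    sample = [12, 15, 11, 14, 13, 16, 12, 15]    # hypothetical sample

    s = statistics.stdev(sample)                 # sample standard deviation
    n = len(sample)
    sem = s / math.sqrt(n)                       # estimated standard error of the mean
    print(round(sem, 3))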
Term
| confidence interval for a population parameter |
|
Definition
| A range of values around a sample statistic (e.g., a sample mean) with specified probability (e.g., .95) that the population parameter (e.g., population mean) has been captured within that interval. |
|
|
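A minimal sketch (scipy assumed available; hypothetical sample) of a 95% confidence interval for a population mean, built from the sample mean, the estimated standard error, and the t distribution.
    import math
    import statistics
    from scipy import stats

    sample = [12, 15, 11, 14, 13, 16, 12, 15]          # hypothetical sample
    m = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))

    t_crit = stats.t.ppf(0.975, df=len(sample) - 1)    # two-tailed critical t for 95% confidence
    lower, upper = m - t_crit * sem, m + t_crit * sem
    print(round(lower, 2), round(upper, 2))            # interval likely to capture the population mean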
Term
| Scatterplot |
|
Definition
| A graph showing the relationship between two variables by indicating the intersection of two measures obtained from the same person, thing, or event. |
|
|
Term
| Linear trend |
|
Definition
| A trend in the data that is appropriately summarized by a straight line. |
|
|
Term
| Positive relationship |
|
Definition
| A relationship between two variables in which values for one measure increase as the values of the other measure also increase. |
|
|
Term
| Negative relationship |
|
Definition
| A relationship between two variables in which values for one measure increase as the values of the other measure decrease. |
|
|
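A minimal sketch (numpy assumed available) contrasting a positive and a negative relationship between hypothetical measures using the correlation coefficient.
    import numpy as np

    hours_studied = np.array([1, 2, 3, 4, 5, 6])
    exam_score    = np.array([55, 60, 64, 70, 75, 82])   # rises with hours: positive relationship
    errors_made   = np.array([30, 26, 22, 18, 15, 10])   # falls with hours: negative relationship

    print(round(np.corrcoef(hours_studied, exam_score)[0, 1], 2))   # near +1
    print(round(np.corrcoef(hours_studied, errors_made)[0, 1], 2))  # near -1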
Term
| Null hypothesis |
|
Definition
| Assumption used as the first step in statistical inference whereby the independent variable is said to have had no effect. |
|
|
Term
| Level of significance (alpha) |
|
Definition
| The probability when testing the null hypothesis that is used to indicate whether an outcome is statistically significant. Level of significance, or alpha, is equal to the probability of a Type I error. |
|
|
Term
| Type I error |
|
Definition
| The probability of rejecting the null hypothesis when it is true, equal to the level of significance. |
|
|
Term
| Type II error |
|
Definition
| The probability of failing to reject the null hypothesis when it is false. |
|
|
Term
| Analytical comparison |
|
Definition
| A statistical technique that can be applied (usually after obtaining a significant omnibus F-test) to locate the specific source of systematic variation in an experiment. |
|
|
Term
| Cohen's f |
|
Definition
| A measure of effect size when there are more than two means that defines an effect relative to the degree of dispersal among group means. Based on Cohen’s guidelines, an f value of .10, .25, and .40, defines a small, medium, and large effect size, respectively. |
|
|
Term
| Eta squared (proportion of variance accounted for) |
|
Definition
| A measure of the strength of association (or effect size) based on the proportion of variance accounted for by the effect of the independent variable on the dependent variable. |
|
|
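A minimal worked sketch of the two effect-size measures above, assuming hypothetical sums of squares from an ANOVA summary table: eta squared is SS-between over SS-total, and Cohen's f can be derived from eta squared.
    import math

    ss_between = 40.0        # hypothetical sums of squares from an ANOVA summary table
    ss_within = 160.0
    ss_total = ss_between + ss_within

    eta_squared = ss_between / ss_total                    # proportion of variance accounted for
    cohens_f = math.sqrt(eta_squared / (1 - eta_squared))  # effect size based on dispersal of group means
    print(round(eta_squared, 2), round(cohens_f, 2))       # 0.2 and 0.5: a large effect by Cohen's guidelines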
Term
| Omnibus F-test |
|
Definition
| The initial overall analysis based on ANOVA. |
|
|
Term
| F-ratio (F-test) |
|
Definition
| In the analysis of variance, or ANOVA, the ratio of between group variation and within group or error variation. |
|
|
Term
| single factor independent groups design |
|
Definition
| An experiment that involves independent groups with one independent variable. |
|
|
Term
| Analysis of variance (ANOVA) |
|
Definition
| The analysis of variance, or ANOVA, is the most commonly used inferential test for examining a null hypothesis when comparing more than two means in a single-factor study, or in studies with more than one factor (i.e., independent variable). The ANOVA test is based on analyzing different sources of variation in an experiment. |
|
|
Term
| repeated measures (within-subjects) t-test |
|
Definition
| An inferential test for comparing two means from the same group of subjects or from two groups of subjects "matched" on some measure related to the dependent variable. |
|
|
Term
| t-test for independent groups |
|
Definition
| An inferential test for comparing two means from different groups of subjects. |
|
|
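A minimal sketch (scipy assumed available; hypothetical scores) of the two t-tests above: ttest_ind for independent groups and ttest_rel for repeated measures or matched groups.
    from scipy import stats

    group_1 = [10, 12, 11, 14, 13, 12]     # hypothetical scores from two independent groups
    group_2 = [15, 17, 14, 16, 18, 15]
    print(stats.ttest_ind(group_1, group_2))    # t-test for independent groups

    pretest  = [10, 12, 11, 14, 13, 12]    # hypothetical repeated measures on the same subjects
    posttest = [13, 14, 13, 16, 15, 14]
    print(stats.ttest_rel(pretest, posttest))   # repeated measures (within-subjects) t-test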
Term
| Power |
|
Definition
| Probability in a statistical test that a false null hypothesis will be rejected; power is related to the level of significance selected, the size of the treatment effect, and the sample size. |
|
|
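A minimal sketch (statsmodels assumed available) of how power relates to effect size, alpha, and sample size, solving for the sample size needed to reach .80 power in an independent-groups design.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5,   # medium effect (Cohen's d)
                                       alpha=0.05,        # level of significance
                                       power=0.80)        # desired probability of rejecting a false null
    print(round(n_per_group))                             # roughly 64 participants per group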