Term
| Have at least 2 times the number of items you plan to have in the final scale |
|
Definition
|
|
Term
| Cross-cultural methods (back translation and measurement invariance) |
|
Definition
| Schaffer & Riordan (2003) |
|
|
Term
| The Earth Is Round (p < .05): Should rely on effect sizes and CIs |
|
Definition
| Cohen (1994) |
|
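As a worked illustration of the point above (report effect sizes with confidence intervals, not just p values), here is a minimal Python sketch; the function name, data, and the normal-approximation standard error for Cohen's d are my additions, not part of the deck.

    import numpy as np

    def cohens_d_with_ci(x, y):
        """Cohen's d (pooled SD) with an approximate 95% normal-theory CI."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n1, n2 = len(x), len(y)
        sp = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2))
        d = (x.mean() - y.mean()) / sp
        se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # approximate SE of d
        return d, (d - 1.96 * se, d + 1.96 * se)

    # hypothetical scores for two small groups
    d, ci = cohens_d_with_ci([4, 5, 6, 5, 7, 6], [3, 4, 5, 4, 4, 5])
    print(round(d, 2), [round(v, 2) for v in ci])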
Term
| Crud Factor: The null hypothesis is never actually exactly 0; in large samples, everything correlates with everything to some degree. |
|
Definition
|
|
Term
| The earth is NOT round, p = .00: NHST is still useful |
|
Definition
|
|
Term
| Controlling isn't a magical purification tool |
|
Definition
| Spector & Brannick (2011) |
|
|
Term
| Difference scores? Use polynomial regression. |
|
Definition
|
|
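A minimal sketch of the polynomial-regression alternative to difference scores: rather than regressing the outcome on (X - Y), regress it on X, Y, X^2, X*Y, and Y^2. The variable names and simulated data are hypothetical; ordinary least squares is fit with numpy.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)                               # e.g., actual level
    y = rng.normal(size=200)                               # e.g., desired level
    z = 2 * x - y + 0.5 * x * y + rng.normal(size=200)     # hypothetical outcome

    # design matrix for the quadratic polynomial model: 1, X, Y, X^2, X*Y, Y^2
    X = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    print(dict(zip(["b0", "bX", "bY", "bX2", "bXY", "bY2"], coef.round(2))))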
Term
| Bootstrapping for mediation |
|
Definition
| Preacher & Hayes (2004; 2008) |
|
|
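A minimal sketch of the general idea behind bootstrapping an indirect effect (resample cases, re-estimate the a and b paths, take percentiles of a*b); this is not the Preacher & Hayes macro itself, and the simulated data and helper function are my additions.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 150
    x = rng.normal(size=n)                       # predictor
    m = 0.5 * x + rng.normal(size=n)             # mediator
    y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome

    def slope(pred, crit, covariate=None):
        """OLS slope of crit on pred (optionally partialling out a covariate)."""
        cols = [np.ones_like(pred), pred] if covariate is None else [np.ones_like(pred), pred, covariate]
        return np.linalg.lstsq(np.column_stack(cols), crit, rcond=None)[0][1]

    boot = []
    for _ in range(5000):
        idx = rng.integers(0, n, n)                    # resample cases with replacement
        a = slope(x[idx], m[idx])                      # a path: X -> M
        b = slope(m[idx], y[idx], covariate=x[idx])    # b path: M -> Y controlling for X
        boot.append(a * b)

    lo, hi = np.percentile(boot, [2.5, 97.5])          # 95% percentile CI for the indirect effect
    print(round(lo, 3), round(hi, 3))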
Term
SEM fit indices benchmarks
- CFI/TLI ≥ .95
- SRMR ≤ .08
- RMSEA ≤ .06
|
|
Definition
| Hu & Bentler (1999) |
|
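For reference, a math sketch of two of these indices under the usual ML-based definitions (exact conventions vary slightly across programs); \chi^2_M and df_M belong to the fitted model, \chi^2_B and df_B to the baseline model, and N is the sample size:

    \mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\,0)}{df_M\,(N-1)}}, \qquad
    \mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\,0)}{\max(\chi^2_M - df_M,\ \chi^2_B - df_B,\ 0)}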
Term
| The p value is arbitrary. We should rely on the practical importance of effect sizes |
|
Definition
| Rosnow & Rosenthal (1989) - Also introduces Rosenthal's BESD, which is designed to indicate the practical importance of an effect size estimate. |
|
|
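A minimal sketch of the BESD mentioned above: it re-expresses a correlation r as hypothetical "success rates" of .50 - r/2 and .50 + r/2 in the two groups; the example value is made up.

    def besd(r):
        """Binomial Effect Size Display: success rates implied by correlation r."""
        return 0.5 - r / 2, 0.5 + r / 2

    # even a 'small' r = .20 corresponds to moving success from 40% to 60%
    print(besd(0.20))   # (0.4, 0.6)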
Term
| Meta Method: Use artifact corrections to estimate rho (the true-score correlation) |
|
Definition
|
|
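A minimal sketch of the artifact-correction idea (in the spirit of psychometric meta-analysis): disattenuate each observed correlation for predictor and criterion unreliability, then take a sample-size-weighted mean as the estimate of rho. All correlations, reliabilities, and Ns below are hypothetical.

    import numpy as np

    r_obs = np.array([0.22, 0.30, 0.18, 0.26])    # observed correlations (hypothetical)
    rxx   = np.array([0.80, 0.75, 0.85, 0.70])    # predictor reliabilities
    ryy   = np.array([0.70, 0.72, 0.68, 0.74])    # criterion reliabilities
    n     = np.array([120, 200, 90, 150])         # sample sizes

    r_corrected = r_obs / np.sqrt(rxx * ryy)      # correction for attenuation
    rho_hat = np.average(r_corrected, weights=n)  # N-weighted mean corrected correlation
    print(round(rho_hat, 3))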
Term
| Meta Method: No corrections - use inverse-variance weights (and the Q test to check for the possibility of moderators) |
|
Definition
|
|
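A minimal sketch of the no-corrections approach: convert rs to Fisher's z, weight each study by the inverse of its sampling variance (n - 3), and compute Cochran's Q against a chi-square with k - 1 df to flag possible moderators. The study values are hypothetical; scipy is used only for the chi-square p value.

    import numpy as np
    from scipy.stats import chi2

    r = np.array([0.22, 0.30, 0.18, 0.26])       # study correlations (hypothetical)
    n = np.array([120, 200, 90, 150])

    z = np.arctanh(r)                  # Fisher's z transform
    w = n - 3                          # inverse-variance weights (var of z = 1/(n - 3))
    z_bar = np.sum(w * z) / np.sum(w)  # weighted mean effect in the z metric
    Q = np.sum(w * (z - z_bar) ** 2)   # Cochran's Q (heterogeneity test)
    p = chi2.sf(Q, df=len(r) - 1)

    print(round(np.tanh(z_bar), 3), round(Q, 2), round(p, 3))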
Term
| Meta Method: Bare bones - corrects only for sampling error; deals only with observed scores. |
|
Definition
|
|
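A minimal sketch of the bare-bones logic: compare the N-weighted variance of the observed rs with the variance expected from sampling error alone; leftover variance suggests moderators. The formulas follow the usual bare-bones expressions and the data are hypothetical.

    import numpy as np

    r = np.array([0.22, 0.30, 0.18, 0.26])    # observed correlations (hypothetical)
    n = np.array([120, 200, 90, 150])

    r_bar = np.average(r, weights=n)                    # N-weighted mean r
    var_obs = np.average((r - r_bar) ** 2, weights=n)   # observed variance of r
    var_err = (1 - r_bar**2) ** 2 / (n.mean() - 1)      # expected sampling-error variance
    var_res = max(var_obs - var_err, 0)                 # residual ("true") variance

    print(round(r_bar, 3), round(var_obs, 5), round(var_err, 5), round(var_res, 5))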
Term
| The p value (item difficulty) should average .50 (and range between .1 and .9) |
|
Definition
|
|
Term
| Item-total correlations should be at least .30 (otherwise they do not discriminate well). |
|
Definition
|
|
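A minimal sketch tying the two item-analysis rules above together: compute each item's p value (proportion passing) and its corrected item-total correlation (item vs. the sum of the remaining items), then flag items outside the rule-of-thumb ranges. The dichotomous response matrix is simulated and purely hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    responses = (rng.random((200, 5)) < [0.3, 0.5, 0.6, 0.8, 0.95]).astype(float)  # rows = people, cols = items

    p_values = responses.mean(axis=0)   # item difficulty (proportion correct)
    for j in range(responses.shape[1]):
        rest = responses.sum(axis=1) - responses[:, j]    # total score without item j
        r_it = np.corrcoef(responses[:, j], rest)[0, 1]   # corrected item-total correlation
        flag = "  <-- review" if not (0.1 <= p_values[j] <= 0.9) or r_it < 0.30 else ""
        print(f"item {j + 1}: p = {p_values[j]:.2f}, r_it = {r_it:.2f}{flag}")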
Term
| Reverse coding can result in 2 artificial factors |
|
Definition
|
|
Term
| CMV isn't necessarily a big deal; we must look at it on a case-by-case basis |
|
Definition
|
|
Term
| Provides recommendations to strengthen longitudinal studies |
|
Definition
|
|
Term
| Overview of research design, validity, and reliability |
|
Definition
|
|
Term
| Mundane Realism (Stone-Romero, 2002) |
|
Definition
| The degree to which the circumstances that research participants encounter in the lab are highly similar to the circumstances that would be found in settings in which the phenomenon occurs naturally |
|
|
Term
| Overreliance on theory stifles the potential impact of interesting findings |
|
Definition
|
|
Term
| Theory is a thorough description of underlying processes that connect phenomena (not simply references, diagrams, or variables) |
|
Definition
|
|
Term
| There is no automatic inference machine. Statistics cannot provide a test for causation, and you certainly can't just throw data into a computer and have it tell you the answer. You have to make inferences and decisions. |
|
Definition
|
|
Term
| The legitimacy of a mediation claim depends on the study design and setting |
|
Definition
| Stone-Romero and Rosopa (2008) |
|
|
Term
| Field and lab results often converge. Lab generally produces psychological truths |
|
Definition
|
|
Term
| Lab and field are complementary. Both should be equally respected. |
|
Definition
|
|
Term
| Randomized experiments can be generalizable; they serve as a potent design for inferring causality |
|
Definition
|
|
Term
| Review of experience sampling and daily diary methods |
|
Definition
|
|
Term
| Self-reports are good for certain purposes (e.g., when trying to assess how people feel about their jobs) |
|
Definition
|
|
Term
| When a construct is viewed as causing a measure |
|
Definition
| Reflective measures (Edwards & Bagozzi, 2000) |
|
|
Term
| When a measure is viewed as causing a construct |
|
Definition
| Formative measures (Edwards & Bagozzi, 2000) |
|
|
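The two cards above are often summarized with a pair of measurement equations (a math sketch; xi is the construct, x_i the measures, lambda_i and gamma_i the loadings/weights, delta_i and zeta the errors):

    \text{Reflective (construct} \rightarrow \text{measures):}\quad x_i = \lambda_i\,\xi + \delta_i
    \text{Formative (measures} \rightarrow \text{construct):}\quad \xi = \textstyle\sum_i \gamma_i\,x_i + \zeta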
Term
| IRT item parameters (3PL model) |
Definition
a = discrimination (slope of the item characteristic curve)
b = difficulty (easier items shift the curve further to the left)
c = guessing parameter (lower asymptote of the ICC) |
|
|
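A minimal sketch of the three-parameter logistic (3PL) item response function these parameters come from; the theta values and parameter values below are made up.

    import numpy as np

    def p_correct_3pl(theta, a, b, c):
        """3PL: c is the lower asymptote, b shifts the curve left/right, a controls its slope."""
        return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

    theta = np.array([-2.0, 0.0, 2.0])                  # hypothetical abilities
    print(p_correct_3pl(theta, a=1.2, b=-0.5, c=0.2))   # easier item (b < 0): curve sits to the left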
Term
|
Definition
Independence of Observations
Normality
Homogeneity of Variances |
|
|
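A minimal sketch of checking the last two assumptions in Python with scipy (independence is a design/sampling issue rather than something a single test establishes); the group data are simulated and hypothetical.

    import numpy as np
    from scipy.stats import shapiro, levene

    rng = np.random.default_rng(3)
    groups = [rng.normal(0, 1, 40), rng.normal(0.3, 1, 40), rng.normal(0.5, 1.2, 40)]  # hypothetical groups

    for i, g in enumerate(groups, start=1):
        stat, p = shapiro(g)               # normality within each group
        print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

    stat, p = levene(*groups)              # homogeneity of variances across groups
    print(f"Levene p = {p:.3f}")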
Term
|
Definition
- Unreliability
- Error in the Criterion
- Range Restriction/Range Enhancement (the S+H correction should be used in any concurrent validity study)
|
|
|
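A minimal sketch of two standard corrections implied by the list above: disattenuating an observed validity coefficient for criterion unreliability, and correcting for direct range restriction (Thorndike's Case II formula, presumably the kind of correction the S+H note refers to). All numbers are hypothetical.

    import numpy as np

    r_obs = 0.25   # observed validity coefficient (hypothetical)
    ryy   = 0.70   # criterion reliability
    u     = 1.5    # SD ratio: unrestricted SD / restricted SD

    r_crit = r_obs / np.sqrt(ryy)                               # correction for criterion unreliability
    r_rr   = (r_obs * u) / np.sqrt(1 + r_obs**2 * (u**2 - 1))   # direct range-restriction correction

    print(round(r_crit, 3), round(r_rr, 3))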
Term
|
Definition
| Square root of reliability is the upper bound on validity |
|
|
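A one-line math sketch of why this bound holds in classical test theory (the observed correlation is the true-score correlation attenuated by both reliabilities, and no correlation or reliability exceeds 1):

    r_{xy} = r_{T_x T_y}\,\sqrt{r_{xx}\,r_{yy}} \;\le\; \sqrt{r_{xx}\,r_{yy}} \;\le\; \sqrt{r_{xx}}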
Term
| Threats to Statistical Conclusion Validity |
|
Definition
- Statistical Power
- Violation of Independence
- Fishing/The Error Rate Problem
- Unreliability of Measures
- Unreliability of Treatment Implementation
- Random Irrelevancies in the setting
- Heterogeneity of Respondents
- Restriction of Range
- Inaccurate Effect Size Estimation
|
|
|
Term
| Threats to Internal Validity |
|
Definition
- Ambiguous Temporal Precedence
- Selection
- History
- Mortality
- Regression to the Mean
- Maturation
- Testing
- Instrumentation
- Additive and Interactive Effects (e.g., Selection x History)
- Ecological Fallacy
|
|
|
Term
| Threats to External Validity |
|
Definition
- Units x Treatment
- Testing x Treatment
- Setting x Treatment
- Time x Treatment
- Different Treatment Interactions
|
|
|
Term
| Levels of measurement invariance |
Definition
- Configural
- Metric
- Scalar
- Error
|
|
|
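A sketch of what is typically held equal across groups g at each level listed above, for a common-factor model x_g = \tau_g + \Lambda_g \xi + \delta_g (terminology varies; "error" invariance is often called strict or residual invariance):

    \text{Configural: same pattern of fixed/free loadings in } \Lambda_g
    \text{Metric: } \Lambda_g \text{ equal across groups}
    \text{Scalar: } \Lambda_g \text{ and } \tau_g \text{ equal across groups}
    \text{Error: } \Lambda_g,\ \tau_g, \text{ and } \Theta_{\delta,g} \text{ (residual variances) equal across groups}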
Term
| Steps for Survey Development |
|
Definition
- Conduct a Literature Review
- Operationally Define the Construct
- Ask Individuals to Come Up with Critical Incidents (CIs; Flanagan, 1954)
- Write Items
- Pilot Test the Items
- Delete Poor Items
- Administer the Survey to a Large Population
- Evaluate the Survey (Reliability, Criterion Validity, Statistical Artifacts, Construct Validity, Group Differences/Differential Prediction)
|
|
|
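For the evaluation step above, a minimal sketch of coefficient (Cronbach's) alpha computed from a pilot data matrix; the function and simulated data are my additions.

    import numpy as np

    def cronbach_alpha(items):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        items = np.asarray(items, float)
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

    rng = np.random.default_rng(4)
    latent = rng.normal(size=(300, 1))
    pilot = latent + rng.normal(size=(300, 6))   # six noisy indicators of one hypothetical construct
    print(round(cronbach_alpha(pilot), 2))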
Term
| Threats to Construct Validity |
|
Definition
- Poorly defined or confounded construct
- Mono-operation bias (single operationalization of the construct)
- Mono-method bias
- Interaction of procedure and treatment
- Diffusion/imitation of treatment (treatment contamination from the treatment group to the control group)
- Experimenter expectancies (experimenter provides clues about how to act/react)
- Confounding levels of constructs with other constructs
- Compensatory equalization of treatment (we may compensate the control group in other ways because it is inherently unfair to deny treatment)
- Compensatory rivalry: the control group works hard to seem as good as the treatment group
- Resentful demoralization of respondents receiving the less desirable treatment
- Hypothesis guessing |
|
|