Term
| areas of psychology within the UCR psychology department |
|
Definition
developmental
social/personality
behavioral/cognitive
neuroscience |
|
|
Term
| goals of behavioral research |
|
Definition
describe
predict
explain/understand
solve
|
|
|
Term
| 2 types of behavioral research |
|
Definition
basic - research driven by a researcher's curiosity
applied - research intended to solve real problems |
|
|
Term
| 4 broad methodological strategies |
|
Definition
descriptive
correlational
experimental
quasi-experimental |
|
|
Term
| descriptive research |
|
Definition
| describes patterns of behavior without efforts to explain anything; tells us how things are |
|
|
|
Term
| correlational research |
|
Definition
| attempts to understand how variables are related to each other; tells us how things are in relation to others |
|
|
Term
| experimental research |
|
Definition
| examines the effects of variables on thoughts, behaviors, and emotions by systematically exposing people to various “levels” of those variables; tells us how things are and how they got to be that way |
|
|
Term
| quasi-experimental research |
|
Definition
| Examines the effects of variables on thoughts, behaviors, and emotions BUT the IV cannot be varied or controlled (only naturally occurring); Useful when “true” experiment is either impossible or unethical |
|
|
Term
| 4 ways to "know" something |
|
Definition
tenacity
authority
a priori
empirical reasoning |
|
|
Term
| tenacity |
|
Definition
| people clinging to their beliefs or claims because they seem obvious, make common sense, or were accepted in the past |
|
|
Term
| authority |
|
Definition
| people accepting claims because someone in a position of authority says that it is true |
|
|
Term
| a priori |
|
Definition
| people using individual powers of reason or logic to know or explain the world |
|
|
Term
| empirical reasoning |
|
Definition
| people relying upon the scientific method and attempting to draw upon independent realities to evaluate claims rather than depending on reason alone; implies a combination of logic AND controlled observation and measurement |
|
|
Term
| 3 properties of scientific theory |
|
Definition
systematic empiricism
public verification
testability |
|
|
Term
| systematic empiricism |
|
Definition
| the practice of relying on careful, organized observations to answer questions about our world |
|
|
Term
|
Definition
| the practice of presenting ideas, theories, and findings to the public |
|
|
Term
| testability |
|
Definition
| ideas and claims have to be able to be falsifiable |
|
|
Term
| common fallacies in reasoning |
|
Definition
appeal to ignorance
slippery slope
false alternatives
hasty generalizations
questionable analogies |
|
|
Term
| appeal to ignorance |
|
Definition
| argues that a claim is true because it cannot be proven false or the opposite |
|
|
Term
| slippery slope |
|
Definition
| if the first step in a series of events occurs, the rest must inevitably follow |
|
|
Term
| false alternatives |
|
Definition
| "either/or" thinking in which some classification is presumed to be exclusive or exhaustive |
|
|
Term
| hasty generalizations |
|
Definition
| forming a general conclusion based on an exceptional case or a very small biased sample |
|
|
Term
| questionable analogies |
|
Definition
| we sometimes try to compare apples to oranges, or try to make two situations more similar than they really are |
|
|
Term
| questionable ethical practices |
|
Definition
involving participants without consent/knowledge,
coercing people to participate,
deceiving research participants,
exposing participants to physical and/or mental stress,
violating the confidentiality of participants' data |
|
|
Term
| guidelines for animal research |
|
Definition
must be closely monitored by experienced caretaker,
must have a vet for consultations,
must be housed in humane and healthy conditions,
must make reasonable efforts to avoid harm when possible and all harm must be justified |
|
|
Term
| the peer review process |
|
Definition
| manuscript is sent to one journal for consideration; editor assigns the paper to an associate editor; associate editor identifies 3 anonymous professional reviewers; associate editor takes the reviews and makes the final decision |
|
|
Term
| theory |
|
Definition
| a set of propositions that attempts to specify the interrelationships among a set of concepts |
|
|
Term
| Why construct new theories? |
|
Definition
no theory to explain observation,
research suggests existing theories are inaccurate,
or you don't like the old theory |
|
|
Term
| relationship between theories, hypotheses, studies, and data |
|
Definition
hypotheses (logically deduced from theory),
predictions (hypotheses applied to a specific research setting),
data collection (relate back to theory) |
|
|
Term
| What makes a good hypothesis? |
|
Definition
corresponds with reality,
coherent and parsimonious (Occam's razor),
falsifiable |
|
|
Term
| How do we generate research ideas? |
|
Definition
read relevant literature,
conduct intensive case studies,
notice surprising or paradoxical phenomena,
look for mediating variables to explain known relationships,
attempt to resolve conflicting results,
improve on or extend older ideas,
or exploit unexpected observations |
|
|
Term
| conceptual definition |
|
Definition
| a dictionary-like definition of a concept |
|
|
Term
| operational definition |
|
Definition
| specifies how the concept is measured or manipulated in a particular study; ex. hunger defined as being deprived of food for 12 hours |
|
|
Term
| independent variable (IV) |
|
Definition
| the presumed cause which the researcher manipulates or varies |
|
|
Term
| dependent variable (DV) |
|
Definition
| the measured effect or outcome in which a researcher is interested |
|
|
Term
| IVs and DVs in correlational designs |
|
Definition
| looks at relationship between the two as they occur naturally |
|
|
Term
| measurement |
|
Definition
| the process of assigning numbers to observations |
|
|
Term
| 4 types of measures |
|
Definition
behavioral
physiological
self-report
archival |
|
|
Term
| behavioral measures |
|
Definition
directly observing behaviors
pros - less interpretation required, doesn't rely on memory or willingness to participate or answer questions
cons - time consuming |
|
|
Term
| physiological measures |
|
Definition
assesses internal processes that we cannot directly observe
cons - extremely expensive |
|
|
Term
| self-report measures |
|
Definition
analyze the replies people give to questionnaires and interviews
pros - cheap, see stuff normally not seen
cons - biases interfere, memory, assumptions need to be made |
|
|
Term
| archival measures |
|
Definition
| uses pre-existing measures for current purposes |
|
|
Term
| How do we develop measures? |
|
Definition
identify constructs
identify representative variables
operationally define measures |
|
|
Term
| reliability |
|
Definition
| consistency, stability, or dependability of a measure |
|
|
Term
| observed score = _______ + _________ |
|
Definition
true score
measurement error |
|
|
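The identity above can be illustrated with a small simulation. This is a minimal Python sketch, not course material; the score means, standard deviations, and sample size are all invented for demonstration.

```python
# Classical test theory sketch: observed score = true score + measurement error.
# All numbers here are invented for illustration.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

true_scores = [random.gauss(50, 10) for _ in range(1000)]   # "actual" scores
errors = [random.gauss(0, 5) for _ in range(1000)]          # measurement error
observed = [t + e for t, e in zip(true_scores, errors)]     # what we measure

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# One framing of reliability: the share of observed-score variance that is
# true-score variance. Here roughly 10**2 / (10**2 + 5**2) = 0.8.
reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))
```

Shrinking the error term's standard deviation pushes the ratio toward 1, which is the intuition behind the reliability-improving steps listed later in the deck.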
Term
| observed score = _______ + measurement error |
|
Definition
| true score |
|
|
Term
| observed score = true score + ________ |
|
Definition
| measurement error |
|
|
Term
| _________ = true score + measurement error |
|
Definition
| observed score |
|
|
Term
| What contributes to measurement error? |
|
Definition
transient states
stable attributes
situational factors
characteristics of the measure
mistakes |
|
|
Term
| true score |
|
Definition
| actual score an individual should receive on a measure |
|
|
Term
| measurement error |
|
Definition
| the result of any factor that makes observed scores differ from true scores |
|
|
Term
| Contributions to measurement error? |
|
Definition
transient states (mood, fatigue)
stable attributes (motivation, suspiciousness)
situational factors (experimenter's personality, room temperature)
characteristics of the measure (ambiguous questions, long questionnaire)
mistakes (data entry, coding, counting) |
|
|
Term
| 3 assessments of reliability |
|
Definition
test-retest reliability
internal consistency reliability
inter-rater reliability |
|
|
Term
| test-retest reliability |
|
Definition
| consistency of scores on a measure across time; assessed by testing the same people on two occasions and correlating their scores |
|
Term
| internal consistency reliability |
|
Definition
| consistency among the items within a measure; the degree to which all items reflect the same construct |
|
|
Term
| 3 ways to test internal consistency reliability |
|
Definition
item-total correlation
split-half reliability
Cronbach's alpha coefficient |
|
|
Term
| item-total correlation |
|
Definition
| calculate correlation coefficient between each item and the sum of all other items; should be >.30 |
|
|
Term
| split-half reliability |
|
Definition
| calculate correlation coefficient between two halves of a scale; should be >.70 |
|
|
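As a concrete illustration of the card above, here is a minimal Python sketch of split-half reliability; the six-item responses and the odd/even split are invented for demonstration.

```python
# Split-half reliability sketch: correlate one half of a scale with the other.
# The response data are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Five respondents answering a six-item scale (one row per person).
responses = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 1, 2, 1],
]
half1 = [sum(r[0::2]) for r in responses]   # odd-numbered items
half2 = [sum(r[1::2]) for r in responses]   # even-numbered items
print(round(pearson_r(half1, half2), 2))    # well above the .70 rule of thumb here
```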
Term
| Cronbach's alpha coefficient |
|
Definition
| equivalent to average of all possible split-half reliabilities; should be >.70 |
|
|
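The alpha formula itself is compact enough to show in code. A minimal sketch using the standard formula alpha = (k/(k-1)) * (1 - sum of item variances / total-score variance); the 4-item, 5-respondent data are invented.

```python
# Cronbach's alpha sketch:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals)).
# The item data below are invented for illustration.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of scores per scale item (columns = respondents)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # each respondent's total
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Four items answered by five respondents; items that rise and fall together
# across respondents yield a high alpha.
items = [
    [3, 4, 3, 5, 1],
    [3, 5, 2, 5, 2],
    [4, 4, 3, 5, 1],
    [3, 5, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # about .95 here, above the .70 cutoff
```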
Term
| inter-rater reliability |
|
Definition
| consistency across raters or judges |
|
|
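One common statistic for inter-rater reliability with categorical codes is Cohen's kappa, which corrects raw agreement for chance. Kappa is not named on the card, and the two raters' codes below are invented for demonstration.

```python
# Cohen's kappa sketch: agreement between two raters, corrected for the
# agreement expected by chance. The ratings are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently pick the same code.
    expected = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["agg", "pro", "agg", "agg", "pro", "neu", "agg", "pro", "neu", "agg"]
b = ["agg", "pro", "agg", "neu", "pro", "neu", "agg", "pro", "agg", "agg"]
print(round(cohens_kappa(a, b), 2))  # raw agreement is .80; kappa is lower (~.68)
```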
Term
| How do we increase reliability? |
|
Definition
Standardize administration of measures
Clarify instructions and questions
Train raters and judges carefully
Minimize and check for coding errors |
|
|
Term
| validity |
|
Definition
| the degree to which a measure truly represents the construct we want to measure; can't have validity without reliability but can have reliability without validity |
|
|
Term
| 4 assessments of validity |
|
Definition
face validity
content validity
construct validity (convergent, discriminant)
criterion-related validity (concurrent, predictive) |
|
|
Term
| face validity |
|
Definition
| does the measure look right? items should look like they measure intended construct; ex. need to belong scale |
|
|
Term
| content validity |
|
Definition
| does the measure adequately represent relevant construct? ex. need to belong scale captures different aspects of desire for social acceptance |
|
|
Term
| construct validity |
|
Definition
| does the measure relate to the other constructs as it should? |
|
|
Term
| convergent validity |
|
Definition
| does the measure correlate with other measures to which it should be related? |
|
|
Term
| discriminant validity |
|
Definition
| does the measure NOT correlate with measures to which it should be unrelated? |
|
|
Term
| 4 scales of measurement |
|
Definition
nominal scales
ordinal scales
interval scales
ratio scales |
|
|
Term
| nominal scale |
|
Definition
| data values that are just labels or categories relevant to participants |
|
|
Term
| ordinal scale |
|
Definition
| rank ordering of a set of behaviors or characteristics |
|
|
Term
| interval scale |
|
Definition
| equal differences between numbers reflect equal differences on the dimension being measured; has no meaningful zero |
|
|
Term
| ratio scale |
|
Definition
| interval scale plus a "true zero" |
|
|
Term
| self-report measures |
|
Definition
| involve replies people give to questionnaires and interviews |
|
|
Term
| open-ended questions |
|
Definition
| allow participants to answer questions in their own ways |
|
|
Term
| closed-ended (rating scale) questions |
|
Definition
| provide participants with specific rating dimensions of interest |
|
|
Term
| How should you phrase questions? |
|
Definition
short and simple
use precise terminology
avoid unnecessary negatives
place conditional information before the key idea
avoid unwarranted assumptions
ask only one question at a time (avoid double-barreled questions)
avoid leading questions
avoid false choices
pretest your questions |
|
|
Term
| How many scale points should you use? |
|
Definition
Magic number for memory: 7 +/- 2
Most use 5, 7, or 9 point scales
Reliability increases to 7 points and then levels off |
|
|
Term
| 6 biases of self-report measures |
|
Definition
knowledge problems
motivation problems
verbal/reading skills problems
social desirability bias
response style biases (acquiescent, disagreement)
scale position biases (central tendency, extreme position) |
|
|
Term
| response style and scale position biases |
|
Definition
acquiescent response style
disagreement response style
central tendency bias
extreme position bias |
|
|
Term
| acquiescent response style bias |
|
Definition
| tendency to agree with statements regardless of their content |
|
Term
| disagreement response style bias |
|
Definition
| tendency to disagree with statements regardless of their content |
|
Term
| central tendency bias |
|
Definition
| tendency for raters to stick to the middle of the scale and avoid extreme ratings |
|
|
Term
| extreme position bias |
|
Definition
| tendency to score on the extremes of a scale |
|
|
Term
| 3 decisions to make in observational measures |
|
Definition
Where will the observation occur?
Will the participants know they're observed?
How will behavior be recorded? |
|
|
Term
| 2 types of locational observations |
|
Definition
naturalistic
contrived |
|
|
Term
| naturalistic observations |
|
Definition
| behavior observed as it naturally occurs with no intrusion from the researcher (participant observation - researcher engages in same activities as the people he or she is observing) |
|
|
Term
| contrived observation |
|
Definition
| behavior is observed in research settings that are arranged for observing and recording behavior |
|
|
Term
| 2 types of observational techniques |
|
Definition
undisguised
disguised |
|
Term
| undisguised observation |
|
Definition
| participants know they are being observed; problem: reactivity |
|
|
Term
| disguised observation |
|
Definition
researchers conceal observation of participants
- partial concealment strategy
- ethical issues
- unobtrusive measures |
|
|
Term
| How is behavior recorded? |
|
Definition
narratives
checklists (tally sheets)
temporal measures (duration, latency)
rating scales |
|
|
Term
| What unit of text will be analyzed? |
|
Definition
individual words
utterance
whole text |
|
|
Term
| How will each unit of text be coded? |
|
Definition
| comprehensive coding vs. selective coding |
|
|
Term
| What type of coding system will you use? |
|
Definition
| classification vs. rating system |
|
|
Term
| How will inter-rater reliability be maximized? |
|
Definition
coding system must be specific and precise,
raters should be trained and allowed to practice,
pilot test coding with real data,
check inter-rater agreement periodically |
|
|
Term
| 3 conditions that need to be met in order to infer causality |
|
Definition
co-variation between X and Y
temporal precedence of X before Y
no third variables (alternative explanations ruled out) |
|
|
Term
| 3 essentials for good experimental design |
|
Definition
good manipulation of the independent variable - pilot test, manipulation check
random assignment
experimental control - confound, internal validity |
|
|
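Random assignment, the second essential above, is easy to sketch in code. A minimal illustration; the participant IDs and condition names are invented.

```python
# Random assignment sketch: shuffle the participant pool, then deal participants
# into conditions round-robin so group sizes stay equal. Names are invented.
import random

def randomly_assign(participants, conditions):
    pool = list(participants)
    random.shuffle(pool)                 # the randomizing step
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

random.seed(42)  # fixed seed so the sketch is reproducible
groups = randomly_assign(range(12), ["control", "treatment"])
print({c: len(g) for c, g in groups.items()})  # 6 participants per condition
```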
Term
| pilot test |
|
Definition
| test to see whether the independent variables produce hypothesized effects on the participants' behaviors |
|
|
Term
| manipulation check |
|
Definition
| questions designed to determine whether the independent variable was being manipulated successfully |
|
|
Term
| confound |
|
Definition
| when something other than the independent variable differs in some systematic way |
|
|
Term
| internal validity |
|
Definition
| the degree to which a researcher draws accurate conclusions about the effects of the independent variable |
|
|
Term
| 8 threats to internal validity |
|
Definition
selection
differential attrition
pretest sensitization
history
maturation
instrumentation
statistical regression
miscellaneous design confounds - demand characteristics, experimenter expectancy effects, double-blind procedure |
|
|
Term
| selection |
|
Definition
| group differences because of biased assignment to groups; ex. testing effectiveness of a tutoring program - solution: good random assignment |
|
|
Term
| differential attrition |
|
Definition
| group differences because participants drop out, move from one group to another, or different types of participants drop out vs. stay |
|
|
Term
| pretest sensitization |
|
Definition
| group differences due to presence of pretest; can occur by practice, familiarity, or forms of reactivity - solution: Solomon four-group design |
|
|
Term
| history |
|
Definition
| group differences due to outside events that occur during the research and have significant impact |
|
|
Term
| maturation |
|
Definition
| group differences due to changes in participants over the course of the study; ex. age, wisdom, or short-term like fatigue |
|
|
Term
| instrumentation |
|
Definition
| group differences due to changes in measurement process; ex. changes in equipment or judges/coders |
|
|
Term
| statistical regression |
|
Definition
| group differences due to extreme scores regressing toward the mean on retesting; ex. sophomore slump, Sports Illustrated curse |
|
|
Term
| 3 miscellaneous design confounds |
|
Definition
demand characteristics
experimenter expectancy effects
double-blind procedure |
|
|
Term
| demand characteristics |
|
Definition
| aspects of a study that indicate to participants how they should behave |
|
|
Term
| experimenter expectancy effects (a.k.a. the Rosenthal effect) |
|
Definition
| when the experimenters' expectations distort the results of an experiment by affecting how they interpret participants' behavior. |
|
|
Term
| double-blind procedure |
|
Definition
| neither the participants nor the experimenters who interact with them know which experimental condition a participant is in at the time the study is conducted |
|
|
Term
| external validity |
|
Definition
| the degree to which the results obtained in one study can be replicated or generalized to other samples, research settings, and procedures |
|
|