Term
|
Definition
| A one-paragraph summary of the content and purpose of the research report. |
|
|
Term
|
Definition
| Portion of the target population that is available to the researcher and willing to participate. |
|
|
Term
|
Definition
| The significance level (commonly .05) used as the criterion for calling a result statistically significant. |
|
|
Term
|
Definition
| When a participant develops expectations about which condition should occur next in the sequence. |
|
|
Term
|
Definition
| Differences between participants in different conditions due to a biased system of assignment to conditions, rather than to the influence of the IV. |
|
|
Term
|
Definition
| Loss of participants during a study; are the participants who drop out different from those who continue? |
|
|
Term
|
Definition
| Measure of the behavior before any intervention. |
|
|
Term
|
Definition
| The most common technique for carrying out random assignment in the random groups design; each block includes a random order of the conditions, and there are as many blocks as there are subjects in each condition of the experiment (see the sketch below). |
|
|
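A minimal sketch of block randomization, not part of the original card; the condition names and group size are invented for illustration:

    import random

    def block_randomize(conditions, subjects_per_condition):
        # One shuffled block of all the conditions per subject needed in each condition,
        # so the number of blocks equals the number of subjects per condition.
        order = []
        for _ in range(subjects_per_condition):
            block = list(conditions)
            random.shuffle(block)  # each block is a random order of the conditions
            order.extend(block)
        return order

    # Example: 3 conditions, 2 subjects per condition -> 2 blocks, 6 assignments
    print(block_randomize(["control", "low dose", "high dose"], 2))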
Term
|
Definition
| An intensive description and analysis of a single individual. |
|
|
Term
|
Definition
| The initial step in data reduction, in which units of behavior or particular events are identified and classified according to specific criteria. |
|
|
Term
|
Definition
| Individuals in the research situation who are instructed to behave a certain way in order to create a situation for observing behavior. |
|
|
Term
|
Definition
| Indicates the range of values that we can expect to contain a population value with a specified degree of confidence, commonly 95% (see the formula below). |
|
|
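For reference (standard formula, not from the original card), a confidence interval around a sample mean is commonly computed as

    CI_{95\%} = \bar{X} \pm t_{\text{crit}} \times SEM

where \bar{X} is the sample mean, SEM is the standard error of the mean, and t_crit is the critical t value for the chosen confidence level (about 1.96 for 95% with large samples).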
Term
|
Definition
| Participants must be informed that all data are kept confidential. |
|
|
Term
|
Definition
| When the IV of interest systematically covaries with a second, unintended IV |
|
|
Term
|
Definition
| Everything except the IV is held identical across conditions. |
|
|
Term
|
Definition
| Abstract concepts, e.g., creativity, empathy, motivation, optimism, etc. |
|
|
Term
|
Definition
| The extent to which your test measures the construct it is supposed to measure. (Our IQ test correlates with other IQ tests) |
|
|
Term
|
Definition
| When participants communicate information about the experiment to one another while it is under way. |
|
|
Term
|
Definition
| Group that does not receive any active treatment and is compared to the treatment group. |
|
|
Term
|
Definition
| Selecting readily available participants, e.g., Intro to Psych students. |
|
|
Term
|
Definition
| When two measures of the same construct converge, e.g., Satisfaction with Life scores correlate with scores on another life-satisfaction scale. |
|
|
Term
|
Definition
| When two different measures in the same experiment relate to or vary with one another. |
|
|
Term
|
Definition
| Balances practice effects in a repeated measures design; every participant is exposed to every condition, but in different orders (see the sketch below). |
|
|
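A minimal sketch of complete counterbalancing (all possible orders), not part of the original card; the condition names are invented:

    from itertools import permutations

    conditions = ["A", "B", "C"]
    orders = list(permutations(conditions))  # every possible order of the conditions
    for i, order in enumerate(orders, start=1):
        # assign participants evenly across these orders to balance practice effects
        print(f"Order {i}: {' -> '.join(order)}")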
Term
|
Definition
| At the end of a study, researchers explain every aspect of the study to participants and allow them to ask questions. |
|
|
Term
|
Definition
| Moving from a general principle to a specific case or prediction. |
|
|
Term
|
Definition
| The number of values that are free to vary when a statistic is calculated. |
|
|
Term
|
Definition
| Cues used by participants to guide their behavior in a study, often leading them to act as they feel the researcher expects. |
|
|
Term
|
Definition
| Measure in the study that is affected by the IV. |
|
|
Term
|
Definition
| Statistics that describe the data, e.g., measures of central tendency (mean, median, mode) and measures of variability (range, SD, variance, SEM). |
|
|
Term
|
Definition
| The effects of one condition persist and influence performance in other conditions. |
|
|
Term
|
Definition
| Respondents in one group become aware of the information intended for the other group. |
|
|
Term
|
Definition
| When measures of distinct constructs diverge, e.g., Life Satisfaction and General Happiness: satisfaction with one's life is not the same as general happiness. |
|
|
Term
|
Definition
| People who take part in the study |
|
|
Term
|
Definition
| Neither the participant nor the observer knows which treatment is being administered. |
|
|
Term
|
Definition
| Acquiring knowledge in a way that emphasizes direct observation and experimentation as the means of answering questions. |
|
|
Term
|
Definition
| Control group that is put on a waitlist rather than given an ineffective treatment. |
|
|
Term
|
Definition
| Measure of effect size for a repeated measures design; a strength-of-association measure. |
|
|
Term
|
Definition
| A controlled research situation in which scientists manipulate one or more factors and observe the effect of this manipulation on behavior. |
|
|
Term
|
Definition
| Group that is receiving the treatment. |
|
|
Term
|
Definition
| The statement created by researchers when they speculate about the outcome of a study or experiment. |
|
|
Term
|
Definition
| The researcher unintentionally influences the participants. |
|
|
Term
|
Definition
| Potential variables that are not directly of interest to the researcher but that could still be sources of confounding in the experiment. |
|
|
Term
|
Definition
| Practice effects; when a participant repeats a task so often that they become fatigued or burnt out. |
|
|
Term
|
Definition
| Procedure in which one or more IVs are manipulated by an observer in a natural setting to determine their effect on behavior. |
|
|
Term
|
Definition
| How well the sample can be generalized to the population. |
|
|
Term
|
Definition
| When the treatment group changes their behavior not because of the treatment, but because they are getting attention. |
|
|
Term
|
Definition
| Restricts who has access to your protected health information and under what circumstances it can be shared and with whom. |
|
|
Term
|
Definition
| The occurrence of an event other than the treatment that can threaten internal validity if it produces changes in the research participants' behavior. |
|
|
Term
|
Definition
| Testable prediction for a set of variables |
|
|
Term
|
Definition
| When participants change their behavior because they are trying to guess what the observer's study is about. |
|
|
Term
|
Definition
| Manipulated, categorical variable; affects the DV. |
|
|
Term
|
Definition
| Moving from specific cases to a general principle. |
|
|
Term
|
Definition
| Confirms whether the IV has produced an effect in an experiment. |
|
|
Term
|
Definition
| The researcher must have consent from the participants before going on with the experiment. |
|
|
Term
|
Definition
| The instruments used in an experiment can change over time, threatening internal validity. |
|
|
Term
|
Definition
| The effect of one IV differs depending on the level of a second IV. |
|
|
Term
| INTEROBSERVER RELIABILITY |
|
Definition
| Degree to which two independent observers are in agreement. |
|
|
Term
|
Definition
| Institutional Review Board; research involving human participants must be approved by this board before it is conducted. |
|
|
Term
|
Definition
| Each IV can have multiple conditions (levels); e.g., a background IV with two levels: serene or busy. |
|
|
Term
|
Definition
| Trend in the data that is summarized by a straight line. |
|
|
Term
|
Definition
| Aims to review critical points of current knowledge |
|
|
Term
|
Definition
| Research design in which the same sample of respondents is tested more than once. |
|
|
Term
|
Definition
| Overall effect of an IV on a DV |
|
|
Term
|
Definition
| A process used to verify whether the experiment works as intended; necessary for validity and involves constant comparison of variables. |
|
|
Term
|
Definition
| Change associated with the passage of time. |
|
|
Term
|
Definition
| Analysis of results of several independent experiments investigating the same research area. |
|
|
Term
|
Definition
| Principles and practices that underlie research in the field. |
|
|
Term
|
Definition
| Researchers always want to have as little risk as possible during a study. |
|
|
Term
|
Definition
| The researcher observes participants in their natural setting without intervention. |
|
|
Term
|
Definition
| Null hypothesis significance testing. Used to decide whether a variable had an effect in the study. |
|
|
Term
|
Definition
| The review of work by other people in order to enhance the quality of your research. |
|
|
Term
|
Definition
| Assumption used as the first step in statistical inference where the IV is said to have no effect. |
|
|
Term
|
Definition
| Observer's expectancies regarding the outcome of the study cause systematic errors |
|
|
Term
|
Definition
| A score outside the main body of the data; an extreme value that does not fit the rest. |
|
|
Term
|
Definition
| Participants may act in a way that they think the researcher expects them to. |
|
|
Term
|
Definition
| The group that is not treated but receives a sugar pill as a "fake" treatment. |
|
|
Term
|
Definition
| When the control group acts a certain way not because they are being treated, but because they think they are being treated. |
|
|
Term
|
Definition
| A sugar pill and/or fake treatment |
|
|
Term
|
Definition
| When someone takes another's research without permission or without citation. |
|
|
Term
|
Definition
| The people that the study is interested in as a whole |
|
|
Term
|
Definition
| Probability that a false null hypothesis will be rejected. |
|
|
Term
|
Definition
| When a participant starts acting a particular way because the steps of the study are repeated over and over. |
|
|
Term
|
Definition
| Research that seeks to determine whether a change proposed by an institution, government agency, etc. is needed and likely to have an effect. |
|
|
Term
|
Definition
| An internet page that includes scholarly articles and research |
|
|
Term
|
Definition
| A set of predetermined questions for all respondents that serves as the primary research instrument in survey research. |
|
|
Term
|
Definition
| Influence that an observer has on the behavior under observation; behavior differs when the observer is present versus when they are not. |
|
|
Term
|
Definition
| Statistical regression can occur when individuals have been selected to participate because of "extreme" scores |
|
|
Term
|
Definition
| A measurement is reliable when it is consistent. |
|
|
Term
|
Definition
| Repeating the exact procedures used in an experiment to determine whether the same results are obtained. |
|
|
Term
|
Definition
| A subset of the population |
|
|
Term
|
Definition
| Threat to the representativeness of a sample that occurs when the selection procedure overrepresents or underrepresents the population. |
|
|
Term
|
Definition
|
|
Term
|
Definition
| When the participant reports items in the study rather than the observer reporting. |
|
|
Term
|
Definition
| Effects on performance in one condition due to experience with previous conditions. |
|
|
Term
|
Definition
| When one IV affects one DV. |
|
|
Term
|
Definition
| Each person has an equal opportunity of being selected for the study. |
|
|
Term
|
Definition
| When the participants do not know if they are given the experimental treatment or the control treatment. |
|
|
Term
|
Definition
| Pressures on participants to answer as they think they should in order to be socially acceptable, rather than in accordance with what they actually believe. |
|
|
Term
|
Definition
| What exists when evidence falsely indicates that two or more variables are related. |
|
|
Term
|
Definition
| Indicates how far, on average, scores differ from the mean (see the formula below). |
|
|
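For reference (standard formula, not part of the original card), the sample standard deviation is

    s = \sqrt{ \frac{\sum (X - \bar{X})^2}{N - 1} }

i.e., roughly the average distance of scores from the mean.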
Term
| STANDARD ERROR OF THE MEAN |
|
Definition
| An estimate of how much error there is in estimating the population mean based on the sample mean (see the formula below). |
|
|
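For reference (standard formula, not part of the original card):

    SEM = \frac{s}{\sqrt{N}}

where s is the sample standard deviation and N is the sample size; larger samples give smaller SEMs and therefore more precise estimates of the population mean.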
Term
|
Definition
| When the relationship between two variables is unlikely to be due to chance; the probability that error variation alone produced it is small. |
|
|
Term
|
Definition
| Technique that visualizes general features of data as well as specific features |
|
|
Term
| STRATIFIED RANDOM SAMPLING |
|
Definition
| Random sampling in which the population is divided into subpopulations (strata) and random samples are drawn from each. |
|
|
Term
|
Definition
| A variety of observational methods in which the researcher exerts some degree of control. |
|
|
Term
|
Definition
|
|
Term
|
Definition
| The group of people that the study hopes to represent. |
|
|
Term
|
Definition
| Testing can threaten internal validity if the effect of a treatment cannot be separated from the effect of testing. |
|
|
Term
|
Definition
| Organized system of assumptions and principles to explain a set of phenomena. |
|
|
Term
| THREAT TO EXTERNAL VALIDITY |
|
Definition
| selection bias, volunteer bias, cross species generalization |
|
|
Term
| THREAT TO INTERNAL VALIDITY |
|
Definition
| maturation, history, testing, sequence effects, regression to mean, attrition, assignment bias, instrumentation |
|
|
Term
|
Definition
| Characteristic or value that can be changed or affected. |
|
|
Term
|
Definition
| Control group that does not receive any treatment but is instead put on a waitlist until the study is over. |
|
|
Term
| WHAT MAKES PSYCHOLOGY A SCIENCE? |
|
Definition
| Psychology uses theories and research to answer questions about behavior; it follows the scientific method and approaches questions empirically and with skepticism. |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN EXPERIMENTAL AND NON EXPERIMENTAL RESEARCH? |
|
Definition
| Non-experimental research only measures variables; experimental research both measures and manipulates variables. |
|
|
Term
| WHAT ARE THE CHARACTERISTICS OF A TRUE EXPERIMENT? |
|
Definition
| Manipulation of the IV, control of confounding variables, random assignment, and comparison of groups on the DV to assess the effect of the IV. |
|
|
Term
| WHEN IS NON EXPERIMENTAL RESEARCH BETTER THAN A TRUE EXPERIMENT? |
|
Definition
| When it is unethical or impossible to manipulate the IV, or when research is at an early stage. |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN QUANTITATIVE AND QUALITATIVE RESEARCH? |
|
Definition
| Quantitative research refers to studies in which the data are numerical and analyzed statistically. Qualitative research refers to studies based on non-numerical data such as narrative descriptions. |
|
|
Term
| GIVE AN EXAMPLE OF DESCRIPTIVE RESEARCH. |
|
Definition
| Observational methods, survey research, unobtrusive measures of behavior. |
|
|
Term
| WHAT ARE THE ELEMENTS OF THE SCIENTIFIC METHOD? |
|
Definition
| observation, reporting, concepts, instruments, measurement, hypothesis. |
|
|
Term
| WHAT ARE SOME CHARACTERISTICS OF A GOOD HYPOTHESIS? |
|
Definition
| Control, unbiased, precise scientific instruments |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN APPLIED RESEARCH AND BASIC RESEARCH? |
|
Definition
| Applied research addresses a specific practical problem or purpose; basic research is conducted to gain general knowledge. |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN EMPIRICAL RESEARCH AND LIBRARY RESEARCH? |
|
Definition
| Empirical research gathers information by conducting a study; library research gathers information from other people's studies. |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN A PRIMARY SOURCE AND A SECONDARY SOURCE? |
|
Definition
| A primary source is the research article in which the author collected the data; a secondary source is someone else's summary of that research. |
|
|
Term
| WHAT ARE THE MAJOR SECTIONS OF A RESEARCH PAPER OR JOURNAL ARTICLE? |
|
Definition
| Title Page, Abstract, Introduction, Method, Results, Discussion, References, and Appendices. |
|
|
Term
| WHEN AND HOW DO YOU CITE JOURNAL ARTICLES? |
|
Definition
| You cite articles in APA format. You cite articles when you use someone else's ideas or quote them directly. |
|
|
Term
| WHAT ARE SOME SIMILARITIES AND DIFFERENCES BETWEEN A PSYCHIC AND A RESEARCHER? |
|
Definition
| Similarities: both make predictions and both observe. Differences: researchers use systematic research methods, whereas psychics rely on pseudoscience. |
|
|
Term
| WHAT ARE SOME OBSTACLES TO ACCURATE MEASUREMENT? |
|
Definition
| Problems with validity, reliability, and range effects (e.g., a very hard test on which no one does well). |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN OPEN-ENDED AND CLOSED-ENDED QUESTIONS? HOW DOES THIS RELATE TO QUALITATIVE AND QUANTITATIVE RESEARCH? |
|
Definition
| Open-ended questions let participants answer in their own words, while closed-ended questions have participants pick from given choices. Open-ended questions are usually qualitative and closed-ended questions usually quantitative. |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN A MEASURED OPERATIONAL DEFINITION AND AN EXPERIMENTAL OPERATIONAL DEFINITION? |
|
Definition
A measured operational definition is the procedure used to measure a variable, e.g., a depression inventory.
An experimental operational definition is the procedure used to create the treatment conditions, e.g., "distraction" was created by playing three beeps. |
|
|
Term
| WHAT ARE THE ADVANTAGES AND DISADVANTAGES OF LAB RESEARCH, NATURALISTIC STUDIES, AND FIELD STUDIES? |
|
Definition
Lab Research Adv/Dis: control, allows use of sophisticated equipment. BUT decreased external validity
Naturalistic Studies Adv/Dis: study behavior without the ethical concerns raised by manipulation, increased external validity. BUT people may behave differently if they know they are being watched.
Field Studies Adv/Dis: participants have less awareness that they are being observed. BUT environmental distractions |
|
|
Term
| HOW DO WE REDUCE BIAS IN RESEARCH? |
|
Definition
| The observer should try to remain as unbiased as possible; double-blind studies may also be used to reduce bias. |
|
|
Term
| WHAT ARE THE TYPES OF RELIABILITY AND HOW ARE THEY MEASURED? |
|
Definition
Interobserver Reliability- two independent observers are in agreement
Test-Retest Reliability- Participants are tested on separate occasions to see if their scores are similar
Parallel Forms Reliability- Comparing alternate versions of test to find consistency
Internal Consistency- Items of a questionnaire or measure should be measuring the same thing. |
|
|
Term
| LIST AND DEFINE THE DIFFERENT TYPES OF VALIDITY WITH RESPECT TO MEASUREMENT. |
|
Definition
Face Validity: The extent to which the items clearly reflect what the test is measuring.
Content Validity: The extent to which the items represent the "universe" of behaviors the test is attempting to measure. (general test: math, history, english etc.)
Predictive Validity: How well a measure predicts a participant's behavior
Concurrent Validity: How well a measure compares to current criterion
Construct Validity: The extent to which the test measures the construct it is supposed to measure. |
|
|
Term
| CAN YOU HAVE VALIDITY WITHOUT RELIABILITY (OR VICE VERSA) WHY OR WHY NOT? |
|
Definition
It's possible to have reliability without validity. You can consistently measure something other than what you think you are measuring. I.E. you can consistently measure depression while thinking you are measuring anxiety.
You can't have validity without reliability. You cannot be accurate without first being consistent. |
|
|
Term
| DEFINE THE TWO TYPES OF RANGE EFFECTS (CEILING AND FLOOR EFFECTS) |
|
Definition
| Measurement problem in which scores cluster at the maximum (ceiling) or minimum (floor) and cannot move further, making it hard to detect the effect of the IV or a possible interaction. |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN OMISSION AND COMMISSION WHEN IT COMES TO DECEPTION? |
|
Definition
| Commission is when you tell the participant something that is false. Omission is when you leave information out about the experiment. |
|
|
Term
ETHICAL ISSUES OF:
Stanford Prison Experiment
The Milgram Electric Shock Study
Tuskegee Experiments
Watson's Little Albert Study |
|
Definition
Stanford Prison Experiment: guards became sadistic
The Milgram Electric Shock Study: participants realized the danger in their compliance
Tuskegee Experiments: the control group was denied treatment
Watson's Little Albert Study: the boy was left with a conditioned fear that was never removed |
|
|
Term
| HOW DO WE ENSURE WE ARE CONDUCTING ETHICAL RESEARCH? |
|
Definition
- Weigh benefits vs. risks
- Only use deception when absolutely necessary
- IRB approval
- Voluntary participation
- Informed consent
- Confidentiality
- Debriefing |
|
|
Term
| WHAT ARE SOME TIPS FOR OBTAINING INFORMED CONSENT? |
|
Definition
| Avoid technical terms, Avoid use of first person, Describe what the overall experience of the study will be, State benefits, State alternatives to being a participant, State confidentiality, Legal rights must not be waived, Must have contact person, Statement of voluntary participation |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN A CODED DATA SET AND A DE-IDENTIFIED DATA SET? |
|
Definition
| A coded data set identifies variables (i.e. gender) with criteria such as numbers. A de-identified data set does not identify these variables with criteria. |
|
|
Term
| WHAT ARE SOME ISSUES THAT ARE RAISED WHEN INCLUDING NO-TREATMENT OR WAITLIST CONTROL GROUPS IN EXPERIMENTAL RESEARCH? |
|
Definition
| Participants in these groups do not receive treatment (or receive it only after a delay), which raises ethical concerns when effective treatment is being withheld from people who need it. |
|
|
Term
| WHAT ARE THE FOUR SCALES OF MEASUREMENT? |
|
Definition
| Nominal (named categories), Ordinal (rank-ordered), Interval (equal intervals but no true zero, e.g., temperature), Ratio (true zero, e.g., number of clients in 6 months) |
|
|
Term
| WHAT IS A SUCCESSIVE INDEPENDENT SAMPLES DESIGN (AND GIVE EXAMPLE)? |
|
Definition
| A survey design in which different samples of respondents from the same population complete the survey at different points in time, e.g., surveying a new sample of students each fall to track attitudes across years. |
|
|
Term
| CAN CORRELATIONAL RESEARCH ESTABLISH CAUSATION? |
|
Definition
| Correlation does not mean causation. Variables may be related but that does not mean that one variable caused another. Confounding variables could be another source for the relationship that must be considered. |
|
|
Term
| IS CORRELATIONAL RESEARCH EXPERIMENTAL OR NON EXPERIMENTAL? |
|
Definition
| Non-experimental because we are not manipulating the variables. |
|
|
Term
| IN TERMS OF AN INDEPENDENT AND DEPENDENT VARIABLE, WHICH ONE DO WE MANIPULATE AND WHICH ONE DO WE MEASURE? |
|
Definition
| Manipulate IV and measure DV |
|
|
Term
| LIST AND DESCRIBE THE DIFFERENT TYPES OF SAMPLING. |
|
Definition
Random Sampling: each member of the population has an equal chance of being part of the sample
Stratified Random Sampling: random sampling that includes subpopulations (strata) to represent the population better
Quota Sampling: similar to stratified sampling, but uses convenience sampling rather than random sampling within each group
Cluster Sampling: selecting preexisting groups, e.g., classrooms
Convenience Sampling: selecting readily available participants (see the sketch below) |
|
|
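A minimal sketch contrasting simple random and stratified random sampling; the population, strata, and sample sizes below are invented for illustration:

    import random

    # Invented population of 100 students labeled by class year
    population = [{"id": i, "year": random.choice(["freshman", "senior"])} for i in range(100)]

    # Simple random sampling: every member has an equal chance of selection
    simple_sample = random.sample(population, 10)

    # Stratified random sampling: sample randomly within each subpopulation (stratum)
    strata = {}
    for person in population:
        strata.setdefault(person["year"], []).append(person)
    stratified_sample = [p for group in strata.values()
                         for p in random.sample(group, min(5, len(group)))]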
Term
| HOW CAN WE INCREASE THE REPRESENTATIVENESS OF A SAMPLE? WHAT DOES THIS HAVE TO DO WITH GENERALIZABILITY? |
|
Definition
| Use stratified random sampling, create the same conditions for every group, and control confounding variables as well as you can; the more representative the sample, the better the results generalize to the population. |
|
|
Term
| WHAT DOES THE P VALUE TELL YOU? |
|
Definition
| The probability of obtaining results at least as extreme as those observed if the null hypothesis were true; when p falls below the alpha level (commonly .05), the relationship between the variables is called statistically significant (see the sketch below). |
|
|
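A minimal sketch of reading a p value from an independent-samples t test; the data are invented and scipy.stats.ttest_ind is used for the test:

    from scipy import stats

    treatment = [5.1, 6.3, 5.8, 6.0, 5.5]  # invented scores
    control = [4.2, 4.8, 5.0, 4.5, 4.9]

    result = stats.ttest_ind(treatment, control)
    print(result.pvalue)  # if p < .05 (the usual alpha level), call the difference significant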
Term
| DESCRIBE SOME REASONS WHY A STUDY MIGHT END UP WITH A BIASED SAMPLE? |
|
Definition
| Random sampling was not done, sample is not representative of the population, there is a convenience sample |
|
|
Term
| DESCRIBE SOME OF THE THREATS TO INTERNAL AND EXTERNAL VALIDITY |
|
Definition
Internal Validity: confounding variables, unrepresentative sample, bias
External Validity: selection bias, volunteer bias, cross species generalization |
|
|
Term
| WHAT ARE SOME PROBLEMS WITH USING A PRETEST-POSTTEST DESIGN? |
|
Definition
| maturation, history, testing, order effects, regression to the mean, attrition |
|
|
Term
| HOW CAN YOU INCREASE EXTERNAL VALIDITY? WHAT TYPES OF STUDIES HAVE HIGH EXTERNAL VALIDITY & AT WHAT COST? |
|
Definition
| Increase sample size and use multiple sites to get participants, random sampling, field study, create a duplicate of the setting you're interested in. These types of studies are extremely expensive |
|
|
Term
| HOW CAN YOU INCREASE INTERNAL VALIDITY? WHAT TYPES OF STUDIES HAVE HIGH INTERNAL VALIDITY AND AT WHAT COST? |
|
Definition
| Add adequate controls to reduce or eliminate confounding variables (e.g., assigning people to groups randomly). Such studies are less expensive, but tight control can come at the cost of external validity. |
|
|
Term
| WHAT ARE THE BASIC CHARACTERISTICS OF A BETWEEN GROUPS OR INDEPENDENT GROUPS DESIGN? |
|
Definition
| Participants are split up equally into different groups and all receive a DIFFERENT treatment only once. |
|
|
Term
| WHAT ARE THE BASIC CHARACTERISTICS OF A REPEATED MEASURES DESIGN? |
|
Definition
| Each participant is exposed to more than one condition |
|
|
Term
| WHAT ARE THE THREE KINDS OF GENERALIZATION? |
|
Definition
1. From sample to general population
2. From one research study to another research study
3. From a study to a real world situation |
|
|
Term
| HOW DO YOU MINIMIZE CONFOUNDING VARIABLES IN RESEARCH? |
|
Definition
| Adding more controls to the experiment (ex: both groups in same type of room with same temperature and same observer, EVEN though that isn't what you are testing) |
|
|
Term
| WHAT IS RANDOM ASSIGNMENT AND WHY IS IT HELPFUL IN RESEARCH? |
|
Definition
| Every participant has an equal chance of being placed in any condition. It equates the groups at the start of the experiment, which reduces assignment bias and increases internal validity. |
|
|
Term
| DESCRIBE A MATCHED GROUPS DESIGN. |
|
Definition
| The researcher forms comparable groups by matching subjects on a pretest task and then randomly assigning the members of these matched sets of subjects to the conditions of the experiment. |
|
|
Term
| WHAT IS A QUASI EXPERIMENTAL DESIGN AND HOW IS IT DIFFERENT THAN AN EXPERIMENTAL DESIGN? |
|
Definition
| It is like a true experiment but lacks the degree of control found in true experiments. |
|
|
Term
| WHAT ARE THE CHARACTERISTICS OF A GOOD INTRODUCTION IN A PAPER OR ARTICLE? |
|
Definition
| Introduce the broader issue your study addresses, summarize the main message of what other authors have found, and emphasize the relevant research findings. |
|
|
Term
| WHAT ARE THE SUB HEADINGS IN A METHOD SECTION? |
|
Definition
| Participants, Materials, Procedure |
|
|
Term
| WHAT DOES THE RESULTS SECTION OF A RESEARCH ARTICLE INCLUDE? |
|
Definition
| State hypotheses, how they were tested, whether or not they were supported, main effect vs. interaction effect, refer to tables |
|
|
Term
| WHAT DOES THE DISCUSSION SECTION OF THE ARTICLE INCLUDE? |
|
Definition
| An overall summary of findings. Limitations of the study and suggestions for further studies. |
|
|
Term
| WHAT IS THE DIFFERENCE BETWEEN NON PROBABILITY SAMPLING AND PROBABILITY SAMPLING? |
|
Definition
| In non-probability sampling there is no way to estimate the probability of each element being included in the sample (e.g., convenience sampling). In probability sampling, each element of the population has a known probability of being included in the sample. |
|
|
Term
| WHAT IS COHEN'S d AND WHAT ARE THE 3 EFFECT SIZES? |
|
Definition
| The difference between the means of two conditions divided by the pooled (average) variability of participants' scores; conventional small, medium, and large effect sizes are .2, .5, and .8 (see the formula below). |
|
|
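For reference (standard formula, not part of the original card):

    d = \frac{M_1 - M_2}{s_{\text{pooled}}}

where M_1 and M_2 are the condition means and s_pooled is the pooled standard deviation; values near .2, .5, and .8 are conventionally read as small, medium, and large effects.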
Term
| WHEN WOULD YOU COMPUTE A PEARSON CORRELATION? |
|
Definition
| When you want to describe the strength and direction of the linear relationship between two continuous (interval or ratio) variables. |
|
Term
| WHAT IS THE ABBREVIATION FOR PEARSON CORRELATION? |
|
Definition
| r |
|
Term
| WHAT DOES A POSITIVE AND NEGATIVE CORRELATION TELL YOU? (GIVE EXAMPLES) |
|
Definition
A positive correlation means the two variables increase and decrease together (e.g., taller people reporting higher self-esteem); a negative correlation means that as one variable increases the other decreases (e.g., more absences, lower exam scores).
|
|
|