Term
| Maturation (threat to internal validity/research progression effect) |
|
Definition
| changing psychological processes affect your participants' behavior or beliefs |
|
|
Term
| History (threat to internal validity/research progression effect) |
|
Definition
some incidental event affected the results of the study. Example: studying opinions on gun control when a shooting occurs |
|
|
Term
| Mortality (threat to internal validity/research progression effect) |
|
Definition
| Sometimes people die in the middle of your study (not usually an issue in cross-sectional research, but in longitudinal); can refer to actual death or simply to people dropping out |
|
|
Term
| Statistical Regression (threat to internal validity/research progression effect) |
|
Definition
| Settling-in effects. Over time, people on the extremes tend to become less extreme. |
|
|
Term
| Research progression effects (threat to internal validity) |
|
Definition
1) Maturation 2) History 3) Mortality 4) Statistical Regression |
|
|
Term
| Threats to Internal Validity |
|
Definition
1) Research Progression Effect 2) Reactivity Effects |
|
|
Term
| Demand Characteristics (threat to internal validity/reactivity effect) |
|
Definition
Cues that suggest to the participant how they should respond or behave. Example: leading questions such as "Don't you think abortion is wrong?" |
|
|
Term
| Evaluation Apprehension (threat to internal validity/reactivity effect) |
|
Definition
| test anxiety, or how worried the participants are about being evaluated |
|
|
Term
| Compensatory Rivalry (threat to internal validity/reactivity effect) |
|
Definition
because participants in a control group are deprived of something, they may try to make up for it by competing harder with the experimental group. (The related threat of compensation is when the researchers make up for the deprivation: if researchers know who is getting the placebo, they might be extra nice to the people not getting the treatment) |
|
|
Term
| Reactivity Effects |
|
Definition
1) Demand Characteristics 2) Evaluation Apprehension 3) Compensatory Rivalry 4) Researcher Effects 5) Test sensitization or testing effects |
|
|
Term
| Researcher Effects (threat to internal validity/reactivity effect) |
|
Definition
physical or psychological characteristics of the researcher. Examples: gender, attractiveness |
|
|
Term
| Test sensitization or testing effects (threat to internal validity/reactivity effect) |
|
Definition
| A heightened awareness of the issue being tested. |
|
|
Term
| Sampling Deficiency Effects (threat to internal validity) |
|
Definition
1) Sample selection problems 2) Sample assignment problems |
|
|
Term
| Sample Selection Problems (threat to internal validity/Sampling deficiency effects) |
|
Definition
Did the fact that certain people selected into the sample affect the result?
Examples: Kinsey's sex studies, or Anita's rectal exam example |
|
|
Term
| Sample assignment problems (threat to internal validity/sample deficiency effects) |
|
Definition
Is your assignment of people to groups or cells problematic?
Example: grouping participants in their order of arrival |
|
|
Term
| The Strengths of Survey Research- Self-Administered Questionnaires |
|
Definition
1) Cheap/Economical 2) More convenient (can choose when to do it) 3) Relatively quick 4) No worries about interviewer bias 5) a lot of anonymity |
|
|
Term
| The Strengths of Survey Research- Interview Surveys |
|
Definition
1) fewer questions left blank 2) fewer misunderstandings 3) higher return rates 4) greater flexibility (can take notes about responses and make observations, and give directions) |
|
|
Term
| General Advantages of Survey Methods |
|
Definition
1) Economy 2) Efficiency (a lot of data, quickly) 3) Can keep things pretty standardized (compared to messiness of observations) |
|
|
Term
| General Disadvantages of Self-Report/Survey Methods |
|
Definition
1) artificial 2) superficial 3) tough to capture social processes |
|
|
Term
| Field Research (the scope of) |
|
Definition
| involves the direct observation of social phenomena in their NATURAL settings |
|
|
Term
| When is Field Research appropriate? |
|
Definition
1) for events or processes that seem to defy quantification 2) topics concerning attitudes or behaviors that are best understood in their natural setting 3) studying social processes over time |
|
|
Term
| Dangers of Complete Participation as a researcher |
|
Definition
| 1) the researcher may affect what is being studied 2) "going native" (over-identifying with those observed) 3) ethical problems of deception |
|
Term
| Possible roles of the observer in field research |
Definition
1) Complete Participant 2) Participant as Observer |
|
|
Term
| Participant as Observer |
Definition
| Researcher identifies herself (like newspaper reporter) |
|
|
Term
| Complete observer "potted plant" |
|
Definition
| Observes processes without taking part in them |
|
|
Term
| How do you do sampling in field research? |
|
Definition
1) quota sample (set quota beforehand) 2) snowball sample (ex: homeless) 3) Deviant cases (find examples that don't fit the norm) 4) Purposive (observe those that you think will give you particularly relevant data) |
|
|
Term
| Notes about keeping notes |
|
Definition
1) Take your notes as soon as possible after your observation so you don't lose detail 2) rewrite your notes as soon as possible so you can add more details 3) make copies of your notes 4) be creative about using your computer (cut and paste, mindmaps, etc) 5) always keep one copy of your original data, make different files for your manipulated data |
|
|
Term
| Strengths of Field Research |
|
Definition
1) Inexpensive in terms of $$ 2) Flexibility (can modify design at any time) 3) Rich and valid data |
|
|
Term
| Weaknesses of Field Research |
|
Definition
1) does not yield precise descriptive statements about large populations (because you aren't looking at large populations) 2) conclusions often suggestive rather than definitive (unable to generalize); looking at one thing at a time (missing others) 3) not as reliable a source of data as other methods |
|
|
Term
| Descriptive statistics |
Definition
| Procedures for summarizing the characteristics of sample data |
|
|
Term
| Inferential statistics |
Definition
| involve a system for estimating population characteristics based on sample descriptions |
|
|
Term
| Distributions (descriptive stats) |
|
Definition
how your data are distributed. different types of distributions: 1) Flat distribution 2) Skewed to the right 3) Skewed to the left 4) U shaped |
|
|
Term
| Flat distribution |
Definition
| Straight line (fairly equal distribution of scores across the board) |
|
|
Term
| Skewed to the right (distribution) |
|
Definition
| A disproportionate number of people are more [blank] than not |
|
|
Term
| Skewed to the left distribution |
|
Definition
| More people are less [blank] than not |
|
|
Term
| U-shaped distribution |
Definition
| scores tend toward the extremes, not the middle (ex)People either love it or they hate it. |
|
|
Term
| Measures/Indices of central tendency |
|
Definition
1) Mean 2) Median 3) Mode (most common) |
|
|
Term
| Mean (measure/index of central tendency) |
|
Definition
| Sum of items/ number of items |
|
|
Term
| Median (measure of central tendency) |
|
Definition
| The midpoint in the distribution. |
|
|
Term
| Mode (measure of central tendency) |
|
Definition
|
|
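The three indices of central tendency above can be checked with Python's statistics module; the ages here are hypothetical illustration data, not from the course:

```python
from statistics import mean, median, mode

ages = [16, 17, 20, 20, 54, 88]  # hypothetical sample

print(mean(ages))    # sum of items / number of items
print(median(ages))  # midpoint of the ordered distribution (20.0 here)
print(mode(ages))    # most frequently occurring value (20 here)
```

Note that with an even number of cases, `median` averages the two middle values.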
Term
| Standard deviation (index of dispersion) |
|
Definition
| square root of the variance |
|
|
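Since the standard deviation is just the square root of the variance, the relationship can be sketched in a few lines of Python (the scores are made-up illustration data):

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores
n = len(scores)
m = sum(scores) / n  # mean

# Population variance: mean squared deviation from the mean
variance = sum((x - m) ** 2 for x in scores) / n
std_dev = math.sqrt(variance)  # standard deviation

print(variance, std_dev)  # 4.0 2.0
```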
Term
| 4 assumptions for inferential stats |
|
Definition
1) all sample data are to be selected randomly 2) the characteristics of each random sample are related to true population parameters 3) multiple random samples from the same population cluster around true population parameters in predictable ways 4)we can calculate the sampling error associated with a sample statistic |
|
|
Term
| The Sampling Distribution of Sample Means |
|
Definition
-The mean is equal to the mean of the parent population. -The standard deviation is related to the standard deviation of the parent population. -It's normally distributed if the parent population is normally distributed. -Even if the parent population is not normally distributed, the sampling distribution is approximately normal if you have a large n. |
|
|
Term
| Sampling Distribution of Sample Means |
|
Definition
| A theoretical distribution consisting of the mean scores of all possible samples (of a given size) from a population |
|
|
Term
| Standard error |
Definition
| The expected deviation of a sample mean from the sampling distribution mean. It is equal to the standard deviation of the sampling distribution (the parent population's standard deviation divided by the square root of n) |
|
|
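The idea that sample means cluster around the population mean with spread sigma/sqrt(n) can be illustrated by simulation; the population parameters below are arbitrary assumptions for the sketch:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

pop_mean, pop_sd = 100.0, 15.0  # hypothetical parent population
n, trials = 25, 2000            # sample size and number of samples

# Draw many random samples of size n; keep each sample's mean
sample_means = [
    statistics.mean(random.gauss(pop_mean, pop_sd) for _ in range(n))
    for _ in range(trials)
]

observed_se = statistics.stdev(sample_means)  # spread of the sample means
expected_se = pop_sd / n ** 0.5               # sigma / sqrt(n) = 3.0
print(observed_se, expected_se)
```

The spread of the simulated sample means comes out close to sigma/sqrt(n), which is what the standard error measures.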
Term
| Two-tailed test |
Definition
| harder to get significance than for a one-tailed test |
|
|
Term
| The classical experiment |
Definition
| IVs and DVs: An experiment examines the effect of an IV on a DV. Usually, the IV takes the form of an experimental stimulus, which is either present or not present. The experimenter then compares what happens when the stimulus is present to what happens when it is not. Examines causal processes. |
|
|
Term
| Pretest |
Definition
| The measurement of a dependent variable among subjects |
|
|
Term
| Posttest |
Definition
| The measurement of a dependent variable among subjects after they’ve been exposed to an independent variable |
|
|
Term
| Experimental group |
Definition
| In experimentation, a group of subjects to whom an experimental stimulus is administered |
|
|
Term
| Double-blind experiment |
Definition
| An experimental design in which neither the subjects nor the experiments know which is the experimental group and which is the control |
|
|
Term
| Probability sampling (in experiments) |
Definition
| rarely used in experimental design because you need at least 100 participants. However, you do want your sample to match a larger population. |
|
|
Term
| Randomization |
Definition
| A technique for assigning experimental subjects to experimental and control groups randomly. |
|
|
Term
| Matching |
Definition
| In connection with experiments, the procedure whereby pairs of subjects are matched on the basis of their similarities on one or more variables, and one member of the pair is assigned to the experimental group and the other to the control group |
|
|
Term
| Preexperimental research designs: One-shot case study |
|
Definition
| the researcher measures a single group of subjects on a DV following the administration of some experimental stimulus |
|
|
Term
| Preexperimental research design: One-group pretest-posttest design |
|
Definition
| adds a pretest for the experimental group, but lacks a control group. |
|
|
Term
| Preexperimental research design: Static-group comparison |
Definition
| research based on experimental and control groups, but no pretests. |
|
|
Term
| Internal invalidity |
Definition
| refers to the possibility that the conclusions drawn from experimental results may not accurately reflect what went on in the experiment itself. |
|
|
Term
| Sources of internal invalidity (12): |
|
Definition
1. history 2. maturation 3. testing 4. instrumentation 5. statistical regression 6. selection biases 7. experimental mortality 8. causal time order 9. diffusion or imitation of treatments (contamination of the control group) 10. compensation 11. compensatory rivalry 12. demoralization ***The classical experiment with random assignment of subjects guards against each of these problems. |
|
|
Term
| External invalidity |
Definition
refers to the possibility that conclusions drawn from experimental results may not be generalizable to the “real” world. 1. The interaction of testing and stimulus is an example of external invalidity that the classical experiment does not guard against 2. With proper randomization, there is no need for pretesting in experiments 3. Solomon four-group design |
|
|
Term
| Solomon four-group design |
|
Definition
1. Group 1: pretest, stimulus, posttest 2. Group 2: pretest, no stimulus, posttest 3. Group 3: no pretest, stimulus, posttest 4. Group 4: no pretest, no stimulus, posttest |
|
|
Term
| Alternative experimental settings |
|
Definition
1. web-based experiments 2. “natural” experiments (e.g. studies after hurricanes or other tragedies) |
|
|
Term
| Strengths and weaknesses of the experimental method |
|
Definition
1. Strengths: primary way to study causal relationships, ability to isolate the experimental variable, need little time/money/participants, easy to replicate, logical rigor 2. Weaknesses: artificiality |
|
|
Term
| Ethical issues in experiments |
Definition
1. Deception is involved. Must decide (1) if the particular deception is essential to the experiment and (2) whether the value of what may be learned from the experiment justifies the ethical violation 2. Experiments are intrusive. Must weigh the potential value of the research against the potential damage to subjects |
|
|
Term
| Interpretive research design |
Definition
| focuses on the study of meanings, sense-making |
|
|
Term
| Functional research design |
|
Definition
| assumes people's meanings and then looks at subsequent effects |
|
|
Term
| Experimental research design |
|
Definition
| does not mean conducted in a lab. It means you manipulated one or more variables. |
|
|
Term
| Naturalistic research design |
|
Definition
| means that you do not manipulate variables. |
|
|
Term
| Laboratory research v. field research |
|
Definition
| are you watching people engage in interactions in a lab or do you go out into the field (restaurant, house, anywhere). In the field, you’re going to them. |
|
|
Term
| Participant observation |
Definition
| You, the researcher are a participant |
|
|
Term
| Basic (academic) research |
|
Definition
| when you ask a question based on some theoretical reason. Purpose is to advance knowledge. |
|
|
Term
| Applied research |
Definition
| more practical than basic research |
|
|
Term
| Threats to internal validity |
|
Definition
1) Research progression effects 2) Reactivity effects (has to do with testing) 3) Sampling deficiency |
|
|
Term
| Two common threads in discourse analysis |
|
Definition
1. A commitment to the study of connected texts 2. The function of language -- a concern with how people use language to accomplish social purposes |
|
|
Term
| Five distinct ways to approach or use discourse |
|
Definition
1. Ethno-methodological (aka conversation analysis) 2. Formal/structural 3. Culturally-focused 4. Discourse processing 5. Discourse and identity |
|
|
Term
| Discourse processing |
Definition
1. Interested in the modeling and experimental testing of theories about how text is comprehended and, to a lesser degree, produced. 2. also includes computer simulations of natural language processes |
|
|
Term
| Five criteria differentiating approaches to discourse |
|
Definition
1. Starting point for research 2. Text type studied 3. Transcription detail 4. Reliance on nontext information 5. Role for quantification |
|
|
Term
| Topics appropriate for Survey Research: |
|
Definition
· May be used for descriptive, explanatory, and exploratory purposes · Used in studies that have individual people as the unit of analysis · Some individual persons must serve as respondents or informants |
|
|
Term
| Respondent |
Definition
a person who provides data for analysis by responding to a survey questionnaire
· Survey research is the best method available to the social researcher who is interested in collecting original data for describing a population too large to observe directly |
|
|
Term
| Questionnaire |
Definition
| instrument designed to elicit information that will be useful for analysis |
|
|
Term
| Question forms: Questions and Statements |
|
Definition
· Both questions and statements can be used profitably
· Gives you more flexibility in the design of items and can make the questionnaire more interesting as well |
|
|
Term
| Question forms: Open-ended |
|
Definition
| Questions for which the respondent is asked to provide their own answers (qualitative questions). Must be coded before analysis |
|
|
Term
| Question forms: Close-ended |
Definition
-survey questions in which the respondent is asked to select an answer from among a list provided by the researcher -provide greater uniformity of responses and are more easily processed |
|
|
Term
| Creation of close-ended questions should be: |
|
Definition
| exhaustive & mutually exclusive |
|
|
Term
| Exhaustive |
Definition
| all the possible responses that might be expected should be provided |
|
|
Term
| Mutually exclusive |
Definition
| respondent should not feel compelled to select more than one |
|
|
Term
| Make items clear |
Definition
-items need to be clear and unambiguous -items should be precise so the respondent knows what the researcher is asking |
|
|
Term
| Avoid double-barreled questions: |
|
Definition
| whenever the word "and" appears, check against double-barreled questions |
|
|
Term
| Questions should be: |
Definition
1) clear 2) short 3) relevant 4) phrased positively 5) unbiased |
|
|
Term
| Bias (in questionnaire items) |
Definition
| any property of questions that encourages respondents to answer in a particular way |
|
|
Term
| Social desirability bias |
Definition
| people answer questions through a filter of what will make them look good |
|
|
Term
| General questionnaire format |
Definition
-should be spread out and uncluttered -boxes are useful -if circling items, include clear, prominent instructions |
|
|
Term
| Contingency question |
Definition
a survey question intended for only some respondents determined by their responses to some other question -can tell participants to skip ahead to the next set of questions if their answer is no |
|
|
Term
| Matrix questions |
Definition
-uses space efficiently -faster to complete -increases comparability of responses -can foster a response-set among respondents (answering all "agree") |
|
|
Term
| Three main methods of completing questionnaires |
|
Definition
1) Self-administered questionnaires (e.g., mail distribution) 2) Interview surveys (i.e., face-to-face encounters) 3) Telephone surveys |
|
|
Term
| Monitoring mail survey returns |
|
Definition
| researchers should undertake careful recording and graph the varying rates of return among respondents |
|
|
Term
| Follow-up mailings |
Definition
-can send nonrespondents an additional letter of encouragement to participate -send a new copy of the survey w/ letter -2-3 weeks between mailings |
|
|
Term
| Response rate |
Definition
The number of people participating in the survey divided by the number selected in the sample, in the form of a percentage -less chance of significant nonresponse bias with a high rate |
|
|
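The response-rate formula above is simple division; a tiny Python sketch with hypothetical counts:

```python
def response_rate(completed, sampled):
    """Number participating in the survey / number selected, as a percentage."""
    return 100.0 * completed / sampled

# Hypothetical mail survey: 1,000 questionnaires sent, 620 returned
print(response_rate(620, 1000))  # 62.0
```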
Term
| Interview |
Definition
| a data-collection encounter in which one person asks questions of another |
|
|
Term
| The role of the survey interviewer |
|
Definition
-higher response rate than mail -decreases "don't knows" -can provide clarification -can make observations and ask questions |
|
|
Term
| Guidelines for survey interviewing |
|
Definition
1. Appearance and Demeanor 2. Familiarity with Questionnaire 3. Following Question wording exactly 4. Recording responses exactly 5. Probing for responses |
|
|
Term
| Appearance and Demeanor |
Definition
| interviewers should dress in a fashion similar to that of the people they will be interviewing & should be pleasant |
|
|
Term
| Familiarity with Questionnaire |
|
Definition
| If an interviewer is unfamiliar with a questionnaire it takes longer and may be unpleasant for the respondent |
|
|
Term
| Following Question wording exactly |
|
Definition
| slight changes can yield different answers |
|
|
Term
| recording responses exactly |
|
Definition
| no attempt should be made to summarize, paraphrase or correct bad grammar |
|
|
Term
| Probing for responses |
Definition
| a neutral, nondirective way to encourage a respondent to elaborate on an incomplete or ambiguous answer (e.g., "Anything else?") |
|
Term
| Random-digit dialing (RDD) |
Definition
| a sampling technique in which random numbers are selected from within the ranges of numbers assigned to active telephones |
|
|
Term
| CATI (Computer assisted telephone interviewing) |
|
Definition
| a data collection technique in which a telephone-survey questionnaire is stored in a computer, permitting the interviewer to read the questions from the monitor and enter the answers on the keyboard |
|
|
Term
| Online Survey main concern |
|
Definition
| Representativeness is a concern |
|
|
Term
| Appropriateness of self-reports |
|
Definition
| generally appropriate to study interactional and other relational phenomena, but inappropriate for objective measure of actual interactional behavior. |
|
|
Term
| Appropriateness of self reports 2: |
|
Definition
Appropriate when the useful info is: 1) The gist of the conversation rather than the verbatim message 2) verbal rather than nonverbal behaviors 3) one's own messages rather than a partner's 4) the presence or absence of specific statements rather than frequencies of occurrences |
|
|
Term
| conducting qual field research |
|
Definition
1. Preparing the field: Build rapport 2. Qualitative Interviewing: Contrasted with survey interviewing, the qualitative interview is based on a set of topics to be discussed in depth rather than based on the use of standardized questions 3. Focus Groups: A group of subjects interviewed together, prompting a discussion 4. Recording Observations: Take notes! |
|
|
Term
| Strengths and weak of qual field research |
|
Definition
validity: tends to provide measures with greater validity than do survey and experimental measurements; reliability: often very personal (not reliable) |
|
|
Term
| Content analysis |
Definition
| Study of recorded human communication (magazines, TV shows, websites, newspapers, emails, etc.). Need to carefully define your unit of analysis (ex. writers vs. types of books), which drives your sampling strategy (all books by a certain author vs. type of book/by multiple authors) |
|
|
Term
| Coding in content analysis |
Definition
| can choose to code manifest content (visible, surface content) or latent content (underlying meaning) |
|
|
Term
| Strengths of Content Analysis |
|
Definition
| economical, allows for correction of errors, permits study of processes over time, minimal opportunity for researcher to have effect on subject studied, concreteness of documents results in high reliability. |
|
|
Term
| Weaknesses of Content Analysis |
Definition
| limited to recorded communications |
|
|
Term
| Analyzing existing statistics: examples |
Definition
| Gallup surveys, Statistical Abstract of the US. |
|
|
Term
| potential validity issues with analyzing existing stats |
|
Definition
| limited to available stats (see Durkheim example below); stats represent groups within the population - often don’t cover exactly what we’re interested in. |
|
|
Term
| potential reliability issues with analyzing existing stats |
|
Definition
| accuracy of data (ex. pressure on local enforcement – as well as recordkeeping practices - impacts what crime stats are tracked/recorded. Also, certain crimes are underreported). |
|
|
Term
| Comparative & historical research |
|
Definition
Uses historical methods to examine societies over time and in comparison with one another. Often informed by a particular theoretical paradigm. Ex: Weber's study of religious institutions as sources of social behavior (i.e., the degree to which religion set the foundation for capitalistic economies) |
|
|
Term
| Weber's study of religion: |
|
Definition
| By reviewing official church doctrinal documents, Weber found that Protestant/Calvinist religious teaching emphasized the accumulation and reinforcement of capital, whereas religions in China and India (as well as Judaism) did not - which is why, Weber reasoned, capitalism did not develop in the ancient cultures of China, India, and Israel. |
|
|
Term
| Comparative & historical analytic techniques |
|
Definition
| Researcher must be able to find patterns amongst voluminous amounts of data using conceptual models of social phenomena. For example, Weber studied bureaucracy to understand the essentials of bureaucratic operation. |
|
|
Term
| Comparative and historical research validity concern |
|
Definition
| Cannot assume that history as documented fully coincides with what actually happened. Also need to be aware of the biases in the data (do they represent primarily the views of a particular group of people – ie, wealthy, poor, etc.) |
|
|
Term
| experiment v. observation |
|
Definition
* Goal of experiments is to isolate the cause/effect relationship; actual interaction events may be greatly simplified in the interest of experimental control. * With observation, the primary research focus is on the details of the interaction events. |
|
|
Term
| self-report v. observation |
|
Definition
* Observation is conducted from a third-party perspective, while self-reports are generated from the participant's own perspective. * Self-reports also tend to measure more global aspects of behavior (observations report on more specific items); self-reports confound what people actually do with what they think they do; self-reports are more subjective, selective, and thematic. |
|
|
Term
| Advantages of natural events (v. staged) |
|
Definition
| Naturalistic observation provides realism, and allows participants to more quickly habituate to presence of observer since context is already familiar and hence focuses their attention (thereby diminishing effect of researcher’s presence). |
|
|
Term
| disadvantages of natural events (v. staged) |
|
Definition
* Range of situations that can be observed naturally is limited due to privacy issues. * "Houseguest effect": participants may be on their best behavior (more positive, less conflict). * Lack of standardization across observed situations makes comparison across samples difficult. * Hard to combine naturalistic observation with self-reports. |
|
|
Term
| advantages of staged events |
|
Definition
| role plays, games, conflict-generating tasks are more precise than natural events at eliciting the behavior of interest. |
|
|
Term
| disadvantages of staged events (v. natural) |
Definition
| cause people to disagree about things they wouldn’t, and they may be more or less expressive than normal. One solution is to have people discuss issues that occur in their own relationships. |
|
|
Term
| Sampling concerns in observational research |
Definition
| Tend to use convenience samples, which over-represent white, middle-class, college-educated people. Self-selection bias (people who choose to participate) adds to this issue – Babbie recommends quota sampling as one solution. |
|
|
Term
| Sampling behaviors in observational research |
Definition
* Behaviors exhibited during the study are affected by the size and diversity of the sample. It is helpful to increase observation time to address this concern (but you can then end up with an overwhelming amount of data). * Participant reactivity is also an issue, i.e., the degree to which participants will put forth socially desirable behavior or be more inhibited because of observation. These tendencies are difficult for participants to sustain over time, though. * Also, behaviors sampled tend to neglect the brief, unfocused, and variable encounters that dominate daily interaction. |
|
|
Term
| Quantitative analysis |
Definition
the numerical representation and manipulation of observations for the purpose of describing and explaining the phenomena that those observations reflect
1. Some data, such as age and income, are intrinsically numerical
2. Often quantification involves coding into categories that are then given numerical representations
a. Example “Student Concerns”
i. Advisors suck, tuition too high, not enough parking spaces, etc..
ii. These concerns can be categorized into Academic, Administrative, & Facilities |
|
|
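The "Student Concerns" example above can be sketched as a coding step in Python; the coding scheme and responses here are hypothetical stand-ins for the card's categories:

```python
from collections import Counter

# Hypothetical coding scheme mapping raw concerns to categories
code_scheme = {
    "advisors suck": "Academic",
    "tuition too high": "Administrative",
    "not enough parking spaces": "Facilities",
    "classes too large": "Academic",
}

responses = ["advisors suck", "tuition too high",
             "not enough parking spaces", "classes too large"]

# Quantification: each open-ended response becomes a category count
coded = Counter(code_scheme[r] for r in responses)
print(coded)
```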
Term
| Codebook |
Definition
| Document used in data processing and analysis that tells the location of different data items in a data file. The codebook identifies the locations of data items and the meaning of the codes used to represent different attributes of variables |
|
|
Term
| Purposes of the codebook |
Definition
1. Primary guide in coding process
a. Guide for locating variables & interpreting codes during analysis |
|
|
Term
| Univariate analysis |
Definition
The analysis of a single variable, for purposes of description rather than explanation.
a. Example: gender; we would look at how many of the subjects were men & how many were women. |
|
|
Term
| Frequency distribution |
Definition
description of the number of times the various attributes of a variable are observed in a sample
i. The report that 53% of a sample were men & 47% were women is an example of a frequency distribution |
|
|
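The 53%/47% example above can be reproduced directly; a minimal sketch:

```python
from collections import Counter

# Sample mirroring the card's example: 53 men, 47 women out of 100
sample = ["male"] * 53 + ["female"] * 47

counts = Counter(sample)  # frequency of each attribute
percentages = {k: 100 * v / len(sample) for k, v in counts.items()}
print(percentages)  # {'male': 53.0, 'female': 47.0}
```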
Term
| Average |
Definition
| ambiguous term generally suggesting typical or normal measures (average, mean, mode, median) |
|
|
Term
| Central tendency |
Definition
| data can be represented in the form of an average to measure your central tendency. The mean, median and mode are all specific examples of mathematical averages |
|
|
Term
| Mean |
Definition
Sum of Values/ total number of cases (n)
a. Example GPA |
|
|
Term
| Mode |
Definition
| an average representing the most frequently observed value or attribute |
|
Term
| Median |
Definition
average representing the value of the middle case in a rank ordered set of observations.
a. Example: Ages of 5 men are: 16, 17, 20, 54, and 88 the median would be 20. |
|
|
Term
| Dispersion |
Definition
the distribution of values around some central value, such as an average. The range is a simple example of a measure of dispersion.
i. Example: The mean age of a group is 37.9 and the range is from 12-89. |
|
|
Term
| Standard deviation |
Definition
| more sophisticated measure of dispersion. Measure of dispersion around the mean, calculated so that approximately 68% of the cases will lie w/i plus or minus 1 standard deviation from the mean, 95% will lie w/i 2 standard deviations, and 99.7% will lie w/i 3 standard deviations |
|
|
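The 68/95/99.7 property holds for normally distributed data and can be checked by simulation (the mean and SD below are arbitrary choices for the sketch):

```python
import random

random.seed(1)  # fixed seed for a reproducible illustration
m, sd, n = 0.0, 1.0, 100_000
draws = [random.gauss(m, sd) for _ in range(n)]

# Fraction of cases within 1 and 2 standard deviations of the mean
within_1sd = sum(abs(x - m) <= sd for x in draws) / n
within_2sd = sum(abs(x - m) <= 2 * sd for x in draws) / n
print(within_1sd, within_2sd)  # roughly 0.68 and 0.95
```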
Term
| Continuous variable |
Definition
| a variable whose attributes form a steady progression, such as age or income. Example: group of people 21, 22, 23, 24 and so forth could be broken down into fractions of years. |
|
|
Term
| Discrete variable |
Definition
| a variable whose attributes are separate from one another, or discontinuous, as in the case of gender, religious affiliation, military rank and year in college. No progression from male to female in the case of gender. |
|
|
Term
| Subgroup comparisons |
Definition
subgroup comparisons can be used to describe similarities and differences among subgroups with respect to some variable
1. Babbie's tips: collapse response categories in tables. Basically he is saying there are better ways to construct your tables to avoid reading into things too much or the wrong way.
2. Handling "Don't Knows" or "No Opinion": basically take these out of your sample data when you conduct analysis. It will give you a more accurate picture of what is going on |
|
|
Term
| Bivariate analysis |
Definition
| the analysis of two variables simultaneously, for the purpose of determining the empirical relationship between them. Construction of a simple percentage table or the computation of a simple correlation coefficient are examples of bivariate analysis. |
|
|
Term
| Contingency table |
Definition
| the results of bivariate analyses often are presented in the form of contingency tables, which are constructed to reveal the effects of the independent variable on the dependent variable |
|
|
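A contingency table can be built from raw (IV, DV) pairs with a counter and percentaged within categories of the independent variable, as the card describes; the cases below are invented for illustration:

```python
from collections import Counter

# Hypothetical cases: (independent variable, dependent variable)
cases = [("male", "agree"), ("male", "disagree"), ("female", "agree"),
         ("female", "agree"), ("male", "disagree"), ("female", "disagree")]

table = Counter(cases)  # cell counts

# Percentage each cell within its IV category
for iv in ("male", "female"):
    total = sum(v for (g, _), v in table.items() if g == iv)
    row = {dv: 100 * table[(iv, dv)] / total for dv in ("agree", "disagree")}
    print(iv, row)
```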
Term
| Multivariate analysis |
Definition
| the analysis of the simultaneous relationships among several variables. Examining simultaneously the effects of age, gender, and social class on religiosity would be an example of multivariate analysis. |
|
|
Term
| ethics and quantitative analysis |
|
Definition
You should present an unbiased analysis: * if your study does not produce statistical significance, you should report the lack of correlation. * Do not HARK (hypothesize after the results are known). * Protect the privacy of subjects just as you would in qualitative analysis. |
|
|