Term
| theoretical |
Definition
| much of it is concerned with developing, exploring or testing the theories or ideas that social researchers have about how the world operates |
|
|
Term
| empirical |
Definition
| it is based on observations and measurements of reality -- on what we perceive of the world around us. |
|
|
Term
| nomothetic |
Definition
| laws or rules that pertain to the general case (nomos in Greek); contrasted with "idiographic," which refers to laws or rules that relate to individuals |
|
|
Term
| probabilistic |
Definition
| inferences in social research are seldom meant to be absolute; conclusions are couched in terms of probability |
|
Term
| causal |
Definition
| most social research is interested (at some point) in looking at cause-effect relationships; causal questions are the most demanding of the three types (descriptive, relational, causal) |
|
|
Term
| descriptive study |
Definition
| When a study is designed primarily to describe what is going on or what exists. Public opinion polls that seek only to describe the proportion of people who hold various opinions are primarily descriptive in nature. For instance, if we want to know what percent of the population would vote for a Democratic or a Republican in the next presidential election, we are simply interested in describing something. |
|
|
Term
| relational study |
Definition
| When a study is designed to look at the relationships between two or more variables. A public opinion poll that compares what proportion of males and females say they would vote for a Democratic or a Republican candidate in the next presidential election is essentially studying the relationship between gender and voting preference. |
|
|
Term
| causal study |
Definition
| When a study is designed to determine whether one or more variables (e.g., a program or treatment variable) causes or affects one or more outcome variables. If we did a public opinion poll to try to determine whether a recent political advertising campaign changed voter preferences, we would essentially be studying whether the campaign (cause) changed the proportion of voters who would vote Democratic or Republican (effect). |
|
|
Term
| cross-sectional study |
Definition
| one that takes place at a single point in time. In effect, we are taking a 'slice' or cross-section of whatever it is we're observing or measuring. |
|
|
Term
| longitudinal study |
Definition
| one that takes place over time -- we have at least two (and often more) waves of measurement in a longitudinal design |
|
|
Term
| repeated measures |
Definition
| two or a few waves of measurement |
|
|
Term
| time series |
Definition
| many waves of measurement over time; at least 20 observations. |
|
|
Term
| unobtrusive measures |
Definition
| measures that don't require the researcher to intrude in the research context |
|
|
Term
| indirect measure |
Definition
| an unobtrusive measure that occurs naturally in a research context; the researcher is able to collect the data without introducing any formal measurement procedure. Researchers must be very careful about ethical issues. |
|
|
Term
| content analysis |
Definition
| the analysis of text documents; can be quantitative, qualitative, or both; a powerful tool for determining authorship, identifying trends and patterns in documents, and monitoring public opinion |
|
|
Term
| types of content analysis |
|
Definition
| 1) thematic analysis of text: identification of major themes or ideas; 2) indexing (KWIC, key words in context): searching for key words; 3) quantitative descriptive analysis: counting which words are used most frequently in the text, etc. A sketch of 2) and 3) follows this card. |
|
|
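A minimal sketch (hypothetical text, not from the source) of quantitative descriptive analysis and KWIC indexing in Python:

```python
import re
from collections import Counter

text = "The quick brown fox jumps over the lazy dog. The dog sleeps."  # hypothetical document

words = re.findall(r"[a-z']+", text.lower())
# Quantitative descriptive analysis: which words occur most frequently?
print(Counter(words).most_common(3))

# KWIC indexing: show each occurrence of a key word with two words of context.
key = "dog"
for i, w in enumerate(words):
    if w == key:
        print(" ".join(words[max(0, i - 2):i + 3]))
```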
Term
| Problems with content analysis: |
|
Definition
| 1) limited to the forms of text that are available; 2) must be careful to avoid bias; 3) must be careful when interpreting results |
|
|
Term
| secondary analysis of data |
|
Definition
| makes use of already existing sources of data. Secondary analysis typically refers to the re-analysis of quantitative data rather than text, and frequently combines information from multiple databases. |
|
|
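A sketch of combining information from multiple existing sources, assuming pandas is available (the tables and keys are hypothetical):

```python
import pandas as pd

# Hypothetical existing sources: census records and survey responses.
census = pd.DataFrame({"region": ["A", "B"], "median_income": [52_000, 48_000]})
survey = pd.DataFrame({"region": ["A", "A", "B"], "opinion": [3, 4, 2]})

# Secondary analysis often links records across databases on a shared key.
merged = survey.merge(census, on="region", how="left")
print(merged)
```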
Term
| secondary analysis advantages and disadvantages |
|
Definition
| Advantages: efficient; can extend the scope of research at little or no additional cost. Disadvantages: gaining access to the data; the analyst may be unaware of problems in the original study; social research data are frequently undocumented and used only once, which wastes information. |
|
|
Term
| emergent coding |
Definition
| categories are established following some preliminary examination of the data. First, two people independently review the material and come up with a set of features that form a checklist. Second, the researchers compare notes and reconcile any differences that show up on their initial checklists. Third, the researchers use a consolidated checklist to independently apply coding. Fourth, the researchers check the reliability of the coding (a 95% agreement is suggested; .8 for Cohen's kappa). |
|
|
Term
| a priori coding |
Definition
| the categories are established prior to the analysis based upon some theory. Professional colleagues agree on the categories, and the coding is applied to the data. Revisions are made as necessary, and the categories are tightened up to the point that maximizes mutual exclusivity and exhaustiveness |
|
|
Term
| units of analysis in content analysis |
Definition
| sampling units, context units, recording units |
|
|
Term
| Reliability is defined by stability and reproducibility |
|
Definition
| stability (intra-rater reliability): can the same coder get the same results try after try? reproducibility (inter-rater reliability): do coding schemes lead to the same text being coded in the same category by different people? |
|
|
Term
| Cohen's kappa |
Definition
| approaches 1 when coding is perfectly reliable and goes to 0 when there is no agreement beyond what would be expected by chance |
|
|
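A minimal sketch of how Cohen's kappa can be computed, using hypothetical codings from two coders (the function and data are illustrative, not from the source):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal category rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() & freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of 10 text passages into categories A/B.
print(cohens_kappa(list("AABBABABAA"), list("AABBAAABAA")))  # ~0.78
```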
Term
| Flaws of content analysis |
|
Definition
| Two fatal flaws that destroy the utility of a content analysis are faulty definitions of categories and non-mutually exclusive and exhaustive categories. |
|
|
Term
| Advantages of content analysis |
|
Definition
| systematic, replicable, unobtrusive, useful in dealing with large amounts of data |
|
|
Term
|
Definition
|
|
Term
| methodology |
Definition
| focused on the specific ways -- the methods -- that we can use to try to understand our world better |
|
|
Term
| positivism |
Definition
| rejection of metaphysics; science is only what can be observed and measured; the world and universe are deterministic (operating by laws of cause and effect) |
|
|
Term
| post-positivism |
Definition
| rejection of positivism; the way scientists think and work and the way we think in our everyday life are not distinctly different; all scientists are biased by cultural experiences and all observations are theory-laden; most post-positivists are constructivists |
|
|
Term
| critical realism |
Definition
| the view that there is a reality independent of our thinking about it that can be studied; critical realists recognize that all observation is fallible and all theories are revisable, and are critical of our ability to know reality with certainty |
|
|
Term
| subjectivism |
Definition
| there is no external reality |
|
|
Term
| relativist idea of the incommensurability of different perspectives |
|
Definition
| we can never understand each other because we all come from different backgrounds and cultures |
|
|
Term
| constructivism |
Definition
| we each construct our view of the world based on our perceptions of it; because perception and observation are fallible, our constructs are never perfect |
|
|
Term
| natural selection theory of knowledge |
|
Definition
| theories are continually refined by finding their fallacies and imperfections |
|
|
Term
| validity |
Definition
| the best approximation of the truth of a given proposition, inference or conclusion |
|
|
Term
| Measures, samples and data have a validity. T/F |
|
Definition
| False -- validity is a property of conclusions and inferences, not of measures, samples, or data themselves |
|
Term
| cause construct |
Definition
| theory of what the cause is |
|
|
Term
| effect construct |
Definition
| our idea about what we are ideally trying to affect and measure |
|
|
Term
| operationalization |
Definition
| construct --> manifestation; theory to observation |
|
|
Term
| conclusion validity |
Definition
| In this study, is there a relationship between the two variables? |
|
|
Term
| internal validity |
Definition
| if there is a relationship in the study, is it a causal effect? |
|
|
Term
| construct validity |
Definition
| did we implement the program we intended to implement and did we measure the outcome we wanted to measure? |
|
|
Term
| external validity |
Definition
| Assuming that there is a causal relationship in this study between the constructs of the cause and the effect, can we generalize this effect to other persons, places or times? |
|
|
Term
| threats to validity |
Definition
| reasons the conclusion or inference might be wrong |
|
|
Term
| two events that brought forward ethical research discussion |
|
Definition
| the Tuskegee syphilis study and the Nuremberg war crimes trials |
|
|
Term
| voluntary participation |
Definition
| people should not be coerced into participating; especially important with captive audiences (prisoners, universities, etc.) |
|
|
Term
| informed consent |
Definition
| participants must be fully aware of details of a study and must sign a consent agreement |
|
|
Term
| risk of harm |
Definition
| participants should not be placed in a situation where they might be at risk of physical or psychological harm as a result of their participation |
|
Term
| confidentiality |
Definition
| the participant is assured that personal information will not be made known to anyone outside of the research study |
|
|
Term
| anonymity |
Definition
| no one, including the researchers, will know who the participant is throughout the study |
|
|
Term
| right to service |
Definition
| when that treatment or program may have beneficial effects, persons assigned to the no-treatment control may feel their rights to equal access to services are being curtailed. |
|
|
Term
| Institutional Review Board (IRB) |
|
Definition
| a panel of people who review grant proposals with respect to ethical implications & decides whether additional action needs to be taken to protect the safety and rights of participants |
|
|
Term
| problem formulation |
Definition
| how the idea for a research project is developed; many ideas come from practical problems in the field, from the literature in a specific field, from requests for proposals, or from simply thinking up the research yourself |
|
|
Term
| request for proposals (RFP) |
Definition
| a solicitation listing topics that the government and some companies would like researchers to study, inviting them to submit proposals |
|
|
Term
| Feasibility of a project: tradeoffs between rigor and practicality must be considered! |
|
Definition
| 1) how long the research will take; 2) ethical constraints; 3) the cooperation needed; 4) costs |
|
|
Term
|
Definition
| 1) concentrate efforts on the scientific literature (preferably journals with blind review); 2) do the review early in the research process |
|
|
Term
| what to look for in a literature review |
|
Definition
| 1) the tradeoffs and constructs other researchers faced; 2) use their literature reviews; 3) review their measurement instruments; 4) anticipate common problems |
|
|
Term
|
Definition
| a group process with a structured and facilitated approach; several state-of-the-art multivariate statistical methods analyze the input from all of the individuals and yield an aggregate group product; helpful for developing detailed ideas for research |
|
|
Term
| concept mapping |
Definition
| a structured process, focused on a topic or construct of interest, involving input from one or more participants, that produces an interpretable pictorial view (concept map) of their ideas and concepts and how these are interrelated. |
|
|
Term
| steps of concept mapping |
Definition
| preparation, generation, structuring, representation, interpretation, utilization |
|
|
Term
|
Definition
| 1) Preparation step: identify stakeholders, work with stakeholders to determine the focus, and set an appropriate schedule |
|
|
Term
|
Definition
| 2) Generation step: the group produces a large set of statements that address the focus |
|
|
Term
|
Definition
| 3) Structuring step: a) sort the statements into piles of similar ones; b) rate each statement on its relative importance |
|
|
Term
|
Definition
| 4) Representation step: multidimensional scaling and cluster analysis (creating the map and partitioning it into clusters); see the sketch after this card |
|
|
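A sketch of the representation step, assuming scikit-learn and scipy are available; the co-sort similarity matrix (how often each pair of statements landed in the same pile) is hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.manifold import MDS

# Hypothetical co-sort matrix for 4 statements and 10 sorters:
# entry [i][j] = number of sorters who put statements i and j in the same pile.
similarity = np.array([
    [10, 8, 2, 1],
    [8, 10, 3, 2],
    [2, 3, 10, 7],
    [1, 2, 7, 10],
])
distance = similarity.max() - similarity  # turn co-sort counts into dissimilarities

# Multidimensional scaling places each statement as a point on a 2-D map.
points = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(distance)

# Hierarchical cluster analysis partitions the map into candidate clusters.
clusters = fcluster(linkage(points, method="ward"), t=2, criterion="maxclust")
print(points.round(2), clusters)
```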
Term
|
Definition
| 5) Interpretation step: the facilitator works with the stakeholder group to help them develop their own labels and interpretations for the various maps |
|
|
Term
|
Definition
| 6) Utilization step: use the map to address the original issue |
|
|
Term
| evaluation |
Definition
| the systematic acquisition and assessment of information to provide useful feedback about some object |
|
|
Term
| goal of evaluation |
Definition
| to influence decision-making or policy formulation through the provision of empirically-driven feedback |
|
|
Term
| Types of evaluation strategies |
|
Definition
| 1) scientific-experimental methods; 2) management-oriented systems models; 3) qualitative/anthropological methods; 4) participant-oriented methods (the evaluation participants' observations are used) |
|
|
Term
| formative evaluation |
Definition
| seeks to improve object being evaluated |
|
|
Term
| summative evaluation |
Definition
| seeks to examine effects and outcomes of object |
|
|
Term
| evaluation culture |
Definition
| action-oriented; teaching-oriented; diverse, inclusive, participatory, responsive, and fundamentally non-hierarchical; ethical |
|
|
Term
| sampling |
Definition
| process of selecting units (people or organizations) from a population so that by studying the sample we may generalize about the larger population |
|
|
Term
| External validity is related to generalizing. T/F |
|
Definition
| True |
|
Term
| external validity |
Definition
| the degree to which the conclusions in your study would hold for other persons in other places and at other times. |
|
|
Term
| Two major approaches for how to provide evidence for generalization |
|
Definition
| 1) sampling method 2) proximal similarity model |
|
|
Term
| proximal similarity model |
|
Definition
| thinking about different generalizability contexts and developing a theory about which contexts are more like our study and which are less so |
|
|
Term
|
Definition
| different contexts in terms of their relative similarities; ex: people, places, times, settings |
|
|
Term
| threats to external validity |
|
Definition
| explanations of how a generalization might be wrong; the major threats concern the people, places, or times studied |
|
|
Term
| How to improve external validity |
|
Definition
| do a good job of drawing the sample (use random selection), keep dropout rates low, and replicate; external validity improves the more the study is replicated |
|
|
Term
|
Definition
| group you want to generalize to; group you'd like to sample |
|
|
Term
| theoretical vs. accessible population |
|
Definition
| the distinction between the population you would like to generalize to (theoretical) and the population that will be accessible to you (accessible) |
|
|
Term
| sampling frame |
Definition
| The listing of the accessible population from which you'll draw your sample |
|
|
Term
| sample |
Definition
| the group of people who you select to be in your study. |
|
|
Term
| The sample is the group of people that participate in the study. T/F |
|
Definition
| False -- the sample is the group of people you select to be in your study, not necessarily the group that actually participates |
|
Term
|
Definition
|
|
Term
| response |
Definition
| a specific measurement value that a sampling unit supplies (e.g., choosing 1 through 4 to express strength of opinion) |
|
|
Term
| statistic |
Definition
| a value computed from the responses for the entire sample (e.g., the sample mean) |
|
|
Term
| parameter |
Definition
| if you measure the entire population and calculate a value like a mean or average, that value is a parameter |
|
|
Term
| statistic is to sample as parameter is to population; T/F |
|
Definition
| True |
|
Term
| sampling distribution |
Definition
| The distribution of an infinite number of samples of the same size as the sample in your study |
|
|
Term
| standard deviation |
Definition
| the spread of the scores around the average in a single sample |
|
|
Term
| standard error |
Definition
| spread of the averages around the average of averages in a sampling distribution. |
|
|
Term
| The greater the sample standard deviation, the greater the standard error (and the sampling error). |
|
Definition
| True |
|
Term
| the greater the sampling size, the greater the standard error. T/F |
|
Definition
| False -- the greater your sample size, the smaller the standard error |
|
|
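A small stdlib-only simulation (hypothetical population, not from the source) illustrating the two cards above: the spread of sample means shrinks as n grows, tracking s/sqrt(n):

```python
import random
import statistics

random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]  # hypothetical population, sigma ~= 10

for n in (25, 100, 400):
    # Approximate the sampling distribution: draw many samples of size n, keep each mean.
    means = [statistics.mean(random.sample(population, n)) for _ in range(2000)]
    # Empirical standard error vs. the sigma / sqrt(n) prediction.
    print(n, round(statistics.stdev(means), 2), round(10 / n ** 0.5, 2))
```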
Term
| probability sampling |
Definition
| any method of sampling that utilizes some form of random sampling |
|
|
Term
| objective of simple random sampling |
|
Definition
| To select n units out of N such that each of the C(N,n) possible samples has an equal chance of being selected. |
|
|
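For illustration, simple random sampling on a hypothetical frame in Python; random.sample makes every n-unit subset equally likely:

```python
import random

frame = [f"person_{i}" for i in range(1, 501)]  # hypothetical sampling frame, N = 500
sample = random.sample(frame, k=25)             # each of the C(500, 25) possible samples is equally likely
print(sample[:5])
```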
Term
| stratified random sampling |
|
Definition
| involves dividing your population into homogeneous subgroups and then taking a simple random sample in each subgroup. |
|
|
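A sketch of proportionate stratified random sampling on a hypothetical frame (the stratum labels are illustrative):

```python
import random
from collections import defaultdict

# Hypothetical frame of (unit, stratum) pairs.
frame = [(f"person_{i}", "urban" if i % 3 else "rural") for i in range(900)]

strata = defaultdict(list)
for unit, stratum in frame:
    strata[stratum].append(unit)

# Simple random sample of 10% within each homogeneous subgroup.
sample = [u for units in strata.values() for u in random.sample(units, k=len(units) // 10)]
print(len(sample))
```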
Term
| cluster or random area sampling |
|
Definition
| 1) divide the population into clusters (usually along geographic boundaries); 2) randomly sample clusters; 3) measure all units within the sampled clusters |
|
|
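And a sketch of cluster (random area) sampling, with hypothetical city blocks as the clusters:

```python
import random

# Hypothetical: residents grouped into 50 geographic clusters (city blocks).
clusters = {f"block_{b}": [f"resident_{b}_{i}" for i in range(40)] for b in range(50)}

sampled_blocks = random.sample(sorted(clusters), k=5)      # randomly sample clusters
sample = [u for b in sampled_blocks for u in clusters[b]]  # measure all units in each sampled cluster
print(len(sample))  # 5 blocks x 40 residents = 200
```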
Term
| multi-stage sampling |
Definition
| combining several probability sampling methods in stages (e.g., randomly sample clusters, then randomly sample units within the selected clusters) |
|
Term
| nonprobability sampling does not rely on the rationale of probability theory; two types |
|
Definition
| 1) accidental (convenience) sampling; 2) purposive sampling (e.g., intercepting people at a mall) |
|
|
Term
| subcategories of purposive sampling |
|
Definition
| modal instance sampling (sampling the most frequent case); expert sampling; quota sampling (proportional and nonproportional); heterogeneity (diversity) sampling; snowball sampling (ask someone who fits the criteria to recommend others; good for reaching an inaccessible population) |
|
|
Term
| construct validity |
Definition
| the degree to which inferences can legitimately be made from the operationalizations in your study to the theoretical constructs on which those operationalizations were based. |
|
|
Term
| face validity |
Definition
| whether "on its face" it seems like a good translation of the construct. |
|
|
Term
| content validity |
Definition
| check the operationalization against the relevant content domain for the construct. |
|
|
Term
| criterion-related validity |
|
Definition
| check the performance of your operationalization against some criterion. |
|
|
Term
| predictive validity |
Definition
| ability to predict something it should theoretically be able to predict. |
|
|
Term
| concurrent validity |
Definition
| ability to distinguish between groups that it should theoretically be able to distinguish between. |
|
|
Term
| convergent validity |
Definition
| degree to which the operationalization is similar to (converges on) other operationalizations that it theoretically should be similar to. |
|
|
Term
| discriminant validity |
Definition
| examine the degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to |
|
|
Term
| Threats to construct validity |
|
Definition
| 1) inadequate preoperational explication of constructs; 2) mono-operation bias; 3) interaction of different treatments; 4) interaction of testing and treatment; 5) restricted generalizability across constructs (unintended consequences); 6) confounding constructs and levels of constructs |
|
|
Term
| social threats to construct validity |
|
Definition
| Hypothesis Guessing, Evaluation Apprehension, Experimenter Expectancies |
|
|
Term
| reliability |
Definition
| quality of measurement (consistency and repeatability) |
|
|
Term
| true score theory |
Definition
| maintains that every measurement is an additive composite of two components: the true ability (or true level) of the respondent on that measure, plus random error; the variability of the measure equals the variability due to true score plus the variability due to random error |
|
|
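In symbols (a standard rendering of the card above, plus the usual definition of reliability as the ratio of true-score variance to observed variance; the notation is mine, not the source's):

```latex
X = T + e_X, \qquad
\operatorname{var}(X) = \operatorname{var}(T) + \operatorname{var}(e_X), \qquad
\text{reliability} = \frac{\operatorname{var}(T)}{\operatorname{var}(X)}
```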
Term
| random error vs. systematic error |
|
Definition
| random error: inconsistent chance factors, such as a respondent's mood; systematic error: a bias that affects all measurements the same way, such as noise outside the room affecting all students |
|
|
Term
| It is not possible to calculate reliability |
|
Definition
| True -- reliability cannot be calculated exactly; it can only be estimated |
|
Term
| types of reliability estimates |
|
Definition
| 1) inter-rater: the degree to which different raters/observers give consistent estimates of the same phenomenon; 2) test-retest: consistency of a measure from one time to another; 3) parallel forms: two tests constructed in the same way; 4) internal consistency: consistency of results across the items within a test. A sketch of one internal-consistency estimate follows this card. |
|
|
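A sketch of Cronbach's alpha, one common internal-consistency estimate (the card does not name it, and the ratings below are hypothetical):

```python
import statistics

# Hypothetical: 3 test items, each scored 1-5 by the same 5 respondents.
items = [
    [4, 5, 3, 4, 2],  # item 1 across respondents
    [4, 4, 3, 5, 2],  # item 2
    [5, 4, 2, 4, 3],  # item 3
]
k = len(items)
totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
item_var = sum(statistics.pvariance(item) for item in items)
alpha = k / (k - 1) * (1 - item_var / statistics.pvariance(totals))
print(round(alpha, 2))  # ~0.86 for this made-up data
```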
Term
| level of measurement |
Definition
| the relationship among the values that are assigned to the attributes of a variable; the four levels are nominal, ordinal, interval, and ratio |
|
|
Term
| nominal |
Definition
| name the attribute uniquely |
|
|
Term
| ordinal |
Definition
| attributes can be rank-ordered, but the distances between them have no meaning |
|
Term
| interval |
Definition
| the distance between attributes is meaningful |
|
|
Term
| ratio |
Definition
| there is always an absolute zero that is meaningful |
|
|