Term
| Conceptualization |
|
Definition
| The process of specifying what we mean when we use particular terms |
|
|
Term
| Measurement |
|
Definition
| Careful, deliberate observations of the real world for the purpose of describing objects and events in terms of the attributes composing a variable |
|
|
Term
| Indicator |
|
Definition
| A sign of the presence or absence of the concept we're studying |
|
|
Term
| Dimension |
|
Definition
| A specifiable aspect of a concept |
|
|
Term
| What are nominal and operational definitions? |
|
Definition
Nominal: commonsense or dictionary definition
Operational: specifies how a concept will be measured |
|
|
Term
| Describe nominal level of measurement |
|
Definition
Nominal is a level of measurement describing a variable whose attributes are merely different. The attributes must be exhaustive and mutually exclusive. |
|
|
Term
| What is ordinal level of measurement? |
|
Definition
| Ordinal is a level of measurement describing a variable with attributes we can rank order along some dimension |
|
|
Term
| Describe interval level of measurement |
|
Definition
| Interval is a level of measurement describing a variable whose attributes are rank-ordered and have equal distances between adjacent attributes |
|
|
Term
| Describe the ratio level of measurement |
|
Definition
| Ratio is a level of measurement describing a variable with attributes that have all the qualities of nominal, ordinal, and interval measures and, in addition, are based on a "true zero" point |
|
|
Term
| Reliability |
|
Definition
| A quality of measurement that suggests that the same data would have been collected each time in repeated observations of the same phenomenon |
|
|
Term
| What are two ways to test reliability? |
|
Definition
| Test-retest method, inter-rater reliability |
|
|
Term
| What is the test-retest method? |
|
Definition
| Take the same measurement more than once, compare results |
|
|
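As an illustration of the comparison step, here is a minimal sketch (assuming two waves of hypothetical scores from the same respondents) that correlates the first and second administrations; a correlation near 1 suggests a reliable measure.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from the same six respondents, measured twice
wave_1 = [12, 15, 9, 20, 17, 11]
wave_2 = [13, 14, 9, 19, 18, 12]

# Test-retest reliability: correlate the two administrations
r = correlation(wave_1, wave_2)
print(f"Test-retest correlation: {r:.2f}")  # values near 1.0 suggest a consistent measure
```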
Term
| What is inter-rater reliability? |
|
Definition
| All investigators on a project must measure the variables in the same way |
|
|
Term
| Validity |
|
Definition
| A term describing a measure that accurately reflects the concept it is intended to measure |
|
|
Term
| What are 3 forms of validity? |
|
Definition
Face validity
Criterion or construct validity
Content validity |
|
|
Term
| Index |
|
Definition
| A type of composite measure that summarizes and rank-orders several specific observations and represents some more general dimension |
|
|
Term
| Scale |
|
Definition
| A type of composite measure composed of several items that have a logical or empirical structure among them. |
|
|
Term
| What are some similarities between indexes and scales? |
|
Definition
Ordinal measures
Rank order units of analysis in terms of specific variables
Utilize more than one data item |
|
|
Term
| What is a difference between indexes and scales? |
|
Definition
| A scale has an inherent intensity structure among its items, whereas an index does not |
|
|
Term
| What is index construction logic? |
|
Definition
| To create an index, we could simply give respondents 1 point for each action that they have taken |
|
|
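A minimal sketch of this additive logic, assuming each respondent's answers are coded 1 (action taken) or 0 (not taken); the item names are hypothetical.

```python
# Hypothetical items, coded 1 = action taken, 0 = not taken
respondent = {"signed_petition": 1, "attended_protest": 0, "donated_money": 1}

# Index construction logic: one point per action taken
index_score = sum(respondent.values())
print(index_score)  # 2
```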
Term
| What is scale construction logic? |
|
Definition
| To create a scale, we would score respondents according to the pattern that best represents them |
|
|
Term
| What are 7 steps of index construction? |
|
Definition
1. Select items (face validity, unidimensionality, general or specific, adequate variance)
2. Examine empirical relationships (with other items; with the independent/dependent variable)
3. Assign scores
4. Handle missing data
5. Validate the index: administer the index; statistical validation (internal validation, factor analysis, reliability analysis, Cronbach's Alpha with a 0-1 range, or the split-half method; see the sketch below)
6. External validation: determines construct or predictive validity
7. |
|
|
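For the validation step, Cronbach's Alpha can be computed directly from item scores using the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). The sketch below uses made-up 0/1 item scores purely for illustration.

```python
from statistics import pvariance

# Rows = respondents, columns = index items (illustrative 0/1 scores)
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

k = len(scores[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*scores)]  # variance of each item
total_var = pvariance([sum(row) for row in scores])   # variance of the summed index scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's Alpha: {alpha:.2f}")  # about 0.74 here; ranges 0-1, higher = more consistent
```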
Term
| Discuss the Bogardus Social Distance Scale |
|
Definition
Are you willing to permit African-Americans to live in your country?
Are you willing to permit African-Americans to live in your community?
Are you willing to permit African-Americans to live in your neighborhood?
Are you willing to let an African-American live next door to you?
Are you willing to let an African-American marry your daughter? |
|
|
Term
| Describe the Thurstone Scales |
|
Definition
| Judges determine the intensity of the different scale items |
|
|
Term
| Likert Scale |
|
Definition
Uses a standardized response format, e.g., strongly agree to strongly disagree |
|
|
Term
| Describe the Semantic Differential Scale |
|
Definition
| Respondents are asked to rate answers on a scale between two opposite extremes |
|
|
Term
| What are the steps of the Guttman scale? |
|
Definition
1. Select items that differ in their level of intensity
2. Administer the scale
3. Determine if the items have a hierarchical pattern (see the sketch below) |
|
|
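Step 3 can be checked mechanically: if items are ordered from least to most intense, a perfect Guttman (hierarchical) response pattern is a run of endorsements followed by non-endorsements. A minimal sketch with hypothetical 0/1 responses:

```python
def fits_guttman_pattern(responses):
    """Items ordered least -> most intense; a perfect hierarchical pattern
    is a run of 1s (endorsed) followed by 0s (not endorsed)."""
    return responses == sorted(responses, reverse=True)

print(fits_guttman_pattern([1, 1, 1, 0, 0]))  # True: endorses the three least intense items
print(fits_guttman_pattern([1, 0, 1, 0, 0]))  # False: endorses item 3 but not item 2
```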
Term
| Sampling |
|
Definition
| A method for collecting information and drawing inferences about a larger population or universe, from the analysis of only part thereof |
|
|
Term
| What are four forms of non-probability sampling? |
|
Definition
1. Convenience sample 2. Purposive or judgemental sampling 3. Snowball sampling 4. Quota sampling |
|
|
Term
| What is convenience sampling? |
|
Definition
| Using subjects that are readily available |
|
|
Term
| What is purposive/judgemental sampling? |
|
Definition
| Sample is selected based upon knowledge of a population, its elements, and the purpose of the study |
|
|
Term
| What is snowball sampling? |
|
Definition
| Used when members of a population are difficult to locate |
|
|
Term
| Quota sampling |
|
Definition
| Used to ensure generalizability when researcher has some knowledge of the population and its characteristics |
|
|
Term
| What is probability sampling? |
|
Definition
| A sample that gives every member of the population a known (nonzero) chance of inclusion |
|
|
Term
| What are the two "logics" of probability sampling? |
|
Definition
1. A sample will accurately reflect the population if every member of the population is given the same chance of inclusion
2. As sample size increases, the sample becomes more representative of the population |
|
|
Term
| Define representativeness |
|
Definition
| Quality of a sample having the same distribution of characteristics as the population from which it was selected |
|
|
Term
| What is a sampling error? |
|
Definition
| The degree of error to be expected of a given sample design |
|
|
Term
| Element |
|
Definition
| Unit about which information is collected and that provides the basis of analysis |
|
|
Term
| Define population (theoretical population) |
|
Definition
| The theoretically specified aggregation of study elements |
|
|
Term
| Study population |
|
Definition
| Aggregation of elements from which the sample is actually selected |
|
|
Term
| Sampling frame |
|
Definition
| The list or quasi list of units composing a population from which a sample is selected |
|
|
Term
| What are the four types of probability sampling designs? |
|
Definition
1. Simple random sampling 2. Systematic random sampling 3. Stratified random sampling 4. Cluster sampling |
|
|
Term
| Define simple random sampling |
|
Definition
| Every element in the population is given the same chance of inclusion |
|
|
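A minimal sketch using Python's standard library; the population is a hypothetical sampling frame of 1,000 ID numbers.

```python
import random

population = list(range(1, 1001))         # hypothetical sampling frame of 1,000 IDs
sample = random.sample(population, k=50)  # each element has the same chance of inclusion
```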
Term
| Define Systematic Random Sampling |
|
Definition
| Involves selecting every nth member (element) from a list of population elements after the first member has been randomly selected |
|
|
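A minimal sketch of the same idea, assuming a hypothetical frame of 1,000 IDs and a target sample of about 100:

```python
import random

population = list(range(1, 1001))     # hypothetical sampling frame
interval = len(population) // 100     # sampling interval for a sample of about 100
start = random.randrange(interval)    # random starting point within the first interval
sample = population[start::interval]  # then take every nth (here, 10th) element
```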
Term
| Define stratified random sampling |
|
Definition
| Involves dividing the population into groups (strata) and then randomly sampling from each group |
|
|
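A minimal sketch, assuming a hypothetical population where each person has a "region" attribute used as the stratifying variable and a proportionate 5% sample is drawn from each stratum.

```python
import random

# Hypothetical population with a stratifying characteristic
population = [{"id": i, "region": random.choice(["north", "south"])} for i in range(1000)]

# Divide the population into strata
strata = {}
for person in population:
    strata.setdefault(person["region"], []).append(person)

# Randomly sample within each stratum (proportionate 5% allocation)
sample = []
for group in strata.values():
    sample.extend(random.sample(group, round(len(group) * 0.05)))
```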
Term
| Cluster sampling |
|
Definition
| A multistage sampling technique in which natural groups are sampled initially with the members of each selected group being subsampled afterward |
|
|
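A minimal two-stage sketch, assuming hypothetical natural groups (schools) containing members (students): clusters are sampled first, then members are subsampled within each selected cluster.

```python
import random

# Hypothetical natural groups (schools), each containing 30 members (students)
clusters = {f"school_{i}": [f"student_{i}_{j}" for j in range(30)] for i in range(20)}

# Stage 1: randomly select clusters; Stage 2: subsample members within each
chosen = random.sample(list(clusters), k=5)
sample = [member
          for school in chosen
          for member in random.sample(clusters[school], k=10)]
```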
Term
| What is an experimental group? |
|
Definition
| A group of subjects to whom an experimental stimulus is administered |
|
|
Term
| Control group |
|
Definition
| A group of subjects to whom no experimental stimulus is administered and who resemble the experimental group in all other respects |
|
|
Term
| What is a double-blind experiment? |
|
Definition
| An experimental design in which neither the subjects nor the experimenter know which is the experimental group and which is the control |
|
|
Term
| The experimental and control groups must be as similar as possible. What are two ways this is achieved? |
|
Definition
1. Randomization (random sampling) 2. Matching (quota sampling) |
|
|
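A minimal sketch of randomization, assuming a hypothetical pool of 40 subjects split evenly into the two groups:

```python
import random

subjects = [f"subject_{i}" for i in range(40)]  # hypothetical subject pool
random.shuffle(subjects)                        # randomize the order

midpoint = len(subjects) // 2
experimental_group = subjects[:midpoint]  # receives the experimental stimulus
control_group = subjects[midpoint:]       # receives no stimulus
```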
Term
| What are three pre-experimental research designs? |
|
Definition
1. One-shot case study 2. One-group pretest-posttest 3. Static-group comparison |
|
|
Term
| What is a one-shot case study? |
|
Definition
| A single group of subjects is measured on a variable following an experimental stimulus |
|
|
Term
| What is a one-group pretest-posttest? |
|
Definition
| Adds a pretest for the group but lacks a control group |
|
|
Term
| What is a static-group comparison? |
|
Definition
| Has a control group and an experimental group, but lacks a pre-test |
|
|
Term
| What is internal invalidity? |
|
Definition
| Occurs when the conclusions drawn from an experiment don't accurately reflect what went on in the experiment itself |
|
|
Term
| What are six sources of internal invalidity? |
|
Definition
1. Selection bias - occurs when the control and experimental groups are initially dissimilar
2. History - the effect of historical events
3. Testing - the effect of test-retest
4. Instrumentation - the validity of the measure
5. Statistical regression - an "extreme" group will tend to regress towards the mean
6. Experimental mortality - subjects drop out |
|
|
Term
| What is external invalidity? |
|
Definition
| Occurs when the experimental conclusions are not generalizable to the real world |
|
|
Term
| Define response rate. What is the significance of it? |
|
Definition
Response rate: the number of people participating in a survey divided by the number selected in the sample
Significance: generalizability assumes a 100% response rate, and the probable response rate affects the sample size required for a project
Total sample size = required sample size / probable response rate |
|
|
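A worked example of the last formula, with hypothetical numbers:

```python
required_sample_size = 400     # completed responses needed for the analysis (hypothetical)
probable_response_rate = 0.25  # assumed from comparable past surveys (hypothetical)

# Total sample size = required sample size / probable response rate
total_sample_size = required_sample_size / probable_response_rate
print(total_sample_size)  # 1600.0 -> contact 1,600 people to expect about 400 responses
```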
Term
| What is a guideline for writing closed-ended questions? |
|
Definition
Response categories for closed-ended questions must be mutually exclusive and exhaustive
Example of a violation: What is your religious affiliation? Protestant, Catholic, Lutheran, Baptist, Jewish (not mutually exclusive, since Lutheran and Baptist are Protestant denominations) |
|
|
Term
| What are more guidelines for writing questions? There are 8 of them |
|
Definition
1. Respondents must be competent to answer
2. Avoid double-barreled questions
3. Short questions are best
4. Avoid biased items and terms
5. Respondents must be willing to answer (questions concerning sexual behavior, deviance, illegal behavior, etc.)
6. Questions should be relevant
7. Avoid negative terms
8. Items should be clear |
|
|
Term
| What are 3 general suggestions for writing questions? |
|
Definition
1. Appropriate reading level 2. Include clear instructions 3. Make questionnaire attractive and easy to read |
|
|