Term
|
Definition
the careful, deliberate observation of the real world for the purpose of describing objects and events in terms of the attributes composing a variable.
|
|
Term
|
Definition
the mental process whereby fuzzy and imprecise notions (concepts) are made more specific and precise. |
|
|
Term
Concepts are WHAT derived by mutual agreement from mental images. |
|
Definition
|
|
Term
|
Definition
collections of seemingly related observations and experiences. |
|
|
Term
Social scientists measure three classes of things:
|
Definition
1. Direct observables
2. Indirect observables
3. Constructs
|
|
Term
Direct observables (including manifest concepts / variables) EXAMPLE |
|
Definition
Physical characteristics of a person directly in front of an interviewer (sex, weight, height, eye color). |
|
|
Term
Indirect observables (including latent concepts / variables) example |
|
Definition
Characteristics of a person as indicated by answers given in a self-administered questionnaire.
|
|
Term
Constructs (theoretical creations based on observations that cannot be observed directly or indirectly) example
|
Definition
Level of self-esteem, as measured by a scale that combines several direct and/or indirect observables. |
|
|
Term
Conceptualization provides definite meaning to a concept by specifying one or more WHAT of what one has in mind. |
|
Definition
|
|
Term
|
Definition
an observation one chooses to consider as a reflection or representation of a variable they wish to study.
|
|
Term
Indicators are WHAT of something; they do not represent the exact concept/variable they are associated with. |
|
Definition
|
|
Term
|
Definition
one may use the number of religious services one attends over a period of time as an indicator of religiosity. |
|
|
Term
|
Definition
specifiable aspect of a concept. |
|
|
Term
|
Definition
one’s religiosity might be specified in terms of dimensions such as:
▪ Beliefs
▪ Rituals
▪ Devotion
▪ Faith
|
|
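Editor's note: the dimension-and-indicator hierarchy above can be made concrete with a small data-structure sketch. This is illustrative only and not part of the card set; the dimension names come from the card above, and the indicators listed for each are hypothetical examples of what one might measure.

```python
# Illustrative only: the concept "religiosity" broken into the dimensions named
# above, each paired with a hypothetical indicator one might actually measure.
religiosity = {
    "Beliefs":  ["agreement that a higher power exists"],
    "Rituals":  ["number of religious services attended per month"],
    "Devotion": ["frequency of private prayer"],
    "Faith":    ["self-rated importance of religion in daily life"],
}

for dimension, indicators in religiosity.items():
    print(dimension, "->", ", ".join(indicators))
```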
Term
If several different indicators all represent the same concept: |
|
Definition
all of them will behave the same way the concept would behave if it were real and could be observed. |
|
|
Term
Sociologists distinguish three kinds of definitions:
|
Definition
1. Real definitions
2. Nominal definitions
3. Operational definitions
|
|
Term
|
Definition
statements about the essential nature of some entity.
|
|
Term
Real definitions WHAT a construct is a real entity (which it is not: it is a proxy).
|
Definition
|
|
Term
Problem of real definitions
|
Definition
real definitions are often vague, so they are not useful for the purpose of rigorous inquiry.
|
|
Term
|
Definition
assigned to a term without any claim that the definition represents a “real” entity.
|
|
Term
Nominal definitions are what?
|
Definition
|
|
Term
Nominal definitions represent a consensus or WHAT about the meaning of something. |
|
Definition
|
|
Term
|
Definition
specify precisely how a concept will be measured, i.e. what operations will be performed. |
|
|
Term
Operational definitions are nominal, but they attempt to achieve:
|
Definition
clarity about the meaning of a concept within the context of a study.
|
|
Term
The order of conceiving a research question often is as follows: |
|
Definition
1. Conceptualization
2. Nominal definition
3. Operational definition
4. Real world measurement
|
|
Term
Clarifying one’s concepts IS
|
Definition
a continuing process in sociology.
|
|
Term
|
Definition
What are the different meanings and dimensions of the concept "aggression?” |
|
|
Term
|
Definition
For our study, we will define aggression as representing physical harm, specifically, how often one hits another.
|
|
Term
|
Definition
We will measure physical harm via responses to the survey question “How many times have you hit someone in the past year?” |
|
|
Term
Measurements in the real world
|
Definition
The interviewer will ask, “How many times have you hit someone in the past year?”
|
|
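Editor's note: the chain in the preceding cards (conceptualization, nominal definition, operational definition, real-world measurement) can be illustrated as a coding rule applied to the recorded answer. The sketch below is hypothetical; the question text is the one quoted in the cards, while the function name and missing-data handling are assumptions added for illustration.

```python
# Illustrative sketch of the chain above: the operational definition becomes a
# concrete coding rule applied to the answer recorded by the interviewer.
QUESTION = "How many times have you hit someone in the past year?"

def code_aggression(raw_answer):
    """Turn a recorded answer into a ratio-level count, or None if unusable."""
    try:
        value = int(str(raw_answer).strip())
    except ValueError:
        return None  # non-numeric answer: treat as missing data
    return value if value >= 0 else None

print(QUESTION)
print(code_aggression("3"))      # -> 3
print(code_aggression("never"))  # -> None (treated as missing)
```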
Term
Generally, definitions are more problematic for WHAT than for explanatory research. |
|
Definition
|
|
Term
|
Definition
requires detail and precision in its definitions.
|
|
Term
|
Definition
often is less concerned with subtle nuances of a definition, and more with general patterns (so multiple definitions for the same phenomenon might be acceptable).
|
|
Term
Conceptualization DEFINED
|
Definition
the refinement and specification of abstract concepts.
|
|
Term
Operationalization DEFINED
|
Definition
the development of specific research procedures that will result in empirical observations representing those concepts in the real world.
|
|
Term
When operationalizing a concept, one must be clear about the WHAT that interests them. |
|
Definition
|
|
Term
One must determine what WHAT of categories are appropriate to use in a measurement.
|
Definition
|
|
Term
|
Definition
second consideration when operationalizing variables.
|
|
Term
|
Definition
logical set of attributes (e.g. gender, age). |
|
|
Term
|
Definition
characteristic or quality of something (e.g. female, old).
|
|
Term
Every variable must have two qualities:
|
Definition
1. Attributes composing a variable must be mutually exclusive.
2. Attributes composing a variable must be exhaustive. |
|
|
Term
A variable’s attributes or values are WHAT if every case can have only one attribute. |
|
Definition
|
|
Term
Example of a variable whose attributes are NOT mutually exclusive:
|
Definition
Income:
▪ 0-$15,000
▪ $13,000-$25,000
▪ $25,001-$50,000
▪ $50,001-$75,000+
|
|
Term
A variable’s attributes or values are WHAT when every case can be classified into one of the variable’s categories.
|
Definition
|
|
Term
Example of a variable whose attributes are NOT exhaustive:
|
Definition
Race:
▪ White
▪ Black
▪ Mexican
▪ Native American
|
|
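Editor's note: the two example cards above lend themselves to a quick programmatic check. The sketch below is illustrative only; it restates the income brackets and race categories from the cards and shows why the first set is not mutually exclusive and the second is not exhaustive.

```python
# Illustrative sketch only: restate the example attributes and check each property.
income_brackets = {
    "0-$15,000": (0, 15_000),
    "$13,000-$25,000": (13_000, 25_000),
    "$25,001-$50,000": (25_001, 50_000),
    "$50,001-$75,000+": (50_001, float("inf")),
}

def classify(income):
    """Return every bracket label whose range contains this income."""
    return [label for label, (lo, hi) in income_brackets.items() if lo <= income <= hi]

# $14,000 falls into two brackets, so these attributes are NOT mutually exclusive.
print(classify(14_000))  # ['0-$15,000', '$13,000-$25,000']

# A respondent who is, say, Korean fits none of these categories,
# so the attributes are NOT exhaustive.
race_categories = {"White", "Black", "Mexican", "Native American"}
print("Korean" in race_categories)  # False
```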
Term
There are four levels (or scales) of measurement that define all variables: |
|
Definition
1. Nominal
2. Ordinal
3. Interval
4. Ratio
|
|
Term
Measures have greater use in data analysis as they move from the WHAT to the ratio level.
|
Definition
|
|
Term
|
Definition
Nominal level variables (also called categorical variables) represent unordered categories identified only by name. Nominal measurements only permit one to determine whether two individuals are the same or different. Examples: religion, race, or countries.
|
|
Term
|
Definition
Ordinal variables represent an ordered set of categories. Ordinal measurements tell one the direction of difference between two individuals. Examples: the alphabet, Likert scales, any scale that measures something according to low, medium, and high. |
|
|
Term
|
Definition
Interval scales represent an ordered series of equal-sized categories. Interval measurements identify the direction and magnitude of a difference. The zero point is located arbitrarily on an interval scale. Examples: Fahrenheit temperature scale, IQ scores, dates (e.g. March 12 or April 2).
|
|
Term
|
Definition
Ratio scale measures are interval scales that contain an absolute zero at one point along the spectrum of the scale (i.e. zero indicates none of the variable). Ratio measurements identify the direction and magnitude of differences and allow ratio comparisons of measurements. Examples: income, height, 40 yard dash time. |
|
|
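Editor's note: a short sketch may help distinguish which comparisons each level of measurement supports. It is illustrative only; the data are invented and the variables simply echo the examples given in the four cards above.

```python
# Illustrative sketch: which summary operations are meaningful at each level.
from statistics import mean, median, mode

religion = ["Protestant", "Catholic", "Protestant", "None"]  # nominal
satisfaction = [1, 3, 2, 3, 1]                               # ordinal (1 = low, 3 = high)
temps_f = [32.0, 50.0, 68.0]                                 # interval (zero is arbitrary)
incomes = [0, 25_000, 50_000]                                # ratio (true zero)

print(mode(religion))           # nominal: cases are only same/different, so mode is the usable summary
print(median(satisfaction))     # ordinal: order is meaningful, so the median makes sense
print(temps_f[2] - temps_f[0])  # interval: differences are meaningful (36 degrees warmer) ...
# ... but ratios are not: 68 F is not "twice as hot" as 34 F, because 0 F is arbitrary.
print(incomes[2] / incomes[1])  # ratio: a true zero makes "twice the income" meaningful
print(mean(incomes))            # interval- and ratio-level data also support means
```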
Term
Three elements are important to consider regarding measurement quality: |
|
Definition
1. Precision and accuracy
2. Reliability
3. Validity
|
|
Term
|
Definition
concerns the fineness of distinctions made between the attributes of a variable.
|
|
Term
|
Definition
regards the degree of truth, correctness, or exactness of a variable’s attributes.
|
|
Term
Precise measures are what to imprecise ones
|
Definition
|
|
Term
Precision is not the same as WHAT.
|
Definition
|
|
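Editor's note: a tiny numeric illustration (not from the cards) of why precision and accuracy differ: a value reported with many decimal places can still be far from the truth, while a coarse value can be close to it.

```python
# Illustrative only: precision (fineness of a reported value) is not accuracy
# (closeness to the true value). All numbers are invented.
true_age = 35
precise_but_inaccurate = 23.7143  # reported to four decimal places, yet far off
coarse_but_accurate = 35          # only a whole-year figure, yet correct
print(abs(precise_but_inaccurate - true_age))  # large error despite the precision
print(abs(coarse_but_accurate - true_age))     # zero error despite the coarseness
```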
Term
|
Definition
refers to the quality of a measurement method that suggests the same data would have been collected each time in repeated observations of the same phenomenon. |
|
|
Term
Reliability is not the same as
|
Definition
|
|
Term
The following methods can be used to ensure one has reliable measures: |
|
Definition
1. Test-retest method
2. Split-half method
3. Using established measures
4. Having reliable research workers
|
|
Term
|
Definition
one makes the same measurement more than once. |
|
|
Term
(test-retest method) If one measures twice and gets the same result:
|
Definition
a measurement is more likely to be reliable.
|
|
Term
(test-retest method) If a second measure reveals different results:
|
Definition
the measurement is likely to be unreliable. |
|
|
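Editor's note: as a worked illustration of the test-retest idea (not part of the card set), the sketch below correlates two administrations of the same measure. The scores are invented, and statistics.correlation requires Python 3.10 or later.

```python
# Illustrative sketch of the test-retest idea: the same (invented) respondents
# are measured twice and the two score sets are correlated.
from statistics import correlation  # Python 3.10+

time1 = [12, 8, 15, 10, 7, 14]  # scores at the first administration
time2 = [11, 9, 15, 10, 6, 13]  # the same respondents measured again later

r = correlation(time1, time2)
print(round(r, 2))  # an r close to 1.0 suggests the measure is reliable
```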
Term
|
Definition
one splits the indicators of a concept into randomly assigned halves, each of which should produce the same classifications.
|
|
Term
(split-half method) If the result for each group is different
|
Definition
the measure of self-esteem would likely be unreliable.
|
|
Term
Example of split-half method
|
Definition
The Rosenberg self-esteem scale has 10 items that together measure “self-esteem.” If one splits the 10 items into two groups of 5, both groups should still represent one’s level of self-esteem.
|
|
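Editor's note: a minimal sketch (not from the cards) of the split-half logic applied to a 10-item scale like the Rosenberg example above: the items are split into two halves, the half scores are correlated, and the Spearman-Brown formula estimates full-scale reliability. All item scores are invented.

```python
# Illustrative sketch of the split-half logic for a 10-item self-esteem scale.
# Each row holds one respondent's (invented) item scores.
from statistics import correlation  # Python 3.10+

responses = [
    [3, 4, 3, 4, 3, 4, 3, 3, 4, 3],
    [1, 2, 1, 1, 2, 1, 2, 1, 1, 2],
    [4, 4, 4, 3, 4, 4, 3, 4, 4, 4],
    [2, 2, 3, 2, 2, 3, 2, 2, 2, 3],
    [3, 3, 2, 3, 3, 2, 3, 3, 3, 2],
]

# Split the 10 items into two halves (odd- vs even-numbered items) and total each half.
half_a = [sum(row[0::2]) for row in responses]
half_b = [sum(row[1::2]) for row in responses]

r_half = correlation(half_a, half_b)
# Spearman-Brown correction estimates the reliability of the full-length scale.
full_scale_reliability = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(full_scale_reliability, 2))  # both high if the scale is reliable
```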
Term
|
Definition
measures that others have already proved reliable in previous research.
|
|
Term
example of Established measures
|
Definition
If one has a unique measure for “prejudice,” one can compare its results with established measures of prejudice to be confident the new measure is a reliable measure of prejudice.
|
|
Term
Having reliable research workers
|
Definition
One can determine the reliability of measurements and results by checking the reliability of research assistants; e.g., multiple coders can be used for the same data.
|
|
Term
|
Definition
a term describing a measure that accurately reflects the concept it is intended to measure. |
|
|
Term
|
Definition
E.g. a measure of “social class” should not measure “religiosity” instead.
|
|
Term
Four types of validity are important to consider:
|
Definition
1. Face validity
2. Criterion-related validity
3. Construct validity
4. Content validity
|
|
Term
|
Definition
whether the quality of an indicator makes it a reasonable measure of some variable.
|
|
Term
|
Definition
that a measure “makes sense” on its face. It is the lowest level of validity assurance.
|
|
Term
|
Definition
E.g. one’s voting frequency seems to be a good indicator of community involvement.
|
|
Term
Criterion-related validity represents
|
Definition
the degree to which a measure relates to some external criterion. |
|
|
Term
example of Criterion-related validity
|
Definition
E.g. the validity of SAT tests is based on their ability to predict college success.
|
|
Term
Construct validity represents
|
Definition
the degree to which a measure relates to other variables as expected within a system of theoretical relationships.
|
|
Term
example of Construct validity
|
Definition
E.g. the variable marital satisfaction is likely to correlate with the variable marital fidelity.
By comparing these variables one can better determine whether one has a valid measure. |
|
|
Term
Content validity represents
|
Definition
the degree to which a measure covers the range of meanings included within a concept.
|
|
Term
example of Content validity
|
Definition
E.g. a measure of mathematical ability does not have content validity if it only includes “addition.”
By including “addition, subtraction, division, and multiplication” one ensures their measure of mathematical ability is more valid. |
|
|