Term
| Differentiate between basic and applied social research with regard to (a) goal or purpose and (b) primary audience. |
|
Definition
| (a) The primary goal of basic social research is the development of theoretical understanding; the primary goal of applied social research is to solve a practical or social problem. (b) The audience for basic social research is peers within the researcher’s scientific discipline; the audience for applied social research consists of various stakeholders, in particular the individual or group that sponsors the research. |
|
|
Term
| Explain how evaluation is related to rational decision making. |
|
Definition
| Evaluation is one phase in the process of rational decision-making: after a problem is identified, options for its solution are considered, and one or more of these options is implemented, the implemented options are evaluated. |
|
|
Term
| When did evaluation research emerge as a distinct field of study? |
|
Definition
| Evaluation research emerged as a distinct field of study in the late 1960s and 1970s, as a result of extensive 1960s social legislation that often mandated program evaluation. |
|
|
Term
| What type(s) of evaluation was (were) carried out in (a) the study of the prepared-meals provision for the homeless, (b) the TARP experiment on the effect of financial aid to prisoners on recidivism, and (c) the time-series analysis of the effect of raising the minimum drinking age on alcohol-related traffic accidents? |
|
Definition
| (a) program monitoring; (b) program monitoring and summative evaluation (effect and efficiency assessment); (c) summative evaluation (effect assessment). |
|
|
Term
| How does a social condition become identified as a social problem? |
|
Definition
| A social condition is identified as a social problem when it becomes a matter of public concern, as when journalists and political elites define objective conditions as warranting attention. |
|
|
Term
| How can evaluation provide a conceptual foundation for the formulation of policy problems? |
|
Definition
| Analysis of existing theory and research on a problem can reveal what is known and unknown about its nature, scope, and causes, which may suggest various solutions. |
|
|
Term
| Give an example of a social indicator. What purposes do social indicators serve for the policy maker? |
|
Definition
| Examples of social indicators are the crime rate, the infant mortality rate, and the number of teenage mothers. Examining trends in such indicators reveals important changes in social conditions, which facilitate problem identification and policy planning. |
|
|
Term
| Explain how needs assessment and social impact assessment assist the policymaker. |
|
Definition
| Needs assessment studies identify problems that need attention, establish perceived priorities among problem areas, and forecast how a program will be used to address needs. Social impact studies assess the consequences of policies or programs for the individuals, groups, and communities affected. |
|
|
Term
| How is formative evaluation analogous to experimental pretesting? |
|
Definition
| Just as experimental pretests determine how well various experimental procedures—the cover story, randomization, and so forth—are working, formative studies determine how well specific program components are meeting their objectives. |
|
|
Term
| What are the purposes of program monitoring? Using the TARP experiment as an example, explain how this form of evaluation can facilitate the interpretation of program effects. |
|
Definition
| The purposes of program monitoring are to assess (1) program coverage, or the extent to which the program reaches its target population, (2) program delivery, or whether the services provided are consistent with their objectives, and (3) the resources expended to conduct the program. Monitoring the TARP experiment revealed that program delivery did not fully meet its objectives because of TARP participants’ misunderstanding of the terms of their eligibility for benefits in the event of employment. Only one-third to one-half of the participants knew the number of weeks of eligibility, and few participants in a treatment condition with a work incentive understood how the favorable tax rate on their benefit worked. |
|
|
Term
| What is the difference between effect assessment and efficiency assessment? |
|
Definition
| Effect assessment is concerned with the effects of a program or policy on the participants’ actions, such as the effect of TARP payments on recidivism; efficiency assessment consists of a cost-benefit analysis, such as comparing the costs of payments to released prisoners with the costs of imprisonment and processing persons through the criminal justice system. |
|
|
Term
| Why do evaluation researchers place less emphasis on statistical significance than basic researchers do? |
|
Definition
| Statistical significance indicates that an effect is likely to have occurred—that the outcome is not likely to be the result of random processes. Large samples, however, may generate statistically significant results with very small effects, and evaluation researchers are necessarily concerned with the size of the effect and its practical significance. |
|
|
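The sample-size point lends itself to a quick numerical sketch (an illustration added here, not part of the original card; the effect size of 0.05 SD is an invented example). A two-sample t statistic for a fixed, trivially small mean difference balloons as n grows, even though the practical effect never changes.

```python
import math

def t_statistic(mean_diff, pooled_sd, n_per_group):
    """Two-sample t statistic for equal-size groups with a common SD."""
    standard_error = pooled_sd * math.sqrt(2.0 / n_per_group)
    return mean_diff / standard_error

# A trivially small effect: a difference of 0.05 standard deviations.
diff, sd = 0.05, 1.0
t_small = t_statistic(diff, sd, 100)        # small sample: not significant
t_large = t_statistic(diff, sd, 1_000_000)  # huge sample: highly "significant"
print(round(t_small, 2), round(t_large, 1))
```

Only the sample size differs between the two calls, which is why evaluation researchers weigh effect magnitude and practical significance, not just p-values.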
Term
| Explain how the atheoretical nature of much evaluation research creates problems in investigating program or treatment effects. |
|
Definition
| Much evaluation research assumes an oversimplified model of the causal process underlying program effects. As a consequence, researchers may ignore important variations in the administration of the treatment, fail to consider the full range of effects, including unexpected side effects and variations in effect duration and magnitude, and incorporate inadequate research designs and outcome measures. |
|
|
Term
| What are the optimum research strategies or designs for assessing the effects of (a) existing and new full-coverage programs, (b) existing partial-coverage programs, and (c) new partial-coverage programs? |
|
Definition
| The optimum research strategy for assessing the effects of (a) existing or new full-coverage programs is an interrupted time-series or panel design; (b) existing partial-coverage programs, a cross-sectional survey or, if possible, multiple time-series design; and (c) new partial-coverage programs, a randomized experiment. |
|
|
Term
| Describe three methods of control that are used when estimating the impact of existing partial-coverage programs. |
|
Definition
| (1) Statistical control in a cross-sectional survey, which consists of holding key extraneous variables constant when assessing program effects; (2) aggregate matching, which consists of finding control groups that are similar to the treatment group on crucial variables; (3) multiple time-series analysis, which consists of comparing control group time-series to the “treatment” time-series. |
|
|
Term
| How is the implementation of randomized experiments more problematic for the evaluation researcher than for the basic researcher? |
|
Definition
| Usually evaluation researchers must depend on program officials to carry out the randomization process; officials may not implement the process properly or may let a client’s perceived need or merit override random assignment; and there may be treatment-related attrition or refusals that undermine the process. |
|
|
Term
| Why are program sponsors and staff likely to resist random assignment to programs and treatments? What are the evaluation researchers' counterarguments to these objections? |
|
Definition
| Program officials may believe that it is unfair, harmful, or illegal to deprive some people of a treatment or program that is believed to be beneficial. Evaluation researchers argue that the unknown effectiveness of the program should be put to scientific test and, given its unproven status, that randomization is the most equitable way to assign persons to treatment and control groups. |
|
|
Term
| How can the diffusion of information from one treatment group to another threaten the internal validity of evaluation studies? |
|
Definition
| When members of control groups learn that they are receiving a less desirable treatment, they may become resentful and lower their productivity or they may become motivated to overcome their disadvantage; or upon learning about another treatment they may imitate or copy it. In any of these events, differences between the treatment and control group may be attributable to the control group’s reaction to the diffusion of information rather than to the effects of the treatment per se. |
|
|
Term
| What peculiar measurement problems arise in evaluation studies? How are these generally resolved? |
|
Definition
| The nature of the program to be evaluated may be vaguely defined or nonuniform and program goals may be unclear or of undetermined duration. Implementation studies at various program sites help to clarify the key ingredients of vague and nonuniform treatments; ill-defined goals must be translated into measurable outcomes. For a well-defined program with clear objectives, the measurement problem amounts to finding or creating valid indicators of program effects. Also, the use of multiple indicators increases reliability and validity and may permit the assessment of both short- and long-term effects. |
|
|
Term
| What factors inhibit the ability to use probability sampling in evaluation studies? |
|
Definition
| The target population may be too difficult or expensive to enumerate; it may be difficult to implement a high quality program with a random sample of a dispersed population; and evaluation studies may be limited to those individuals who agree to participate. |
|
|
Term
| How does the social context of evaluation threaten external validity? |
|
Definition
| The effectiveness of a program may depend on the personal qualities of the staff who administer it as well as the historical time and particular geographic setting where it takes place; subjects in an evaluation study may behave differently—for example, increasing their output—when they know they are being evaluated. |
|
|
Term
| What is modal instance sampling? |
|
Definition
| Modal instance sampling consists of choosing individuals and test sites that are representative of the conditions under which the program ultimately would be implemented if it became formal policy. |
|
|
Term
| Describe five potential stakeholders in an evaluation study. How does the existence of these stakeholders affect evaluation research? |
|
Definition
| Potential stakeholders include the program sponsor, the evaluation sponsor, program participants, program management and staff, the evaluators, and the general public. Each of these parties may have different interests, may be concerned with different phases of an evaluation, and presents unique challenges in communicating findings. |
|
|
Term
| What are the differences among antecedent, intervening, suppressor, and distorter variables in causal modeling? |
|
Definition
| Antecedent variables are causally prior to others in a theoretical model, whereas intervening variables are intermediate between two other variables in a causal chain. Suppressor and distorter variables are uncontrolled antecedent variables that respectively suppress or distort the direction of a true causal relationship between a dependent and an independent variable. |
|
|
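A suppressor variable can be demonstrated with simulated data (an illustrative sketch with invented numbers, not from the original card; assumes NumPy is available). Here Z raises X but lowers Y, so the zero-order correlation between X and Y is close to zero even though X genuinely affects Y; controlling Z reveals the true relationship.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Z is an antecedent "suppressor": it raises X but lowers Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)            # X depends positively on Z
y = x - 2 * z + rng.normal(size=n)    # true effect of X on Y is +1

# Zero-order correlation of X and Y: suppressed to roughly zero
r_xy = np.corrcoef(x, y)[0, 1]

# Partial correlation of X and Y, controlling for Z
r_xz = np.corrcoef(x, z)[0, 1]
r_yz = np.corrcoef(y, z)[0, 1]
r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(round(r_xy, 2), round(r_xy_z, 2))
```

Once Z is held constant, the substantial positive association between X and Y reappears, which is exactly what "suppression" means.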
Term
| Explain the differences between the ideal elaboration outcomes of explanation, interpretation, specification, and replication. |
|
Definition
| In simple elaboration, a third variable (T) is controlled to elaborate the relationship between an independent variable (X) and a dependent variable (Y). If the original association between X and Y (the zero-order relationship) vanishes when a causally antecedent T is controlled, the outcome is called explanation, as T “explains” away the zero-order association as spurious. If the causal position of T is intervening and the zero-order association vanishes when T is controlled, the outcome is called interpretation, as T interprets the causal process by which X influences Y. If the zero-order relationship remains essentially the same when T is controlled, the outcome is called replication. Finally, if the association between X and Y varies according to the level of T, the outcome is called specification, as T specifies the conditions under which the initial zero-order relationship varies. |
|
|
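The "explanation" outcome (a spurious zero-order association) can be simulated directly (an illustrative sketch with invented numbers, not from the original card; assumes NumPy). X and Y share the antecedent T, but X has no direct effect on Y, so the zero-order slope is clearly positive yet collapses toward zero once T is controlled.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

t = rng.normal(size=n)               # antecedent variable T
x = t + rng.normal(size=n)           # X is driven by T ...
y = t + rng.normal(size=n)           # ... and so is Y; X has NO direct effect on Y

# Zero-order slope of Y on X: positive, but spurious
b_zero = np.polyfit(x, y, 1)[0]

# Control T via multiple regression: the X coefficient collapses toward 0
design = np.column_stack([np.ones(n), x, t])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
b_partial = coefs[1]

print(round(b_zero, 2), round(b_partial, 2))
```

The same mechanics with T intervening rather than antecedent would produce the "interpretation" outcome; the arithmetic of controlling is identical, only the causal reading differs.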
Term
| What is specification error? Is it likely to occur in a true experimental design? Why or why not? |
|
Definition
| A specification error occurs when one or more important variables are left out of a model, which may produce misleading results. It is not likely to occur in a true experimental design since all extraneous variables are controlled through randomization and control procedures during the experiment. |
|
|
Term
| Explain the difference between the regression coefficient in bivariate regression and the partial-regression coefficients in multiple regression. Why are they called "slopes"? |
|
Definition
| In bivariate regression, the regression coefficient (or slope) indicates how much the dependent variable increases (or decreases) for every unit change in the independent variable. Partial-regression coefficients estimate how much the dependent variable changes (increases or decreases) for every unit change in each independent variable when all other variables in the equation are held constant (controlled). Regression coefficients are called “slopes” because their magnitude determines the steepness of a graphed regression line. |
|
|
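The bivariate/partial distinction can be seen numerically (an illustrative sketch with made-up coefficients, not from the original card; assumes NumPy). When predictors are correlated, the bivariate slope of Y on X1 absorbs part of X2's effect, while the partial slopes recover the true values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # predictors are correlated
y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)  # true slopes: 2 and 3

# Bivariate slope of y on x1 alone absorbs part of x2's effect (about 2 + 3*0.5)
b_bivariate = np.polyfit(x1, y, 1)[0]

# Partial slopes, holding the other predictor constant, recover 2 and 3
design = np.column_stack([np.ones(n), x1, x2])
_, b1, b2 = np.linalg.lstsq(design, y, rcond=None)[0]

print(round(b_bivariate, 1), round(b1, 1), round(b2, 1))
```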
Term
| Which of the following variables are collinear: (a) respondent's education, (b) annual family income, (c) respondent's age, (d) respondent's income, (e) weekly family income, (f) respondent's year of birth, (g) family size? Why? |
|
Definition
| The following pairs of variables are collinear (i.e., have a perfect linear association): (b) annual and (e) weekly family income; respondent’s (c) age and (f) year of birth. In each case, the value of one variable can be calculated perfectly from the value of the other: Annual Family Income = 52 × Weekly Family Income, and Age = Present Year − Year of Birth. |
|
|
Term
| What is meant by high multicollinearity? Why is it a problem in multiple regression? |
|
Definition
| High multicollinearity arises when combinations of two or more independent variables are highly correlated (linearly) with each other. It is a problem because estimates of the coefficients will be very unstable, varying greatly from sample to sample, making it difficult to distinguish significant from nonsignificant independent variables. |
|
|
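The instability under multicollinearity can be shown by simulation (an illustrative sketch, not from the original card; assumes NumPy, and the model y = x1 + x2 + e with a predictor correlation of 0.99 is invented). The same regression is refit on repeated samples, and the sample-to-sample spread of the x1 coefficient is compared for uncorrelated versus nearly collinear predictors.

```python
import numpy as np

rng = np.random.default_rng(3)

def coef_spread(rho, n=200, reps=300):
    """SD of the fitted x1 coefficient across repeated samples,
    with corr(x1, x2) = rho and true model y = x1 + x2 + e."""
    b1s = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = x1 + x2 + rng.normal(size=n)
        design = np.column_stack([np.ones(n), x1, x2])
        b1s.append(np.linalg.lstsq(design, y, rcond=None)[0][1])
    return np.std(b1s)

low, high = coef_spread(0.0), coef_spread(0.99)
print(round(high / low, 1))   # estimates are far noisier under collinearity
```

The wider spread means any single sample's coefficient is an unreliable guide, which is why collinear predictors are hard to distinguish as significant or nonsignificant.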
Term
| What is R²? Why is it called a measure of fit? |
|
Definition
| R² indicates approximately the proportion of the variance in the dependent variable (the spread of observations around the mean) “explained” by the independent variables. Thus, it measures the goodness of fit between the multiple regression model and the data. |
|
|
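R² can be computed directly from its definition (an illustrative sketch with invented data, not from the original card; assumes NumPy). The data are built so that roughly 90% of the variance in y comes from x, and 1 − SS_res/SS_tot recovers that share.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

x = rng.normal(size=n)
y = 3.0 * x + rng.normal(size=n)     # signal variance 9, noise variance 1

slope, intercept = np.polyfit(x, y, 1)
pred = slope * x + intercept

ss_res = np.sum((y - pred) ** 2)      # unexplained spread around the fitted line
ss_tot = np.sum((y - y.mean()) ** 2)  # total spread around the mean
r2 = 1 - ss_res / ss_tot              # by construction, about 9 / (9 + 1) = 0.9

print(round(r2, 2))
```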
Term
| Interpret the following standardized regression coefficients: (a) +0.40, (b) -1.5, (c) 0.0, and (d) -0.02. |
|
Definition
| (a) The dependent variable increases by 0.40 standard deviation units for every standard deviation increase in the independent variable; (b) for every increase of one standard deviation in the independent variable, the dependent variable decreases by 1.5 standard deviation units; (c) no linear relationship; (d) the dependent variable decreases by 0.02 standard deviation units for every standard deviation increase in the independent variable. |
|
|
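A standardized coefficient is just the raw slope rescaled into standard-deviation units (an illustrative sketch with invented data, not from the original card; assumes NumPy): multiply the unstandardized slope by the ratio of the predictor's SD to the outcome's SD, or equivalently regress the z-scores directly.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

x = rng.normal(scale=10.0, size=n)    # predictor measured in large raw units
y = 0.04 * x + rng.normal(size=n)     # raw slope looks small (0.04)

b = np.polyfit(x, y, 1)[0]            # unstandardized slope
beta = b * x.std() / y.std()          # standardized: SD change in y per SD of x

# Same result from regressing z-scores directly
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
beta_z = np.polyfit(zx, zy, 1)[0]

print(round(beta, 2), round(beta_z, 2))
```

The raw slope of 0.04 and the much larger standardized value describe the same relationship; standardization simply removes the arbitrary measurement units.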
Term
| Why is it so important for scientists to be completely honest and accurate in conducting and reporting their research? |
|
Definition
| Scientific progress is based on mutual trust among investigators who build their ideas and research on the work of one another; therefore, dishonesty and inaccuracy in reporting and conducting research undermine science itself. |
|
|
Term
| In what ways can research participants in social research be harmed? |
|
Definition
| It is conceivable that research participants may be harmed physically, personally (e.g., by being humiliated or embarrassed), psychologically (e.g., by losing their self-esteem), and socially (e.g., by losing their trust in others). |
|
|
Term
| Is it ever considered ethical to use procedures that might expose research participants to physical or mental discomfort, harm, or danger? Explain. |
|
Definition
| Yes. However, such research is rarely considered justifiable, and only then when the research has great potential benefit. |
|
|
Term
| What are the limitations of a cost-benefit analysis of proposed research? |
|
Definition
| Costs and benefits may be impossible to measure and predict, and the benefits may not accrue to individual research participants but to the investigator, science, or society in general, which makes it difficult to justify exposing participants to potential harm. |
|
|
Term
| What safeguards do social scientists use to protect research participants from harm? |
|
Definition
| (1) informing subjects about reasonable or foreseeable risks; (2) screening out subjects who are most susceptible to risk; and (3) assessing the impact of potentially harmful procedures after an investigation, in order to counteract possible negative effects. |
|
|
Term
| What are the basic ingredients of informed consent? How did Stanley Milgram violate this principle in his research on obedience to authority? |
|
Definition
| Subjects’ informed consent is obtained by making it clear to them that their participation is voluntary, and by providing them with enough information about the research so that they can make an informed decision about whether to participate. Milgram violated this principle by not telling subjects that they would be placed in a potentially harmful, highly stressful situation. |
|
|
Term
| Which research approaches present the most serious problems from the standpoint of informed consent? |
|
Definition
| Field experiments and covert participant observation present the greatest problems because they simply do not allow investigators to acquire subjects’ informed consent. |
|
|
Term
| Why do researchers use deception? What are the arguments against its use in social research? |
|
Definition
| Researchers use deception because they believe that without it subjects would not behave naturally and research results would be meaningless. Opponents of deception believe that it (1) violates informed consent, (2) damages the credibility of science and trust in authority, and (3) does not accomplish the scientific objectives that it is purported to accomplish. |
|
|
Term
| What is the most basic safeguard against the potentially harmful effects of deception? Is it effective? Explain |
|
Definition
| The basic safeguard against the potentially harmful effects of deception is debriefing—explaining the true purpose of the study and nature of the deception at the conclusion of the study. Research indicates that debriefing is effective if done properly; that is, debriefed subjects report positive experiences from their participation in deception experiments. |
|
|
Term
| When is social research likely to invade people's privacy? |
|
Definition
| Privacy could be invaded by using hidden devices to observe behavior in private settings, such as homes or personal offices, to which the researcher would not ordinarily have access, and by using a false cover to obtain private information that subjects would not ordinarily reveal. |
|
|
Term
| How is research participants' right to privacy typically secured in (a) surveys and (b) field research? |
|
Definition
| The right to privacy typically is secured in (a) surveys by promising anonymity, which is provided by having respondents fill out unmarked questionnaires, or confidentiality, which is provided by removing identifiers from the data, not disclosing individual identities in research reports, and not divulging individual information without the respondent’s permission. This right is protected in (b) field research by altering reports of field studies so as to prevent recognition, and by asking the research participants themselves if the material is objectionable. |
|
|
Term
| What are institutional review boards (IRBs)? What part do they play in evaluating the ethics of research? |
|
Definition
| IRBs are committees at research institutions that are responsible for reviewing research proposals involving the use of human and animal subjects. Such boards determine if the investigator has thoroughly considered the potential for harm and has provided adequate safeguards for the protection of subjects’ rights. |
|
|
Term
| What is meant by value-free sociology? Identify the major challenges to this position. |
|
Definition
| According to a strict “value-free” doctrine, science is nonmoral; scientists should strive to eliminate personal values from research, and should not be concerned with the ends to which their findings may be put. This position is challenged by the fact that values inevitably influence the research process, and by the understanding that value-neutrality in effect places scientists in the service of others’ values, namely, those who choose to use scientific findings. |
|
|
Term
| Explain Howard Becker's position that social scientists should declare "whose side they are on." What purposes does this declaration serve? |
|
Definition
| Becker advocates that researchers carefully consider and declare “whose side we are on” as a way of handling potential personal and political biases and of identifying the limitations of one’s research. This clarifies where the researcher’s sympathies lie as well as the particular perspective from which a group, organization, or institution has been studied. |
|
|
Term
| What obligations do social scientists have regarding the use of the knowledge they generate? |
|
Definition
| As individuals, scientists should (1) consider how their research findings might be used and avoid conducting research that is clearly intended to exploit certain groups, (2) disseminate knowledge as widely as possible, and (3) actively promote beneficial uses and fight against misuses when potential applications are clear. |
|
|