Term
| Schedule of Reinforcement |
|
Definition
| In relation to responses, it is the arrangement of the environment in terms of discriminative stimuli and behavioral consequences. |
|
|
Term
| Steady-State Performance |
|
Definition
| Schedule-controlled behavior that is stable and does not change over time. |
|
|
Term
| Mechner Notation |
|
Definition
| A notation system that describes the independent variables that produce operant behavior. A set of symbols for programming schedules of reinforcement arranged in the laboratory. |
|
|
Term
| Continuous Reinforcement (CRF) |
|
Definition
| When each response produces reinforcement. |
|
|
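
A minimal Python sketch (an illustration, not from the source): continuous reinforcement is equivalent to a fixed-ratio schedule with a requirement of one response, so the number of reinforcers delivered simply equals the number of responses emitted. The function name and counts are assumed for the example.

def reinforcers_under_crf(response_count):
    # Under CRF every response produces reinforcement (the FR 1 case),
    # so reinforcers delivered equal responses emitted.
    return response_count

print(reinforcers_under_crf(25))  # 25 lever presses -> 25 reinforcers
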
Term
|
Definition
| After a period of reinforcement, the increase in behavioral variability or topography during extinction. |
|
|
Term
| Ratio Schedules |
|
Definition
| A response-based schedule of reinforcement that is set to deliver reinforcement following a prescribed number of responses. It specifies the number of responses for each reinforcer. |
|
|
Term
| Interval Schedules |
|
Definition
| Schedules of reinforcement based on the passage of time and one response after that time has elapsed. |
|
|
Term
| Fixed Ratio (FR) |
|
Definition
| A response-based schedule of reinforcement that delivers reinforcement after a fixed number of responses have been made. |
|
|
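
As a hedged sketch of the fixed-ratio contingency (Python, with illustrative names and values assumed), an FR schedule can be modeled as a counter that delivers reinforcement each time the prescribed number of responses accumulates and then resets; FR 1 is the continuous-reinforcement case above.

class FixedRatio:
    """FR n: reinforcement follows every nth response."""
    def __init__(self, n):
        self.n = n          # ratio requirement (responses per reinforcer)
        self.count = 0      # responses since the last reinforcer

    def response(self):
        """Record one response; return True if it produces reinforcement."""
        self.count += 1
        if self.count >= self.n:
            self.count = 0  # the counter resets after each reinforcer
            return True
        return False

fr10 = FixedRatio(10)
outcomes = [fr10.response() for _ in range(30)]
print(outcomes.count(True))  # 30 responses on FR 10 -> 3 reinforcers
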
Term
| Break and Run |
|
Definition
| A pattern of response, seen on a cumulative record, that occasionally develops on fixed-interval schedules. There is a long postreinforcement pause (PRP) followed by a brief burst of responses that result in reinforcement. |
|
|
Term
| postreinforcement pause (PRP) |
|
Definition
| The pause in responding that occurs after reinforcement on some intermittent schedules (e.g., FR, FI). |
|
|
Term
| Preratio Pause |
|
Definition
| The number of responses (ratio size) required and the magnitude of the reinforcer have both been shown to influence the PRP. Calling this pause a "post"-reinforcement event accurately locates the pause, but it is the ratio size that actually controls it. Hence, many researchers refer to the PRP as this. |
|
|
Term
| Variable Ratio (VR) |
|
Definition
| A response-based schedule of reinforcement in which the number of responses required for reinforcement changes after each reinforcer. The average number of responses is used to index the schedule. |
|
|
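
A similar sketch, under assumptions, for the variable-ratio arrangement: the requirement is redrawn after every reinforcer from a list of ratios, and the schedule is indexed by the mean of that list (here VR 10). The particular ratio list is an illustrative choice, not a standard one.

import random

class VariableRatio:
    """VR schedule: the response requirement changes after each reinforcer;
    the schedule is named by the mean requirement (e.g., VR 10)."""
    def __init__(self, ratios):
        self.ratios = list(ratios)           # e.g., [2, 5, 10, 15, 18] -> VR 10
        self.requirement = random.choice(self.ratios)
        self.count = 0

    def response(self):
        self.count += 1
        if self.count >= self.requirement:
            # reinforce, then draw a new requirement for the next ratio
            self.count = 0
            self.requirement = random.choice(self.ratios)
            return True
        return False

vr10 = VariableRatio([2, 5, 10, 15, 18])     # mean of the list is 10
reinforcers = sum(vr10.response() for _ in range(1000))
print("reinforcers earned:", reinforcers)    # roughly 1000 / 10 on average
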
Term
| Fixed Interval (FI) |
|
Definition
| A schedule of reinforcement in which an operant is reinforced after a fixed amount of time has passed. |
|
|
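
A minimal sketch of the fixed-interval contingency (Python, with simulated clock times as an assumption): responses made before the interval elapses have no programmed consequence, and the first response after it elapses is reinforced and restarts the interval.

class FixedInterval:
    """FI t: the first response after t seconds have elapsed is reinforced."""
    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.interval_start = 0.0   # time the current interval began

    def response(self, now_s):
        """Record a response at time `now_s`; return True if reinforced."""
        if now_s - self.interval_start >= self.interval_s:
            self.interval_start = now_s  # the next interval starts at reinforcement
            return True
        return False   # the interval has not timed out; response has no effect

fi60 = FixedInterval(60)
# Responses at 10 s, 30 s, and 59 s go unreinforced; the response at 61 s is reinforced.
print([fi60.response(t) for t in (10, 30, 59, 61)])  # [False, False, False, True]
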
Term
| Scalloping |
|
Definition
| The characteristic pattern of response seen on a cumulative record produced by a fixed-interval (FI) schedule. There is a pause after reinforcement, then a few probe responses, and finally an increasingly accelerated rate of response to the moment of reinforcement. |
|
|
Term
| Generality of Schedule Effects |
|
Definition
| Implies that the effects of contingencies of reinforcement extend over species, reinforcement, and behavior. |
|
|
Term
| Variable Interval (VI) |
|
Definition
| A schedule of reinforcement in which one response is reinforced after a variable amount of time has passed. |
|
|
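
The variable-interval case, sketched under the same assumptions: the length of each interval is drawn from a list whose mean gives the schedule its name (here VI 30 s), and the first response after the current interval elapses is reinforced.

import random

class VariableInterval:
    """VI schedule: one response is reinforced after a variable amount of
    time has passed; the schedule is named by the mean interval."""
    def __init__(self, intervals_s):
        self.intervals_s = list(intervals_s)   # mean of 30 s -> VI 30 s
        self.interval_start = 0.0
        self.current = random.choice(self.intervals_s)

    def response(self, now_s):
        if now_s - self.interval_start >= self.current:
            # reinforce the first response after timeout, then start a new interval
            self.interval_start = now_s
            self.current = random.choice(self.intervals_s)
            return True
        return False

vi30 = VariableInterval([5, 15, 30, 45, 55])    # mean of the list is 30 s
print(vi30.response(2.0), vi30.response(70.0))  # early response fails; a later one succeeds
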
Term
| Limited Hold |
|
Definition
| A contingency where the reinforcer is available for a set time after an interval schedule has timed out. Adding this to a VI schedule increases the rate of responding by reinforcing short IRTs. |
|
|
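
One way to picture the limited hold (an illustrative Python check, not the source's procedure): once the interval has timed out, the reinforcer can be collected only within a short window; a response inside the window is reinforced, while a late response forfeits that reinforcer, which is how quick responding (short IRTs) comes to be favored.

def reinforced_with_limited_hold(setup_time_s, response_time_s, hold_s):
    """True if a response is reinforced under a limited hold: the reinforcer
    set up at `setup_time_s` (when the interval timed out) can be collected
    only if the response occurs within `hold_s` seconds of setup."""
    return setup_time_s <= response_time_s <= setup_time_s + hold_s

# Interval times out at t = 30 s with a 3-s limited hold:
print(reinforced_with_limited_hold(30.0, 31.0, 3.0))  # True  (inside the hold)
print(reinforced_with_limited_hold(30.0, 40.0, 3.0))  # False (the hold lapsed)
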
Term
| Progressive Ratio (PR) Schedule |
|
Definition
| A schedule in which the number of responses (ratio) increases (or decreases) after reinforcement. |
|
|
Term
| Breakpoint |
|
Definition
| The highest ratio value completed on a progressive ratio (PR) schedule of reinforcement. |
|
|
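
A hedged sketch tying the last two cards together: on a progressive-ratio schedule the requirement grows after each reinforcer (a fixed step size is assumed here), and the breakpoint is the highest ratio completed before responding stops. The give-up rule below is a simplifying assumption for illustration.

def progressive_ratio_breakpoint(step, max_responses_willing):
    """Simulate a PR schedule whose requirement starts at `step` and grows by
    `step` after each reinforcer (e.g., 5, 10, 15, ...). The hypothetical
    responder quits once a single ratio would cost more than
    `max_responses_willing` responses; the last completed ratio is the breakpoint."""
    requirement = step
    breakpoint_value = 0
    while requirement <= max_responses_willing:
        breakpoint_value = requirement       # this ratio is completed
        requirement += step                  # the ratio increases after reinforcement
    return breakpoint_value

# A responder willing to emit at most 42 responses for one reinforcer
# completes ratios 5, 10, ..., 40 on a PR 5 schedule; the breakpoint is 40.
print(progressive_ratio_breakpoint(step=5, max_responses_willing=42))  # 40
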
Term
| Reinforcement Efficacy (or effectiveness) |
|
Definition
| Most of the applied research on progressive ratio (PR) schedules uses the giving-up point or breakpoint as a way of measuring this. This is especially true of drugs because the breakpoint for a drug indicates how much operant behavior the drug will sustain at a given dose. If the breakpoints for two drugs are different, we can say that the drug with the higher breakpoint has this. |
|
|
Term
| Transition-State Performance |
|
Definition
| The instability of behavior generated by a change in contingencies of reinforcement. |
|
|
Term
| Ratio Strain |
|
Definition
| A disruption of responding that occurs when a ratio schedule is increased rapidly. |
|
|
Term
| Interreinforcement Interval (IRI) |
|
Definition
| The time between any two reinforcers. |
|
|
Term
| Molecular Accounts of Schedule Performance |
|
Definition
| Focus on small moment-to-moment relationships between behavior and its consequences. |
|
|
Term
| Molar Accounts of Schedule Performance |
|
Definition
| Concerned with large-scale factors that regulate responding over a long period of time. |
|
|
Term
| Interresponse Time (IRT) |
|
Definition
| The time between any two responses. |
|
|
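
The timing measures defined in these cards can all be read off an event log of response and reinforcer times. The sketch below uses a made-up log to compute interresponse times (a molecular, moment-to-moment measure), interreinforcement intervals, postreinforcement pauses, and one molar measure, the session-wide mean IRI.

# Hypothetical event log (seconds into the session).
response_times   = [2.0, 3.0, 3.5, 9.0, 10.0, 11.0, 16.5, 17.0]
reinforcer_times = [3.5, 11.0, 17.0]   # delivered at the moment of a reinforced response

# Interresponse times (IRT): time between any two successive responses.
irts = [b - a for a, b in zip(response_times, response_times[1:])]

# Interreinforcement intervals (IRI): time between any two successive reinforcers.
iris = [b - a for a, b in zip(reinforcer_times, reinforcer_times[1:])]

# Postreinforcement pause (PRP): time from a reinforcer to the next response.
prps = [min(t for t in response_times if t > r) - r
        for r in reinforcer_times if any(t > r for t in response_times)]

print("IRTs:", irts)                                # [1.0, 0.5, 5.5, 1.0, 1.0, 5.5, 0.5]
print("IRIs:", iris)                                # [7.5, 6.0]
print("PRPs:", prps)                                # [5.5, 5.5]
print("molar: mean IRI =", sum(iris) / len(iris))   # 6.75
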
Term
| Behavioral Dynamics |
|
Definition
| An area of research that attempts to analyze schedule effects in terms of a few basic processes. It requires a high level of mathematical sophistication; both linear and nonlinear calculus are used to model the behavioral impact of schedules of reinforcement. If performance on schedules can be reduced to a small number of fundamental principles, then reasonable interpretations may be made about any particular arrangement of the environment. It should also be possible to predict behavior more precisely from knowledge of the operating contingencies and the axioms that govern reinforcement schedules. |
|
|
Term
| Run of Responses |
|
Definition
| A fast burst of responding. |
|
|
Term
| Positive Relationship |
|
Definition
| There is a _____ between increasing PRP length and the time between reinforcements (IRI). |
|
|
Term
| Interreinforcement Interval (IRI) |
|
Definition
| Research shows that the postreinforcement pause (PRP) is a function of the ____. |
|
|
Term
| Increases (becomes longer) |
|
Definition
| As the time between reinforcements becomes longer, the PRP _____. |
|
|
Term
|
Definition
| The interresponse time may be treated as a _____ of operant behavior; for example, the IRTs on a variable-interval (VI) schedule of reinforcement are much longer than those on a variable-ratio (VR) schedule. |
|
|
Term
| Interval Schedules |
|
Definition
| ____ differentially reinforce long IRTs. |
|
|
Term
| Ratio Schedules |
|
Definition
| _____ differentially reinforce short IRTs. |
|
|
Term
| Molar Measures (examples) |
|
Definition
| The average time between reinforcers for an entire session; an overall reduction in shock frequency. |
|
|
Term
| Molecular Measures (examples) |
|
Definition
| The time between any two responses; the response-shock (R-S) interval. |
|
|