Term
| concurrent-chain schedule of reinforcement |
|
Definition
| a complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link; once a choice has been made, the rejected alternatives become unavailable until the start of the next trial; allows for the study of choice with commitment |
|
|
Term
| concurrent schedule of reinforcement |
|
Definition
| a complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously; allows for the measurement of direct choice between simple schedule alternatives |
|
|
Term
| continuous reinforcement (CRF) |
|
Definition
| a schedule of reinforcement in which every occurrence of the instrumental response produces the reinforcer |
|
|
Term
| cumulative record |
|
Definition
| graphical representation of how a response is repeated over time, with the passage of time represented by the horizontal distance (x axis) and the total or cumulative number of responses that have occurred up to a particular point in time represented by the vertical distance (y axis) |
|
|
Term
| delay discounting |
|
Definition
| decrease in the value of a reinforcer as a function of how long one has to wait to obtain it |
|
|
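A common quantitative form for delay discounting, not spelled out in the definition above, is Mazur's hyperbolic equation; here V is the discounted value of a reinforcer of amount A delivered after a delay D, and k is a discounting-rate parameter (the notation is illustrative):

\[ V = \frac{A}{1 + kD} \]

Larger values of k correspond to steeper discounting, that is, to reinforcer value falling off more quickly with delay.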
Term
| fixed-interval scallop |
|
Definition
| the gradually increasing rate of responding that occurs between successive reinforcements on a fixed-interval schedule |
|
|
Term
| fixed-interval schedule (FI) |
|
Definition
| a reinforcement schedule in which the reinforcer is delivered for the first response that occurs after a fixed amount of time following the last reinforcer or the beginning of the trial |
|
|
Term
| fixed-ratio schedule (FR) |
|
Definition
| a reinforcement schedule in which a fixed number of responses must occur in order for the next response to be reinforced |
|
|
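A minimal sketch of the fixed-ratio and fixed-interval rules defined above, written as a small Python simulation; the class names, method names, and parameters are illustrative rather than standard, and a real procedure would be driven by an experimental apparatus rather than a simulated clock.

class FixedRatio:
    """FR n: reinforce every n-th response."""
    def __init__(self, n):
        self.n = n
        self.count = 0               # responses since the last reinforcer

    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True              # reinforcer delivered
        return False


class FixedInterval:
    """FI t: reinforce the first response made at least t seconds after the last reinforcer."""
    def __init__(self, t):
        self.t = t
        self.last_reinforcer = 0.0   # simulated clock time of the last reinforcer

    def respond(self, now):
        if now - self.last_reinforcer >= self.t:
            self.last_reinforcer = now
            return True
        return False


# Example: on FR 5 only the 5th and 10th responses are reinforced;
# on FI 10 the first response after each 10-second interval is reinforced.
fr5 = FixedRatio(5)
print([fr5.respond() for _ in range(10)])
fi10 = FixedInterval(10.0)
print([fi10.respond(now) for now in (3.0, 9.0, 11.0, 12.0, 25.0)])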
Term
| intermittent reinforcement |
|
Definition
| a schedule of reinforcement in which only some of the occurrences of the instrumental response are reinforced; the instrumental response is reinforced occasionally or intermittently; also called partial reinforcement |
|
|
Term
| inter-response time (IRT) |
|
Definition
| the interval between one response and the next; can be differentially reinforced in the same fashion as other aspects of behavior, such as response force or response variability |
|
|
Term
| interval schedule |
|
Definition
| a reinforcement schedule in which a certain amount of time is required to set up the reinforcer; a response is reinforced only if it occurs after the reinforcer has been set up |
|
|
Term
| limited hold |
|
Definition
| a restriction on how long a reinforcer remains available; in order for a response to be reinforced, it must occur before the end of the limited-hold period |
|
|
Term
| matching law |
|
Definition
| a rule of instrumental behavior, proposed by R.J. Herrnstein, which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response alternative |
|
|
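In symbols, the matching law for two response alternatives can be written as below, where B_1 and B_2 are the rates of responding on the two alternatives and r_1 and r_2 are the rates of reinforcement obtained on them (the notation is illustrative):

\[ \frac{B_1}{B_1 + B_2} = \frac{r_1}{r_1 + r_2} \]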
Term
| melioration |
|
Definition
| a mechanism for achieving matching by responding so as to improve the local rates of reinforcement for response alternatives |
|
|
Term
| partial reinforcement |
|
Definition
| same as intermittent reinforcement |
|
|
Term
| post-reinforcement pause |
|
Definition
| a pause in responding that typically occurs after the delivery of the reinforcer on FR and FI schedules of reinforcement |
|
|
Term
| ratio run |
|
Definition
| the high and invariant rate of responding observed after the post-reinforcement pause on FR schedules; ends when the ratio requirement has been completed and the participant is reinforced |
|
|
Term
| ratio schedule |
|
Definition
| a schedule in which reinforcement depends only on the number of responses the participant performs, irrespective of when those responses occur |
|
|
Term
| ratio strain |
|
Definition
| disruption of responding that occurs on ratio schedules when the response requirement is increased too rapidly |
|
|
Term
| schedule of reinforcement |
|
Definition
| a program, or rule, that determines how and when the occurrence of a response will be followed by the delivery of the reinforcer |
|
|
Term
| undermatching |
|
Definition
| less sensitivity to the relative rate of reinforcement than predicted by the matching law |
|
|
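Undermatching is usually quantified with the generalized matching law, an extension of Herrnstein's equation rather than something stated in this glossary, written in ratio form with a bias parameter b and a sensitivity exponent s; perfect matching corresponds to s = 1, and undermatching to s < 1:

\[ \frac{B_1}{B_2} = b\left(\frac{r_1}{r_2}\right)^{s} \]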
Term
| variable-interval schedule (VI) |
|
Definition
| a reinforcement schedule in which reinforcement is provided for the first response that occurs after a variable amount of time from the last reinforcer or the start of the trial |
|
|
Term
| variable-ratio schedule (VR) |
|
Definition
| a reinforcement schedule in which the number of responses necessary to produce reinforcement varies from trial to trial; value of the schedule refers to the average number of responses required for reinforcement |
|
|
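A companion sketch to the one after the fixed-ratio entry, showing the variable-ratio and variable-interval rules in the same illustrative Python style; here each requirement is drawn at random around the schedule's nominal value, which is one simple way (among several) to realize a "variable" schedule.

import random

class VariableRatio:
    """VR n: reinforce after a varying number of responses that averages n."""
    def __init__(self, n, seed=0):
        self.n = n
        self.rng = random.Random(seed)
        self.count = 0
        self.requirement = self._draw()

    def _draw(self):
        return self.rng.randint(1, 2 * self.n - 1)    # mean requirement is n

    def respond(self):
        self.count += 1
        if self.count >= self.requirement:
            self.count = 0
            self.requirement = self._draw()
            return True
        return False


class VariableInterval:
    """VI t: reinforce the first response after a varying interval that averages t seconds."""
    def __init__(self, t, seed=0):
        self.t = t
        self.rng = random.Random(seed)
        self.last_reinforcer = 0.0
        self.interval = self._draw()

    def _draw(self):
        return self.rng.uniform(0.0, 2.0 * self.t)    # mean interval is t

    def respond(self, now):
        if now - self.last_reinforcer >= self.interval:
            self.last_reinforcer = now
            self.interval = self._draw()
            return True
        return False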