Applications of Operant-Related Learning Principles to the Real World
A reinforcing stimulus is roughly the same as a reward. If a person does something and receives a "reinforcer," he will probably do the same thing again the next chance he gets. --Richard Malott (1972)
A basic premise of behavioral modification is that most events are predictable consequences of learning. Nothing ever "just happens." Driving fast past a school is shaped when no children or police are present and persists when children are present and at risk. Behavioral modifiers interpret a tragic incident in which a child leaving a school is hit by an automobile as the result of a learning trap in which the rewards of risky driving shape and maintain risky behavior, not as an accident. Conditioning, not choice, determines behavior, but people can modify almost any behavior (Fuller, 1991).
Today, applications based upon instrumental conditioning principles are likely to be found in settings ranging from asthma relief in clinics to zoo programs for increasing animals' activity levels. The principles have been used with individuals, with small groups, with whole wards, and with natural populations of people. They have been applied by professionals, by paraprofessionals, and by individuals working on themselves and/or their significant others.
This chapter will begin by presenting some general rules for applying the offshoot of Skinnerian conditioning theory often called contingency management. Contingency management is the art and science of controlling the rules (contingencies) relating behavior to the consequences of that behavior. Next will be illustrations of principles and techniques as applied to different types of subjects and environments. Finally I will present applications based upon mixed cognitive and instrumental learning principles of the type pioneered by Bandura. These interventions resemble the cognitive-behavioral programs derived from the Pavlovian paradigm (Chapter Nine). However, this chapter covers applications used by therapists identified with reinforcement learning and reported in journals that have traditionally reported contingency management applications.
General Principles of Contingency Management
There are three stages to all successful programs of contingency management. We will explore the various events and procedures applied to each stage. The three stages are: (1) Specification; (2) Observation; and (3) Consequation. Malott (1974) calls this the "SOC it to 'em!" model of contingency management.
One of the most powerful tools developed by the Skinnerians is the experimental analysis of behavior. This approach requires identifying cues, responses and reinforcers responsible for behavior and is essential for designing intervention programs. Before designing a program for contingency management, one must first specify: (1) the target behaviors; (2) the reinforcers one will use; and (3) the applicable contingencies.
Specifying target behaviors
The first step in the process of contingency management is to decide what "target" behaviors should be changed. Programs traditionally tried to change visible behaviors because inner changes, such as "an increased belief in personal effectiveness," could not be observed directly and therefore observations could not be verified by different observers. More recent techniques often attempt to modify cognitions and attitudes but assume there will be some overt, measurable behavior correlated with successful internal changes. Whether the target is overt or covert behavior, observer reliability is enhanced if the modifier defines the dependent variable (the target behavior) so that any trained observer following a specified set of procedures can identify when it occurs. A definition in terms of procedures is an operational definition. Specifying the operationally defined, desired outcome of a program of contingency management is called stating the behavioral objectives. An example of a behavioral objective might be reducing a child's rate of talking back to the teacher from over 20 times per day to under two times per day. "Talking back" should be operationally defined so that multiple observers would agree both on each occurrence of the behavior and on when the program's goal had been reached.
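The talking-back objective above can be expressed as a simple recording-and-checking routine. The daily counts and the three-consecutive-day criterion in this sketch are illustrative assumptions, not part of any published program; the point is only that an operational definition turns a goal into something any trained observer can verify:

```python
from statistics import mean

# Hypothetical daily counts of an operationally defined target behavior
# ("talking back"), as a trained observer might record them.
baseline = [23, 21, 26, 24, 22]      # pre-intervention days
treatment = [14, 9, 6, 3, 1, 1, 0]   # days under contingency management

GOAL = 2  # behavioral objective: fewer than 2 occurrences per day

def goal_met(daily_counts, goal=GOAL, consecutive_days=3):
    """Objective is met when the last `consecutive_days` counts all fall below goal."""
    recent = daily_counts[-consecutive_days:]
    return len(recent) == consecutive_days and all(c < goal for c in recent)

print(f"baseline mean: {mean(baseline):.1f} per day")
print(f"treatment mean: {mean(treatment):.1f} per day")
print("objective reached:", goal_met(treatment))
```

Because the definition is procedural, two observers applying it to the same record must reach the same verdict.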
There are two other important criteria for selecting behaviors to modify. First, they should be within the target person's capabilities. Unrealistic selection leads to frustration and extinction on the part of both modifiers and target persons. Malott (1974) suggests a "think small" rule. By setting small objectives, the probability of accomplishing larger goals is increased. Once one goal is reached, it is always possible to set up a second, more demanding goal. Demanding too much too early strains your schedule control. Second, you must select an appropriate beginning point for change. Normally, the most efficient strategy is to "begin where the behavior of the target person is at."
Does the "think small" rule apply to all types of behavior change programs? In the treatment of alcohol abuse, the proper goal to be specified is vigorously disputed. Most therapists identified with the Alcoholics Anonymous "alcoholism-as-disease" tradition believe abstinence is the only feasible and legitimate goal. Other therapists, often identified with the operant tradition, follow the "think small" rule and specify a goal of controlled drinking. Opinions are so polarized that most members of either camp could not objectively evaluate the work of the other. Positions concerning cigarette addiction are less extreme, allowing a test of the "abstinence" versus "control" theories.
Glasgow, Morrey, and Lichtenstein (1989) compared the results of specifying either abstinence or controlled smoking as the treatment goal for heavy smokers. They found little difference between their groups. Both treatments produced about equal frequencies of subjects who quit completely and subjects who significantly reduced consumption. In this type of application, following the "think small" rule did not seem to lead to bigger accomplishments. But, contrary to abstinence theory, it did not yield poorer results either.
Behaviors to be modified should be defined in objective ways or operationally defined. This increases observer and treatment reliability. With the possible exception of addictions, goals of producing small changes in behavior are more likely to be successful. Usually the starting point for change should be the subject's current behavior.
Specification of reinforcers
The selection of reinforcers must also be carefully specified. Good reinforcers must be available to the modifier, must be reasonable in cost both in terms of money and of modifier's time, and must be easily deliverable. Reinforcers must be immediate to work best. Small, immediate reinforcers, such as those from playing computer games, can be more powerful than large, distant reinforcers, such as getting a term paper done on time. Mook (1987) cites evidence that frequent and immediate reinforcement has an impact of its own. Large, distant reinforcers may not work if they have to compete with small, immediate reinforcers.
Effective reinforcers are those that motivate positive changes in the target behaviors. A reinforcer for the modifier may not be a reinforcer for the target person. Praise from a teacher to a child who dislikes that teacher may be aversive. One good way to determine what to use as reinforcers is to list all the reinforcers available and then have the target person specify which of these he or she would work for. Another option is the Premack-Timberlake approach of observing baseline behavior and using access to frequent responses to reinforce infrequent behavior.
Applying the Premack Principle
The Premack Principle recognizes the fact that people have differential preferences. --Peter J. Makin and David J. Hoyle (1993, p. 17).
A popular misconception about reinforcers is that they must be physical, such as candy or money. Many persons object to such reinforcers as bribery, or as promoting either tooth decay or moral decay. One answer to these criticisms is to use behaviors as reinforcers. In Chapter Ten we explored the Premack principle, which says access to high-frequency behaviors can serve as a reinforcer. This principle is widely applied partly because the procedure for identifying and specifying good reinforcers by observing baseline frequencies is clear, relatively simple, and non-disruptive (Timberlake and Farmer-Dougan, 1991). Following this principle, you would tell the target person, "If you do this low-frequency behavior, which you do not seem to like to do, then (the contingency) I will let you do a high-frequency behavior that you want to do a lot."
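The baseline-observation procedure can be sketched in a few lines. The behaviors and counts below are hypothetical, and splitting the ranked list at its midpoint is a simplifying assumption for illustration; in practice the modifier would judge which behaviors are targets and which are rewards. The sketch shows only the core logic: rank behaviors by baseline frequency and let access to the frequent ones pay for the infrequent ones.

```python
# Hypothetical baseline observation counts (behavior -> times observed).
baseline = {
    "running and screaming": 40,
    "playing records": 25,
    "sitting quietly": 3,
    "doing arithmetic": 1,
}

def premack_pairs(counts):
    """Pair each low-frequency target behavior with a higher-frequency
    behavior whose access can serve as its reinforcer."""
    ranked = sorted(counts, key=counts.get, reverse=True)
    midpoint = len(ranked) // 2
    reinforcers, targets = ranked[:midpoint], ranked[midpoint:]
    # Contingency: "if you do <target>, then you may do <reinforcer>."
    return list(zip(targets, reinforcers))

for target, reinforcer in premack_pairs(baseline):
    print(f"If you do '{target}', then you may '{reinforcer}'.")
```

Note that nothing here is a physical reward; the reinforcers are simply the activities the person already prefers.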
Of course, the would-be modifier must be able to control when the target person will be able to emit the reinforcer responses. Modifiers can control popular behaviors that require some physical supplies or accessories. Thus, not fighting might be reinforced by allowing the target person to play records that the teacher can lock up until the behavioral objectives are reached. The teacher's reading of an exciting story cannot occur without the teacher's cooperation. This author once heard a talk describing access to a Freudian psychoanalyst as the reward for a mental patient's emitting desired behaviors in a hospital behavioral modification program.
Homme and colleagues (1963) used the Premack principle in controlling "acting out" (screaming, running around the room) behaviors in three three-year-old subjects. After baseline measures had been made of "acting out" behaviors (high probability) and of sitting quietly in a chair and looking at the blackboard (low probability), the researchers began the procedure of waiting for the few instances when the children did the low-probability behavior. The researchers then rang a bell and gave the instruction: "Run and scream." The subjects leaped to their feet and ran around the room screaming until the stop signal was given. Within a few days, rates of "acting out" decreased and "sitting quietly and watching the blackboard" greatly increased. At a later stage, the children earned tokens for emitting low-probability behaviors exchangeable for the opportunity to engage in high-probability activities.
Todd (1972) used a covert type of Premack principle to promote the development of self-esteem in a depressed woman. The woman was instructed to write down all the positive things about herself that she could think of, a very low-probability behavior. With considerable prompting from the therapist, she was able to find six such statements. These were printed on a card that was trimmed to fit inside the cellophane wrapper of a cigarette package. Since smoking was a high-probability behavior for her, she was instructed to read one or two of the items and to think positively about herself before taking a cigarette from a pack. Thus the cigarettes were both cue and reward. Within two weeks she reported feeling better than she had in years, and she had added 21 positive items to her list. This was an early form of cognitive-behavioral therapy for depression.
The Premack Principle has been incorporated into the branch of the behavioral modification movement called Organizational Behavior Modification. It is seen as an ideal tool for managers. If productivity is determined by resources, abilities, and motivation, then the Premack principle is a practical, inexpensive way to increase motivation. Both the magnitude and the direction of motivation are important. Watching what people do readily and what they avoid doing identifies both the reinforcers and the behaviors most in need of being reinforced. Makin and Hoyle (1993) made access to preferred activities the reinforcer for engineers who finished formerly avoided tasks. Because engineers have a great deal of autonomy in allocating their efforts, the authors used feedback and social reinforcers to encourage these professionals to develop and use individualized Premack systems. This treatment raised the performance of the targeted section of a company from worst to best.
The Premack Principle says that at any given time an organism is more motivated to do some behaviors than other behaviors. This allows access to the more desired behaviors to be used as powerful reinforcers to increase rates of the less desired behaviors. The reinforcing and reinforced behaviors can be either overt or covert.
Technical aspects of different kinds of reinforcers
Tokens or other types of conditioned reinforcers are usually less expensive and more convenient than primary reinforcers. When primary reinforcers are used, it is often difficult to deliver them immediately after the desired behavior occurs. Conditioned (secondary) reinforcers may be used to bridge the gap in such cases. These may be a signal to the target person that the response was correct, they may be marks on a blackboard or in a notebook, or they may be physical "tokens" such as poker chips. A person trying to self-shape behavior may use clicks of a golf counter as secondary reinforcers for reaching behavioral goals, such as not smoking for 15 minutes.
For severely retarded or disturbed target persons, primary reinforcers or physical tokens are usually necessary, and they should be large enough to discourage being eaten. Large size may also help reduce theft problems in ward settings. Tokens must be guarded as carefully as primary reinforcers, and, in the case of blackboard or notebook marks, the target persons must be prevented from cheating by erasing or adding marks. Secondary reinforcers help to prevent satiation resulting from too many primary reinforcers. Intermittent reinforcement can cause a partial reinforcement effect helpful in obtaining and maintaining superior resistance to extinction and satiation. It is often necessary to begin with continuous reinforcement using primary reinforcers, then gradually shape acceptance of secondary reinforcers, and finally progress to intermittent reinforcement. Failure to gradually shape the response to a leaner schedule often results in straining the schedule, or unplanned extinction.
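The progression from continuous to intermittent reinforcement can be illustrated with a toy simulation. The ratios and step sizes below are arbitrary assumptions chosen for the example, not values from the literature; the sketch only contrasts gradual thinning with starting directly on the lean final schedule:

```python
def thin_schedule(responses, start_ratio=1, final_ratio=5, step_every=20):
    """Deliver one token per `ratio` responses, raising the ratio one step
    at a time. Jumping straight to final_ratio risks 'straining the schedule'."""
    tokens, count, ratio = 0, 0, start_ratio
    for i in range(responses):
        count += 1
        if count >= ratio:       # ratio requirement met: deliver a token
            tokens += 1
            count = 0
        if (i + 1) % step_every == 0 and ratio < final_ratio:
            ratio += 1           # lean the schedule gradually
    return tokens

# Gradual thinning front-loads reinforcement while the behavior is still weak.
print("gradual thinning:", thin_schedule(100), "tokens")
print("lean from the start:", thin_schedule(100, start_ratio=5), "tokens")
```

The gradually thinned schedule delivers far more reinforcement during the early responses, which is when the new behavior most needs support, yet both end on the same lean intermittent schedule.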
Animal trainers call secondary reinforcers "bridging gaps." They use them when faced with the problem of maintaining a long sequence of behavior that takes the animal out of range of the trainer and primary rewards (such as fish for porpoises). The reinforcer is either a visual hand or body signal or an auditory cue that has become associated with the primary reinforcer through a classical conditioning procedure. The signal is given whenever the animal successfully completes a portion of the sequence and "bridges" the gap to the next primary reinforcer.
Bringing behavior under the control of an external reinforcer may have undesirable side effects. Harlow (1953) found that rhesus monkeys given food rewards for solving mechanical puzzles subsequently solved fewer such puzzles when the food was discontinued than monkeys given the puzzles without food rewards. Thus some primary reinforcers may interfere with the operations of less powerful reinforcers (such as a love of learning), by a behavioral contrast effect. A second undesirable side-effect of reinforcers is the potential of the reinforcers themselves to be harmful. For example, the author found that while juvenile delinquents improved their room cleaning performance for cigarettes, this led to a sharp rise in their unhealthy smoking habits.
Symbolic secondary reinforcers work best with brighter adult humans. For younger or less bright humans, or for animals, either primary reinforcers or immediate, physical conditioned reinforcers work better. Cues that tell a subject that a response was correct and to expect the primary reinforcer are called bridging gaps. Once behavior is externally reinforced, the behavior may cease to be self-reinforcing.
The final goal in the specification stage of a contingency management program is specification of the relationship, or contingency, between the desired behavioral outcomes and the reinforcers. Should the modifier use the same reinforcers throughout? Or is it better to progress from primary reinforcers and powerful, generalized secondary reinforcers (money) first to a token system, and finally to social reinforcers (praise)? Is the goal to maintain the desired behaviors indefinitely through a schedule of reinforcement, or is the plan to gradually terminate external reinforcers?
The rule (the contingency) specifying the relationship of behavior to reinforcement should be very clearly communicated to the target persons. Even with sound-minded college students, this author has found it necessary to be highly explicit about the nature and amount of behavior that will earn a specified number of credit points. Seligman (1973) showed that failure to perceive the relationship between behaviors and outcomes may result in learned helplessness and lack of motivation.
Changes in the rules may make the contingencies unclear. Therefore, shaping to leaner schedules, transferring control from concrete reinforcers to social reinforcers, and other attempts to reduce the response cost of a contingency management system to the modifier should be implemented carefully. These changes should be planned in the beginning or done after the original behavioral objectives have been met. The modifier should be alert for signs of resentment or breakdowns in schedule control, and be prepared to institute changes very gradually to prevent "strain."
The success of attempts to change the originally specified contingencies is related to the characteristics of the target persons. Brighter and/or better-adjusted persons have more intrinsic reward systems and more alternate sources of reward, and may tolerate changes better. For "normal" populations, considerations of flexibility and reduction of the efforts required by the managers may be most important. Ward populations and severely retarded persons may require planned contingency management for much of their institutional lives. Contingency management programs, because they deliberately specify contingencies in a simplified and exaggerated way, help such persons see, for the first time, the relationship between their actions and environmental consequences. For such persons, the contingency manager's emphasis should be on immediacy and simplicity.
Contingencies are the rules governing how much behavior will earn a certain amount of reinforcer. Contingencies should normally be clearly communicated and not changed unless the change was planned or the original goal has been reached. Failure to follow this rule can result in losing control of the target behaviors and in learned helplessness.
Should everything be sweetness and light? Is positive reinforcement enough?
As we saw in Chapter Ten, Skinner and Terrace both believed that the errors resulting from normal "trial-and-error" discrimination training were aversive and harmful. Both developed "errorless" learning procedures. Use of errorless procedures was especially advocated for children with learning disabilities because it was felt that exposure to errors was more emotionally harmful for these children (Jones and Eayers, 1992). Thirty years of development of Skinner and Terrace's basic techniques show that while such procedures improve the speed with which learning-disabled children master simple discriminations, this improvement comes at a price. With more complex learning situations, several problems arise. First, fading tends to narrow attention, which makes learning difficult in situations where the correct responses are controlled by multiple cues. Second, it is often difficult to remove prompts without losing stimulus control of the behavior. Finally, behavior learned by errorless methods often fails to generalize to natural environments where reinforcement is not constant and immediate (Jones and Eayers, 1992). This suggests that some exposure to the incidental aversive experience of making errors is necessary for complex, durable, and generalizable learning. Does this imply that planned aversive consequences may sometimes be necessary?
Should contingency management plans include planned aversive consequences?
Should only positive reinforcement be specified, or should plans be made to use multiple contingencies that include aversive consequences? With "normal" subjects, adding a mildly aversive contingency will often neither help nor hurt. Hundert (1976) compared giving tokens, taking tokens away for failure to emit the target behaviors (a mildly aversive token-cost punishment contingency), and combining the two. The goal behaviors were correctly completing arithmetic problems and paying attention to the teacher. All procedures produced similar large gains in the elementary school student subjects. During the second baseline period, attention, but not production of arithmetic problems, declined to first-baseline levels. Arithmetic competency appeared to be self-rewarding. Not paying attention, however, may be intrinsically more reinforcing than paying attention to arithmetic lectures.
If one child gets more reinforcement from the attention of peers than the modifier can deliver for desirable behavior, then not only will that child's undesirable behaviors continue, but contingencies applied to other children will be disrupted. In such cases, aversive controls must be added to the modification plan to suppress undesirable responses. The mildest technique is to ignore the undesirable behavior and hope it will extinguish. If this fails, try a token-cost plan or remove the subject from the situation where positive reinforcers may be earned (time-out).
If these mild procedures fail, then more extreme punishment contingencies may become necessary. Physical punishment is rarely advisable because it may be difficult to use such punishers at a strength that will be effective without producing severe side effects in the person punished and without exposing the modifier to potential legal and ethical sanctions. Severe physical punishment is usually forbidden in most institutional settings including schools. Formerly, the rules were less stringent and Ivar Lovaas legally used electric shock to successfully treat institutionalized autistic children.
While there no longer seems to be widespread objections to behavioral interventions when these involve positive reinforcement procedures, a great deal of controversy has surrounded the use of aversive or restrictive procedures designed to decrease maladaptive behaviors.--O. Ivar Lovaas, 1987, p. 311.
Lovaas (Chance and Lovaas, 1974) reported dramatic success in treating "untreatable" autistic children by using severe physical punishment. He (in Lovaas, Schaeffer, and Simmons, 1965) listed three ways in which aversive events can be used as tools in therapy. The first approach used punishment procedures similar to the aversion therapy approaches (Chapter Nine). The second used the negative reinforcement paradigm, in which shock is removed or withheld, contingent upon specified behaviors (Chapter Four). The third conditions new SDs to pain reduction (negative reinforcement), with the goal of having these SDs become conditioned, positive reinforcers. These results replicate Dunham's finding that his subjects increased alternative behaviors associated with shock offset. According to Lovaas, the effects of this third kind of aversive procedure would be an increase in positive alternative behaviors, as a paradoxical by-product of pain. Let us now examine Lovaas's work.
Childhood autism is characterized by self-stimulatory behaviors, which may be self-destructive, and a general lack of social responsiveness. Autistic children do not respond well to traditional psychotherapy and shock procedures were used as a last resort. In the first experiment (Lovaas et al., 1965), two five-year-old children were placed barefoot on a shock grid floor and escape-avoidance procedures were initiated. One of the experimenters stretched out his arms and said, "Come here." Any movement towards the experimenter terminated the shock for that trial. If the child did not move, the second experimenter pushed him in the direction of the first experimenter and terminated the shock. This escape phase was followed by an avoidance procedure in which shock was withheld if the child approached the experimenter within five seconds after the "come here" command.
Shock was also used to punish self-stimulation and/or tantrum behaviors. The verbal command "No!" was associated with shock and acquired limited effectiveness as a conditioned aversive reinforcer. It was found that not only did the children learn to respond to the experimenters to avoid or escape shock, but the verbal command "come here" became effective in environments equipped with shock equipment. As predicted by Lovaas, alternative behaviors did appear. Surprisingly, these included the subjects' seeking the experimenters' company, showing affection, and increasing their alertness to the environment. Lovaas and colleagues (1965) commented that during successful avoidance trials the children "appeared happy." There was also limited generalization of the adult-seeking and affectionate behavior to situations outside the shock-avoidance training environment.
Lovaas tested the hypothesis that the adults who had been associated with safety from shock following avoidance trials and who had hugged and fondled the children when the children approached would become conditioned positive reinforcers. The children were taught to operate a candy dispenser, which gave them both candy and a view of the experimenter's face. During extinction trials the photograph of the face of the experimenter (associated with shock reduction) was more effective in slowing down the rate of extinction than photographs of other faces. In addition, ward nurses reported that following the shock avoidance training, the children began, for the first time, to come to them for comfort when they were hurt in play. On the negative side, Lovaas and colleagues (1965) noted that the positive shock-produced changes in behavior often showed limited generalization to new environments and people and extinguished rapidly. The aversive techniques helped manage autistic children but did not "cure" autism.
Lovaas (1974) was deeply concerned with the ethical and practical issues surrounding the use of extreme aversive techniques such as shock. First, he recommended using shock only for dealing with extreme behavior such as self-mutilation (some autistic children have literally chewed off fingers) and total lack of responsiveness to others. In these cases, shock inhibited destructive behavior that formerly had been reinforced by adults who had let the child have his or her own way to avoid temper tantrums or self-mutilation. Second, he recommended that therapists using aversive techniques have a deep love for children, be patient enough to provide large doses of affection for positive behavior, and be willing to gradually shape desired behaviors that can compete with the destructive behaviors. Third, he suggested training the parents of autistic children in operant control procedures, including aversive techniques. The goal was for these parents to overcome their own feelings of inefficacy and frustration until they could successfully manage the behavior of their autistic children. This involved showing the parents how acknowledging tantrums and self-mutilation may have reinforced these behaviors and coaching the parents to "load the child up with love" for positive behavior. He taught the parents that suppressing bizarre behavior (such as self-mutilation) through aversive control provided the opportunity to begin building up appropriate behaviors (Lovaas, 1974). Fourth, he advocated such treatments only if they were the least restrictive effective treatment; that is, they were to be used only after all nonaversive treatments had failed and the only alternative was physical or chemical restraint. Finally, he recommended that extreme aversive consequences be used only by doctoral-level professionals or other highly trained persons working under supervision (Lovaas and Favell, 1987).
Ethical codes require a balancing test of the benefits and costs of research or therapy techniques. Lovaas notes that the costs of NOT effectively treating severe aggressive or self-mutilating behaviors are high. He concluded that the high benefits produced by his treatments justified the discomfort suffered by the subjects (Lovaas and Favell, 1987). Urged on by children's rights advocates, the California legislature came to the opposite conclusion. New laws made it impossible for Lovaas to continue his shock treatments at the University of California at Los Angeles. Currently California law forbids such treatments on the theory that they constitute abuse. Many states have adopted stringent procedures to review and monitor use of aversive control in therapeutic settings, and others have banned some treatments altogether (Repp & Singh, 1990).
Using punishment and avoidance procedures with autistic children, Lovaas was able to stop self-mutilation and increase their attention to people. He found that positive alternative social behaviors increased as withdrawal was suppressed. He also found that experimenters associated with shock reduction became conditioned positive reinforcers for the children.
Time out as humane punishment
Did legal restrictions on aversive treatment mean the end of the use of punishment in contingency management? No, because other punishment procedures were available and continued to be used in institutional settings where positive reinforcement alone was inadequate. "Punishment" refers to two types of procedures. One uses contingent delivery of aversive consequences, and the second uses contingent nondelivery of positive reinforcers to suppress inappropriate behaviors. Today most type-one procedures limit the aversive event to a verbal reprimand: a rebuke, nagging, or scolding. Forcing the punished person to perform high levels of an incompatible alternative response in an over-correction procedure is still common (Lovaas and Favell, 1987). A pupil may be forced to write one thousand times: "I will not hit other children."
In the second type of punishment existing positive reinforcers are removed (a response cost contingency) or access to positive reinforcers is restricted. This last procedure is called time-out and it is used very widely. Time-out can involve varying degrees of restriction. In the most restrictive form the child or other target of the procedure is completely secluded from others. In milder forms a partial exclusion from groups may suffice. Even response cost and time-out procedures have risks.
The suppression of undesired behavior will not occur until attempts at avoidance or escape cease. The would-be punisher must be alert to the risks that desirable behaviors might also be suppressed and that punishment may trigger emotional problems. Time-out may be counterproductive with escape-oriented children, who may welcome removal from a stressful classroom (Abramowitz and O'Leary, 1991). However, for most children, denying access to desired activities or other positive reinforcers is both effective and fairly safe. Bartlett and Swenson (1975), for example, used late access to recess as a punisher to control disruptive behaviors in groups of problem sixth graders.
Are any punitive contingencies necessary? Abramowitz and O'Leary (1991) review many studies showing that with normal pupils in schools, praising good behavior and ignoring bad behavior worked well. With children with learning disabilities, especially with attention-deficit-disorder children, punitive contingencies were absolutely necessary to reduce disruptive behavior. With these children explanations of why they were being reprimanded or given time-outs had no impact on the effectiveness of the treatments.
Even with college students, some types of aversive control may be needed. Deadlines for handing in assignments, if enforced, punish "dawdling and delaying." DuNann and Fernald (1976) reported using a "doomsday" contingency: If minimal assignments were not completed by a specified date, the slow-starting students could be forced to drop the contingency-management-based course. In practice, time pressures often require the use of some punishment contingencies. These should be used mainly to control a few highly disruptive persons (Lovaas and Favell, 1987).
High functioning target persons may be exposed to multiple concurrent positive reinforcement contingencies as long as each contingency helps in reaching the stated behavioral objectives. Punitive contingencies (verbal reprimands, reinforcement cost contingencies, over-correction, and time-out from opportunities to earn positive reinforcement) may be essential with lower functioning children.
You must observe carefully in order to deliver consequences at the moment when they will have effects related to your behavioral objectives. To do this, you must have clearly specified what types of responses you will be observing. Your operational definition may specify your observation procedures. Most successful programs working with severely retarded or disturbed individuals have stressed extremely close observation of each individual.
Observing individuals in groups
While close observation of every desired response by every subject is the way to reach your behavioral objectives most quickly, such an approach is impractical in group environments. In classrooms, you are usually dealing with several target persons simultaneously and it is impossible to watch everyone at all times. Several partial solutions have been offered for this problem. One solution is to use time sampling. With this approach, behavior is observed on either a fixed- or variable-interval schedule. When the planned observing time approaches, you look around the room and quickly note what everyone is doing. If you remembered to "think small" and do not have too many or too complicated rules for determining desirable behaviors, you should be able to record key behaviors for each person. It is very helpful to have a simple prepared form to allow you to quickly code the appropriate categories of behaviors for each target person for each time interval (see Figure 10-2 for an example of a behavior recording form).
Figure 10-2: A sample data sheet for recording time-sampled information from a group. D = desirable, U = undesirable, N = neutral.
If an observer has access to a timing device or can glance at a wall clock or wristwatch at regular intervals, the fixed interval "observation window" technique works well. Another procedure is to record the intervals in advance on a cassette tape machine. This creates a timing tape that can be played in the classroom. Timing tapes should only be used in situations in which the modifier is able to hear the taped cues ("thirty seconds, one minute, one and one half minutes...") telling when to observe without allowing the sound of those recorded cues to disrupt the behaviors of the target persons.
If the observer tends to have difficulty remembering to check the time or if the target persons learn about the fixed interval and do most of their good behavior in the time just before each recording (the fixed interval scallop), then a variable interval schedule may work better. Using this technique the observation times are varied around a pre-selected mean time value such as 10 minutes. A good quasi-variable interval schedule is generated when a busy modifier attempts to check behaviors at regular intervals but is usually a bit early or late. Moderate inconsistency generates a desirable degree of randomness.
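The fixed- and variable-interval "observation window" schedules described above can be sketched in a few lines of Python. This is only an illustration; the function name and the jitter parameter are my own, not from the text:

```python
import random

def observation_times(session_minutes, mean_interval, variable=True, jitter=0.5):
    """Generate the times (in minutes) at which to open an observation window.

    With variable=True, each interval is drawn uniformly around the mean
    (e.g., a 10-minute mean varies between 5 and 15 minutes), which avoids
    the fixed-interval scallop described in the text.
    """
    times, t = [], 0.0
    while True:
        if variable:
            t += random.uniform(mean_interval * (1 - jitter),
                                mean_interval * (1 + jitter))
        else:
            t += mean_interval  # fixed-interval schedule
        if t > session_minutes:
            break
        times.append(round(t, 1))
    return times

# A 60-minute class period sampled on a variable-interval 10-minute schedule:
print(observation_times(60, 10))
```

At each generated time the observer would scan the room and code each target person on the prepared form of Figure 10-2.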
The time interval between "observation windows" may be important. Short intervals give more accurate information in less time but are more disruptive for the modifier. For a researcher focused on behavioral observation the response cost of relatively short intervals will not be too aversive. For a person required to teach or perform other functions, however, longer intervals are more practical. Short intervals are more tolerable in short-term studies or modification programs. Longer intervals between observations are better for programs expected to continue a long time, both because they require less time (reducing the probability of having the observer's schedule strained) and because sufficient data can be gathered over a long time period from widely spaced observations.
The focal individual observation technique
Another technique for observing behavior of persons in groups is the focal individual technique. In this technique, the names of the persons to be observed are randomized. Printing each name on a slip of paper, putting the papers in a container, and drawing out names one-at-a-time is a simple randomization procedure. Each person is observed individually in the order in which his or her name was drawn. It is best to redraw the names for each day's observations to control for time-related effects. If this is too much work, either generate a number of lists at one time or rotate the names through the list so that the person observed first on one day is observed last on the next. The focal individual technique is preferable to the time-sampling technique when the behavioral objectives for the different target persons are different or where you are simultaneously sampling many behaviors and recording each of them.
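The drawing and rotating of names can be sketched as follows; the function names and pupil names are illustrative only:

```python
import random

def focal_order(names):
    """Draw each name once, in random order (the slips-in-a-container procedure)."""
    order = names[:]        # copy so the master list is untouched
    random.shuffle(order)
    return order

def rotated_order(previous):
    """Lower-effort alternative: the person observed first on one day
    is observed last on the next (rotate the list by one)."""
    return previous[1:] + previous[:1]

pupils = ["Ann", "Ben", "Carla", "Dev"]
day1 = focal_order(pupils)      # fresh random draw for day one
day2 = rotated_order(day1)      # rotation for day two
```

Redrawing (or rotating) each day serves the purpose the text describes: controlling for time-related effects, so that no pupil is always observed at the same point in the session.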
The second stage of planning a contingency management program is specifying observation procedures. You can record the frequency of target behaviors or obtain an estimate of both duration and frequency by using time sampling techniques. If you are observing multiple behaviors the focal individual techniques allows you to sample one person's responses at a time.
Charting and other monitoring methods
A part of good observation technique is to keep a record of the observations. A record of behavior over multiple time periods is called a chart, and filling out such a chart is called charting. Charting is both an observation technique and a reinforcement technique. However, combining charting with some additional reinforcement is usually more effective than charting alone. The author once provided charts for students to record their weekly accumulations of points and to mark cumulative point totals by weeks. Unfortunately, many students did not fill out their charts. The author found that rewarding students who filled out their charts with extra points increased both the percentage of students filling in the charts correctly and the percentage doing more and better quality work (Swenson, 1975).
One advantage of charting is that it avoids the problem of the modifier's moods coloring evaluations of how well the modification program may be working. A depressed modifier may incorrectly judge a program a failure. Subjective impressions are more often inaccurate than not! Accurate observation and recording the results are both essential ingredients in the process of objective evaluation of progress towards meeting behavioral objectives. Objective data may also help the teacher or counselor to demonstrate that they are meeting their behavioral objectives when they meet with administrators.
Charting can also be done by target persons themselves, and will often have desirable motivational effects. This is called self-monitoring or self-charting. For many adults, keeping a record of their progress towards a goal, such as losing weight, may by itself allow them to emit those behaviors (such as avoiding third helpings of food) compatible with meeting their behavioral objectives. In Chapter Nine I noted that many complex clinical programs included training in such self-charting.
Self-monitoring is used in many clinical and research applications. Frost and Sher (1989) were interested in checking behaviors, such as checking whether your wallet is still in your pocket. While there were considerable good data on clinical subjects indicating that these persons suffer from various cognitive and behavioral deficits, they wanted to look at the phenomena in a nonclinical student population. They found it difficult to create a laboratory analog of the kind of stress that triggers compulsive checking, and scores on their first efforts did not correlate with other measures of compulsivity. They finally hit upon using classroom examinations and having their subjects put a check mark (what else?) next to each multiple choice item each time the student rechecked it. This self-monitoring procedure yielded objective data that correlated nicely with other measures.
Because it is difficult to maintain the desired degree of accuracy in observations over long time periods, mechanical methods of observing behavior have been developed. One ingenious device monitors classroom noise levels and can be set to show the minutes remaining until recess or a special treat. When noise levels go over a specified level, the clock stops running. Portable event recorders have been developed that permit the operator to indicate the occurrence and real time duration of any selected behavior by pushing a button that causes a pen to make an ink line on a strip of moving paper.
The accumulating totals of counters like those used for scoring golf also represent mechanical data collection of a simple sort. These may be used by individuals to record their own urges to smoke or eat or do other undesirable habitual behaviors. They have the advantage of requiring less effort (push the button to record the urge) than writing down the urges. In biofeedback, the observation of the desired physiological change is always made by a machine that then tells the person producing the change how he or she is doing. By giving feedback on the formerly unobservable biological state, these machines make it possible for the person to modify physiological responses. As we noted for charting, knowledge (feedback) about being successful may be a powerful reinforcer.
Both accurate consequation of desired responses and evaluation of a program's effectiveness require accurate observation. Having either an observer or the subject (self-charting) record observations in graphic form is called charting. Charting is feedback and can change behavior, although charting plus other reinforcers usually works better. Mechanical devices can increase the accuracy of observation.
Evaluations, experimental errors and the multiple baseline procedure
Most of our discussion has been on observation intended to improve the accurate delivery of consequences. But what about the issue of program evaluation? Factors other than the validity of a technique help determine the failure or success of a program. Well-done charts of frequencies of target behaviors are useful, but they suffer from one serious flaw if they only chart behaviors beginning with a program's implementation and continuing until its (ideally successful) termination. This flaw is related to time-correlated changes in behaviors, which may be confused with the effects of a program. Just getting the sort of attention provided by a focused program may reinforce desired behaviors. Many token economy programs are presented as a type of game (citizen of the week and the like). Many children find participation in these token economy programs interesting and respond with increased performance in school merely because of the novelty of the procedures. This is called the Hawthorne effect.
The would-be contingency manager must be aware that the teacher's enthusiasm, or lack of it, may determine the success of a given project. The teacher's expectancies of success (the Rosenthal effect) may confound objective evaluation of the effectiveness of a contingency management procedure. Behaviors change for reasons not readily apparent; if such changes occur during a program and are positive, who could blame the modifier for taking credit for the changes? Having experimentally naive observers collect data, applying placebo treatments (such as the noncontingent tutoring used in a previous example), and using multiple baseline designs all help to reduce errors in evaluating the success of programs.
One way to evaluate the precise effects of behavioral interventions is to use the multiple baseline procedure. The first stage of this procedure is to begin by collecting data on the frequencies with which the target behaviors occur before the program of contingency management begins. This allows the subjects to get used to the observation procedures, which may by themselves change behaviors. It is a time to practice observing without the necessity of also having to provide reinforcers and to sharpen original specifications. This period of time, called baseline one, serves as a control for the effects of the contingency management program. Then the modifier begins the intervention ("on-contingency") time period, and continues to record data for some specified time period or until the outcome of the intervention is obvious. Finally the modifier halts the consequation while continuing to record behavioral frequencies (baseline two). The second baseline allows evaluation of long-term effects independent of the administration of consequences. See Table 11-1. Multiple baseline procedures may alternate baseline periods with "on-contingency" time periods several times to separate the effects of reinforcement control and long-term gains from time-dependent effects.
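The logic of the baseline-intervention-baseline comparison can be sketched as a simple computation over session-by-session counts. The function name, phase labels, and counts below are hypothetical, not data from the text:

```python
def phase_means(frequencies, phases):
    """Mean frequency of the target behavior in each labeled phase.

    `frequencies` holds one count per observation session; `phases` labels
    each session 'baseline1', 'treatment', or 'baseline2'.
    """
    totals = {}
    for freq, phase in zip(frequencies, phases):
        totals.setdefault(phase, []).append(freq)
    return {phase: sum(v) / len(v) for phase, v in totals.items()}

# Hypothetical talking-out counts across the three phases:
counts = [12, 14, 13, 4, 3, 5, 4, 6, 7]
labels = ["baseline1"] * 3 + ["treatment"] * 4 + ["baseline2"] * 2
means = phase_means(counts, labels)
# A treatment mean well below baseline one, with baseline two staying low,
# suggests durable gains rather than a time-correlated change.
```

Alternating additional baseline and "on-contingency" phases, as the text notes, sharpens this comparison further.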
However observations are gathered, the next step is using them properly by actually delivering the consequences for the specified behaviors by the target persons according to the rules of the specified contingencies. This last stage of the SOC model is called consequation.
Observation is required for precise consequation of target behaviors and for program evaluation. Multiple baseline measures of behavior before, during, and after consequation aid in program evaluation because they allow you to estimate time related effects and compare them with treatment effects.
When working with individuals with short attention spans, each observation of a desired behavior is followed with either a primary reinforcer or a tangible secondary reinforcer. With subjects who have intermediate attention spans, such as normal fifth-grade school children, putting a plus mark by a child's name may be enough of a secondary reinforcer. With older normal children and normal adults trying to change behaviors, the daily record may serve as a sufficient secondary reinforcer to maintain motivation.
It is a rule in conditioning that the closer in time the reinforcing event follows the desired response, the more effective the conditioning. In shaping operant behavior, the modifier should immediately respond to any movements that are in the direction of the final target behavior. This immediacy requirement increases in importance as the intelligence level or stability level of the target persons decreases. Remember that secondary reinforcers can "bridge the gap" between primary reinforcers, as long as these secondary reinforcers are themselves delivered immediately after the desired response.
Don't have a sham plan: Consistency and control
Whatever the type of reinforcers employed, however, and whatever the types of contingencies linking the target behaviors to those reinforcers, one rule applies in all cases: Be Consistent! This means accurate observation and resistance to pressures to grant unearned "bootleg" reinforcers or to hold back from delivering deserved aversive consequences. Failing to observe and reinforce desirable behaviors causes them to extinguish. Letting severely disruptive behaviors pass without punishment after you have established a punishment contingency reinforces the target person for "testing the limits."
The best techniques for delivering reinforcers are those that ensure that the reinforcers are both motivating for the subjects and easily controlled by the modifiers. One such technique is "incidental teaching" which depends upon the subject approaching a particular stimulus such as a book. Continued access and behavioral interaction with the stimulus is contingent upon some target behavior. This technique is used to help autistic and other learning handicapped children learn descriptive words and prepositions by requiring them to use these words to continue to have access to the book or other desired object (McGee, Krantz, and McClannahan, 1985). The modifier is easily able to create a response deficit by withholding the object thereby making opportunity to interact with the object a reinforcer (Timberlake and Farmer-Dougan, 1991). An example would be a parent releasing a desired toy to a child only when the child says "please."
If the target person has contracted to turn in a paper by a specified due date to prevent its being marked down (an active or Sidman avoidance contingency), the penalty should never be waived. To do so would reinforce procrastination and shape the student to develop skill in generating ever more creative excuses instead of learning to do papers on time. No responsible college professor or other teacher would wish to do such a horrible thing to a student! The responsible and humane course of action in the long run is, as Malott phrases it, "NOTHING IN MODERATION!" Modifiers must communicate inflexible contingencies clearly. If the target persons do not know what behaviors are expected, then both the modifiers and the subjects may extinguish. This does not mean turning a deaf ear to complaints about contingencies but rather noting them and acting on them either after reaching the behavioral objectives or when it is clear the program has failed. At that time the complaints may be useful in designing better future programs. These subsequent plans might use different schedules of reinforcement to achieve more precise behavioral control.
Delivery of reinforcers or aversive consequences is called consequation. Consistent delivery of reinforcers just after the emission of target behaviors is an essential step in effective contingency management. The modifiers must be able to control the reinforcers.
We have already discussed beginning with continuous reinforcement and progressing to intermittent reinforcement. Done well, this will usually make behavioral changes more durable (because of the PRE) and reduce the time and resources required from the modifier. Explaining the contingencies of intermittent schedules to some subjects may be difficult, but the benefits of using these schedules will often outweigh that difficulty. Which intermittent schedule of reinforcement should be used? Generally interval schedules are more convenient for the modifier, while ratio schedules generate higher rates of behavior and allow more precise control. Variable ratio schedules usually produce the most responses per unit of reinforcement and the highest resistance to extinction. The modifier is not limited to simple schedules of reinforcement, and sometimes a more complex schedule will give better results. One of the important contributions of Skinnerian analysis has been the description of the effects of complex schedules of reinforcement (Chapter Four).
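A variable ratio contingency of the kind just described can be sketched as a small bookkeeping class. The class name, method names, and the uniform way of varying the requirement are illustrative assumptions, not a prescription from the text:

```python
import random

class VariableRatioSchedule:
    """Reinforce after a varying number of responses whose mean is `mean_ratio`.

    A VR-5 schedule, for example, reinforces on average every fifth response,
    which yields the high response rates and resistance to extinction noted
    in the text.
    """
    def __init__(self, mean_ratio):
        self.mean_ratio = mean_ratio
        self._required = self._draw()
        self._count = 0

    def _draw(self):
        # Vary the requirement uniformly around the mean (minimum one response).
        return max(1, random.randint(1, 2 * self.mean_ratio - 1))

    def record_response(self):
        """Return True when this response earns reinforcement."""
        self._count += 1
        if self._count >= self._required:
            self._count = 0
            self._required = self._draw()
            return True
        return False
```

Because the requirement changes unpredictably after each reinforcer, the subject cannot learn to pause after reinforcement, which is one reason VR schedules sustain such steady responding.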
While it is usually considered desirable to eliminate disruptive behaviors, some behaviors are only disruptive when they occur too frequently. An example is talking out in class. Even though frequent talking out may be annoying to teachers, the thought of training silent, sullen students is no more pleasing. Rather, the ideal is low levels of talking-out behavior by all of the students. A schedule designed to achieve this is the DRL (differential reinforcement of low levels) schedule.
Deitz (1976) examined three methods of DRL administration in behaviorally disturbed children. The first method is called the spaced responding DRL method. In this method, only responses separated from each other by interresponse times (IRTs) over a specified criterion are reinforced. The second method is called the full-session DRL method. If the total number of target responses emitted during a given time period falls below a specified number, reinforcement is delivered. This was the method used in the "good behavior game" developed by Barrish, Saunders, and Wolf (1969) and modified by Bartlett and Swenson (1975). The third method is the interval method. If fewer than two talking-out responses occurred during the prescribed interval, reinforcement was delivered when the interval ended. If a second response occurred during the interval, the interval timer was reset and reinforcement was postponed. All three versions of DRL schedules reduced talking to about 15% of baseline rates.
Other forms of differential reinforcement schedules can also be used to reduce levels of inappropriate behavior in classrooms. Abramowitz and O'Leary (1991) reviewed use of DRO (differential reinforcement of any behavior other than the problem behavior), DRI (differential reinforcement of responses incompatible with the problem behavior) and DRA (differential reinforcement of alternative behaviors). All of these schedules can increase desired behaviors or reduce undesired behaviors. As reductive procedures they can be used as alternatives to punishment. They vary on a dimension of specificity. DRO schedules are used when most of the behavior is inappropriate and any other than the undesired behavior is an improvement. DRA schedules are more specific and reward desired behaviors that are alternatives to the inappropriate behavior. DRI schedules reinforce only behavior specifically incompatible with the undesired behavior. For example, work completion is incompatible with daydreaming. Abramowitz and O'Leary (1991) note that the literature includes multiple examples of successful uses of these schedules.
Getting subjects to accept leaner and leaner schedules of reinforcement can be a useful technique in preparing for the day that the modification program ends. The ultimate goal of most programs is training behavior that makes the modifier obsolete.
Differential schedules of reinforcement can be used to selectively reinforce desired behavioral patterns. Differential reinforcement of low levels of behavior can stabilize behaviors such as asking questions at nondisruptive levels. Differential reinforcement of other behaviors allows reducing disrupting behaviors by reinforcing alternatives.
After the intervention: Planning for functional behavior
Some changes in consequation should be planned for at the beginning. To have target persons hold onto their behavioral gains after they have left a program, the modifier must shape functional behavior that can be reinforced by the environment or by the persons themselves. An example would be training a child to read. Once reading is mastered, the content of the material read will reinforce it. A more recent term for self-maintaining desirable behaviors is behavioral trapping. "Behavioral trapping refers to the process by which newly acquired behaviors come under the control of naturally occurring communities of reinforcement." (McConnell, Sisson, Cort, & Strain, 1991, p. 474). Once prosocial behaviors are mastered in children with emotional problems, the more positive reactions of significant others may maintain the new behaviors.
The matter of shaping functional behavior is one that is not always given sufficient attention by behavioral modifiers. Many programs include no provisions for follow-ups. Rightly, the critics of the behavioral modification movement have noted that even though children or ward patients may indeed behave in ways judged as more appropriate by the modifiers, these gains may vanish when the reinforcers vanish. Where continued environmental control is present, such as on a mental ward for long-term patients, such criticisms may not be very important. Even behavioral gains that must be maintained by continuing contingency management programs may make a ward much more reinforcing for both patients and staff.
Planning for the day when the target persons will leave a program, however, is vital in counseling and school settings. The counselor should assign to the client simple "self-help problems" and then reinforce the client for successfully completing them. If the new skills help the client obtain reinforcers from the natural environment, they will usually be maintained, although occasional "boosters" may be needed. With school children, the most successful approach may be gradually withdrawing, or "fading out," reinforcers from the modifiers. The modifier should observe for signs of schedule strain as the schedule of reinforcement becomes leaner and leaner. Disappearance of newly acquired positive behaviors requires backing up and restoring some of the programmed reinforcers.
The final goal of most programs is to shape functional or self-reinforcing behavior. This behavior trapping involves a target behavior coming under the control of natural reinforcers so that it will not extinguish when the experimental reinforcers are withdrawn.
Group social reinforcers
Another criticism of contingency management programs is that they may reinforce undesirable levels of competition. Such programs, however, can also be designed to increase cooperation. Bartlett and Swenson (1975) based fifth graders' reinforcers for low levels of disruptive behaviors on the total disruptive behaviors of all the pupils seated at a particular table. The entire table's consequation was yoked together. The result was that peer pressure was exerted on unruly individuals at each table to perform well at the "good behavior game," a group contingency system first described by Barrish, Saunders, and Wolf (1969). Another feature designed to reduce competition was having absolute criteria for reinforcement. All tables emitting fewer disruptive behaviors than the mean number of disruptive behaviors observed during the baseline-one period were allowed to go to recess early. Those acting out at about baseline levels went to recess on time. Those having a greater than average number of incidents at their table went to recess late. Thus, all tables could "win the game" each time.
Such grouped consequences have several advantages. First, they are easier to administer, since group rather than individual records are required. Second, they make better use of natural peer-based social reinforcers to supplement the effects of reinforcers controlled by the modifier. With relatively "normal" target persons, such group contingencies are almost always more efficient than individual contingencies. One highly disruptive individual, however, may demoralize a group and disrupt an entire program. Such an individual should be exposed to individualized aversive consequences, such as time-out (isolation) for extreme antisocial acts, or be treated as a "group of one."
While group contingencies are usually effective and efficient they may not produce the same effects as individual contingencies. McConnell, Sisson, Cort and Strain (1991) conducted research with a small group of preschool children with behavioral handicaps. They examined the effects of social skills training, group coaching of socially desirable behaviors, and individual coaching of the same behaviors. Each treatment produced a different pattern of responding. Social skill training produced high scores during formal role-play assessment sessions but did not change natural free play behaviors. Individual contingencies increased rates of subjects' rewarded social behaviors during free play but those behaviors were not effectively directed at peers and failed to increase sustained social interactions. Group contingencies resulted in the subjects eliciting and receiving more social overtures from other children. However, when other children approached, the subjects often refused to respond. Grouped contingencies are limited to environments such as schools where similar goals are appropriate for several people simultaneously. Pair or family contingencies may be useful in marriage counseling.
Consequating behavior by groups makes observation and consequation easier. Ideally peers will socially reinforce desirable behaviors of other group members and punish disruptive members to earn a group reward. This reduces the burden on the modifier.
So far, this chapter has presented the basic principles of effective contingency management. Now I will illustrate the wide range of situations in which such principles are applied. As you read these summaries of various recent studies, try to identify the principles applied. This may help you to integrate theory with application, and may suggest how you can eventually design your own contingency management programs.
The Application of Operant Techniques: Programs and Principles
Contingency management for natural groups outside of schools
Proposals for the ideal living arrangements have spanned the history of western civilization from Plato's Republic to Skinner's Walden Two (1948). A major problem that any experimental living arrangement must confront is that of sharing the basic work of the community. Informal accounts suggest that contemporary communes experience a breakdown in the basic housework required by the group. Feallock and Miller, 1976, p. 277.
A dream of any applied science of behavior is to create the basis for a better way of life. Although Skinner's idea of utopia presented in Walden Two (1948) was realized as the Twin Oaks commune (Kinkade, 1973), it was not built on Skinner's experimental plan. Feallock and Miller attempted to remedy this in their work with a coed cooperative house at the University of Kansas. A feature of the labor credit system in Walden Two was that the value of a given job depended upon its popularity. This led to desirable jobs, such as picking flowers, paying a small fraction of what was paid for cleaning toilets. The Kansas house incorporated this feature, with the values of the most and least popular ten percent of the jobs adjusted for popularity.
At the beginning of the project, some of the 30 students living in the house expressed the view that a clean house was intrinsic reward enough. To test this, credits for cleaning were transferred to painting the outside of the house. At the end of 18 days, the residents demanded the resumption of cleaning credits after serious deterioration in the cleanliness of the house. Almost all cleaning jobs were completed prior to the change from a credit system (baseline one). During the change, this decreased. When contingencies were restored for cleaning, the cleaning task completion ratio rebounded. See Table 11-2 for a summary of the results.
House members asked if it was necessary to make credits contingent upon cleaning successfully passing the gaze of student inspectors. "Some members suggested that it would be nicer if there could be trust in the house, so that the members who agreed to do a job would not have to have their work inspected and credits awarded on the basis of that inspection" (Feallock and Miller, 1976, p. 281). A reversed version of the usual reversal design alternated observation-only baseline periods with an intervening consequated period. During the "baseline" periods passing inspection earned credits. During the intervention phase no inspection was required. During baseline one, 96% of all cleaning jobs passed inspection. During the intervention phase, this rate dropped below 60% by the final five days. During baseline two, the rate recovered to 95%.
In the original plan, labor credits were convertible into rent reductions. A third question debated was the necessity of such a backup system. Some students argued that pride of achievement alone should be sufficient. A third experiment used a design with a middle phase in which all house members got a rent reduction irrespective of their work records. Twenty-seven of the 30 members completed fewer cleaning jobs, and the total cleaning jobs passing inspection dropped from 94 percent to 67 percent during the "non-backup" phase.
Certainly a work-sharing plan of this type is less expensive than the paid cleaning and maintenance staffs needed for conventional dormitories. But what about student satisfaction? Survey responses indicated considerably higher satisfaction for most participants. This may have resulted from other procedures built into the system. A requirement for living in the house was passing a quiz after completing a programmed self-instructional handbook on behavioral techniques. Student acceptance of the system was further increased by having all inspectors, contingency managers, and finance managers be house members who had passed "minicourses" in these areas. The first student administrators, in turn, developed 80 self-instructional manuals and trained all subsequent peer managers. As part of student self-governance, the credit values for various tasks could be modified by majority vote, demonstrating viable group self-control.
While Feallock and Miller (1976) showed the utility of behavioral techniques in an environment where the participants determined the contingencies, what of less democratic institutions? Hobbs and Holt (1976) significantly improved the percentages of time spent in appropriate activities by 125 adjudicated delinquent boys, from around 55 percent to over 78 percent, through use of a token system. Tokens were backed up by dances at a girls' training school, cigarettes, toys, candy, soft drinks, access to football and other sport events, and early release. Cost of the experiment was only $7.85 per boy per month. The authors reported that 14 months after ending the experiment, administrative neglect of supervision and coordination of the program, coupled with an insistence on using the system to promote behaviors such as standing straight in line, had eroded program effectiveness. The authors commented on the dilemma raised by providing the powerful tools of behavioral control to "community systems whose program interests may not all be in the best interests of the client" (Hobbs and Holt, 1976, p. 197).
One answer to the ethical issues raised by Hobbs and Holt is to give even imprisoned "offenders" more control over the contingency management process. Seymour and Stokes (1976), working with four girls confined in a maximum-security institution in Australia, were successful in increasing work behaviors and reducing disruptive behaviors for three of the girls. The girls were allowed to score their own work output, although provisions for the detection of cheating were built into the system. This self-recording procedure was successful in spite of the fact that a previous staff-directed token economy had failed. Token cost (response cost) provisions were necessary to reduce competing behaviors. The girls also role-played pointing out improvements in their work to staff, or "cueing" staff. The staff (which was distinct from the experimental team) was not aware of these cues. Both cues and praise from the staff members increased in the later stages of the project. The therapist recorded the cues and delivered tokens for the girls' efforts to bring their improving work to the staff's attention.
The success of the response cost procedure used to prevent cheating in the Seymour and Stokes study parallels its use in other group settings where it is desirable to decrease specified behaviors. Marholin and Gray (1976) were able to sharply reduce cash losses in a small business by instituting a group response-cost contingency. A reversal design was used before the program was permanently instituted. Shortfalls on baseline days were around 4 percent of receipts; during "on-contingency" days, they dropped to less than 1 percent. Total fines to employees were $8.70 per person.
Behavioral techniques can also be used to increase prosocial public-benefiting behaviors such as picking up trash. Hayes, Johnson, and Cone (1975) were able to reduce littering on the grounds of a federal youth correctional facility in spite of a lack of public-spiritedness. Their method was a significant advance over previous methods using reinforcement contingent on the amount of trash turned in. These previous methods exposed the subjects to the temptation of generating new trash to supplement their reinforcers. Hayes and colleagues mention one case in which children living in a public housing project emptied trash cans into their collection sacks and collected the larger pieces of trash while leaving small pieces behind.
To avoid these problems, the experimenters secretly distributed a few marked items of litter in each of the study areas on each on-contingency day. This procedure put payment on a variable ratio schedule, in contrast to the volume/fixed ratio schedules of "amount-based" programs. Because the marked items were coded in a way known only to the experimenters, there was no possibility of the youths picking up only marked items. Three of the areas included in the study were "seeded" with marked items. The youths were told when marking might be done. Baseline data were collected for the three "seeded" areas before they were first marked. The average reduction in litter in the marked areas during the times when they were marked was 71.3 percent. There was an increase in litter in the unmarked area, which may reflect a behavioral contrast effect. Of the youths eligible to participate in this program, 25 percent did so. In addition to special privileges, such as being allowed into the camp coffee house past regular hours, a total of $14.50 was earned by all participants over the 42 days of study. This technique is interesting because of its excellent cost-effectiveness ratio, because of the absence of aversive contingencies, and because it does not require much staff time or interpersonal skills, as in convincing a juvenile delinquent that he wants to collect trash.
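The marked-item contingency can be sketched as a brief simulation (an illustrative toy model; the item counts and the function name are hypothetical assumptions, not figures from Hayes, Johnson, and Cone): because pickers cannot tell which items pay off, reinforcement arrives after a varying number of pickups, approximating a variable ratio schedule.

```python
import random

def pickups_between_payoffs(n_items=100, n_marked=10, seed=1):
    """Simulate collecting litter where only secretly marked items pay.

    Returns the number of pickups between successive payoffs; the
    varying gap sizes are what make this a variable-ratio-like schedule.
    """
    # True = a secretly marked (paying) item, False = ordinary litter.
    items = [True] * n_marked + [False] * (n_items - n_marked)
    random.Random(seed).shuffle(items)  # picker cannot predict positions

    gaps, since_last = [], 0
    for marked in items:
        since_last += 1
        if marked:                      # payoff: record the gap, reset
            gaps.append(since_last)
            since_last = 0
    return gaps

print(pickups_between_payoffs())  # ten gaps of varying length
```

Under amount-based payment every bag pays the same, which invites padding the bags with fresh trash; here only the unpredictable marked items pay, so the only profitable strategy is to pick up everything.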
A variation of this system used in a forest service area (Powers, Osborne, and Anderson, 1973) paid $.25 for full bags of trash but also incorporated a lottery system that paid $20.00 to the person whose lottery ticket, accepted in lieu of cash, had been selected. This method may be superior for areas in which cheating is not a problem.
People living or working in groups often respond to individual contingencies and neglect behaviors that benefit the group. Observing and rewarding behaviors that benefit the group increases these prosocial behaviors. Many such programs need response-cost contingencies as penalties for harmful behavior. Tokens and social approval are helpful, but often a backup system of money improves the results. Using a variable-ratio (VR) component can increase effectiveness.
Contingency management and marriage counseling
Not all institutions are designed to be either punitive or profit making. One such institution is marriage. Israel Goldiamond has been a pioneer in developing self-applied behavior technologies in which the professional modifier serves as coach, consultant, and evaluator. The following case illustrates his approach, applied to participants in a failing marriage.
The couple concerned had been married for almost ten years and had limited themselves to sexual relations about twice each year. This was blamed on the husband by both parties. They were both intelligent professionals, Roman Catholics, and determined to maintain the marriage if only they could get sexual behaviors started before the wife was "driven" into extramarital affairs. It was suggested that the husband try reading Playboy to initiate amorous activity. He fell asleep while reading it. The wife had almost extinguished on "husband-shaping" behavior:
I don't know what reinforcements I have. The characteristic of good reinforcement is that it can be applied immediately and is immediately consumed. I could withhold supper, but that is not a good reinforcer, because I can't turn it off and on. I can't apply [sexual] deprivation because that's my problem. I don't know what to do [Goldiamond, 1965, p. 857].
Part of the problem was that the husband was a rising business executive who also attended evening courses and was either too busy or too tired to make advances towards his wife. He offered to schedule his wife in his appointment book for two evenings a week. In spite of his wife's dubious attitude, charting his "wife attention" appointments was initially effective. After two weeks, however, he began to cancel these appointments and it was necessary to search for an effective backup reinforcer. Both husband and wife took personal grooming very seriously. She visited her beautician weekly and he, his barber. Their clothing was always freshly dry cleaned. When termination of all such affectations was made the contingency for missed appointments, vanity succeeded where all else had failed.
The early behavioral marital therapy just described has been enhanced and developed into structured programs with fixed techniques to be applied during specific weeks of the couple therapy process (Jacobson, Schmaling, Holtzworth-Munroe, Katt, Wood and Follette, 1989). The treatment modules were presented in a fixed order. The early modules included behavior exchange training similar to that just described in Goldiamond's (1965) work. They also included companionship enhancement training designed to increase reciprocal positive reinforcement, receptive listening, and expressive communication training. The later modules included training in conflict resolution and problem-solving skills, sexual enrichment focused on clearer communication and treatment of dysfunctions, and planning to maintain gains through self-therapy and functional behaviors. Jacobson et al. (1989) compared this structured program with a program using the same techniques but in a flexible manner scheduled to best fit each couple's needs. Both groups improved after 20 weeks of treatment, but the couples receiving the flexible version of the therapy showed less deterioration and retained more treatment gains at six-month follow-up.
Although most of the applications presented thus far were designed to modify human behavior, the operant methodology was originally developed through animal research and is thus ideally suited to animal applications.
People in close relationships often come to substitute aversive consequences, such as nagging, for reciprocally pleasing and reinforcing each other. Professionals acting as coaches and teachers guide couples in communication skills and in carrying out self-help programs. Explicit marriage contracts (trading treats) can help. Programs work best when the participants help in planning.
Increasing activity in zoo animals
A pioneer in using behavioral technology to produce practical benefits by modifying animal behaviors is Hal Markowitz, former director of the Oregon Zoological Research Center at the Portland Zoo. Dr. Markowitz's work has implications both for animal husbandry practices in zoos and for theories of motivation.
A major problem in zoos is that animals fed their daily ration of food once a day become bored, inactive, and, if dominant, obese. Thus, low-ranking animals may be malnourished while the high-ranking animals suffer the bad effects of easy living. The inactivity engendered by sloth is often not distinguished from the inactivity resulting from sickness. Inactive animals frustrate the viewing public, who may resort to feeding the animals "junk food." Dr. Markowitz developed devices that allowed the animals to "work" for food. The animals had to be shaped to use these modified Skinner boxes.
One example of this approach, which featured a two-stage learning problem, was used in the gibbon (a lesser ape) display of the Portland Zoo. First, the animals had to solve a light-dark discrimination task when the computer-controlled machine notified them, by a combination of buzzer and light cues (the discriminative stimuli, or SDs), that reinforcement would be available. Then they had to swing arm over arm across their cage to the second apparatus, where they pulled a lever that caused an automatic feeder to release bits of highly preferred food. In a nice extension of reinforcement principles, the initial SD was triggered by a human's putting a dime in a slot of a box. An instruction panel explained the purpose of the apparatus. Over $3,000 was collected by this box in one year, to be used for the research program.
Other interesting illustrations included training Diana monkeys to perform a sequence of behaviors, which resulted in the apparatus dispensing a token. The monkeys could then either spend their tokens or hoard them. Sometimes the monkeys were even observed sharing tokens! Young monkeys born after the program was initiated did not need the elaborate shaping procedures required for the adults. Instead, they successfully modeled their parents' behaviors (Markowitz, 1974). Other products of this unusual approach to zoo displays included small wild cats that chased "flying meatballs," a bear that could trigger a fish-throwing catapult by nonaggressive growls near a hidden microphone, and a mandrill monkey who played a reaction-time game with zoo visitors for one dime a game (Markowitz, 1975a). While visiting Hal Markowitz during the fall of 1977, the author lost three games in a row to the speedy baboon.
Benefits of these programs included: more active and healthier animals; more interesting displays for the public; an opportunity to notice illness earlier (sick animals ceased operating the apparatus); less boredom for the animals; and research opportunities. This last advantage allowed extensive comparisons of many species' abilities on a variety of learning and simple concept-formation tasks. As the animals became more active, behaviors such as infant harassment decreased (Markowitz, Schmidt, and Moody, 1977).
Objections, however, were raised to this program. First, it was suggested that having animals operate mechanical devices for tokens is unnatural. Hal Markowitz defended his work in an article entitled, appropriately enough, "In Defense of Unnatural Acts Between Consenting Animals" (1975a), by pointing out that animals' behaviors in bare enclosures are not only equally unnatural but more likely to be harmful to the animals. A second objection was that it is cruel to deprive animals of food in order to force them to operate the devices added to their environments. Markowitz countered by noting that all animals are given all the food they can eat at the end of the testing day (Markowitz, 1975b). The observation that the animals' behaviors seemed to have more to do with deprivation of the opportunity to effectively control some aspect of their environments than with hunger motivation may represent a special case of the Premack-Timberlake principles. The animals were deprived of the opportunity to emit food-gathering behaviors; it was the opportunity to emit these behaviors (which would occur frequently in the wild) that provided most of the reinforcement. The general implication is that working for a living is, in a sense, a primary reinforcer.
Dr. Markowitz's program was eventually terminated because of the concerns about naturalness. Nonetheless, in recent years many zoos have become concerned about this issue of "behavioral enrichment" and introduced a wide variety of often unnatural objects to stimulate activity in bored animals. Thaya duBois (1993), assistant research director of the Los Angeles Zoo, described several techniques for behavioral enrichment, including PVC pipes fitted as cricket dispensers to increase activity in insect-eating animals.
Making operant devices available to zoo animals gives them a chance to show learning and to work at getting their food. This seems to prevent boredom, improve health, and interest visitors even if it is "unnatural."
Applications of Contingency Management to Education
Applications with children
Most applications of contingency management principles have concentrated on the modification of overt behaviors. These techniques have been widely used in special education. An example was the work of Cooke and Apolloni (1976) with four handicapped children enrolled in an experimental classroom. It is of interest methodologically because three other children enrolled in the classroom served as control subjects, a rarity in operant applications, where multiple baseline designs are more common. During baseline one, frequencies of smiling, sharing, positive physical contacting, verbal complimenting, and combinations of these behaviors were recorded. Each of the four behaviors was then trained in successive five-day periods. Training methods included the trainer modeling the desired behavior, praise contingent upon the specified behaviors, and direct instructions. Following each day's training session, observers recorded the generalization of the children's trained behaviors to interactions with untrained children. Following training, follow-up observations were conducted over a four-week period. All trained subjects increased the frequencies with which they emitted the specified positive behaviors during the training periods, and three of them showed continuing generalized increases in their interactions with the untrained subjects in the follow-up observations. The three untrained subjects showed some increases also, either as the result of their modeling of the trained children or as the result of social-reciprocity effects. The three successful children developed functional behaviors, and this new social competency was "trapped" by social reinforcers.
The control of pupil behaviors is not limited to socially relevant behaviors. Behaviors related to cognitive functioning may also be altered. Grover and Gray (1976) were able to demonstrate behavioral control of four behaviorally defined "creative" behaviors in eight fourth- and fifth-grade children. The behaviors were (1) number of different responses, assumed to reflect fluency; (2) the production of a large variety of ideas, assumed to reflect flexibility; (3) the development, embellishment, or completion of an idea, labeled "elaboration"; and (4) the use of ideas "that are not obvious or banal or are statistically infrequent" (Grover and Gray, 1976, p. 79). Note that each of the underlying internal facets of creativity has been tied to an operationally defined class of verbal dependent variables. The verbal data were generated during class hours as part of writing assignments. During on-contingency periods, points were awarded based on the frequencies of the specified behaviors; during baseline periods, the students' papers were all marked "good" and everyone was told, "You are doing very well." The responses were scored by two graduate students in educational psychology who knew neither the purpose of the experiment nor the variables involved, a double-blind procedure.
Students were also tested on the Torrance Test of Creativity before (pretest) and after (posttest) the contingencies were put into effect. They showed a statistically significant increase in their creativity scores. Each of the dependent variables was rewarded during one of the four experimental sessions. Each of the measures increased over baseline measures and, moreover, showed most of the increase only during the experimental session when that particular class of verbal behaviors was subject to the point contingencies. The points gained were credited to one of the two four-child teams. The winning team was allowed to go to recess 10 minutes early, and each team member received a snack. The authors suggest that their procedures might be useful to teachers who wish to improve creative writing and problem solving. A broader implication of this study is that operant conditioning can be used to maximize such complex behavioral patterns as creativity. Recall from Chapter Nine the studies by Neuringer (1992) and Pryor et al. (1969) showing that response variability is a basic operant dimension of behavior. This is exactly what Grover and Gray (1976) demonstrated.
A good evaluation of the effectiveness of a program examines the degree of behavioral control exerted by different reinforcement parameters. Robertson, DeReus, and Drabman (1976) compared the effect of giving positive feedback to children for behaving less disruptively with giving them either contingent or noncontingent tutoring. The tutoring of these highly disruptive and academically slow second graders was done by either fifth-grade volunteers or college students. The 18 children were divided into four groups and all children received each type of reinforcement condition at some point in the experiment. All four groups then entered a last phase of the study, during which they were again given feedback on their disruptive behavior but no tutoring. Feedback alone was not significantly effective in reducing the disruptive behaviors even though performance during the final feedback phase was better than during the first feedback phase. Pupils' behavior improved with both types of tutoring but they improved more when tutoring was earned by reductions in disruptive behaviors. College students and fifth-grade peer tutors were equally effective.
This study suggests that availability of help from other people may be an effective reinforcement system if it is made contingent upon low rates of "acting-out" behaviors. A disadvantage would be that pupils most in need of special help might be denied such help because of their bad behavior. Since one cause of bad behavior might be frustration with poor academic performance, contingent tutoring might perpetuate a vicious circle in which academic frustration leads to aggressive behavior in class, which prevents tutoring, which keeps academic performance low, which causes frustration . . .
Both social and academic behaviors in school children respond well to a wide variety of operant techniques. Having peers and paraprofessionals administer reinforcers decreases costs and increases program effectiveness. Earning access to tutors can be helpful.
PSI: An Application for College Classrooms
While the SQ3R method utilized by Fox (Chapter Six) shows how individual students can use operant principles in the college environment, it did not provide a blueprint for a college instructor to use in developing a learning-theory-based classroom. Fred Keller developed such a blueprint. The essentials of the "Keller Plan," or personalized system of instruction (PSI), were first proposed in 1963 (Keller in Ulrich, Stachnik, and Mabry, 1966). The first report of its use to have a major impact on higher education was the pivotal paper "Good-bye teacher . . ." (1968), published in the first volume of the Journal of Applied Behavior Analysis.
Keller argued that the goal of education should be for students to learn all that they were capable of learning in a subject area. Conventional grading systems, however, by limiting the time available for a student to study the material, cause fast students to learn a great deal and slow students to learn much less. Keller suggested that time requirements, rather than grades, should be the variable element in higher education. He divided his courses into self-contained segments, or learning modules. The student was required to pass a quiz to demonstrate mastery of a particular module before advancing to the next module. By the end of the course (which occurred at different times for different students), all students had learned the essentials of the course. Since a class composed of students studying different segments made conventional lectures impractical, students studied the modular materials alone or with the aid of tutors, usually advanced students, who also gave the quizzes.
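The mastery rule at the heart of PSI can be expressed in a few lines (a hypothetical sketch; the 90 percent criterion and the function name are illustrative assumptions, not Keller's published figures): quiz attempts below the mastery criterion cost the student time, not grade points, because the module is simply retaken.

```python
def modules_completed(quiz_scores, mastery=0.9):
    """Count modules passed given a sequence of quiz attempts.

    A student advances only when an attempt meets the mastery
    criterion; a failed attempt means retaking the same module.
    """
    module = 0
    for score in quiz_scores:
        if score >= mastery:  # mastery demonstrated: advance
            module += 1
        # otherwise: study more and retake; no grade penalty
    return module

print(modules_completed([0.95, 0.70, 0.92, 0.88, 0.91]))  # → 3
```

Two failed attempts slow this student down but leave the eventual grade untouched, which is exactly why all finishers in a PSI course earn high grades.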
The self-pacing features of PSI, however, created problems for students with minimal self-motivation who completed few modules in an academic time unit. Bufford (1976) introduced a bonus system for high rates of module completion early in the semester. He reported 2.4 modules completed per week in the bonus condition and only 0.96 modules completed during baseline weeks. Supervising students who are completing their courses at many different times can be an administrative nightmare. Because of scheduling problems, most professors using PSI eventually limited self-pacing and used front-loading and doomsday contingencies to speed up student progress. Other factors that worked against incorporation of the PSI concept in most college classrooms included the ethical issues related to using volunteer advanced students as tutors and the preference of many students for lectures rather than individualized study. Paying tutors tended to raise the cost of PSI courses and reduced administrative enthusiasm. Graduate schools found egalitarian high grades useless for making admission decisions.
Hergenhahn (1976) maintained that students and professors involved with PSI classes preferred them to conventional lecture classes. DuNann and Fernald (1976) found their introductory psychology students preferred the PSI approach and had higher objective quiz scores. These authors followed up student retention of course materials after two years and found that PSI students with low and medium GPAs retained significantly more of the material learned (DuNann and Weber, 1976). Students with high GPAs did not benefit. Since, by the definition of mastery criteria, all students were required to learn the material in each module at the A or B level, more students mastered course content in PSI classes and all students finishing the course got good grades.
Keller's pioneering work was followed by a variety of other applications of contingency management techniques. Miller and Weaver (1976) developed programmed workbooks, which used discrimination training techniques to teach complex concepts. Using prompts (clues) and fading, these workbooks were reported to produce superior test performance. Inspired by Keller's use of student assistants, the author once enlisted volunteer superior students as Behavioral Technicians ("Behav-a-Techs") in a project to improve the distribution of comments in discussion groups (Swenson, 1973). Highly vocal students (who were naive to the research goals) were put on a DRL (differential reinforcement of low rates) schedule, and silent students received social and nonverbal reinforcement (intense attention, smiles) for any comments. The increased rates of participation of the shyer students were maintained during the baseline two measurement period. The talking rates of the most highly verbal students gradually recovered with the removal of aversive consequences (inattention, foot shuffling, yawns). Students preferred the experimental discussion groups over control discussion groups. The product of a manipulative behavioral technology was a freer classroom environment with more equal student participation.
Today the bold innovations of the 1960s and 1970s have largely faded away. The Journal of Individualized Instruction is no longer published. Without support by administrations, colleagues, and students, the new methods were too much work to maintain. Some of the behavioral technologies are still used by some professors but if teachers say good-bye to the classroom it will be because of computers, not contingency management.
An innovative college-level application was the "personalized system of instruction" (PSI), which made grades contingent upon amount learned with unlimited time. Time constraints in many colleges required modifications. New techniques, such as early-progress front-loading bonuses and doomsday (flunking) contingencies for laggards, were added. College-level programmed instructional materials and use of point systems with student assistants were also effective, but all such methods have fallen out of style.
Most applications of operant conditioning techniques in clinical psychology involved the use of token economies with large numbers of institutionalized psychotics. Hall, Baker, and Hutchinson (1977), working with chronic schizophrenic patients, compared two control groups with a token economy group. They found that the token patients decreased their outputs of three types of unwanted behaviors more than both no-treatment controls and no-token-contingent-social-reinforcement controls. The patients who were most deteriorated initially gained the most. Some patients, however, increased their rates of alternative behaviors, demonstrating behavioral contrast. At the end of 15 months the contingent social reinforcement control group had caught up with the token group. A reason the token patients did not maintain their lead was reported to be the attitudes and expectations of the ward nurses dispensing the tokens. The extra work required to note behaviors and distribute tokens eventually made the response cost of the program too high for the nurses.
Operant behavioral modification techniques can maintain mental health staff efficiency. Iwata and colleagues (1976) conducted two experiments on four units of a residential facility for multiply handicapped retarded persons. Attendants in the experimental condition, meeting specific performance criteria, became eligible for a lottery. The prize for winning the lottery was the opportunity to choose their days off the following week. The control condition used supervisor-specified staff assignments. Attendants participated in both conditions at different phases of the study. A multiple baseline replication showed that the lottery technique was more effective.
While token economies are the most efficient way to modify the behaviors of large numbers of patients, primary reinforcers can be used to good advantage with individual psychotic patients. Fichter and colleagues (1976) allowed a chronic schizophrenic patient to escape "nagging" by emission of specified target behaviors (clear speaking and placing arms and elbows on the arm rests of his chair rather than making strange gestures with them). With time, the patient learned to attend to the SDs for emission of the target behaviors and to avoid the "nagging" entirely. During baseline two, the clear-speaking behavior was maintained, but arm and elbow placements were not. This suggests that speaking was an example of functional behavior, while arm and elbow placements were maintained only by the experimental contingencies.
Contingency management and the alteration of bodily functions
Behavioral treatments of epilepsy
Even behaviors usually thought of as treatable only through medication have been modified through contingency management. Zlutnick, Mayville, and Moffat (1975) noted that specific behaviors predicted the occurrence of epileptic seizures in children. These behaviors varied from arm raising to facial grimaces. The authors hypothesized that the preseizure behaviors were part of a behavioral chain ending with the full-blown seizure. Following earlier research suggesting that interruption of the behavioral chain could prevent seizures, they applied aversive consequences to the preseizure behaviors. These consisted of shouting "no!" while grabbing the subjects with both hands and shaking them vigorously. For one subject, this system was supplemented by giving social and primary reinforcers contingent upon the subject's halting the preseizure behavior. Four of the children showed significant decreases in seizure frequencies, and preseizure behaviors declined in the other three children. Seizures returned during the reversal day but decreased again once the interruption procedure was reinstituted.
An advantage of the interruption procedure was that parents and teachers were able to learn it, thereby reducing the need for continued professional supervision and lowering the cost of treatment. Before the parents administered the treatments, they were each trained for five hours by the investigators, followed by one to three phone calls per week to monitor progress and collect data. This study applied the triadic model (therapeutic pyramid) described by Tharp and Wetzel (1969). In this model, the professional develops and implements programs and teaches techniques to nonprofessionals or paraprofessionals. The change agents, who actually deliver the reinforcers, are these trained nonprofessionals or paraprofessionals. The professional functions as the supervisor and consultant at the top of the "therapeutic pyramid," to ensure that the mediators, or change agents, carry out instructions. Advantages of this approach include more contact time between client and change agent and a good cost-effectiveness ratio.
Stress often triggers seizures. Bennett (1987) recommends reconditioning stress responses so that adaptive responses and thoughts replace maladaptive ones. He suggests combining this with progressive muscle relaxation, autogenic training, and group psychotherapy to help the patient separate "me from my epilepsy" (p. 43). He recommends biofeedback as a valuable component in a treatment program for epilepsy.
Another approach to modifying bodily functioning by reinforcement: biofeedback
Biofeedback is the application of operant conditioning methods in the control of visceral, somatomotor, or central nervous system functions. David Shapiro, a pioneer in developing biofeedback (1977, p. 15).
The most common procedure is to amplify the weak electrical signals associated with a body function and then to use the amplified signal to drive an auditory or visual display. Because the display tells people when they are succeeding, and most people find success reinforcing, the probability of emitting similar (desired) behaviors increases.
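The feedback loop described above can be sketched in a few lines of Python. This is a toy illustration only: the names `display_feedback` and `training_session`, the target value, and the units are all invented here, and a real device compares an amplified analog signal against a clinician-set threshold rather than simulated numbers.

```python
# Toy sketch of a biofeedback display loop. All names and numbers are
# hypothetical illustrations, not taken from any real device or study.

TARGET_LEVEL = 2.0   # desired level of the monitored signal (arbitrary units)
TOLERANCE = 0.5      # how close to the target counts as "success"

def display_feedback(reading, target=TARGET_LEVEL, tolerance=TOLERANCE):
    """Map one physiological reading onto a simple auditory display.
    The 'tone-on' signal tells the trainee the response is on target,
    which serves as the reinforcer."""
    return "tone-on" if abs(reading - target) <= tolerance else "tone-off"

def training_session(readings):
    """Crude progress measure: the proportion of samples in the target zone."""
    hits = sum(1 for r in readings if display_feedback(r) == "tone-on")
    return hits / len(readings)
```

As training progresses, the proportion returned by `training_session` should rise, mirroring the increasing probability of the desired response.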
As previously discussed, the success of the Russian psychologists in demonstrating that the autonomic nervous system could be conditioned inspired Miller and DiCara (1967) to use operant conditioning procedures to train rats to control behaviors mediated by the autonomic nervous system. Oddly, Miller and DiCara's original demonstrations of the instrumental control of autonomic nervous system processes in rats were never replicated. Before this was discovered, many successful human therapeutic applications of the new biofeedback technologies had already been reported.
Three objectives of biofeedback research relevant to therapists can be found in the 1976-1977 Aldine annual, Biofeedback and Self-Control:
• Finding out how much control a healthy individual can obtain over physiological processes using biofeedback under a variety of experimental conditions, such as stressful environments and different induced expectations.
• Improving treatment and prevention for various psychological and physiological disorders through biofeedback-aided learning. This includes disorders related to stress and those caused by specific physiological dysfunctions (epilepsy and stroke, for example).
• Enhancing self-awareness. (Kamiya, Barber, Miller, Shapiro, and Stoyva, 1977)
Using biofeedback to reduce the effects of stress
The potential importance of the development of behavioral technologies for controlling body states is related to a dramatic shift in disease patterns. While communicable diseases once were the leading causes of sickness in Western countries, today stress-related and degenerative disorders predominate. Medical practices derived from the assumption that sickness is caused by specific agents have been spectacularly successful against communicable diseases. This crisis-oriented "medical model" has been much less successful against heart diseases, ulcers, arthritis, and other stress-related diseases. Holmes and Rahe (1967) have collected data suggesting that stress-provoking life changes are more closely related to a wide range of illnesses than traditional causes, such as chilling and exposure to germs.
Using biofeedback to alter a person's pattern of responding to stress on a physiological level may help in maintaining or regaining health. As we shall see, however, biofeedback has limits that reduce its effectiveness as a primary tool for maintaining health by reducing stress. It is unlikely to replace alternative treatments. Shapiro (1977), in his presidential address to the Society for Psychophysiological Research, noted problems that limited the use of biofeedback in holistic-behavioral medicine. These include:
• nonbiofeedback self-control procedures, such as meditation and relaxation training, can supplement or even replace biofeedback,
• instructions and other cognitive elements are important in determining the success of biofeedback procedures, and
• the maintenance of the changed behavior under stress and outside of the biofeedback training area is difficult.
Woolfolk, Carr-Kaffashan, McNulty, and Lehrer (1976) found meditation to be equal to progressive (Jacobson) relaxation and both to be superior to no treatment in treating insomnia. Harris, Katkin, Lick, and Habberfield (1976) found that learning of breathing exercises (paced respiration) reduced autonomic reactivity to real and anticipated aversive events. Attention to and control of breathing is an important part of many types of meditative exercises. Chandler and Grings (1976) and Cheney and Shelton (1976) found Jacobson progressive relaxation exercises to be superior to visual or auditory biofeedback for reducing the electromyographic (EMG) response to stress. These results show that less costly and less invasive alternative methods can reduce general tension as effectively as, or more effectively than, biofeedback.
The research on the effects of instructions further shows the extent to which interacting factors other than biofeedback can alter physiological functioning. Shapiro (1977) reported that subjects instructed to alter palmar (palm-of-hand) sweating were unable to do so. When they were instructed to increase heart rate, however, not only were they able to achieve as much control as subjects given feedback, but their palmar sweating also changed in consistent directions! Bouchard and Granger (1977) have reported similar results for heart rate slowing. They found no difference between subjects given instructions alone and subjects experiencing instructions in combination with feedback. Feedback may not enhance the effects of instructions in controlling heart rate, but it does help in increasing blood pressure. Subjects given feedback alone, or instructions alone, were much less effective in increasing blood pressure than subjects given combined instructions and feedback. All treatments were equally effective in reducing blood pressure. These results show that cognitive-verbal variables and biofeedback may interact in different ways with different physiological measures.
One of the most important questions in biofeedback is the extent to which the newly trained "relaxation-related" responses can be maintained under stressful conditions. One of the early physiological responses to be modified was the EEG (electroencephalograph) or brain wave response. It was found that most persons could learn to produce more alpha rhythms (8-13 Hz waves) if they were given feedback when producing such waves, and that they reported feeling tranquil when producing them. Kamiya and colleagues (1977) have noted, however, that alpha may also be controlled by not looking for afterimages with closed eyes and similar techniques. This suggests that producing alpha is not much different from just trying to feel relaxed. Moreover, Chisholm, DeGood, and Hartz (1977) examined subjects' abilities to stay relaxed during aversive (shock) situations in their laboratory after alpha biofeedback training. They found that although many participants could continue to produce alpha rhythms during the stress sessions, the heart rates of these participants were elevated and they reported themselves as being as tense as control subjects.
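Alpha feedback hinges on detecting when 8-13 Hz activity dominates the record. The band criterion can be illustrated with a deliberately crude zero-crossing frequency estimate: a toy stand-in, since real EEG feedback uses amplified multi-channel signals and spectral analysis, and both function names here are invented.

```python
import math

def dominant_frequency(samples, sample_rate):
    """Rough frequency estimate from zero crossings: a roughly sinusoidal
    signal crosses zero about twice per cycle, so
    crossings / (2 * duration) approximates the frequency in Hz."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

def in_alpha_band(freq_hz):
    """Alpha rhythm is conventionally defined as 8-13 Hz activity."""
    return 8 <= freq_hz <= 13
```

A feedback device would sound a tone whenever the criterion held for the latest window of samples, reinforcing whatever the trainee is doing to stay in the band.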
Another physiological response that has been extensively investigated in the hope of producing long-lasting relaxation states is the EMG response. It was hoped that by reducing EMG, muscle tension headaches and general feelings of tenseness could be alleviated. Recording electrodes are usually placed over the frontalis muscle of the forehead because this muscle produces tension headaches when over-contracted (Epstein and Abel, 1977). However, Epstein and Abel (1977) reported that while increased EMGs in the frontalis muscle were related to increased reports of headaches, successful decreases in EMGs were often not related to decreases in headaches. Strangely enough, three of the six patients had fewer headaches following the biofeedback treatments and maintained these gains after 18 months. Upon follow-up several months after training, no evidence of continued self-control of muscle activity was found. Therefore, the improvements of the three patients may represent placebo effects. Shedivy and Kleinman (1977) found that biofeedback-produced frontalis EMG reductions did not generalize to other muscles.
If both alpha and EMG biofeedback have failed to live up to the early hopes of biofeedback enthusiasts, is there any biofeedback procedure useful in producing lasting resistance to stress effects in the outside environment? Hutchings and Reinking (1976) reported that EMG biofeedback in combination with Jacobson progressive relaxation exercises or with autogenic training (practicing feeling the hands as heavy and warm) works better than relaxation exercises used alone. Training clients with essential hypertension to reduce blood pressure has been reported to lower blood pressure under stress in and out of the laboratory and to improve performance on the category test of cognitive functioning (Kleinman, Goldman, Snow, and Korol, 1977). Shapiro (1977) suggested combining biofeedback with systematic desensitization of probable stress-arousing situations and stimuli. He advocates feedback training in stimulating or stressful laboratory conditions to prepare the person for dealing with a stimulating and stressful world. He reports success in using this approach with the heart rate response. Many of his subjects actually learned to reduce their heart rates when anticipating and experiencing shock! A more recent application of this "arousal inoculation" approach is that of Larkin, Manuck, and Kasprowicz (1990). They used biofeedback to train subjects to reduce heart rate while playing video games. Performance was not harmed by low heart rates, and the biofeedback group achieved more control than an instruction-only group.
To conclude this discussion of the effectiveness of biofeedback procedures in dealing with stressful situations: most biofeedback treatments, used alone, do not produce lasting benefits sufficient to justify their continued clinical use. However, biofeedback may boost the effects of other methods. Mastenbroek and McGovern (1991) review a report of the successful reduction of conditioned nausea effects from cancer chemotherapy using EMG biofeedback with progressive relaxation and guided imagery.
Biofeedback in the treatment of specific physiological disorders
Some of the most promising research in biofeedback involved changing more specific responses. Whitehead, Renault, and Goldiamond (1975) trained four women to control their rates of gastric acid secretion with combinations of visual feedback and money. When money was made contingent upon increased secretion in a differential reinforcement of high rates (DRH) schedule, secretions increased to three times the base rates. When the scheduling was changed to a differential reinforcement of other behaviors (DRO) schedule, secretion rates returned to baseline levels. Other physiological parameters, such as EMGs, respiration rates, and heart rates, did not always change systematically with secretion. As usual, subjects differed in their abilities to control the selected response through biofeedback. Because medication is more cost-effective than biofeedback, these procedures are not the primary treatment of choice for ulcers. However, biofeedback has gained widespread acceptance as the primary treatment for a variety of other gastrointestinal disorders (Whitehead, 1992).
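The contrast between the two schedules amounts to opposite reinforcement criteria, which can be stated as simple decision rules. This is a hypothetical sketch: the criterion count per interval is invented, not taken from Whitehead et al.

```python
# Hypothetical decision rules contrasting DRH and DRO schedules.
# The criterion of 5 responses per observation interval is an invented example.

def drh_reinforce(responses_in_interval, criterion=5):
    """Differential reinforcement of high rates (DRH): deliver the
    reinforcer only when the target response occurred at least
    `criterion` times during the observation interval."""
    return responses_in_interval >= criterion

def dro_reinforce(responses_in_interval):
    """Differential reinforcement of other behavior (DRO): deliver the
    reinforcer only when the target response did NOT occur at all
    during the interval."""
    return responses_in_interval == 0
```

Under DRH, high secretion rates earn the money; switching to DRO reverses the contingency, so only intervals free of the target response are reinforced, and the response returns to baseline.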
Earlier we reviewed behavioral methods for reducing epileptic seizures. There are more direct ways to try to control seizures than interrupting the pre-seizure behavior. Biofeedback can directly alter brain activity. Lubar and Bahler (1976) used biofeedback to train eight severely epileptic patients to increase their output of 12-14 Hz EEG sensorimotor rhythms. This sensorimotor rhythm (SMR) is produced from the sensory-motor or central fissure area of the cerebral cortex and is assumed to inhibit motor activity. It was expected that increased SMR would inhibit the motor activity that causes epileptic convulsions. Patients were also trained to inhibit theta (4-7 Hz) and epileptiform spike activity. Baseline measures were collected, and the half of the cortex whose output was to trigger feedback was alternated to let the EEG of the other hemisphere serve as a "within-subject control." Two patients became free of seizures for months at a time, and most of the other patients had reduced seizure incidence and reduced need for anticonvulsant medication.
Vacations from training, substituting non-contingent (pseudofeedback) reinforcement for real biofeedback, and the eventual suspension of the program were all followed by a gradual increase in the numbers of reported seizures. To be fair, muscle tone from exercise also "extinguishes" when the exercise program is suspended. The hope of finding extinction-free therapeutic methods may be a holdover from the medical assumption that removal of a specific cause for each disease will cure the patient. Since the first uses of biofeedback to treat epilepsy in 1972, over 50 published studies have shown that EEG biofeedback can help manage seizures. However, the cost and complexity of the procedures have deterred use. In the first 12 years of use of these techniques, only about 250 patients were treated (Bennett, 1987). Bennett states that the most effective biofeedback training combines training of the SMR with theta training and sometimes with epileptiform spike activity training (see Lubar and Bahler, 1976). Bennett recommends biofeedback as a useful supplement to anticonvulsant medication.
One way to make sense of the varying effectiveness of different biofeedback methods with different disorders is to assume that the more "new" information the feedback provides to the user, the more likely the training is to be effective. Since the average person has considerable information about relaxation, simple procedures for increasing relaxation through biofeedback contribute little new information. Most ulcer or epileptic patients, however, are unable to discriminate physiological changes related to gastric acid secretion or sensorimotor EEG rhythms. Hence, biofeedback works better for such patients than other behavioral techniques because it provides unique information.
Many responses, however, are discriminable by some individuals and not by others. Kaplan (1975) has suggested that males who suffer from premature ejaculation during sexual intercourse do so because they are unable to discriminate accurately the subtle cues that distinguish the stage of sexual arousal (plateau stage) from the stage of "ejaculatory inevitability." Support for this theory has come from the work of Rosen, Shapiro, and Schwarz (1975) and Kantorowitz (1977), who succeeded in treating premature ejaculation by providing precise feedback on penile diameter, which is closely correlated with the level of sexual arousal. When provided with feedback, most subjects were able to learn to maintain arousal at the plateau stage for as long as desired.
The range of applications of operant control of internal states is wide indeed. Sakai and Hartey (1973) were able to train male subjects first to raise their finger temperatures and later to raise the temperatures of their scrotums (testicles). In three of the five subjects, the resulting temperature rise was sufficient to kill all sperm. This experiment suggests the possibility of behavioral methods of birth control. It would, of course, be essential for the male not to become slack about his daily temperature biofeedback exercises. EMG biofeedback has been extended into the medical areas of deficient neuromuscular control. Inglis, Campbell, and Donald (1976) reviewed applications of EMG biofeedback in treating peripheral nerve muscle damage, the effects of strokes, partial paralyses, and cerebral palsy (early brain damage having a motor component). They cite considerable evidence to suggest that patients can learn to gain more control over the involuntary activity of voluntary muscles. This neuromuscular reeducation approach has been successful in restoring function to paralyzed limbs where some neural control remains or where neural control has been reintroduced through transplanting intact nerves that had formerly controlled other muscles. Within a couple of hours, those patients who had at least a few intact nerve endings were producing sufficient motor unit action potentials from these surviving nerve endings to achieve large percentages of normal, voluntary muscle functioning. The various studies reported that 50% to over 85% of patients benefited from such treatments. The basic technique also works for spastic, overconstricted muscles. Haggerty (1977) has reported on the development of miniature sensor-transmitter units (disguised as ladybugs) which can be left affixed to the legs of children with cerebral palsy and send wireless data about levels of muscle activity for prolonged time periods.
This technology, a spin-off from the space program's need for continuous biotelemetry, provides a means for biofeedback during normal activities.
Not all the possible applications of biofeedback involve treating pathological conditions. The techniques have also given rise to the dream of increasing human abilities and altering consciousness in beneficial ways. Sheer (1977) has reported that the 40-Hz fast-wave component of the EEG spectrum is related to memory consolidation and can be increased through biofeedback. Ormund, Quintanella, and Swenson (1978) found that while male college students initially produced higher average levels of fast-wave EEG, which is presumed to be related to active cognitive functioning, females using biofeedback caught up with the males and passed them.
For the future, we may expect more sophisticated use of operant principles. Standard biofeedback practices will be combined with other behavior modification techniques, including those derived from Pavlovian principles, and this may help increase our understanding of the relationship between association learning and reinforcement learning. Biofeedback is still of some practical importance, and it has considerable theoretical importance for our understanding of the self-control of physiological responses. Biofeedback was the hot behavioral technology of the 1970s. Today the hot applications are the cognitive-behavioral ones. Bandura is the father of most of these babies.
Biofeedback means giving a person electronically amplified feedback about changes in physiological responses to allow voluntary control of the responses. The control is assumed to be a reinforcer and the learning is assumed to be operant learning. Biofeedback is influenced by cognitive variables. It is no more effective than meditation or relaxation techniques for stress relief. It is therapeutically useful for disorders involving physiological responses with little natural feedback.
Cognitive Behavioral approaches in the Operant Tradition
Studies emphasizing applications of modeling
The modeling procedure has been successfully employed in two major areas. The first of these is teaching small children. A study by Martin (1975) illustrates the basic procedure. Two retarded children were exposed to daily imitation training, in which the teacher or a nurse modeled and instructed each child to imitate 12 sentences containing one of six animal names. Praise was given for correct verbal imitation. Each of the sentences also contained descriptive adjectives related to the color and/or size of the animals. During probe sessions at another time of day and in a different environment, the children were asked to describe 12 pictures of animals different from those described during the modeling sessions. Not only were the children able to imitate the sentences modeled, but the adjectives used generalized to the new animals' pictures.
The second area of application has been in the clinical area. Bandura, Grusec, and Menlove (1967) treated children with dog phobias by exposing them to peer models who interacted in a progressively more fearless manner with a dog. At the end of eight 10-minute sessions held over four days, the majority of the children in the modeling treatment groups were able to approach either the original stimulus dog or another dog, feed them, and remain alone in the room with them. This study shows two important innovations over simple modeling: (1) progressive modeling was used, which Bandura feels reduces initial fear and facilitates the speed of treatment. Progressive modeling uses a graduated series of modeled behaviors parallel to the fear stimuli hierarchies used in systematic desensitization. (2) the children were treated in groups of four, which is a more efficient approach than individual treatment of phobics. Group treatment may be even more effective than individual treatment. Nemetz, Craig, and Reith (1978) treated women suffering from debilitating sexual anxiety by the process of symbolic modeling, using videotapes. Treatment began with relaxation training, followed by the viewing of 45 videotaped vignettes depicting graduated sexual behaviors. The experimental subjects were randomly assigned to either individual or group treatment. There was a trend towards greater improvementdecreased anxiety and increased sexual behaviorafter the group treatment. The six control subjects deteriorated slightly.
Most recent attempts to reduce phobias have involved a technique usually called participant modeling. This method introduces fearful situations gradually, as in progressive modeling. In this procedure, the therapist demonstrates approaching the feared object or situation while imparting verbal information. The subjects are then asked to practice the approach. This is followed by the therapist's modeling of the next stage of the hierarchy, with the subjects again practicing overcoming their avoidance behaviors, and so on. Smith and Coleman (1977) found that females with rat phobias improved on several measures. However, subjects who had self-directed practice following the formal participant-modeling procedure showed greater and more generalizable improvement than subjects who received additional therapist-directed practice.
As a last variation on the modeling theme, let us briefly examine another symbolic modeling techniqueself-modeling. Subjects observe and model video recordings of themselves performing target behaviors. Using this procedure with hospitalized children, Miklich, Chida, and Danker-Brown (1977) were able to improve bed-making behaviors in their 12 subjects, without the subjects reporting any awareness of behavior change or even that the purpose of the video recording was to affect them.
Although most applications of operant techniques have been concentrated in institutional settings, the modeling procedure has been found useful by individual mental health workers for treating neurotic or anxious clients. Horne and Matson (1977) compared modeling, desensitization, flooding, study groups, and control groups as procedures to reduce test anxiety in college students. The modeling procedure consisted of listening to tapes of students who expressed considerable test anxiety in the first tape and progressively less anxiety in subsequent tapes. Test Anxiety Scale scores were reduced most by modeling, followed by desensitization, flooding, study skills, and no treatment. Measures of pulse rate showed desensitization to have produced the lowest pulse rates, and examinations of final course grades found desensitization-group students to do best, followed by modeling-group students, study-skills students, flooding-group students, and controls. While these results do not clearly show modeling to be more effective than desensitization, they show it is highly effective.
Rosenthal, Hung, and Kelley (1977) compared two types of modeling procedures. They found that when the therapist was "businesslike," as opposed to "warm," the clients were more successful in approaching feared objects and reported less fear. This study is noteworthy for examining the details of a procedure to "fine-tune" it. This is a necessary but neglected step in making psychological interventions more precise and powerful. Another direct comparison of the effectiveness of two methods of modifying behavior found direct reinforcement to be superior to a modeling procedure. Bondy and Erickson (1976) attempted to increase the rate of question asking by 12 retarded children. One group got points to be exchanged for food, one group had question-asking behaviors modeled by a trainer, and one group both received points and had the behaviors modeled. Modeling alone had only a minimal effect. The modeling-plus-points group learned the fastest, but their final level of performance was no higher than that of the points-only group. This study suggests that direct shaping of behavior is superior for low-functioning subjects.
Bandura's professional attentions shifted during the 1980s from modeling per se to persons' self-perceptions of competencies, that is, to self-efficacy. Self-efficacy is, in a sense, a cognitive model of how things will turn out. This model influences actual performance just as external models do.
The modeling procedures popularized by Bandura work in a variety of clinical and educational settings. The model can be a therapist, a peer, a video of an expert, or even a video of the client's past performance. With bright, anxious clients, modeling procedures work well, especially if the model acts in a businesslike manner. For learning-disabled individuals, direct shaping seems more effective.
This theory states that psychological procedures, whatever their form, alter the level and strength of self-efficacy. Albert Bandura (1977)
The opposite of the learned helplessness investigated by Seligman is learned efficacy, or learned confidence that the learner is competent to overcome obstacles. Bandura (1977) originally proposed self-efficacy theory to unify models of therapeutic change. He suggested that encouraging persistence in activities that seemed frightening but were actually relatively safe produced experiences of mastery, increased self-efficacy, and reduced defensive behaviors. He stated that a person's beliefs about efficacy came mainly from four sources of information: vicarious experience (modeling), verbal persuasion, performance accomplishments, and physiological states. Note that this is a multiple-response model similar to that of Peter Lang (Chapter Nine).
People who believe strongly that they are good at problem-solving tend to be efficient in using analytical thinking when they are making complicated decisions. Good analytical thinking, in turn, predicts higher levels of performance and accomplishment (Bandura, 1989). Visualizing successful activities tends to improve the skill level shown in later activities. Believing that you are effective helps in constructing visualizations of effective action, and this then raises the level of self-efficacy. This is a bootstrap theory of motivation and effectiveness. "By cognitive representations in the present, conceived future events are converted into current motivators and regulators of behavior" (Bandura, 1989, p. 729).
Bandura's theory is a theory of motivation and performance. The stronger a person's belief in his or her abilities, the more persistent will be his or her efforts, the less likely the person will be to imagine threats from potential stressors, and the less physiological arousal will result from stressors. Depression is the result of ruminative thoughts about a person's low efficacy, much as in learned helplessness theory. Knowing about and controlling one's own thoughts is a metacognitive activity essential for increasing self-efficacy. While it is possible to obtain a measure of overall self-efficacy, feelings of efficacy are also domain related. That is, you may feel very competent about one activity and less efficacious about another. Bandura, who like all of us is not getting younger, notes that it is easier to maintain self-efficacy if you compare yourself to age peers (Bandura, 1989). This is a cognitive strategy for maintaining self-efficacy. Bandura's ideas have generated a rich literature of animal and human research supporting his main contentions.
Troisi, Bersh, Stromberg, Mauro, and Whitehouse (1991) taught rats that they could escape most shocks. This served to immunize the rats against later learned helplessness experiences. Serial presentations of escapable and inescapable shocks had a prophylactic (preventative) effect on the development of learned helplessness. This learned efficacy could be brought under stimulus control by Pavlovian conditioning so that some CSs signaled that the animals could escape the shock and other CSs signaled that the animal was likely to be helpless. These CSs acted as superordinate stimuli to elicit general emotion-related responses to each condition.
Bernier and Avard (1986) found that the amount of weight lost by overweight women was predicted by their preexisting levels of self-efficacy and, in turn, successful weight loss was related to further increases in self-efficacy. Women who completed the program had higher levels of self-efficacy than dropouts. Desharnais, Bouillion, and Godin (1986) note a similar relationship between beginning levels of self-efficacy and persisting with an exercise program. They suggest manipulating expectancies of success, since expectancies should be more easily modified than more stable personality factors.
Bandura and coworkers have continued to refine the self-efficacy paradigm and to expand its explanatory reach. They have conducted sophisticated analyses of the causes of anxiety and feelings of threat. They note that people experience anxiety and stress reactions only when they cope with tasks that are beyond their perceived self-efficacy range (Ozer and Bandura, 1990). This explains the repeated observation from studies of avoidance behavior that most subjects, human and lower animal, usually show signs of anxiety only early in the learning process. Ozer and Bandura distinguish two basic types of self-efficacy. The first is related to coping skills and is a self-perception of physical abilities. The second is related to cognitive coping mechanisms, including the ability to control painful "bad" thoughts. They note that skills can be learned without altering self-efficacy, and this results in the skills not being used at critical times. They also note two reasons people avoid potentially risky situations. One is related to anxiety, and the other to low self-efficacy, that is, a belief that they will not be able to cope with the perceived risks.
Self-efficacy can be increased in four basic ways. Subjects can have mastery experiences. Subjects can increase their self-efficacy by observing successful models. Social persuasion, as in a coach's pep talk, can strengthen a person's belief in his or her abilities. Finally, alteration of the physiological symptoms of fear and confidence can be achieved by various techniques including biofeedback and meditation. A calm body can lead to attributions of a tranquil mind.
Ozer and Bandura (1990) conducted an elaborate study on groups of women taking self-defense courses and demonstrated these techniques' effects. They used an enhanced multiple baseline design in which some groups experienced multiple measurement procedures and others only a single measurement session during baseline one. These two types of groups were compared to see if measurement alone was producing changes, and for most of the multiple dependent variables it had no effect. Subjects then received all four types of treatment to increase self-efficacy. The authors measured a wide range of behaviors, negative thoughts, and attitudes, including three subtypes of coping-related self-efficacy identified by statistical factor analysis techniques. Anxiety related to possible sexual assaults was related to low self-efficacy in controlling negative thoughts. Perceptions of vulnerability and specific risks, however, were related only to coping self-efficacy. The subjects with the highest anxiety levels were most likely to show active avoidance of activities with some risk (jogging, attending movies at night, etc.). During baseline one, women who had experienced forced sex felt more vulnerable, less capable of coping, and were more avoidant. By baseline two (follow-up), these women had scores similar to those of the other subjects. Mastery modeling was reported to remove preexisting sensitivities. They noted that cognitive self-control developed more slowly than coping self-efficacy. Using statistical path analysis, the authors analyzed the causal relationships between different attitudes and behaviors. During the pretest period, only cognitive control efficacy predicted actual behaviors such as outside activities. During the follow-up period several months after training, actual behavior was predicted both by ability to control negative thoughts and by perceptions of risk mediated by perceptions of efficacy in self-defense techniques (Ozer and Bandura, 1990).
Self-efficacy is a person's belief about his or her own effectiveness; it is the expectancy of success. It predicts emotional reactions to stressors and actual success, and success in turn increases self-efficacy. Self-efficacy can also be increased by imagining success. Basic types of self-efficacy include the ability to control negative thoughts and coping (skill-related) self-efficacy. Coping self-efficacy is often domain (skill) specific. Low self-efficacy, as well as anxiety, can produce avoidant behavior.
Contingency versus cognitive: which is more effective?
Currently cognitive-behavioral approaches are considered an essential component of treatment programs for children with behavioral handicaps (McConnell et al., 1991) and especially for children with Attention Deficit Hyperactivity Disorder (ADHD) (Abramowitz and O'Leary, 1991). All of these programs include many treatments based on contingency management, but increasingly cognitive techniques, such as social skills training, are being included. As with the modern clinical applications reviewed in Chapter Nine, education, behavioral practice in group settings, and training in the appropriate expression of feelings are all common components of the packages. One major difference between many programs developed within the classical conditioning tradition and those developed by professionals working within the instrumental learning paradigm is that the latter programs are more likely to incorporate the triadic model and to train parents and peers to actually administer reinforcers and/or to observe behavior. Contingency-based approaches are more likely to try to shape self-management behaviors such as self-monitoring and self-reinforcement. Cognitive-based procedures focus more on the antecedents of behavior, on acquiring cognitive mediational skills through a process of training in self-instruction. Typically children learn to "talk to themselves" and to guide their overt behavior by verbal behaviors (Cole and Bambara, 1992).
Today cognitive-behavioral techniques are more fashionable with many professionals than contingency management procedures. Is there evidence that they work better in educational settings with children with handicaps? McConnell, Sisson, Cort, and Strain (1991) reported results with four children using both social skills training (a cognitive-behavioral intervention) and group and individual coaching with reinforcement (contingency management procedures). Social skills training covered social initiations, social responses, and extended social interactions. The trainer introduced each topic, explained its importance, described it, and modeled it during role play sessions. In the coaching conditions the trainers prompted appropriate social behaviors and reinforced them when they appeared. These authors report little generalization of the social skills training to free play and much better results for their contingency management group. However, the very success of contingency management approaches in training skills should not make behavior modifiers complacent. Reid, Phillips, and Green (1991) report that while these programs "work" with multiply handicapped populations, "there is little evidence that such interventions have resulted in meaningful behavior change according to currently accepted criteria for beneficially affecting the quality of life of persons with serious handicaps" (p. 319). Life is more than skills, and the effectiveness of a therapy for one purpose does not mean another approach would not be desirable for other purposes.
Abramowitz and O'Leary (1991) reviewed two types of cognitive-behavioral treatments: those teaching self-monitoring and self-reinforcement and those teaching cognitive skills such as self-instruction and problem solving. The first type is often used to help maintain gains from contingency management programs while teacher observation and consequation are faded out. The second type is intended to develop metacognitive skills helpful in successfully completing tasks. A child would learn to self-instruct by repeating instructions, describing the task, working through possible approaches and the likely results of those approaches, and evaluating his or her performance at the end of the task. Modeling and rehearsal are used to develop the self-instruction skills. Like McConnell et al. (1991), Abramowitz and O'Leary (1991) did not find evidence that cognitive-behavioral applications had lived up to the initial high expectations. Generally these approaches were difficult to implement correctly and rarely produced lasting benefits. Evidence of good transfer (generalization) of the trained skills to natural school environments is meager (Cole and Bambara, 1992).
The studies we have reviewed show that a fairly straightforward application of operant principles is a more effective approach to classroom management with younger or handicapped children than more cognitive treatments. Abramowitz and O'Leary (1991) note that cognitive-behavioral approaches may work better with normal high-school-age students. Gerber and Hall (1989) recommend that cognitive-behavioral training in educational settings become less like clinical treatment and more like ongoing teaching within a curriculum; that is, teaching is cognitive-behavioral training and depends upon the teacher's knowledge of subject matter and teaching skills. Normally cognitive-behavioral training consists of a general explanation of the purpose and benefits of training, followed by practice with several behaviors modeled by the trainer. Students' behavior is then shaped by reinforcement until performance is judged satisfactory. Gerber and Hall (1989) note that in over 70 years of investigating spelling, little has been learned about trainable metacognitive strategies for teaching students to spell better. Generic metacognitive skills fit neither all content areas nor all potential learners.
Contingency management methods are effective in educational settings. With retarded and/or highly disturbed children, direct shaping of behavior seems superior to modeling. With bright normal older children and adults cognitive-behavioral methods may be superior.
Out of the Skinner box has come a plethora of ways to put psychology to work in the "real" world outside the campus. The traditions of the laboratory include powerful methodologies for critically evaluating applications. Because operant theory is straightforward enough to be reduced to formulas understood by those who would develop and use applications, the methods have been widely tested. Through this testing, these methods seem to be evolving towards greater effectiveness.
Yet, ironically, for all the inventiveness of the operant applicators, an important effect of these applications has been to call into question the basic mechanistic assumptions of the connectionist approach to learning. The critical debate over the processes involved in classical and instrumental conditioning, together with the continued demonstrations of the success of applying operant principles, suggest that we are becoming more and more successful in using tools we understand less and less. Connectionist theory seems to be evolving towards the cognitive viewpoint. What is the cognitive viewpoint? Let us now go to Chapter 12 and examine some examples of it.
The best way to get primary access to the literature would be to scan recent issues of the following journals: (1) Journal of Applied Behavior Analysis, with applications to education, industry, personal habits, corrective institutions, and miscellaneous social institutions, (2) Behavior Research and Therapy, or BRAT, which focuses on clinical applications, as do (3) Behavior Therapy, and (4) Journal of Behavior Therapy and Experimental Psychiatry.
Abramowitz, A. J., & O'Leary, S. G. (1991). Behavioral interventions for the classroom: Implications for students with ADHD. School Psychology Review, 20/2, pp. 220-234.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84/2, pp. 191-215.
Bandura, A. (1989). Regulation of cognitive processes through perceived self-efficacy. Developmental Psychology, 25/5, pp. 729-735.
Bandura, A., Grusec, J. E., and Menlove, F. L. Vicarious extinction of avoidance behavior. Journal of Personality and Social Psychology, 1967, 5, 16-23.
Barrish, H. H., Saunders, M., and Wolf, M. M. Good behavior game: Effects of individual contingencies for group consequences on disruptive behavior in a classroom. Journal of Applied Behavior Analysis, 1969, 2, 119-124.
Bartlett, L. A. and Swenson, L. C. (1975). A contingency management system using positive reinforcement and peer pressure to reduce disruptive classroom behavior. Paper presented at Western Psychological Association annual meeting, Sacramento, California, April.
Bennett, T. L. (1987). Neuropsychological aspects of complex partial seizures: Diagnostic and treatment issues. The International Journal of Clinical Neuropsychology. IX/1, 37-45.
Bernier, M., & Avard, J., (1986). Self-efficacy, outcome, and attrition in a weight-reduction program. Cognitive Therapy and Research, 10/3, pp. 319-338.
Bondy, A. S. and Erickson, M. T. Comparison of modeling and reinforcement procedures in increasing question-asking of mildly retarded children. Journal of Applied Behavior Analysis, 1976, 9, 108.
Bouchard, M. and Granger, L. The role of instructions versus instructions plus feedback in voluntary heart rate slowing. Psychophysiology, 1977, 14, 475-482.
Bufford, R. K. Evaluation of a reinforcement procedure for accelerating work rate in a self-paced course. Journal of Applied Behavior Analysis, 1976, 9, 208.
Chesney, M. A. and Shelton, J. L. A comparison of muscle relaxation and electromyogram biofeedback treatments for muscle contraction headache. Journal of Behavior Therapy and Experimental Psychiatry, 1976, 7, 221-225.
Chisholm, R. C., DeGood, D. E., and Hartz, M. A. Effects of alpha feedback training on occipital EEG, heart rate, and experiential reactivity to a laboratory stressor. Psychophysiology, 1977, 14, 157-163.
Cole, C. L., & Bambara, L. M. (1992). Issues surrounding the use of self-management interventions in the schools. School Psychology Review, 21/2, pp. 193-201.
Cooke, T. P. and Apolloni, T. (1976). Developing positive social-emotional behaviors: A study of training and generalization effects. Journal of Applied Behavior Analysis, 9, 65-78.
Deitz, S. M. (1976). An analysis of programming DRL schedules in educational settings. Behavior Research and Therapy, 15, 103-111.
Desharnais, R., Bouillion, J., & Godin, G. (1986). Self-efficacy and outcome expectations as determinates of exercise adherence. Psychological Reports, 59, pp. 1155-1159.
duBois, T. (1993). Environmental enrichment: The zoo's new challenge. Zoo View, 27(4), 4-11.
DuNann, D. H. and Fernald, P. S., (1976). An experimental comparison of a contingency managed course with large lecture method. Journal of Applied Behavior Analysis, 9, 373-374.
DuNann, D. H. and Weber, S. J. Short- and long-term effects of contingency managed instruction on low, medium, and high GPA students. Journal of Applied Behavior Analysis, 1976, 9, 375-376.
Epstein, L. H. and Abel, G. G. An analysis of biofeedback training effects for tension headache patients. Behavior Therapy, 1977, 8, 37-47.
Feallock, R. and Miller, L. K. (1976). The design and evaluation of a work sharing system for experimental group living. Journal of Applied Behavior Analysis, 9, 277-288.
Fichter, M. M., Wallace, C. J., Liberman, R. P., and Davis, J. R. Improved social interaction in a chronic psychotic using discriminated avoidance ("nagging"): Experimental analysis and generalization. Journal of Applied Behavior Analysis, 1976, 9, 377-386.
Frost, R. O. & Sher, K. J. (1989). Checking behavior in a threatening situation. Behaviour Research and Therapy, 27/4, pp. 385-389.
Fuller, R. (1991). Behavior analysis and unsafe driving: Warning: Learning trap ahead! Journal of Applied Behavior Analysis, 24/1, pp. 73-75.
Gerber, M. M., & Hall, R. J. (1989). Cognitive-behavioral training in spelling for learning handicapped students. Learning Disability Quarterly, 12, pp. 159-168.
Glasgow, R. E., Morray, K., & Lichtenstein, E. (1989). Controlled smoking versus abstinence as a treatment goal: the hopes and fears may be unfounded. Behavior Therapy, 20, 77-91.
Glover, J. and Gary, A. L. (1976), Procedures to increase some aspects of creativity. Journal of Applied Behavior Analysis, 9, 79-84.
Goldiamond, I. Self-control procedures in personal behavior problems. Psychological Reports, 1965, 17, 851-868.
Haggerty, J. J. Spin-off 1977, An annual report. National Aeronautics and Space Administration Technology Utilization Office. Washington, D.C.: U.S. Government Printing Office, 1977.
Hall, J. N., Baker, R. D., and Hutchinson, K. A controlled evaluation of token economy procedures with chronic schizophrenic patients. Behavior Research and Therapy, 1977, 15, 261-283.
Harlow, H. F. Motivation as a factor in the acquisition of new responses. In Current theory and research in motivation: A symposium. Lincoln: University of Nebraska Press, 1953.
Harris, V. A., Katkin, E. S., Lick, J. R., and Habberfield, T. Paced respiration as a technique for the modification of autonomic responses to stress. Psychophysiology, 1976, 13, 386 391.
Hayes, S. C., Johnson, V. S., and Cone, J. D. The marked item technique: A practical procedure for litter control. Journal of Applied Behavior Analysis, 1975, 8, 381-386.
Hergenhahn, B. R. An introduction to theories of learning. Englewood Cliffs, N. J.: Prentice-Hall, 1976.
Hobbs, T. R. and Holt, M. M. The effects of token reinforcement on the behavior of delinquents in a cottage setting. Journal of Applied Behavior Analysis, 1976, 9, 189-198.
Holmes, T. H. and Rahe, R. H. The social readjustment rating scale. Journal of Psychosomatic Research, 1967, 11, 213-218.
Horne, A. M. and Matson, J. L. A comparison of modeling, desensitization, flooding, study skills, and control groups for reducing test anxiety. Behavior Therapy, 1977, 8, 1-8.
Homme, L. E., DeBaca, P. C., Devine, J. V., Steinhorst, R., and Rickert, E. J. Use of the Premack principle in controlling the behavior of nursery school children. Journal of the Experimental Analysis of Behavior, 1963, 6, 544.
Hundert, J. The effectiveness of reinforcement, response cost, and mixed programs on classroom behaviors. Journal of Applied Behavior Analysis, 1976, 9, 107.
Hutchings, D. F. and Reinking, R. H. Tension headaches: What form of therapy is most effective? Biofeedback and Self-Regulation, 1976, 1, 183-190.
Inglis, J., Campbell, D., and Donald, M. W. Electromyographic biofeedback and neuromuscular rehabilitation. Canadian Journal of Behavioral Science, 1976, 8, 299-323.
Iwata, B. A., Bailey, J. S., Brown, K. M., Foshee, T. J., and Alpern, M. A. A performance-based lottery to improve residential care and training by institution staff. Journal of Applied Behavior Analysis, 1976, 9, 417-431.
Jones, R. S. P., & Eayrs, C. B. (1992). The use of errorless learning procedures in teaching people with a learning disability: A critical review. Mental Handicap Research, 5/2, 204-209.
Kamiya, J., Barber, T. X., Miller, N. E., Shapiro, D., and Stoyva, J. (eds.). Biofeedback and self- control, 1976-77. An Aldine annual. Chicago: Aldine, 1977.
Kantorowitz, D. (1977). A biofeedback approach to premature ejaculation in college students. Paper read at Western Psychological Association annual meeting, Seattle, April.
Kaplan, H. S. The illustrated manual of sex therapy. New York: Quadrangle/New York Times Book Co., 1975.
Keller, F. S. A personal course in psychology. In R. Ulrich, T. Stachnik, and J. Mabry (Eds.) Control of Human Behavior. Glenview, Ill: Scott, Foresman, 1966.
Keller, F. S. Good-bye, teacher... Journal of Applied Behavior Analysis, 1968, 1, 79-89.
Kinkade, K. Commune: A Walden-two experiment. Psychology Today, January 1973, 6, 35-42.
Kleinman, K. M., Goldman, H., Snow, M. Y., and Karol, B. Relationship between essential hypertension and cognitive, functioning II: Effects of biofeedback training generalize to non-laboratory environment. Psychophysiology, 1977, 14, 192-197.
Larkin, K. T., Manuck, S. B., & Kasprowicz, A. L. (1990). The effect of feedback-assisted reduction in heart rate reactivity on videogame performance. Biofeedback and Self-Regulation, 15/4, 285-304.
Lovaas, O. I. (1974). After you hit a child, you can't just get up and leave him; you are hooked to that kid (A conversation with P. Chance). Psychology Today, January, 7, pp. 76-84.
Lovaas, I., Schaeffer B. and Simmons J. Q. (1965). Building social behavior in autistic children by use of electric shock. Journal of Experimental Research in Personality, 1, 99-109.
Lovaas, O. I., & Favell, J. E. (1987). Protection for clients undergoing aversive/restrictive interventions. Education and Treatment of Children, 10/4. 311-325.
Lubar, J. F. and Bahler, W. W. Behavioral management of epileptic seizures following EEG biofeedback training of the sensorimotor rhythm. Biofeedback and Self-Regulation.
Makin, P. J. and Hoyle, D. J. (1993). The Premack Principle: Professional engineers. Leadership & Organization Development Journal, 14/1, 16-21.
Malott, R. Contingency management in education; Or I've got blisters on my soul and other equally exciting places. Rev. ed. Kalamazoo, Michigan: Behaviordelia, 1974.
Marholin, D. and Gray, D. Effects of group response-cost procedures on cash shortages in a small business. Journal of Applied Behavior Analysis, 1976, 9, 25-30.
Markowitz, H. Analysis and control of behavior in the zoo. Research in Zoos and Aquariums, 1975b, National Academy of Sciences.
Markowitz, H. (1975a). In defense of unnatural acts between consenting animals. Paper presented at 51st annual American Association of Zoological Parks and Aquariums conference, Calgary, Alberta.
Markowitz, H. New methods for increasing activity in zoo animals: Some results and proposals for the future. Paper presented at the Centennial symposium on science and research, Penrose Institute, Philadelphia, 1974.
Markowitz, H., Schmidt, M. I., and Moody, A. Behavioral engineering and health in the zoo. Paper presented at Western Psychological Association meeting, Seattle, April, 1977.
Martin, J. A. Generalizing the use of descriptive adjectives through modeling. Journal of Applied Behavior Analysis, 1975, 8, 203-209.
Mastenbroek, I., & McGovern, L. (1991). Effectiveness of relaxation techniques in controlling chemotherapy induced nausea: A literature review. The Australian Occupational Therapy Journal. 38/3, 137-142.
McConnell, S. R., Sisson, L. A., Cort, C. A., & Strain, P. S. (1991). Effects of social skills training and contingency management on reciprocal interaction of preschool children with behavioral handicaps. The Journal of Special Education. 24/4, pp. 473-493.
McGee, G. G., Krantz, P. J., & McClannahan, L. E. (1985). The facilitative effects of incidental teaching on preposition use by autistic children. Journal of Applied Behavior Analysis, 18, 17-31.
Miklich, D. R., Chida, T. L., and Danker-Brown, P. (1977). Behavioral modification by self modeling without subject awareness. Journal of Behavior Therapy and Experimental Psychiatry, 8, 125-130.
Miller, K. L. and Weaver, H. F. A behavioral technology for producing concept formation in university students. Journal of Applied Behavior Analysis, 1976, 9, 289-300.
Miller, N. E. and DiCara, L. Instrumental learning of heart rate changes in curarized rats: Shaping, and specificity to discriminative stimulus. Journal of Comparative and Physiological Psychology, 1967, 63,12-19.
Nemetz, G. H., Craig, K. D., and Reith, G. Treatment of female sexual dysfunction through symbolic modeling. Journal of Consulting and Clinical Psychology, 1978, 46, 62-73.
Ormund, J., Quintanella, A., and Swenson, L. C. A comparison of the effects of biofeedback and three control procedures on the output of fast wave EEG in male and female college students. Paper presented at Western Psychological Association annual meeting, San Francisco, April, 1978.
Ozer, E. M. & Bandura, A. (1990). Mechanisms governing empowerment effects: A self-efficacy analysis. Journal of Personality and Social Psychology, 58/3, 472-486.
Powers, R. B., Osborne, J. G., and Anderson, E. G. Positive reinforcement of litter removal in the natural environment. Journal of Applied Behavior Analysis, 1973, 6, 579-586.
Reid, D. H., Phillips, J. F., & Green, C. W. (1991). Teaching persons with profound multiple handicaps: A review of the effects of behavioral research. Journal of Applied Behavior Analysis, 24/2, pp. 319-336.
Repp, A. C., & Singh, N. N. (Eds.), (1990). Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities. Sycamore, IL: Sycamore.
Robertson, S. J., DeReus, D. M., and Drabman, R. S. Peer and college-student tutoring as reinforcement in a token economy. Journal of Applied Behavior Analysis, 1976, 9, 169-177.
Rosen, R. C., Shapiro, D., and Schwartz, G. E. Voluntary control of penile tumescence. Psychosomatic Medicine, 1975, 37, 479-483.
Rosenthal, T. L., Hunt, J. H., and Kelley, J. E. Therapeutic social influence: Sternly strike while the iron is hot. Behavior Research and Therapy, 1977, 15, 253-259.
Sakai, S. and Harkey, N. Scrotal temperature fluctuations in euspermic males. Paper presented at Western Psychological Association annual meeting, Anaheim, California, April, 1973.
Schandler, S. L. and Grings, W. W. An examination of methods for producing relaxation during short-term laboratory sessions. Behavior Research and Therapy, 1976, 14, 419-426.
Seligman, M. E. P. Fall into helplessness. Psychology Today, June 1973, 7, 43-48.
Seymour, F. W. and Stokes, T. F. Self-recording in training girls to increase work and evoke staff praise in an institution for offenders. Journal of Applied Behavior Analysis, 1976, 9, 41-54.
Shapiro, D. Presidential address, 1976: A monologue on biofeedback and psychophysiology. Psychophysiology, 1977, 14, 213-226.
Shedivy, D. I. and Kleinman, K. M. Lack of correlation between frontalis EMG and either neck EMG or verbal ratings of tension. Psychophysiology, 1977, 14, 182-186.
Sheer, D. E. Biofeedback training of 40-Hz EEG and behavior. In J. Kamiya, T. X. Barber, N. E. Miller, D. Shapiro, and J. Stoyva (eds.), Biofeedback and self-control: 1976-77, An Aldine Annual. Chicago: Aldine, 1977.
Skinner, B. F. Walden two. New York: Macmillan, 1948.
Small, L. Neuropsychodiagnosis in psychotherapy. New York: Brunner/Mazel, 1973.
Smith, G. P. and Coleman, R. E. (1977). Processes underlying generalization through participant modeling with self-directed practice. Behavior Research and Therapy, 15, 204-206.
Swenson, L. C. Application of contingency management principles to the college classroom: The con game project. Paper presented at Western Psychological Association annual meeting, Anaheim, California, April, 1973.
Swenson, L. C. The effects of requiring charting on course work output in college students enrolled in a point system based college course. Paper presented at Western Psychological Association annual meeting, Sacramento, California, April, 1975.
Tharp, R. G. and Wetzel, R. J. Behavior modification in the natural environment. New York: Academic Press, 1969.
Timberlake, W., & Farmer-Dougan, V. A. (1991). Reinforcement in applied settings: Figuring out ahead of time what will work. Psychological Bulletin, 110/3, 379-391.
Troisi, J. R., Bersh, P. J., Stromberg, M. F., Mauro, B. C. & Whitehouse, W. G. (1991). Stimulus control of immunization against chronic learned helplessness. Animal Learning and Behavior. 19/1, 88-94.
Whitehead, W. E. (1992). Biofeedback treatment of gastrointestinal disorders. Biofeedback and Self-Regulation, 17/1, 59-76.
Whitehead, W. E., Renault, P. F., and Goldiamond, I. Modification of human gastric acid secretion with operant-conditioning procedures. Journal of Applied Behavior Analysis, 1975, 8, 147-156.
Woolfolk, R. L., Carr-Kaffashan, L., McNulty, T. F., and Lehrer, P. M., Meditation training as a treatment for insomnia. Behavior Therapy, 1976, 7, 359-365.
Zlutnick, S., Mayville, W. J., & Moffat, S. (1975). Modification of seizure disorders: The interruption of behavioral chains. Journal of Applied Behavior Analysis, 8, 1-12.