Operant Conditioning

Another type of learning studied by behaviorists is operant conditioning. Operant conditioning, sometimes referred to as instrumental conditioning, is a process in which a response is followed by a consequence (either positive or negative), and that consequence teaches you either to repeat the response or to perform it less often. The main difference between classical conditioning and operant conditioning is that in classical conditioning the learning is reflexive and does not depend on consequences, whereas in operant conditioning the learning is more complex and depends on the type of consequence the response elicits. In other words, operant conditioning shapes voluntary behavior. You can't decide whether or not to salivate, but you can decide to go to work if you like getting paid.

Cats and Rats

Edward Thorndike introduced and laid the foundation of operant conditioning in his experiments with cats. He was the first psychologist to systematically study how consequences affected voluntary behavior in animals. In these experiments, Thorndike constructed cages that he called puzzle boxes, in which he placed hungry cats. He then set a bowl of food outside the box where the cat could see it but could not get to it unless it escaped. The escape route was triggered by a mechanism that the cat had to work. Of course, the cat had no idea it was supposed to trip this mechanism, so it ran about, bounced off the walls, and cried — in short, it tried everything it knew to get to that food. Eventually, the cat accidentally tripped the mechanism, escaping and reaching the food. Thorndike placed the cat back in the box several more times, and each time it took the cat less time to find the trigger. After several trials, the cat learned to trip the mechanism immediately to escape and get its reward. Thorndike concluded that the cat's behavior was controlled by its consequences: tripping the mechanism and getting the food, rather than not tripping the mechanism and staying stuck in the box.

B.F. Skinner took this idea and elaborated on it. Skinner used rats instead of cats for his experiments. He created a “Skinner box” in which he placed a rat. The box held a device that delivered pellets of food when a bar was pressed. Much like the cat in Thorndike's experiment, the rat at first scurried about without any purpose or motive behind its actions. Instead of waiting for the rat to accidentally press the bar and learn on its own, Skinner rewarded the rat with food as it got closer to performing the appropriate response (pressing the bar). Gradually, Skinner was able to shape the rat's behavior and teach it to press the bar. Eventually, the rat was intentionally pressing the bar for food as fast as it could. Skinner argued that to understand a person, one must look outside the person, at the consequences of his or her actions, rather than trying to figure out what is going on inside. In other words, a person's past and current experiences of consequences are what shape that person's behavior.

In Thorndike's day, this wasn't known as operant conditioning. Instead, Thorndike termed his theory the law of effect. This theory stated that a consequence affects the behavior of an animal or individual, and that if the consequence is good, that particular behavior will be repeated in the future.

How It Works

Operant conditioning depends on a behavior being followed by consequences, which determine whether the behavior becomes more or less likely to occur again. According to Skinner, there are three types of consequences to a response: reinforcers, punishment, and neutral consequences. Reinforcers increase the likelihood that the response will occur again. Positive reinforcement adds something good after the behavior, like a treat. Negative reinforcement takes away something bad after the behavior, like the seat-belt buzzer that stops when you buckle up. The important thing to remember is that reinforcement, whether positive or negative, always increases the likelihood of a behavior occurring again. Punishment decreases the likelihood that the response will occur again. For instance, if you touch a hot burner and burn your finger, you are not likely to repeat the behavior. Finally, there are neutral consequences, which have no bearing on whether the likelihood of a response increases or decreases; they are typically ignored and do not affect your decisions.
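The three kinds of consequences can be illustrated with a toy simulation. This is only an illustrative sketch, not anything from the behaviorist literature: the response probability, the update rule, and the step size are all arbitrary assumptions chosen to show the direction of each effect — both kinds of reinforcement push the probability of the response up, punishment pushes it down, and a neutral consequence leaves it alone.

```python
def update_response_probability(p, consequence, step=0.1):
    """Return a new response probability after one consequence.

    'positive_reinforcement' (adding something good) and
    'negative_reinforcement' (removing something bad) both raise p;
    'punishment' lowers p; 'neutral' leaves p unchanged.
    The 0.1 step size is an arbitrary illustrative choice.
    """
    if consequence in ("positive_reinforcement", "negative_reinforcement"):
        p += step * (1 - p)   # reinforcement: nudge likelihood toward 1
    elif consequence == "punishment":
        p -= step * p         # punishment: nudge likelihood toward 0
    # neutral consequences have no effect on p
    return p

# Repeated reinforcement makes the response more and more likely,
# much as the rat came to press the bar faster and faster.
p = 0.5
for _ in range(10):
    p = update_response_probability(p, "positive_reinforcement")
print(round(p, 3))  # → 0.826
```

Note that in this sketch positive and negative reinforcement take the same branch: what distinguishes them is what happens in the world (adding a treat versus removing a buzzer), not the direction of the effect on behavior.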
