eileenanddogs

Category: Aversive stimulus

I Just Show Him the Water Bottle and He Behaves—I Don’t Have to Squirt Him!

Some people make claims like the one in the title out of true ignorance. They can’t identify how the behavior change is working. I’ve been there. It’s easy to believe that if one can get a dog to do something without discomfort or physical force in the moment, the training method is benign. We forget what transpired before.

There are others who make such claims who, I suspect, do understand the method they are using. For them, it’s a game of “let’s pretend I’m not using force.” Some trainers use those statements to persuade customers that their methods are humane or based on positive reinforcement. Some may have an interest in throwing fog into arguments on social media.

These methods are the topic of this post. Here is why waving a stick (at a dog who has been hit with one), showing the spray bottle (to a cat who has been sprayed by one), and countless other things that don’t touch the animal work through aversive control.

The Little Whip

When I was a kid, we had horses. I rode from a young age until we moved to town when I was about 15. For gear, we usually used hackamores and perhaps a bareback pad. More often bareback. Very rarely did we actually saddle up the horses or use bridles. Before the equine folks step up to the podium, I now know that the hackamores, with their pressure on the sensitive nose, were likely not comfortable either. But it appeared that the hackamores were less intrusive to our particular horses than the bitted bridles we also trained them to accept.

But don’t be misled. The methods we used were not kindly, except compared to those of some of our neighbors. We used pressure/release, yanking on the lead rope, kicking with our heels, smacking the horses with the reins or a whip, and using the reins to turn or stop the horse. I may have had spurs; I know my sister did.

We didn’t use positive reinforcement when riding. There were no appetitives involved except whatever pleasure the horses got from getting out in the world to walk and gallop around, and the feed we gave them before and after, as we were preparing for and cooling down after rides.

A quirt, or small whip. Except for the metal, it looks like a great tug toy!

I used a quirt, a short whip. It looked something like the image to the right. I don’t remember where I got it or whose idea it was. But I remember using it when I rode.

When I wanted my horse to go faster, I would swing the quirt around behind me to strike her on her butt. I’d do that a few times until she had sped up to my liking. We all knew how to do that with the ends of the reins, too.

I noticed after I had used the quirt for a while that I didn’t actually have to hit her anymore. With her excellent peripheral vision, she would see me swing the quirt forward, winding up to land a blow on her butt. She started speeding up when she saw the quirt moving and before I actually hit her with it. I adapted my behavior, whether out of kindness or efficiency, I don’t know. But I rarely hit my horse after I learned that all I had to do was to threaten her with the little whip.

Even at that young age, I realized what was happening, although I didn’t have the words for it. I do now. In response to my use of the quirt, my horse was changing her behavior from escape (speed up to make Eileen stop hitting her) to avoidance (speed up sooner to prevent Eileen from hitting her).

Escape and avoidance are the two faces of negative reinforcement. My horse’s behavior was under aversive control.

What Did I Think about It?

I could have gone around saying, “Using the quirt isn’t cruel; I don’t touch her with it.” I don’t think I said that because I understood that the quirt worked because I had hit her with it, and could hit her with it. The movement of the quirt had become a threat. That’s still aversive control.

If I had never hit her with the quirt, if she hadn’t gained that history, she would have had no reason to speed up in response to the swing of it unless the movement itself scared her. But she would probably have habituated to the movement if there had been no following slap. There would be no threat.

Spray Bottles

When I was in my late teens and living on my own, I got a cat. Nobody I knew then talked about training cats. We lived with the “cat” things they did or interrupted them in unpleasant ways, usually yelling or using a spray bottle with water. Some people even used lemon juice or vinegar.

I used a spray bottle with water. I found out, over time, that the spray bottle worked the same way as the quirt. I remember using the spray bottle when my cat would get on the dining room table. I’d keep spraying him until he jumped off. This was the escape flavor of negative reinforcement. He made the aversive stimulus stop with his action of jumping down.

But the same thing happened with the spray bottle that had happened with the quirt years before. It took fewer squirts to get him to move, and finally, all I had to do was wave the squirt bottle in his direction or even walk over to get it. I didn’t have to spray him at all. This was avoidance. Still negative reinforcement.

Was there also positive punishment involved? Maybe. I don’t remember for sure whether the behavior of getting on the table decreased, but I don’t think so. So there may not have been P+. But there was definitely negative reinforcement, two flavors of it.

It would have been easy to decrease my cat’s getting on the table, or to prevent it to begin with. I could have used management and positive reinforcement. I could have provided him with several elevated beds and perches. And I could have taught him to target my hand or a target stick so I could move him off the table using positive reinforcement. I did not know of those options then.

Is Avoidance Better than Escape?

Most dogs will work to escape or avoid body pressure

You will hear people proclaiming that they don’t have to use force anymore.

  • “I don’t have to vibrate the collar anymore; he behaves when I just make it beep.”
  • “I just show him the spray bottle.”
  • “I just start to roll up a newspaper and he shapes right up.”
  • “I just walk toward him and he pops back into a sit.”
  • “I don’t have to throw the chain anymore; she stops when I wind up to throw.”

Is this force-free training? Of course not. There would be no avoidance if the animal hadn’t experienced the unpleasant thing first. And not usually just once. They likely experienced it repeatedly until 1) they learned how to make it stop, and 2) they learned the predictors that it was about to happen and responded earlier.

In learning to avoid the unpleasant stimulus, the animal may prevent pain or even injury. So of course those are benefits. But is that an advantage to brag on? What about the pain or injury it took to get there? “I don’t have to whip the horse anymore. That was so unpleasant that she learned how to avoid it.” Yay?

How to Tell When Avoidance Is Involved

Avoidance is complex. A lot of behavior scientists have put their minds to the question of why an organism will work for the goal of nothing happening. I’m not even going to get into that here, but if you are interested, most behavior analysis books have a section on it.

Besides being complex, avoidance can be hard to spot. Again, it’s because we don’t see a blatant aversive in use. Think of the videos by aversive trainers of a bunch of dogs on platforms lying very still for long minutes. We don’t see them getting hit, yelled at, or shocked. But they are usually frozen and shut down. They have learned that the way to avoid being hurt is to stay on their platform. Body language is one tell. They are often crouched, not relaxed. Their eyes are either fixed on the human, or they have checked out and are going, “La la la” in their heads. They are not casually looking around the room or wagging their tails.

But the other thing to look for is this. Do you see any appetitives in the picture? Is anyone going around giving the dogs a nice morsel of food every few minutes or even more often? Rewarding them with a game of tug? Granted, some trainers use both aversives and positive reinforcement. So even if you do see food, there still may be aversives involved. But if you see frozen dogs not moving a muscle and no food or toys in evidence, you are probably seeing avoidance.

Another easy place to see it is in traditional horse videos. Horses are so attractive and look so beautiful being put through their paces that we dog people can often be fooled. There will be some nice verbiage about the natural method or the “think” method or what neuroscience proves. But look for the appetitive. Look for the yummy treat or the butt scratches. Something the horse enjoys, not the relief of something uncomfortable stopping. If you don’t see the fun stuff, the good stuff, you are probably seeing aversive control. The horse is performing because of discomfort or the threat of it: avoidance.

Things That Can Work through Avoidance

  • Squirt bottles
  • Shock or vibration collars, whether manually triggered or as part of boundary systems
  • Prong collars
  • Choke collars
  • Bark collars
  • Body pressure
  • Eye contact
  • Citronella spray
  • Whips
  • Plastic bags on a stick
  • Verbal threats
  • Chains or “bean bags” that are thrown near the dog
  • Penny cans
  • Picking up a stick or anything you might hit your dog with

Aggression

The use of aversive tools and methods can prompt an aggressive response. Some of the milder aversives are probably less likely to do that with the average animal. But it’s the animal that gets to define what counts as “mild.” I watched a YouTube video of a domestic cat aggressing at a woman who was threatening to spray him with a spray bottle. I’m not embedding or linking it because I don’t want to give it that support, but it’s among the first hits if you search for cats vs. spray bottles on YouTube. Here’s a description (not an exact transcription):

A small orange tabby cat is sitting on a wooden table next to a potted plant. A woman’s arm and hand come into the frame. She is holding a squirt bottle. The cat squints his eyes when the spray bottle first appears. She shoves the spray bottle nozzle into his face as she says things like, “Back up from the plant.” “I said, back up from the plant.” The cat responds to her movement and statements by repeatedly slapping the woman’s hand holding the bottle with his paw. He meows and whips his tail around. He actually advances on the hand with the spray bottle rather than retreating. She finally squirts him point-blank in the face, and he shrinks back a little and moves laterally but doesn’t get off the table. He goes to the other side of the plant. There are at least three aspects to her threat: the spray bottle itself, her advancing on him, and her verbal threats.

But this cat is not showing avoidance. He retreats only when sprayed directly in the face, and then only a few steps. Instead, his go-to method is to lash out. My cat was more easygoing and merely worked to avoid the spray.

I’m not ashamed to say I was rooting for the cat in the video, but with a mental caveat. He’s lucky he’s small. If this were a large dog or a horse, similar behaviors would be extremely dangerous for the human, and the animal would be in danger of being euthanized for aggression. Even the small cat could be in danger of losing his home or his life if he escalates further, except that his owner is making money on YouTube.

I include this story for two reasons: progressing to avoidance is not inevitable, and we can’t predict what kind of aversive use will elicit an aggressive response.

Avoidance Doesn’t Earn You a Pass

Teaching behaviors through escape and avoidance is generally unpleasant for the learner. Even in situations where we can’t see anything bad happening, if the animal is working to avoid something, something bad did happen. It could happen again, and the animal knows it.

Copyright 2021 Eileen Anderson

Space Invaders: How Humans Pressure Dogs & Other Animals

Let’s say you are standing at a party, or in your office, or on your front lawn. Someone you vaguely know walks up to you. He walks up very close, face-to-face like the Seinfeld close-talker. Close enough that you can see up his nose and smell his breath. He starts a conversation. What do you do?

You will probably have a strong urge to step back. You may or may not do it, depending on the social situation and a host of other factors. But when someone we don’t know well enters our personal space bubble, it can be very uncomfortable.

Continue reading “Space Invaders: How Humans Pressure Dogs & Other Animals”

“I Will Never Use the Shock Collar Again!”

foxhound and black lab playing in a field

This is a story from a client of one of my professional trainer friends. Let’s call my friend “Phoebe.” My friend had met the client for some coaching for her young, exuberant dog, Raven. But it was a very long distance for the client to come. My friend received this email after she hadn’t heard from the client in a while. Some details were altered for privacy, but I’ve left the email essentially as the client wrote it because she tells the story so eloquently.

Continue reading ““I Will Never Use the Shock Collar Again!””

All That’s Unpleasant Does Not Punish

I’ve written a lot about the behavior science definitions of reinforcement and punishment. That’s because they can trip us up so easily. Something can be attractive, but not always reinforce behavior. Something can be unpleasant, but not serve to decrease behavior even when it looks like it should. This story is about a natural consequence that seemed like it would decrease behavior but didn’t.

Continue reading “All That’s Unpleasant Does Not Punish”

Positive Punishment: 3 Ways You Might Use It By Accident

Positive reinforcement-based trainers never use positive punishment, right? At least we certainly try not to. But it can sneak into our training all the same.

Brown and white dog being grabbed by the collar in example of positive punishment
Collar grabs can be aversive

Punishment, in learning theory, means that a behavior decreases after the addition or removal of a stimulus. In positive punishment (the addition case), the stimulus is undesirable in some way. It gets added after the dog’s behavior, and that behavior decreases in the future. Some examples of that kind of stimulus would be kicking the dog, jerking its collar, shocking it, or startling it with a loud noise. You can see why positive reinforcement-based trainers seek not to use positive punishment.

Continue reading “Positive Punishment: 3 Ways You Might Use It By Accident”

Don’t Be Callous: How Punishment Can Go Wrong

This post includes discussion of animal experimentation from the 1950s and 1960s using shock. It is unpleasant to contemplate. But to me, what makes it even worse is that the knowledge gained by those studies is not widely known. Studying that literature gives one a window on how punishment works. I hope you will read on.

The studies I cite are all included in current behavior science textbooks, and my descriptions are in accord with the textbooks’ conclusions. The conclusions are different from the common assumptions about punishment. 

Graph shows typical response to mild-to-moderate punishment. X axis represents sessions over time. Y axis is the suppression ratio. There is a drop in the behavior immediately after the aversive is applied, but the behavior gradually returns to its former level.
This is a typical response to application of a mild-to-moderate aversive. I created this graph because 1) I don’t have rights to the ones in textbooks, and 2) standard behavior change graphs are difficult to interpret if you are unfamiliar with them. I made a different type of graph, but what I have represented is the same response you see in the textbooks and research papers. The X-axis represents sessions over time. The Y-axis shows the ratio of behavioral decrease. The shape of the graph roughly correlates to the frequency of the behavior and shows that the suppression of behavior was only temporary.
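
For readers who prefer numbers to pictures, below is a minimal sketch of the same pattern. The response rates are invented purely for illustration; they are not data from any study. The formula shown is one common way a suppression ratio is calculated in the conditioned-suppression literature.

```python
# Minimal illustration of the pattern in the graph above: behavior drops
# sharply when a mild-to-moderate aversive is introduced, then gradually
# recovers to its former level. All numbers are invented for illustration.

# One common suppression ratio is B / (A + B), where A is the baseline
# response rate and B is the response rate under punishment. A value of
# 0.5 means no suppression; values near 0 mean strong suppression.

baseline_rate = 40  # responses per session before the aversive (illustrative)

# Hypothetical response rates for the sessions after a mild aversive begins.
punished_rates = [10, 14, 20, 26, 30, 34, 36, 38, 39, 40]

for session, rate in enumerate(punished_rates, start=1):
    suppression = rate / (baseline_rate + rate)
    print(f"Session {session:2d}: rate = {rate:2d}, suppression ratio = {suppression:.2f}")

# The printed ratio starts around 0.20 and climbs back toward 0.50,
# i.e., the suppression of the behavior was only temporary.
```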

I’ve written a lot about making humane choices in training and about the fallout that accompanies aversive methods. But the immediate risk of hurting, scaring, or bothering your dog is not the only problem with using aversives. It turns out that using positive punishment is tricky.

In the term positive punishment, positive doesn’t mean “good” or “upbeat.” In behavior science, it means the type of punishment in which something is added and a behavior decreases. The added thing is something the animal wants to avoid. If every time your dog sat you shocked her, played a painfully loud noise, or threw something at her, your dog would likely not sit as often.  Those things I mentioned would act as “aversive stimuli.” If the dog sat less after that, then punishment would have occurred.

There is another type of punishment called negative punishment. It consists of removing something the dog wants when they do something undesirable. I’m not discussing that type of punishment in this post. For the rest of the post, when I refer to punishment, I am referring to positive punishment.

The Punishment Callus

Some trainers and behavior professionals warn about something called the punishment callus. A punishment callus is not a physical callus. It is one name for the way that animals (including humans) can develop a tolerance for an aversive stimulus. When that tolerance is developed, that stimulus does not decrease behavior. It is not an effective punisher. The animal has become habituated to punishment.

This is not just a piece of folklore. It has been demonstrated repeatedly in studies, and it happens way more often than we realize in real life. I’m going to describe some of the research.

Reinforcement First

The first thing that happens in most punishment experiments is that the animal is taught a behavior using positive reinforcement. The pigeon learns to peck a disk to get some grain. The rat learns to press a lever or run down a chute to get food. There will be dozens, hundreds, or even thousands of repetitions. Then, after the behavior is strong, the researchers introduce punishment. This is usually in the form of shock. The shock is generally contingent on the animal touching the food or performing the behavior that gets access to the food.

At first glance, this seems weird, not to mention wildly unfair. Why would they be starting off a punishment study with reinforcement? Then why would they punish the same behavior?

Think about it a little and it makes sense. You can’t use punishment if you don’t have a behavior to punish. Reinforcement is what makes behaviors robust. You can’t measure the effects of unpleasant stimuli on a behavior unless you have a strong, consistent behavior to begin with.

In some studies, they cease the reinforcement after the punishment starts. In others, the reinforcement continues. In these experiments, the animals and birds get shocked for trying to get their food in the same way they learned to get it through many repetitions of positive reinforcement.

But this is not at all unique to lab experiments. A hard lesson here is that we do the same thing when we set out to punish a behavior. Animals behave because they get something of value (or are able to escape something icky). The behavior that the dog is performing that annoys us is there because it has been reinforced. It didn’t just appear out of the blue. So if we start to punish it, the animal is going to go through the same experience that the lab animals did. “Wait! This used to get me good stuff. Now something bad happens!” And punishment and reinforcement may happen together in real life, just as in some of the studies.

How We Imagine Punishment to Work

I think most of us have an image of punishment that goes something like this:

The dog has developed a behavior we find annoying. Let’s say he’s knocking over the trash can and going through the trash. The next time Fido does that, we catch him in the act. We sternly tell him, “No! Bad dog!” Or we hit him or throw something. (I hope it’s obvious I’m not recommending this.) The next time he does it, we do the same thing. In our minds, we have addressed the problem. In our mental image, the dog doesn’t do it anymore.

But. It. Doesn’t. Work. That. Way.

Real life and science agree on this. It’s much harder than that to get rid of a reinforced behavior.

Punishment Intensity

Many studies show that the effectiveness of a punishing stimulus correlates to its intensity (Boe and Church 1967).   The higher the intensity, the more the behavior decreases. Very high-intensity punishment correlates to long-term suppression.

Skinner was one of the first to discover that low-intensity punishment was ineffective. He taught rats to press a bar to get food. Then he discontinued the food and started to slap the rats’ paws when they pressed the bar. For about a day, the rats whose paws got slapped pressed the bar less than a control group. Then they caught up. Even though they were getting slapped, they pressed the bar just as often as the control rats (Skinner 1938). Other early punishment studies also used mild punishment, and for a while, it was assumed that all effects of punishment were very temporary (Skinner 1953). This was determined to be incorrect in later studies with higher intensity aversives.

Dog owners who try to use low-level punishment are faced with an immediate problem. Ironically, this situation usually comes from a desire to be kind. Many people do not feel comfortable doing anything to hurt or startle their dogs, but these are the methods they have been told to use. So they figure that they should start with a very low-intensity action. They’ll yell just loud enough to get the dog to stop. They’ll jerk the dog’s collar just enough to interrupt the pulling on leash. They’ll set the shock collar to the lowest setting.

But if a behavior is valuable enough to a dog (i.e., it gets reliably reinforced), a mild punishment will barely put a dent in it. It may interrupt the behavior at the moment and suppress it for a short time, and people are fooled into thinking it will continue to be effective. But it almost certainly won’t.

So the next thing the humans do when the dog performs the behavior is to raise the level of the punishment a bit. They yell louder, jerk harder, or turn up the dial on the shock collar.

Lather, rinse, repeat. If this pattern continues, the humans are successfully performing desensitization to punishment. The desensitization can continue up to extremely high levels of punishment. That is the punishment callus, and it has been excruciatingly well documented in the literature.
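
As an illustration only, here is a tiny toy model of that spiral. The dynamics are entirely invented (nothing below comes from any study); it just plays out the ratchet described above: suppression, habituation, escalation, repeat.

```python
# Toy model of the escalation spiral described above. The numbers and rules
# are invented for illustration; they are not drawn from any experiment.

intensity = 1   # the "lowest setting" the human starts with
tolerance = 0   # the highest intensity the animal has already habituated to

for week in range(1, 9):
    if intensity > tolerance:
        outcome = "behavior suppressed for a while"
        tolerance = intensity        # ...but the animal habituates to this level
    else:
        outcome = "behavior is back, so the human turns the dial up"
        intensity += 1
    print(f"Week {week}: intensity={intensity}, habituated to={tolerance} -> {outcome}")

# The printout alternates between brief suppression and recovery while the
# intensity ratchets upward: the "punishment callus" in miniature.
```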

Miller’s Rats

In one study (Miller 1960), hungry rats were trained to run down a walled alleyway to get a moist pellet of food at the other end. The rats repeated this behavior many times as they got acclimated to the setup. Each rat’s speed of running down the alley was recorded as they gained fluency. The behavior of running down the alley was reinforced by access to food. This continued (without punishment) until the researchers determined that the rats had reached their maximum speed.

A shock mechanism was then initiated so the rats’ feet would get shocked when they touched the moist food. The rats were divided into two groups. They were referred to as the Gradual group and the Sudden group, indicating the way the shock was introduced. The Gradual group started with a shock of 125 Volts, which caused virtually no change in behavior. The shock was raised in each subsequent session. The rats’ speed slowed down somewhat each time the shock was raised. Then it recovered and leveled off as they got accustomed to the new intensity. The shock was raised in nine increments up to 335 Volts.

The rats in the Sudden group didn’t experience the gradual shocks. Their first introduction to the shock was at 335 Volts. Their movement down the alley slowed drastically. Often they would not touch the food.

In the last 140 trials (5 trials each for 28 rats total), the results were telling. Out of 70 trials at 335 Volts for the rats in the Gradual group, only 3 resulted in the rat not going all the way to the food. In the Sudden group at the same voltage, 43 of the 70 trials, more than half, resulted in the rat not going all the way to the food.

To repeat: These two groups of rats responded differently to shocks of the same high voltage due to how the shock was introduced.
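
To make those trial counts concrete, here is a quick arithmetic sketch. The counts of 3 and 43 “balk” trials out of 70 per group come straight from the numbers reported above; splitting the 28 rats into two groups of 14 is my inference from the 70 trials per group, not something stated in the post.

```python
# Quick arithmetic on the trial counts described above: the last 5 trials
# for each of 28 rats gives 140 trials, or 70 trials per group (implying
# 14 rats per group, which is inferred rather than stated).

trials_per_group = 70

# Number of trials in which the rat did NOT go all the way to the food.
balk_trials = {
    "Gradual": 3,
    "Sudden": 43,
}

for group, n_balked in balk_trials.items():
    pct = 100 * n_balked / trials_per_group
    print(f"{group:7}: {n_balked:2}/{trials_per_group} balk trials ({pct:.0f}%)")

# Gradual:  3/70 (~4%)  -- nearly every trial still ended with the rat eating
# Sudden : 43/70 (~61%) -- most trials ended without the rat reaching the food
```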

Now take careful note of the differences in their behavior:

The [subjects] in the Gradual group flinched and sometimes squealed but remained at the goal and continued to eat. Those in the Sudden group seemed much more disturbed, lurching violently back, running away and crouching a distance from the goal (Miller 1960).

There’s the clincher. At 335 Volts, some rats were still approaching the food and eating while getting shocked. In other words, those behaviors were not effectively punished. For the other rats, the behaviors were definitely punished–and the rats were traumatized.

So there you have it. Two of the most common outcomes of using punishment are:

  • a spiral of ever-increasing punishment intensity that the animal learns to tolerate; or
  • a shut-down animal.

This information has been available for 50 years. Yet aversive techniques are still casually recommended to pet owners with no education in behavior science, no exposure to the mechanical skills involved, and most important, no clue of the harm to the animal.

The Resilience of Behavior

One of the things I finally “got” about punishment as I studied the graphs in these studies is that complete cessation of a behavior is rare. Again, our mental image of the results of punishment is incorrect. In the Miller experiment, the traumatized rats in the Sudden group did sometimes approach and eat the food despite intense punishment. The rats in the Gradual group consistently did so.

The rats in the Gradual group correspond to dogs who are trained with gradually increasing punishment. They acclimate and the behavior continues. They get a punishment callus. The rats in the Sudden group probably resemble the heavily punished dogs I describe in my post Shut-Down Dogs, Part 2. 

One more thing about the graphs. When punishment is initiated or taken to a higher level, there is an immediate drop-off in behavior. It’s usually of short duration. The rate of behavior generally rises back up again.  This is what I modeled in the diagram above. You can see a bunch of these graphs in the Azrin study linked below.

Increasing the punishment intensity seems to have the same general effect as the initial addition of punishment. In both instances, the new punishment intensity produces a large suppression at the moment of changeover, with substantial recovery after continued exposure to this new intensity. Only at severe intensities of punishment has further increase failed to produce an abrupt decrease in responding (Azrin 1960).

One of the tragedies of this pattern in dog training is that the drop-off causes the human to believe the punishment is working. Raising the level of the punishment is reinforcing to the human.

The deliberate use of positive punishment as a training method is already ruled out of consideration for most positive reinforcement-based trainers. This is because of humane concerns and punishment’s known fallout. But I believe it is also important for us to know how difficult it would be to use effectively and that it does not work the way most of us imagine it to. We can see habituation to punishment all around us once we learn of its existence. My takeaway from the studies is how much better and more straightforward it is to build behavior in our pets than to try to squash it down.

Note: Please don’t quote this article to claim “punishment doesn’t work.” High-intensity punishment does work. But it has unacceptable side effects that can destroy our dogs’ happiness and wellbeing, not to mention their bonds with us.

References

Azrin, N. H. (1960). Effects of punishment intensity during variable-interval reinforcement. Journal of the Experimental Analysis of Behavior, 3(2), 123-142.

Boe, E. E., & Church, R. M. (1967). Permanent effects of punishment during extinction. Journal of Comparative and Physiological Psychology, 63(3), 486-492.

Miller, N. E. (1960). Learning resistance to pain and fear: Effects of overlearning, exposure, and rewarded exposure in context. Journal of Experimental Psychology, 60(3), 137-145.

Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York: Appleton-Century.

Skinner, B. F. (1953). Science and human behavior. Simon and Schuster.

Copyright 2016 Eileen Anderson

How Did The Aversive Get There? A Call for Honesty

I am mystified by one particular argument of those who use protocols for fearful or reactive dogs other than desensitization/counterconditioning (DS/CC). These other protocols often use negative reinforcement; if not that, then sometimes desensitization without counterconditioning; sometimes extinction; sometimes habituation.

People who practice these protocols intentionally expose their dogs to their triggers at an aversive level at times, as opposed to people who practice pure DS/CC, which is ideally practiced at a distance or intensity such that the trigger is not aversive to the animal.

The argument that bothers me is this:

It’s OK to expose the animal to a trigger at a potentially aversive level as long as we are not the ones who put the aversive there for them to be exposed to. We’re not adding an aversive; it’s already there.

I wrote a post a while back addressing this idea in part. I pointed out that for negative reinforcement protocols, the ethical and definitional difference is not about how the aversive got there. To say so is to invoke the naturalistic fallacy.  The ethical difference rides on whether the trainer chooses to put a contingency on the animal getting away from it, not whether the aversive is “natural.” Do they ask for or wait for a certain behavior before retreating? Because that is a choice. If the dog gets close enough to the trigger that she starts showing stress, there is always the option of getting her humanely out of there, with no requirements on her behavior from the handler.

Where the aversive came from is ethically irrelevant, since the trainer makes a choice whether or not to use it, however it got there. Most would agree that such a use is an ethical choice, to be carefully considered.

So the fact that people are still mentioning this irrelevancy about “who put it there” seems like a lot of hand waving to shoo away the real issue: choosing to use an aversive.

But wait–in case it matters–how did it get there?

How It Really Got There

My hand, my voice, my phone.

I have a formerly feral dog with whom I have been working for a few years, gradually getting her socialized to people, and making lovely progress with DS/CC.

Even though my goal is to keep the triggers (people, in her case) under the threshold of aversiveness, I realize that I am dealing with potentially aversive situations when we go out into the world. And I arrange for and seek out those situations for her sessions. For instance, I make phone calls at times to arrange for a controlled session with a person unknown or partially known to her.

If I do this and blow it and let her get too close or stay too long, I have exposed her to an aversive. How’d it get there? Me! Entirely through my choices! I arranged it. I deliberately sought it out with her. I made the phone call, drove my dog to the meeting place, and exposed her to the trigger. I added it to her environment, or added her to an environment where I knew it to be.*

People following any protocol generally arrange for triggers to be present in this way, including people, dogs, specific things like people on bikes or scooters, or other animals. So if someone is doing any type of exposure treatment, how can they claim that they are not responsible for the aversive being there? Did the Tooth Fairy bring it? Can their dog pick up the phone and drive the car?

It is not logical to claim to have nothing to do with the aversive being in the environment if you planned it, arranged for it, or sought it out in the first place. And that includes stealth sessions. If you are out there looking for triggers to use without their knowledge, you are still the one choosing to expose your dog to them. Finding = adding.

Empathy

You can probably detect that I find this irritating, but I seek to look at it in an empathetic way.

I have been reading some posts by Behavior Adjustment Training (BAT) practitioners in particular who express that they feel attacked and beleaguered by questions about negative reinforcement and humane training attached to their protocol. I get that they feel pushed into a corner.

I can empathize with that. Here is something you believe in, and people are asking difficult, pointed questions about it. Sure, anybody would be defensive. As a blogger, I have to deal with all levels of criticism. Even the most reasonable of criticism hurts.

There are people who react to these questions with dignity, though. They say yes, they are using negative reinforcement at times if they use certain protocols. They have thought it out, see good results, usually use other protocols as well, and are ultra careful about side effects. They don’t play like the presence of the aversive has nothing to do with them. Although I may not agree about all methods these folks use, I can appreciate their transparency and honesty about the science.

But it really worries me that there are still people who claim not to be responsible for getting the aversive into the environment. If they are trying to elude responsibility for that, even though it’s completely a side issue, what else are they willing to overlook, justify, or push out of their minds?

Thank you to all the people who do their best not to adjust the science (or even basic logical thinking) to justify their own preferences.

Coming Up:

  • The Girl with the Paper Hat Part 2: The Matching Law
  • Punishment is not a Feeling
  • Why Counterconditioning Didn’t “Work”
  • How Skilled are You at Ignoring? (Extinction Part 2)
  • What if Respondent Learning Didn’t Work?

Eileenanddogs on YouTube

* I am not confusing positive punishment and negative reinforcement, here. To use negative reinforcement, there has to be an aversive in the environment to be removed or escaped. We’re talking about how the aversive got there in the first place.

5/25/14 Addendum

This post is a call for honesty about one aspect of the use of aversives. I believe that all trainers, regardless of method, should be honest about their training choices and philosophy. You do it: own it. That’s the message in a nutshell. And I directed it at an argument that I believe does the opposite of “owning it.”

However, one of the common responses I have gotten over the past week has been to compare the ranges and setups of DS/CC protocols with those of protocols using negative reinforcement, often in an apparent attempt to minimize the differences.

I have previously provided a webinar and a movie on the differences and similarities of the major protocols for addressing fear in animals, with particular emphasis on their ranges and setups.

To review a few relevant points: Debating who starts further from the stimulus is a moot point.  No matter how far away you start, you are required to go into the aversive range for a negative reinforcement protocol to work.  In desensitization and counterconditioning you have no need to cross into the range of stimulus aversiveness in order to get effective results. In R-, aversive exposure is necessary. The protocol depends on it. In DS/CC, aversive exposure is by accident and hopefully rare. That is an important distinction between DS/CC and negative reinforcement-based protocols.

The other important distinction is that you can get a positive conditioned emotional response from DS/CC. With DS/CC and negative reinforcement there are two very different types of learning going on.

Copyright 2021 Eileen Anderson