My Dog Is Afraid of the Clicker. What Should I Do?

I wrote this article especially for people who are either new to using a clicker or have not dealt extensively with a fearful dog.

If your dog is scared by the noise of the clicker, slow down. Switch to a verbal marker for now. Don’t immediately focus on trying to achieve softer clicks. Here’s why.


Rat terrier Kaci says, “Train me!”

Some years ago, I used to train my friend’s rat terrier Kaci. She is the star of my “backing up” video and was an all-around champ of a dog. She had a littermate named Cookie. After I started training Kaci and she enjoyed it so much, I started training Cookie as well. Cookie was less extroverted than her sister and had some fears. But she was comfortable at home, personable, and so, so sweet.

I remember our first session. I took Cookie out on my friend’s back porch. I had a box clicker and some treats. I don’t remember what behavior I was trying to train. But I do remember that Cookie was super responsive. She was offering behaviors right and left soon after we started. I was thrilled. She seemed to be having great fun. Then suddenly she wilted. She started to flinch when I clicked. The clicker was obviously bothering her so I stopped the session.

I stopped training Cookie after scaring her with the clicker

I looked for articles on “what to do if your dog is afraid of the clicker” and found some advice. I’ve since learned that these are standard recommendations; the same ones will pop up in any Internet discussion forum if you ask the question. They center on creating a quieter clicking noise: using a retractable ballpoint pen, a stapler, or a metal bottle lid instead; putting some duct tape on the clicker to dampen it; getting a different type of clicker; putting the clicker behind your back; starting with the dog in another room; or taking the training outside.

Some people will recommend switching to a verbal marker, but they are usually outnumbered by all the methods offered to get a quieter click.

For Cookie, I chose the tape method. I applied little squares of duct tape to the “tongue” of my box clicker and softened the click quite a bit. The next day I started to train Cookie and she fled after the very first (soft) click. I decided not to use a clicker for a while and to switch to a verbal marker. In a couple of days I tried again to train. I had a pocketful of good food, but this dog who had known me since she was eight weeks old wouldn’t come near me.

I had sensitized her to the clicker. My attempts at de-intensifying the stimulus were too late and the sound was too close to the original scary click. She was terrified of clicking noises now, and that fear had generalized to the other signs of training sessions and even, for a while, to me.

Because I didn’t live with her, I didn’t have a good way to fix the situation. And frankly, I might not have had the skill to do it at that time. She had a happy life and after that, we just did other things together. But I have always felt terrible for adding that fear to her life. I did what lots of us do: I took sensible-sounding advice from the Internet. But it only sounded sensible because I didn’t know enough about fear in dogs. I wish I had stopped the instant Cookie responded poorly to a click and not tried to train for a few days. Then I could have started again in a different location, training a different behavior—this time with food only and no marker. Ah, hindsight.

What To Do if You Find Out Your Dog Is Scared of the Clicker

My advice is a lot more conservative than the other articles I have read on this topic. If your dog is scared of the clicker, stop using any form of a click for now. If you must use a marker, use a verbal one. Most people recommend dampening the sound of the clicker somehow and giving it another try. That’s what I did, and I ended up permanently scaring a dog.

If you searched for an article on this subject because your dog is scared of the clicker, you may be a comparative beginner at clicker training or you may never have dealt with a fearful behavior in a dog. What you may not know is that fears are super easy to create and very hard to get rid of. If you don’t have experience with dealing with fear, trying to find a click sound that won’t scare your dog is not worth the risk, in my opinion. The clicker is not so important that you should risk worsening a fear in your dog. Instead, drop the whole clicking sound idea for now.

For many behaviors, you don’t need a marker at all. You can just give the dog the food or toy. If you need a marker, you can use a verbal one. If you need some reassurance that you don’t have to use a clicker, here is an excellent write-up of a recent study from the Companion Animal Psychology blog. There are lots of studies about clickers and other markers, and the results are mixed. I’m not arguing that a verbal marker is better. It’s just that any perceived benefit of using a clicker is outweighed by the risk of installing fear.

Again, this decision doesn’t have to be forever. You can change course later as you get to know your dog better and as you develop some training skills.

You don’t lose anything by being conservative. You can lose a lot by experimenting with clicks. Continuing with a modified, softer clicker can get you the same result I got with Cookie. Fears are super easy to install and can be terribly difficult to get rid of. Play it safe. Lose the clicker for now.

Problematic Advice Related To This Topic

When researching this article, I read from a positive reinforcement trainer that it is practically mandatory to “cure” your dog of clicker fears, because if they are afraid of the clicker, imagine what other sounds they will be afraid of! How will they adjust to the big, bad world? (This question is related to the “Dogs need stress in their lives” argument.) But again, if you are new to training, you probably can’t assess how bad the problem is. Marching out to do desensitization and counterconditioning without much experience can dig you in deeper. DS/CC with sounds can be especially tricky.

Another well-regarded site recommends working with the clicker-fearing dog in an enclosed space or tethered. Click/treat, click/treat. The dog can’t get away. Doing this actually constitutes flooding and is very likely to make a sound-sensitive dog much worse, not better.

Am I Saying Not To Do Desensitization?

Nope, I absolutely believe in performing desensitization and counterconditioning to help dogs get used to scary sounds. I’m saying to wait. Clickers are optional sounds. If it turns out that your dog is sound sensitive, there are probably more important sounds to work on than an optional training marker. Plus you should be working with a veterinary behaviorist.

On the other hand, if the clicker sound was just a situational startle, time will probably help. You’ll observe your dog over time for other responses to sudden sounds. If you don’t see any more fear, you can get a qualified trainer to help you introduce the clicker later on if it’s important to you.

It Didn’t Work for My Dog!

I’ve written before about the dangers of claiming that something “worked for my dog” and then making a general recommendation of it. In this post, I am doing the opposite of recommending something that “worked for my dog.” I’m cautioning against something that terrified my dog and could terrify yours. Granted, many dogs do fine with the normal solutions. Some people may even find my recommendations hysterical or overdone. They may not have had a sound-phobic dog and haven’t seen the absolute misery that condition can cause in a dog’s life. There is a chance that your dog will be like Cookie. Do you really want to take the risk when it isn’t necessary?

I don’t often give straight-up advice. I don’t have the credentials to tell you what to do about a fearful dog. But urging caution does no harm. You can wait. Slow down. Get more experience and information. Hire professional help if you and your dog need it. You don’t have to use a clicker today, this week, or this month. If you back off now, you may be able to use it later. But if you keep at it now and scare your dog, you may lose a lot more than the ability to clicker train.

Related Page

6 Ways to Prepare Your Dog For Fireworks Starting Today

Copyright 2018 Eileen Anderson


Which Pavlov Is on Your Shoulder?

Which Pavlov Is on Your Shoulder? Two photos of Pavlov, side by side, one with a halo and one with devil's horns.

The trainer Bob Bailey is often quoted as saying that when one is training an animal, “Pavlov is on your shoulder.” He is reminding us that while we are training operant behaviors (sit, down, fetch, weave), there are also respondent behaviors and respondent conditioning occurring. Respondent behaviors are behaviors that are generally involuntary and include reflexes, internal surges of hormones, and (probably) emotions.

But there’s another part that is not quoted as often. Bob Bailey also says that while Pavlov is on one of your shoulders, Skinner is on the other. (B.F. Skinner was known for exploring operant conditioning.) In his Fundamentals of Animal Training DVD, Bailey states that both fellows are always on your shoulders. Depending on what you are doing, one may shrink while the other grows in importance. (Great visual, eh?)

When we are training operantly, there are still classical associations going on. And when we are doing classical conditioning, there are operant behaviors that come along and get reinforced.

I find these mixtures fascinating and have some posts and video examples.

• In this post, I describe and show how I was using Skinner for operant behaviors but Pavlov was on my shoulder. You can see the results of good associations (food, play) with agility behaviors. I was teaching operant behaviors, but many aspects of the training (including me) got a positive dose of classical conditioning.

• In this post and video, I discuss how I performed Pavlovian conditioning. But Skinner came tagging along close behind. You can see the results of classically conditioning my dog Clara to respond favorably to another dog barking. Yes, she even drools. We can assume that her body is getting ready to ingest food. But we also see tail wagging, orientation to me (food lady), and in the end, running to me when she hears barking. Those are all operant behaviors. She was performing operant behaviors that reflected her emotional state and expedited the food delivery, and those behaviors got reinforced.

What About Fear Conditioning?

Wait, what?


Skinner box. Note electrified grid in the floor.

We often use classical conditioning procedures with the goal of alleviating fear. We do this by classically conditioning an appetitive response and, via desensitization, slipping it in place of the fear response. But the term “fear conditioning” means something else in the literature. It refers to using classical pairing to create a fear response to a formerly neutral stimulus. Scientists including Pavlov did this. Such experiments have been performed for decades. Animals learned that a buzzer, light, or other stimulus predicted a shock delivered through the floor of their cage or through some apparatus. They started showing fearful behaviors and a general suppression of behavior when exposed to the predictive (conditioned) stimulus.


Carolina wren: I am little and very noisy

These pairings happen in life all the time. Some years back, a pair of Carolina wrens were nesting in a rubber bucket on my back porch, but I didn’t know it. The bucket was up on a table and one day I went to put it away. I grabbed it and a bird flew out noisily right into my face. It scared the crap out of me. Even though the startle wasn’t painful, nor was my life in danger, that’s the way I reacted. And I got leery of that bucket. I avoided moving it again until six months later in the dead of winter. Even then I touched it only after getting on a ladder to peer into it from a distance. I remained wary of the bucket for a year or two after as well. The bucket was not directly responsible for my scare. It was just a bucket. But I couldn’t shake the anxiety that got attached to it for quite a while.

Not all associations are dramatic or come from trauma. Repeated unpleasant experiences can become associated with the place they happened or the sight of the person who caused them. How about when that person who bugs you comments on Facebook? All you have to see is her name—not the content of the current remarks—to get a sinking feeling.

Back to dog training. People who train using aversive stimuli also have Pavlov on their shoulders. But trainers who use shock and prong collars, molding and body pressure, or who throw things are not generally accompanied by Nice Pavlov*. Not the one who floods the animal’s body with “let’s eat!” or “let’s play!” chemicals and causes pleasant anticipation. They get Bad Pavlov, the one associated with fear conditioning. The one who causes the animals to hunker down in fear. Here is an example of a dog who looks like Bad Pavlov is hanging around. (I have used the Do Not Link function in hopes of not adding hits to this shock training video.)

There’s nuance in reading the body language, of course. If an animal has enough training to know how to control the aversive stimulus with its behavior, it won’t necessarily look miserable. Also, you can see happy body language on an animal being trained with aversives if the activity itself is fun. Hence the dogs who are said to “get excited” when the prong collar comes out—because it predicts a walk. Disassociate the walk and you will find that the prong collar—surprise—is not intrinsically fun. In a mixed case like this, though, I envision the aversive control as a heavy weight that always has the potential to pull the dog’s emotions in a negative direction or to suppress its behavior. The dog’s happiness is despite the aversive stimulus, not because of it.

Nice Pavlov Is Not Automatic

Just as the presence of aversives doesn’t always squelch all of the pleasure out of a situation, the presence of appetitive stimuli doesn’t guarantee rainbows and unicorns. We tend to assume that if you train with positive reinforcement, you automatically get a positive conditioned emotional response.  But it ain’t automatic. There are ways to mess it up.

Bad Pavlov was on my shoulder

See below for the reason Bad Pavlov is hanging around poor Clara.

If you are generous with reinforcement, minimize extinction frustration, and don’t frustrate or scare your dog—Nice Pavlov is probably on your shoulder. Your dog will build good associations to the training experience and to you.  But what if you frustrate or scare your dog? What if you repeatedly get in your dog’s space and don’t notice that she doesn’t like the pressure? Is Nice Pavlov going to show up and save the day just because you are using food? Probably not. You are not going to get a sweet, positive conditioned emotional response to your cues, to your training sessions, or to you if you are regularly letting aversive events creep into your training.

Here’s what it can look like if our own training session—with food—is less than fun for the dog.  I’ve set the link to start the video in the middle where training of my older dog, Summer, begins. It was a moderately stressful session for her, with too many competing, slightly scary stimuli from the environment. I was also asking for stationary “leave-it” behaviors that need a lot of self-control, and I was using kibble, a low-value food. Not a great combination.

Summer was a trouper, worked hard, and although she was clearly anxious through much of the session, succeeded at what I asked of her. I wouldn’t say this session damaged our relationship or tainted training in general—we had much too strong a history of happy training. But consider this: what if every training session I ever had with her was like that? Mildly scary stuff, hard tasks, low-value treats. Even though it would still be “R+ training” I don’t think I would have built up much of a positive response to training. Nice Pavlov would not have joined us—or if he had, he would have been smaller than Bad Pavlov. (Too many dudes on our shoulders!)

The photo above shows Clara on a day in 2015 after she knew I was getting ready to trim her nails. I have always used tons of good food, and Clara had been comfortable with nail trims for quite a while. But Nice Pavlov was not in sight. Clara was recovering from Rocky Mountain spotted fever. While she had been sick, her joints had become painful, and having her feet handled and nails trimmed hurt. (I should not have trimmed them.)

Are Pavlov and Skinner All Jumbled Up?

Yes and no. Operant and respondent behaviors are both going on all the time. But I’ve also seen some trainers use it as an excuse. They say that since Pavlov and Skinner are both present, we shouldn’t get “hung up” on which one is primary. Well sure, we can’t crawl inside the animal’s body to check. But our training procedure should reflect our goal(s). And we should definitely get hung up on whether the dog is enjoying herself.

Teach Your Children Puppies Well

Positive reinforcement-based training done even moderately well comes with lovely side effects for the trainee (and also the trainer). But badly attempted R+ training that regularly lets aversives, coercion, or too much difficulty into the picture can stress out dogs. If I grab my dog’s collar to move her and I haven’t conditioned her to collar grabbing, she may dodge the next time I reach for her. That was aversive. If I push into her space to get her to move and she starts jumping back when I approach her, that was aversive. If I play a game with a fearful dog where she needs to come closer and closer to me to get the treat and I go too fast—I am the aversive.

So take this friendly reminder from someone who has made plenty of mistakes. Yes, Pavlov is always on your shoulder. But even if we use food to train, it doesn’t mean we will automatically get a beautiful positive conditioned response to training. There are other stimuli that can creep into our training sessions that can knock the fun right out of it.

*Pavlov wasn’t actually what we would consider “nice” to dogs in this day and age, although he was likely better than many experimentalists. I’m using some rhetorical license here.

Copyright 2018 Eileen Anderson

Pavlov photo credit Wikimedia Commons. Additions in color are mine.
Skinner Box diagram credit Wikimedia Commons.
Carolina Wren photo credit Wikimedia Commons.
Photo of Clara credit Eileen Anderson


A Quadrant by Any Other Name is Still a Cornerstone of Operant Learning

This 2003 edition book is $4.89 on Amazon. Contents: priceless.

There is a science that deals directly with how organisms learn and how to use that information to change the environment in order to change behavior. It’s called applied behavior analysis (ABA). It is the applied version of behavior analysis, which was referred to as the experimental analysis of behavior earlier in the 20th century.  It is descended from the work of the behaviorists such as Skinner and is a sub-discipline of psychology.

It is a rich field of study. Universities offer graduate degrees. At the same time, it is approachable. Many of the entry-level ABA college textbooks currently in use are readable to someone with a strong high school education and certainly to someone with a college education. They are generally self-contained, in that they don’t require a lot of previous exposure to terminology to be able to work through.  The books contain fascinating information about what makes us tick, why we do what we do, and how we might go about changing behavior if we needed to. They also teach skills in ethics and kindness.

Because they are written by experts in learning, the texts are generally well organized, interesting, and approachable. A sidebar in Paul Chance’s Learning and Behavior starts off, “What would you do if, while camping miles from the nearest hospital, you were bitten by a poisonous snake?” It goes on to discuss superstitious behavior. Other sidebars are titled “Punks and Skinheads,”  “Variable Ratio Harassment,” and “Learning from Lepers.” I’ll leave you to go find out the subject matter. This topic is a goldmine for the curious. It is relevant to everyday life and can teach knowledge and skills that are very practical. If you buy older editions of textbooks, as I usually do, the prices are quite reasonable. (For instance, here’s a link to Paul Chance’s Learning and Behavior, with the oldest editions first. You can scroll forward to newer editions as your pocketbook allows. The most recent edition is 2013.)

Like any field of study, ABA has its own terminology. When we first encounter it, two things typically happen. First, we think we know it already. Who doesn’t know what punishment is, right? Motivating operation—doesn’t sound too hard to figure out! Then we go a little deeper, and even though the words are familiar, the concepts may not be. Some are extremely unfamiliar. That can cause dismay. One of the problems in the dog training world is that a lot of people get stuck at that point.

Skinner was familiar with the bump in the road when we begin to learn about behavior.

We all know thousands of facts about behavior. Actually there is no subject matter with which we could be better acquainted, for we are always in the presence of at least one behaving organism. But this familiarity is something of a disadvantage, for it means that we have probably jumped to conclusions which will not be supported by the cautious methods of science. … A great deal of unlearning generally takes place in our early contact with a science of behavior.–B.F. Skinner Science and Human Behavior

Even though the texts are readable, the words are familiar, and the topics are fascinating, there is some work to be done on the front end before you get to the really juicy stuff. But a layperson can get there with careful study and attention.

Translating Some of the Principles of Learning and Behavior

So here is this rich academic discipline that is rigorous, approachable, and applicable to daily life. It doesn’t discount other scientific disciplines covering cognition. It provides a great “on the ground” view of behavior that can mesh with and inform other study. But what has tended to filter down into the dog training world from ABA is a piece of a piece of a piece of the discipline, with a bit of a misnomer to boot.

Behavior analysis covers all types of learning, including respondent, operant, and other types (e.g., habituation and sensitization).  Let’s talk about operant behavior for a moment, since that’s the part that usually filters into the training world. (It’s not the only part we need to train dogs, though!) There are four contingent processes of operant learning that are often grouped into a table or list: positive reinforcement, negative reinforcement, positive punishment, and negative punishment. See my article for explanations and video examples. Contingent in this context refers to the relationship, or dependency, between behavior and consequences. There also exist operant behaviors that don’t have this type of contingency.

The operant contingencies are presented in a number of different ways in learning and behavior texts and articles, but generally over several chapters. Sometimes positive and negative reinforcement are grouped together. Sometimes negative reinforcement and positive punishment are grouped together since they involve aversive stimuli. Sometimes operant extinction (see below about extinction) appears with punishment, since both decrease behavior. Sometimes it merits a whole chapter of its own.

In textbooks, the charts or summations are generally presented after the learning processes have been discussed. They come after the student has had a chance to get familiar with the processes. Here is a link to a contingency table from a publicly available slideshow (since I can’t copy one from a textbook). Most books on ABA have such a table or list; they don’t always look exactly the same, but they convey the same basic information.

Now, here’s a problem. In the dog training world, the material that is usually presented over several chapters in a behavior analysis textbook is frequently reduced to one of these tables or lists. And for the last 20 years in this world, the shorthand for referring to these four learning processes has become “the quadrants.” This is not the way ABA people usually refer to these processes. More often it is described as a “contingency square” (Chance, 2013) or a 2×2 matrix (Mazur, 2016). It is a convenient depiction of two related variables each with two levels, i.e., function (increase and decrease) by operation (add and remove). At some point, people started using the term “quadrant,” i.e., the cell in which the term appears in this classification scheme, to refer to the contingent learning process itself. To me, it is important to remember that the definition is more than the quadrant, cell, or sector it lives in. It’s a real-life process.
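For readers who think in code, the 2×2 matrix is easy to sketch: one variable is the operation on a stimulus (added or removed), the other is the observed effect on future behavior (increases or decreases). This is only a toy illustration of the classification scheme; the key names and labels are mine, not standard ABA notation.

```python
# The four contingent processes of operant learning as a 2x2 matrix:
# keys are (operation on a stimulus, observed effect on future behavior).
CONTINGENCY_SQUARE = {
    ("added",   "increases"): "positive reinforcement",
    ("removed", "increases"): "negative reinforcement",
    ("added",   "decreases"): "positive punishment",
    ("removed", "decreases"): "negative punishment",
}

def name_process(operation: str, effect: str) -> str:
    """Look up a contingent process by its operation and its observed effect."""
    return CONTINGENCY_SQUARE[(operation, effect)]

print(name_process("removed", "increases"))  # negative reinforcement
```

Note that the lookup needs both variables: knowing only that a stimulus was added, or only that a behavior decreased, does not name a process.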

If you want to use the best terminology and be understood by people certified or degreed in behavior analysis, instead of saying “quadrants,” you can instead refer to the “contingent processes of operant learning.” I dropped the “quadrant” terminology a few years back after I started talking to more ABA people, some of whom had not even heard the expression. At first, it was awkward. It’s a shortcut that virtually everybody in the dog world uses, and some people don’t know any other name for it. But after learning to use different words it now seems odd to me to refer to a learning process as some kind of area on a grid. As one-fourth of something. Most important, dropping that term filtered some of the mud from my understanding of learning science.

But Terminology Is Not the Main Problem

But there’s a worse problem than the propagation of idiosyncratic terminology. It is that people assume that the chart of four contingent processes that makes its way to the dog world represents the sum of learning science. Many think that with only the information on that chart, they can go ahead and use (or criticize) behavior analysis.  But unless they have been well taught through coursework and self-study, their understanding of how these processes work is often scanty and insufficient. (I know because I’ve been there.)  There are more processes to learn about.  Between getting limited information and the irritation of having to use terms that mean something different in everyday English and can be counterintuitive to boot, people discount these basic building blocks.

So we will read diatribes about the “limiting nature” of these particular operant processes. Even claims that using the contingent operant learning processes to learn and to classify behavior is somehow evil. I understand these criticisms because I’ve been there myself. There is help for the people who make these claims, though. The help is to get a learning and behavior textbook and read it. Or take a Behavior Analysis course at a university. Or take Susan Friedman’s professional course.

The four contingent operant processes are necessary but not sufficient to apply the science of behavior analysis. They are but some of the basic building blocks to master before you can get to the juicy stuff. But in my experience, it’s best to know them cold before you go on to the treasure trove that lies beyond.

Extinction, Recovery, Escape, Avoidance

There are some important learning processes that aren’t in that chart. One important one is extinction. It’s an operant process (and also a respondent process, but let’s not go there now) and often has its own whole chapter in a textbook. It’s complex. But it’s not on most of the charts that filter into the training world. The result is that many dog trainers assume that if a behavior decreases, it must be due to punishment. In extinction, the reinforcement that was previously maintaining a behavior is withheld or no longer present. You need to know about extinction, and in particular how it relates to reinforcement, if you are to use the basics of behavior analysis to train animals. Also important to know about are recovery from extinction and the two forms negative reinforcement takes: escape and avoidance. Avoidance is such a complex process that learning and behavior books often devote a whole chapter to it. It will lead you into some of the more fascinating riches in ABA, including more ways we learn. And there are still more processes I have not listed here.
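To make the extinction-versus-punishment distinction concrete, here is a tiny numerical sketch. It is entirely my own toy model, not one from the literature: under extinction, the response rate drifts down simply because reinforcement is withheld, with no punisher delivered at all.

```python
# Toy model: a response rate nudged up by reinforcement and
# down by its absence (extinction). No punisher appears anywhere,
# yet the behavior still decreases.
def next_rate(rate: float, reinforced: bool, step: float = 0.1) -> float:
    """Return the updated response rate after one trial."""
    if reinforced:
        return min(1.0, rate + step)
    return max(0.0, rate - step)

rate = 0.8
for _ in range(5):  # reinforcement withheld for five trials
    rate = next_rate(rate, reinforced=False)
print(round(rate, 1))  # 0.3
```

The point of the sketch is only this: a falling rate by itself doesn’t tell you which process is at work; you also have to know what happened to the reinforcer.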

What’s Missing and What Happens as a Result?

Summer hasn’t learned what the desired behavior is yet, but she is happy to work on it.

Typically people get into trouble if they assume that the contingent processes of operant learning are all there is to behavior science, or if they think they can safely extrapolate the rest from their own knowledge (aka preconceptions) without learning more about ABA.

I have identified four omissions that create common errors. These errors lead to claims that the learning processes are “too vague” or “have many gray areas” or make other claims that are based on assumptions and not learning science. I have committed all of these errors. I will be expanding on them in a separate post, but here’s a quick list.

First, to analyze behavior we need to identify an antecedent, a behavior, and a consequence. This “ABC”  is the basic unit of behavior. The four-part chart (or one of its cells) is not. When people claim that the processes on the chart are problematic, it is often because only two of these three items have been identified. There is insufficient information.

Second, in behavior analysis, we start by identifying and describing a specific behavior. That sounds simple, but it’s not, at least at first. If you start anywhere else, you are instantly off the path and into storytelling and conjecture.

Third, we need to keep in mind that the four contingent consequence procedures are defined by their effects, not what we think the effects should be. A reinforcer increases or strengthens behavior over time; a punisher decreases or weakens behavior over time. When we assume that we know what process is present because of the type of stimulus that’s added or removed, we fall off the path in a different way.  (There are a lot of those ways to fall off until we become more fluent!)

Fourth, we need to be clear that the plus (+) and minus (–) symbols describe operations, not values. Positive in this context means the behavior produced the addition of a stimulus, and negative means the behavior removed a stimulus.
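Those four points can be sketched together in a short Python illustration (the field names are my own, purely for illustration): you can only classify once the antecedent, the specific behavior, the operation on the stimulus, and the observed trend in behavior have all been identified, and the label follows from the operation and the observed trend, not from what we think the stimulus should do.

```python
from dataclasses import dataclass

@dataclass
class ABC:
    antecedent: str  # what set the occasion for the behavior
    behavior: str    # the specific, observable behavior
    operation: str   # "added" or "removed": the +/- is an operation, not a value
    trend: str       # "increases" or "decreases": the *observed* effect over time

def classify(record: ABC) -> str:
    """Name the contingent process from the operation and the observed trend."""
    kind = "reinforcement" if record.trend == "increases" else "punishment"
    sign = "positive" if record.operation == "added" else "negative"
    return f"{sign} {kind}"

# A treat is added after the dog sits, and sitting becomes more frequent:
record = ABC("cue 'sit'", "dog sits", "added", "increases")
print(classify(record))  # positive reinforcement
```

Notice that the consequence we intended as pleasant could still turn out to be a punisher; the `trend` field has to come from observation, never from assumption.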

Getting Over the Bump

A “quadrants” chart is not the only gateway into the world of ABA but it is a working gateway. The contingent processes of operant learning are necessary—but not sufficient—to analyze behavior. Many of the criticisms of the quadrants come off like questions from someone who has memorized the periodic table of the elements but not taken a chemistry class. (Ahem—I’ve been there, too.) How can radon have a larger atomic weight than lead when everybody knows radon is a gas? Learn more about chemistry, and you will know.

In behavior analysis, you need the definitions, and memorizing the four processes of operant learning with contingent consequences is a great start. But they are not useful until you know how to apply the information. To correctly use the periodic table to help build an equation, you need to study chemistry. To learn the long list of operant processes (of which contingent consequences are only four) and effectively analyze and change behavior humanely you need to study ABA.

Sable colored dog leaps off a pink mat towards her female handler's outstretched hand

Summer is learning to exit a mat only when a particular cue occurs

My Previous Post

This post replaces a previous one that was causing confusion and being misapplied. I removed it because of some poor choices I made, including a misleading title. Also, there is a ravenous audience out there in dog blog land, hungry for criticism of “the quadrants,” and they believed that was my point. I’m really sorry I provided fodder for that. It was the opposite of my intent.

In that post, I focused more on the terminology problem of “the quadrants” and less on the larger problem—that people in the dog training world tend to think that’s all there is to learning science (and many don’t even understand that part). And I committed the writer’s gaffe of claiming some terminology was wrong without suggesting how to make it right.

Also what that post did not convey is how gratifying it is—and fun—to progress in the understanding of ABA. To have epiphany after epiphany. How it can enrich one’s understanding of behavior and the animals and people around us, and help us make informed and kind decisions if behavior needs to be changed.

From ABA I learned that intervening in the behavior of another is a serious step that carries responsibilities. It usually entails preventing them from getting something they want in a way that is natural and works well for them. To interfere with that bears an ethical weight. I’m reading further in behavior analysis and getting even more guidance on that front.

The quadrants, by any name, are necessary to an understanding of ABA. But they are just the beginning.

Big thanks to Debbie Jacobs, Yvette Van Veen, Kiki Yablon, Randi Rossman, and Susan Friedman for discussions about the previous post and/or suggestions about the current one. As always, all mistakes are my own.



Copyright 2018 Eileen Anderson

Posted in Behavior Science, Extinction, Operant conditioning, Terminology | 17 Comments

Herrnstein’s Matching Law and Reinforcement Schedules


Chocolate cookies on a cookie sheet. The baker may do other activities while the cookies are baking as long as she shows up at the right time. Her behavior follows the matching law.

When we bake cookies, some reinforcement is on a variable interval schedule.

Have you heard trainers talking about the matching law? This post covers a bit of its history and the nuts and bolts of what it is about. I am providing this rather technical article because I want something to link to in some other written pieces about how the matching law has affected my own training of my dogs.

In 1961, R. J. Herrnstein published a research paper containing an early formulation of what we call the matching law (Herrnstein, 1961). In plain English, the matching law says that we (animals, including humans) perform behaviors in a ratio that matches the ratio of available reinforcement for those behaviors. For instance, in the most simplified example, if Behavior 1 is reinforced twice as often as Behavior 2 (but with the same amount per payoff), we will perform Behavior 1 about twice as often as Behavior 2.

The matching law can be seen as a mathematical codification of Thorndike’s Law of Effect:

Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond. — (Thorndike, 1911, p. 244)

It gets complex fast, though.

Following is Paul Chance’s simplified version of the basic matching law (Chance, 2003). I’m using Chance’s equation because Herrnstein’s original choice of variables is very confusing. Please keep reading, even if you don’t like math. This is the only formula I’m going to cite.

B1 / B2 = r1 / r2



where B1 and B2 are two different behaviors and r1 and r2 are the corresponding reinforcement schedules for those behaviors.

This is the same thing I said in words above about Behavior 1 and Behavior 2.

Note that this version of the formula deals with only two behaviors. However, the formula is robust enough to extend to multiple behaviors, and also holds true when you choose one behavior to focus on and lump all other behaviors and their reinforcers into terms for “other.” Later, a researcher named William Baum did some fancy math and incorporated some logarithmic terms to account for some commonly seen behavioral quirks (Baum, 1974).
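For readers who like to tinker, the prediction can be sketched in a few lines of Python. This is my own illustration, not from any of the cited texts; the function name is made up:

```python
def matching_law_shares(rates):
    """Predicted share of responding for each behavior, given the
    reinforcement rate for each: responses are allocated in proportion
    to each behavior's share of total reinforcement."""
    total = sum(rates)
    return [r / total for r in rates]

# Behavior 1 is reinforced twice as often as Behavior 2.
b1, b2 = matching_law_shares([2, 1])
print(b1 / b2)  # Behavior 1 is predicted to occur about twice as often
```

The same function handles more than two behaviors, which is the "extend to multiple behaviors" point above: pass in one rate per behavior and the shares still sum to one.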

There is something else we need to understand to really “get” the matching law. We need to understand schedules of reinforcement, both ratio and interval schedules. We often use ratio schedules in training our dogs. We don’t use interval schedules as often. They are very common in life in general, though. First, let’s review ratio schedules.

Ratio Schedules

Woodpecker on a tree represents a ratio schedule of reinforcement. The matching law says the bird will find the easiest bugs to access and peck there

Woodpeckers who are pecking for food are reinforced on a variable ratio schedule: a certain number of pecks will dislodge a bug to eat or uncover some sap.

Most of us are at least a bit familiar with ratio schedules, where reinforcement is based on the number of responses. When training a new behavior, we usually reinforce every iteration of the behavior. Some trainers deliberately thin this ratio later, and only reinforce a certain percentage of the iterations.

There has been a ton of research on schedules and their effects on behavior. In the terminology of ratio schedules, if your schedule reinforces exactly every 5th behavior, that’s called Fixed Ratio 5, abbreviated FR5. (This is generally a bad idea in real life. Animals learn the pattern and lose motivation during the “dry spells.”) If the schedule keeps the same ratio but the iterations are randomized, so that you reinforce every 5th behavior on average, that’s called a variable ratio schedule, referred to as Variable Ratio 5 or VR5. Introducing that variability makes behaviors more resistant to extinction, but there are many other factors to consider when deciding how often to reinforce. More on variable ratios in later posts!
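Here is the FR/VR distinction as a little code sketch. The function names are mine, just for illustration; real lab schedules are specified more carefully:

```python
import random

def fixed_ratio(n):
    """FR(n): deliver a reinforcer on exactly every nth response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def variable_ratio(n, seed=0):
    """VR(n): each response pays off with probability 1/n, so reinforcers
    arrive every nth response on average, but unpredictably."""
    rng = random.Random(seed)
    def respond():
        return rng.random() < 1 / n
    return respond

fr5 = fixed_ratio(5)
print([fr5() for _ in range(10)])  # pays on the 5th and 10th responses
```

Run the VR5 version many times and the payoffs average out to one in five, but the animal (or gambler) can never predict which response will pay, which is exactly why the "dry spells" problem of FR schedules disappears.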

Interval Schedules

In an interval schedule, a certain amount of time must elapse before a certain behavior will result in a reinforcer.  The number of performances of the behavior is not counted. Interval schedules have fixed and variable types as well. A fixed interval schedule of 5 minutes would be noted as FI5; a variable interval schedule of 5 minutes (remember, that means that the average time is 5 minutes) would be VI5.

Interval schedules are not the same as duration schedules, which are also time-based. We use duration schedules much more often in animal training. During duration schedules, there is a contingency on the animal’s behavior during the whole time period, such as a stay. In interval schedules, there is no contingency during the “downtime.” The animal just has to show up and perform the behavior after the interval has passed in order to get the reinforcer.

Interval schedules happen in human life, especially with events that are on schedules. Consider baking cookies. You put them in the oven and set a timer for 15 minutes. But you are an experienced baker and you know that baking is not perfectly predictable: the cookies could be ready a bit before or after the 15-minute mark. This makes your schedule roughly Variable Interval 15. So you start performing the behavior of walking over to the oven to check the cookies after 12 or 13 minutes. But only after the cookies are done and you take them out of the oven do you get the positive reinforcement for the behavior chain: perfectly baked cookies. And at the beginning of the baking period, you are likely to do something else for a while. You know there is no reinforcement available for visiting the oven early; the cookies won’t be ready.
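One way to sketch a VI schedule in code (a simplification of my own, using random delays around the average; the names are made up for illustration):

```python
import random

def variable_interval(mean_wait, seed=0):
    """VI(mean_wait): after an unpredictable delay averaging mean_wait,
    a reinforcer becomes available. The first response after that point
    collects it; responding during the downtime earns nothing, and
    uncollected reinforcers do not accumulate."""
    rng = random.Random(seed)
    ready_at = rng.expovariate(1 / mean_wait)
    now = 0.0
    def respond(minutes_since_last_check):
        nonlocal ready_at, now
        now += minutes_since_last_check
        if now >= ready_at:
            # Collect, then schedule the next availability.
            ready_at = now + rng.expovariate(1 / mean_wait)
            return True
        return False
    return respond

oven = variable_interval(15.0)   # roughly the cookie example, in minutes
print(oven(5), oven(5))          # two checks, five minutes apart
```

Note the two properties the post describes: checking during the downtime earns nothing, and a reinforcer that becomes available just sits there until the next check, so checking too rarely wastes opportunities.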

Your dogs heed scheduled events, as well. If you feed them routinely at a certain time, you’ll get more and more milling around the kitchen (or staring into it) as that time approaches.

Tigers are ambush predators and are reinforced on a variable interval schedule for some attacks after their patient waiting.

Interval Schedules and the Matching Law: Translating Herrnstein’s Experiment

Let’s apply what we learned about interval schedules to better understand the matching law. Here is an example that is parallel to Herrnstein’s original experiment.

Imagine you are playing something like a slot machine, except that it is guaranteed to pay out approximately (not exactly) every 3 minutes, no matter how many times you pull the handle, with the stipulation that you must pull the handle once after it is “ready” for it to pay off. This is an interval schedule. (In real life, slot machines are on ratio schedules; that is, their payoffs depend on the number of times the levers are pulled and are controlled by complex algorithms that are regulated by law.) The schedule of our interval-based machine would be called VI3, the time units being minutes. When you first start playing with it, you don’t know about the schedule.

There is another machine right next to it. It pays out about every 15 minutes, VI15, although again you don’t know this. Let’s say that each payout for each machine is $1,000 and you can play all day. No one else is playing.

It won’t take you long to realize that Machine 1 pays out a lot more often. But every once in a while, Machine 2 pays out, too. Pretty soon you’ll start to understand the machines’ schedules. You’ll gravitate to Machine 1. But no matter how many times you pull the lever, it won’t pay out more often than every few minutes. There is a downtime. Also, you’ll find out that you can’t just let the payouts “accrue.” If you miss pulling the lever during one period where a payout is available, you miss out on that reinforcement.

So you will continue to modify your behavior. You won’t bother to pull the lever for a while after Machine 1 pays off. It rarely pays again that fast. You’ve got time on your hands. What do you do? Go over to Machine 2 and pull that lever. Every 15 minutes or so, that gets you a payout as well. As long as there is no penalty for switching and no other confounding factors, switching will pay off. You can “double dip.” Then you go back to Machine 1 so you won’t miss your opportunity, and start pulling the lever again.

Which machine will you play more?

After you have spent some time with the machines, and if there are no complications, your rate of pulling levers will probably approach the prediction of the matching law. If r1 is once per three minutes (1/3) and r2 is once per 15 minutes (1/15), r1/r2 = 5. That predicts that you will pull the lever on Machine 1 approximately 5 times for every pull on the lever of Machine 2.

What About Ratio Schedules?

Some people say the matching law doesn’t apply to ratio schedules. But it does. It’s just dead simple. If there are two possible behaviors concurrently available, both on ratio schedules, with the same value of reinforcer, the best strategy is to note which behavior is on a denser schedule and keep performing it exclusively. If you are playing a ratio schedule slot machine that pays off about every 5th time, you have nothing to gain by running over and playing one that pays off about every 10th time. In the time it takes for you to go pull the other lever multiple times, you could have been getting more money for fewer pulls. So you find the best payoff and stay put.
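The arithmetic behind "stay put" is easy to check. In this sketch of mine, payoff probabilities of 1/5 and 1/10 per pull stand in for the two ratio machines:

```python
def expected_winnings(pulls, share_on_richer, p_rich=1/5, p_lean=1/10):
    """Expected number of payouts when a fraction of pulls goes to the
    richer ratio machine. On a ratio schedule each pull has a fixed
    expected payoff, so splitting pulls only dilutes winnings."""
    return pulls * (share_on_richer * p_rich + (1 - share_on_richer) * p_lean)

print(expected_winnings(1000, 1.0))  # all pulls on the richer machine
print(expected_winnings(1000, 0.5))  # splitting earns strictly less
```

Contrast this with the interval case above: there, downtime makes visits to the lean machine nearly free, so mixing pays; here, every pull diverted to the lean machine has a real opportunity cost.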

To review the different strategies, let’s let some experts explain it. Paul Chance summarizes the different successful approaches with ratio vs. interval schedules:

In the case of a choice among ratio schedules, the matching law correctly predicts choosing the schedule with the highest reinforcement frequency. In the case of a choice among interval schedules, the matching law predicts working on each schedule in proportion to the amount of reinforcers available on each. —(Chance, 2003)

Michael Domjan lists some situations in which there are complications when computing the matching law:

Departures from matching occur if the response alternatives require different degrees of effort, if different reinforcers are used for each response alternative, or if switching from one response to the other is made more difficult.— (Domjan, 2000)

Life Is Not a Lab

There’s a reason psychologists perform their experiments in enclosures such as Skinner boxes, where there are very few visual, auditory, and other distractions. It is to minimize the competing reinforcers: positive and negative.

Let’s say our little experiment with the machines is in the real world, though. There are a thousand reasons that your behavior might not exactly follow the simple version of the matching law formula for two possible reinforcers. Someone might be smoking next to Machine 1 and it makes you cough. There may be a glare on the screen of Machine 2 and you get a headache every time you go over there. Maybe you are left-handed and one of the machines is more comfortable for you to play. Heck, you may need to leave to go to the bathroom.

But you know what? The complications added by these other stimuli don’t “disprove” the matching law. They just force you to add more terms to the equation. In this example, most of the competing stimuli would lead to behaviors subject to negative reinforcement. Yep, the matching law works for negative reinforcement as well.

The matching law applies to reinforcer value, too. If the schedules were the same as described above but Machine 1 paid out only $50 and Machine 2 paid out $1,000, you might still pull the Machine 1 lever more times. But you would be intent on not missing the opportunity for Machine 2. The ratio would skew towards Machine 2 as you pulled its lever more often.

In the real world, there are always multiple reinforcers available, and they are all on different but concurrent schedules. And concurrent schedules—in the lab and in life—are where we see the effects of the matching law. They are why the matching law can bite us in the butt, time and again, in training. How about with loose leash walking? The outside world is full of potential reinforcers on concurrent schedules. On the one hand, there is your own schedule of reinforcement for loose leash walking, hopefully a generous one. On the other, you are competing with things like the ever-present interesting odors on the ground. Things like fire hydrants and favorite bushes probably pay off 100% of the time. Birds and squirrels are often available to sniff after and try to chase. What does this predict for your dog’s behavior when she is surrounded by all those rich reinforcement schedules?

When our dogs stray from the activities that we may prefer for them, they are doing what comes naturally to any organism. They are “shopping around” for the best deal. It correlates with survival.

That “shopping around” is codified in the matching law. Given multiple behaviors on different schedules, the animal will learn the likelihood of payoffs for all of them and adjust its behavior accordingly. That includes that we trainers can adjust our behavior—specifically our reinforcement schedules—so the matching law doesn’t make mincemeat of our training.

Helpful Resources

For further reading:

This post is part of Companion Animal Psychology’s 2018 #Train4Rewards Blog Party! For other articles from the Blog Party, click the box.


Baum, W. M. (1974). On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of Behavior, 22(1), 231-242.

Chance, P. (2003). Learning and behavior. Wadsworth.

Domjan, M. (2000). The essentials of conditioning and learning. Wadsworth/Thomson Learning.

Herrnstein, R. J. (1961). Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior, 4(3), 267-272.

Thorndike, E. L. (1911). Animal intelligence: Experimental studies. Macmillan.



Photo Credits

Cookies photo by Sarah Fleming (CC BY 2.0), via Wikimedia Commons

Woodpecker photo by JJ Harrison (CC BY-SA 3.0), via Wikimedia Commons

Tiger photo by SushG (CC BY-SA 4.0), via Wikimedia Commons

Slot machine photo adapted from Vincent Le Moign (CC BY 4.0), via Wikimedia Commons

Copyright 2018 Eileen Anderson



A Dog With Spinal Cord Concussion: Zani’s Recovery on Video

Zani, a little black and tan dog, one day after her spinal cord concussion

Zani could use her front legs to balance a little while lying down on Day 1

This is a follow-up to Dog with Spinal Cord Concussion: Zani’s Story Part 1

In February I told the story of my dog Zani’s accident and traumatic spinal cord injury. Today, almost four months out from the accident, I’m publishing a video diary of the first days of her recovery.

There are several types of spinal cord injuries in dogs. Many of them are debilitating. My previous article describes how my small dog Zani got a traumatic spinal cord injury on February 8, 2018, after running full speed into a fence.  I didn’t know what we were dealing with, but I knew what to do. I called a friend, moved Zani carefully to the car, and we went straight to the vet.

Zani was semi-awake but as limp as a rag doll. But it turned out that considering the severity of the blow, her injury was probably the luckiest one she could have had.

After taking her to the vet immediately after the accident and getting her X-rays and a CT scan, Zani got the diagnosis of a spinal cord concussion. I then took her home again. I was shocked that they sent her home with me since she had no use of her legs. She couldn’t walk, crawl, or even use them to steady herself while lying down. But the vet was confident Zani would regain the use of her legs over time, possibly even making a full recovery. The X-rays and CT scan showed no fractures, nothing dislocated, no obvious bruising of the spinal cord. She told me that when the cord is bruised, damage can be permanent.

Zani’s ability to use her legs did come back, beginning the next day and increasing gradually.

The embedded video shows Zani’s daily progress at walking, starting the day after the accident. I created the video so people whose dogs get this rare injury can see the progress of a dog who recovered.

Small black dog standing in yard, recovering from spinal cord concussion

Zani looking pretty steady on Day 9

Starting the first day, I had to take Zani out to the yard so she could try to pee and poop. She is one of those dogs who won’t eliminate if she is not comfortable in a situation, including that she will “hold it” for 36 hours or more. No indoor solutions would work and she would hate a diaper. So I knew I needed to try to get her outside even though she could only flail and struggle.

The first few days as captured in the video are hard to watch. I had to let her stumble around because she wouldn’t even try to pee if I was close or trying to support her. She did work out how to pee on her own the very first day, and I was able to swoop in and help her stay steady when she got in position to poop. (I got lots of practice with that move with dear little Cricket.)

Link to the video for email subscribers.

Every dog’s situation will be different, as will be their abilities to heal and return to normal activities. I don’t know if Zani’s response was average, above, or below, but I do know that I feel very fortunate about her recovery. At almost four months out, she can run at about 75% of her former speed. She tends to list to one side or the other when she is moving fast, but she also corrects herself. She gets on and off things successfully; she has learned to be careful about it. She can go up and down flights of steps. The main clue that something is still wrong is the listing when moving fast and that she often nods her head or holds it a bit sideways when trotting. She also does some odd thrashing in her sleep that is new.

Beagle dog mix is lying on a mat, looking alert. She is recovering from a spinal cord concussion

I will be consulting with a rehab vet soon about what exercises Zani can do and what might be contraindicated. I want to know how I can best help her. I also want to discuss the likelihood of problems as she ages resulting from her gait abnormalities.

At this point, I don’t think she will regain 100% of her pre-accident abilities, but as long as she is not in pain and can do things that make her happy I am good with that!

Related Post and Video

A Dog With Spinal Cord Concussion: Zani’s Story Part 1

YouTube video showing how dependent Zani was on care the first two days





Copyright 2017 Eileen Anderson


Speeding Tickets: Negative or Positive Punishment?

Speeding tickets are commonly used as an example in learning theory textbooks. But I’m going to disagree with the typical classification because of my own experience. Here’s a true story.

When I was about 20, I was driving in my hometown. I was home from college and driving down my own street. I think I was going about 45. I think the speed limit was 35. I don’t remember why I was speeding. I didn’t commonly drive fast. But that day I did.

I heard a siren and caught my breath. Looked in the rearview mirror. There was a police car behind me, lights flashing and siren blaring. It took a moment to realize that I was the target. My heart started beating fast and I got shaky. It took a while before there was a good place to pull off the road. I started to panic, fearing the officer would think I was trying to flee. But I got off the road as soon as I could. I parked, still shaky, and rolled down my window. I don’t remember what the male officer said, but I had been speeding and he was giving me a ticket.

I am not a crier, but I started to cry. I was scared and humiliated. Then further humiliated because I was crying and couldn’t stop. Then worried that he thought I was crying strategically, to get out of the ticket. I wasn’t. I was just that upset.

But I was lucky. I was a young white, privileged female, moderately attractive if a little nerdish. I was in very little danger, compared, say, to any person of color. Being stopped by a police officer can be lethal for some people. Probably not for my demographic, and frankly, it wasn’t that type of fear. I wasn’t afraid for my life or personal safety. But a run-in with an authority figure where I was in the wrong still scared the holy bejeezus out of me.

I think the ticket was $50, a fair amount for those times and my college student budget. I received the written ticket, an attached envelope, and instructions to pay before a deadline several weeks away. I paid it.

White window envelope from City Hall. Is receiving this in the mail punishment?

Did I Stop Speeding?

As I mentioned, I generally observed the speed limit. But yes, there was a behavior change. I was extra careful on that street and in my hometown in general for several years afterward. I paid extra close attention to the posted speed limits. So although the behavior change didn’t generalize as much as the authorities might have desired, I was indeed punished for speeding. My speeding behavior decreased. I didn’t want to get caught and pulled over again.

What Kind of Punishment Was It?

It was positive punishment.

Positive punishment: Something is added after a behavior, which results in the behavior happening less often.

What was added? A scary, humiliating stop by a police officer. This was definitely an aversive experience for me.

But Wait, That’s Not What the Learning Theory Books Say!

Speeding tickets and other types of fines are often presented as examples of the operant conditioning process of negative punishment.

Negative punishment: Something is removed after a behavior, which results in the behavior happening less often.

What’s removed? Money! Your money is taken away contingent on an incident of speeding. This penalty is performed with the apparent intent of reducing speeding behavior. Negative punishment is also called a response cost.

So a ticket with a fine may be an example of negative punishment for some people, but that’s not what made me reduce my speeding.

If you aren’t bothered by authority figures or if you are on friendly terms with the officer who stopped you, the interaction itself may not be aversive. But the amount of the ticket could be hard on your budget, even catastrophic. It could prompt you to change your behavior. For me, the money was painful, but the interaction had a larger effect on my behavior.

What If There Is No Behavior Change?

Speedometer with needle just about 50 mph

OK, in the negative punishment scenario, how likely is the behavior to change? How effective is the fine? Obviously, the results will vary from person to person, but there is a problem with the timing. Consequences are most effective when they follow a behavior immediately. But speeding ticket fines don’t usually do that. You are usually handed a ticket with an address or attached envelope. There is some legal code on there to reference what law you are accused of breaking. You usually have a couple of weeks to get the money to the local government or dispute the ticket and get a court date.

We can certainly understand the connection between speeding and paying the money but it doesn’t pack a big punch as a consequence because of that time lag. The time between behavior and consequence is one of the crucial factors determining whether a consequence is effective. In almost all cases, it needs to be short.

But sometimes the fine can be immediate. Once I was driving across the country with a friend who got stopped on the highway in a speed trap. It was in Kentucky or Virginia. The officer pulled over two cars at once (both with out-of-state license plates) and led us both to the station. It soon became clear that we would have to pay a large fine then and there to be able to go on our way. The alternative was to come back to the same town at some date in the future. Who can do that when they are driving through? In that experience, the loss of the money was immediate. But hopefully, that is unusual. (It was also a scary experience.)

I wonder how often behavior changes because of the fine with the envelope in the mail scenario.

The Effects of Consequences Vary

I’ve related my personal speeding ticket story above. Someone else’s might be very different. The interaction might not bother them. Or there might not be any human interaction at all if they were “caught” by a programmed camera and mailed a ticket. On the other hand, a person without white privilege would be justifiably much more frightened than I was by being stopped.

For some people, the loss of the money could indeed be a driving force for behavior change. But I think overall, the speeding ticket example is a poor example for the learning theory books because 1) it skips the experience of receiving the ticket, which can be very aversive; 2) there is usually a time lag between the behavior and the response cost of paying the fine; and 3) being stopped by a police officer is a politically charged issue right now.

Further Reading and Discussion

After writing this post, I discovered a very nice piece that analyzes several of the behaviors and consequences related to receiving a traffic ticket. If I had seen that article, I might not have written mine! The author concludes that the purpose of traffic tickets is not to change behavior.  Take a look. It gave me some new realizations on the topic.

Are speeding tickets punishing?—ABC Behavior Training

I’d love to hear others’ experiences. Anybody out there make a long-lasting behavior change because of getting a fine? Oh, and drive safely!

Copyright 2018 Eileen Anderson 


Finding the Joy in Agility

What do you see in this professional photo of Summer on an agility A-frame in a competition?

She’s so pretty in that photo, and running nicely, but you know what? She wasn’t happy.

Here are a couple more photos from that same trial.

Summer was not miserable. She was responsive and doing what I was asking her to do. (What a good girl!) But she was stressed. And she was not joyful. I can tell from her face, which was drawn, even a bit grim. For some dogs, that particular look might just be focus. But for her, it shows unpleasant stress. Can you tell?

How Was She Trained?

Summer’s agility behaviors were trained with positive reinforcement. She was never forced onto the equipment, but was taught gradually and gently. She wasn’t scared of it. She was physically confident and generally enjoyed the activity. So why did she look grim in these trial photos? I can identify three reasons.

  • She was undertrained. She just didn’t have that much experience yet and wasn’t solid. The behaviors weren’t “can do it in her sleep” fluent.
  • She was stressed in the trial environment. It was outdoors, there were lots of dogs, it was muddy and rainy, and she didn’t have enough experience in challenging public situations.
  • This is actually the big one. I had trained her with positive reinforcement, but I had not sought out and used reinforcers that she was wildly crazy about.

Fixing the last one sent us well on our way to fixing all three.

Finding the Joy in Agility

At the time these photos were taken in 2008, I had recently found a new teacher. She helped me realize that even though Summer could perform most of the behaviors, and had even had some qualifying runs, I was trialing her too early.

So, we worked on Summer’s and my agility behaviors. We worked on her distractibility, especially her penchant for hunting turtles. We worked on my handling, so I could be consistent and clear. She showed me that almost anything Summer did (except to run after a turtle) was because I had cued it with my body. We cut down and practically eliminated the times when Summer might just run off after a bad cue of mine, both because my cues got better, and because Summer found it worthwhile to stick around even when I messed up.

At my teacher’s encouragement, I found very high-value treats that turned Summer’s attention up to high. And we used a novel reinforcer—playing in the spray of a garden hose—for a whole sequence.* The water play not only upped her excitement about agility in general, it was also great for proofing her performance. She learned that if she ran straight to the hose rather than following my signals, no water came out. But finish the sequence correctly, and there was a party with the hose. She loved it!

Transfer of Value

In my last blog post I described how I became a conditioned reinforcer to my dogs over the years through regular association with food and fun. The same thing happened with agility.  All those good feelings associated with the high-value goodies, the fun, and the hose bled right over into agility behaviors.

Three years later, we competed again. We had practiced going to new environments. The fun of agility was so strong, and our behaviors so much more fluent, that this is how she now looked in competition.

sable dog jumping an agility jump with happy look on her face showing the joy in agility

sable dog exiting an agility chute with happy look on her face showing the joy in agility

Summer came to love agility. She sprang from the start line when released. She ran fast and happy. She was an unlikely agility dog with her penchant for turtles and other prey. But she not only got good at it, she loved it. And I loved doing it with her. Even after I got Zani, who was young, physically adept, and very responsive—running with Summer was always like coming home.

I thought about calling this post “Going Beyond Positive Reinforcement,” but I decided that was inaccurate. I didn’t need to go beyond it. The difference was just better positive reinforcement training.  More thorough, more general, more thoughtful. And the result was joy.

If you want to see just how joyful, watch the following video. The first clip is from 2012, at a trial. Even though it was late in the day and I made some clumsy errors, she ran happy! The change in her demeanor from the previous competition is striking. It is followed by the best example of her speed I have on film: a run we did in 2014 at an agility field (with distractions). Finally, I show some messing around we did at home in 2016, just to share how delighted we were to be playing with one another. She was 10 years old then, and that winter was the last time we shared the joy of agility together. (She passed away in August 2017.)

Link to the agility joy video for email subscribers.

My teacher, and other great trainers who have influenced me, have taught me to set the bar (ha-ha) high. It’s not enough that a dog can do the behaviors. It’s not enough that they can qualify. It’s not enough that they can get ribbons. It’s not enough that they are happy to get their treat at the end of the run or get to go explore the barn area at the fairgrounds.

What’s enough is getting the joy.

*If you allow your dog to play in water, especially with a hose, make sure she doesn’t ingest too much. Drinking too much water can be deadly.

Related Posts

Copyright 2018 Eileen Anderson

Posted in Dog body language, Positive Reinforcement, Why Use Positive Reinforcement? | Tagged | 17 Comments

Actually, I CAN Get My Dogs’ Attention

I was thinking the other day about how and why I have a dream relationship with my dogs. They are cooperative. They are sweet. They are responsive and easy to live with. You know how I got there? Training and conditioning them with food and playing with them.

They weren’t the most difficult dogs in the world when they came to me, but they weren’t easy, either. Clara was a feral puppy who was growling at every human but me when she was 10 weeks old. Zani is so soft and sensitive that she would have been considered “untrainable” by many old-fashioned trainers. Plus she’s a hound, and you know you can’t get their attention when there is a scent around.

Yeah, actually you can.

a woman with a hat and a small black dog are gazing at each other, giving their full attention

Getting the Dog’s Attention

I published a piece earlier this year about a certain claim that some force trainers make. The post is called “It’s Not Painful. It’s Not Scary. It Just Gets the Dog’s Attention!” I point out that a neutral stimulus that is not attached to a consequence can’t reliably get a dog’s attention, even though many trainers make that claim. A truly neutral stimulus will fade into the background, meaningless. And if there is no promise of a pleasant consequence attached to the attention-getting stimulus, trainers who claim success with it are using some kind of aversive method. You can’t get something for nothing.

This was not really news. It was just Post 3,197 trying to untangle some silly mythology about training.

But I’ve got something to add. If you do enough of the good stuff, you will likely find it easier and easier to get your dogs’ attention. For my dogs, there’s no such thing as a “neutral stimulus” coming from me anymore. For years I have reinforced everything I ask of them with wonderful consequences. So—don’t tell—but I do have the “magical attention signal.”

Presession Pairing

Agility great Susan Garrett calls it “Being the Cookie.” Bob Bailey might say you were inviting Pavlov to get permanently comfortable on your shoulder.

Applied behavior analysts call it presession pairing.


That’s right. ABA folks have a process where the analyst deliberately gains rapport with her client, usually a child. She sets herself up as the source of all sorts of fun before a session starts. The “pairing” part of the term is not between the analyst and the client. It’s between the analyst and good stuff. In terms of classical conditioning, the analyst/trainer is setting herself up as a conditioned stimulus (like Pavlov’s buzzer, a predictor of intrinsically good stuff). In operant terms, she is becoming a secondary reinforcer.

This is a pretty readable scholarly review about it: Developing Procedures to Improve Therapist–Child Rapport in Early Intervention.  In it, the authors operationalize some of the techniques that are used for presession pairing. (Despite the term, presession pairing doesn’t stop when the session starts. Most behavior analysts continue to do it whenever possible.) Some of the methods depend on verbal behavior, but several are rather familiar. I’m going to convert those to dog-talk.

  • Imitating play that the dog initiates
  • Offering items to the dog
  • Creating a new activity with a toy

Sound familiar? Of course, we would also add food, food, food! And petting, sweet talk, and cuddles with many dogs.

The advantages of gaining rapport with your client or your dog are pretty obvious. One advantage mentioned in the article stuck out to me, though:

Antecedent-based strategies can be used to reduce or eliminate the aversive nature of the therapeutic context (e.g., therapist and therapeutic setting).

This translates well to what we do, too. Training sessions are usually fun, but of necessity, we sometimes have to subject our dogs to unpleasantries. Shots, eye drops, trips to the vet. But if we set ourselves up as consistent givers of good things, we can help our dogs through these experiences with minimal stress. So making ourselves into a giant conditioned reinforcer is not a selfish thing to do. It’s not just about, “Yay, my dogs think I’m great and I can get them to do anything!” It also helps the dogs in a big way.

closeup of the head of a sandy-colored dog with a black muzzle. A woman's hand is on the dog.

Too Clinical? Nope!

The section above may have halfway given some of you the creeps. If you are new to analyzing this stuff, it may feel bad, wrong, manipulative, unnatural—pick your word—to set out so deliberately to get somebody to like you. It might strike you as cold and clinical. But only if you haven’t done it before. Because once you do it, you realize there is nothing artificial about it. It feels good. It’s fun. It improves life for your dogs. I think it’s punishment culture that makes us feel weird about purposeful generosity and kindness. We can get over it.

gingerbread cookie modeled after Gingy from the Shrek movie

I have massively paired myself with good stuff over the years with my dogs. As a consequence, I don’t need any special interrupters around here. I don’t need to shake a can of pennies, throw something at the dogs, shock them, apply pressure, or yell. I don’t even need a “positive interrupter,” since that’s just another cue trained with positive reinforcement. I can say just about anything to them, in just about any situation, and they will reorient to me. Because even if it’s not one of their learned cues, if I’m talking to them, it’s likely that something fun for them will follow. And that’s pretty cool.

I’m interesting. I’m fun. It doesn’t take much for me to get my dogs’ attention.

Related Posts

Photo credit: Gingy Cookie from Wikimedia Commons, Copyright Jorge Barrios 2006. 

Posted in Behavior Science, Classical conditioning, Dog Training, Operant conditioning, Why Use Positive Reinforcement? | Tagged , , , | 13 Comments

Positive Punishment: 3 Ways You Might Use It By Accident

Positive reinforcement-based trainers never use positive punishment, right? At least we certainly try not to. But it can sneak into our training all the same.

Brown and white dog being grabbed by the collar in example of positive punishment

Collar grabs can be aversive

Punishment, in learning theory, means that a behavior decreases after the addition or removal of a stimulus. In positive punishment (the addition case), the stimulus is undesirable in some way. It gets added after the dog’s behavior, and that behavior decreases in the future. Some examples of that kind of stimulus would be kicking the dog, jerking its collar, shocking it, or startling it with a loud noise. You can see why positive reinforcement-based trainers seek not to use positive punishment.

In contrast, in negative punishment, the stimulus involved is desirable (appetitive). It gets taken away after the dog’s behavior, and that behavior decreases in the future. Examples of negative punishment are pulling the treat away from the dog’s mouth if she lunges for it, and leaving the room if a puppy plays too roughly. (Here are more examples of the processes of operant learning.)

In positive reinforcement-based training, we try to use only negative punishment. But even negative punishment can be unfair sometimes, as I explain in this post. Not only that, it’s possible to slide inadvertently from negative punishment straight into positive punishment.

Positive Punishment: A Note About the Definition

Just because something hurts doesn’t mean that it will punish behavior. It is possible to administer an unpleasant stimulus (repeatedly!) and have no behavior change. For instance, I give allergy shots to both my dogs once a week. They get a whole cc of fluid injected under the skin on the back of their necks. I can tell it doesn’t feel great. But from the very beginning, I have followed the shot with a little box of fabulous treats, different every week. I’ve tried to determine whether the shot acts as a punisher. I’ve watched for decreases in behavior that might result from the shot. I’ve found no such decreases. The dogs come eagerly for their shots, take the position I ask, and stay still. The shot event is happy overall, even though there is some brief pain involved.

So, keep in mind the “second half” of the definition of punishment. A behavior must decrease. It’s not only that you did something icky to the dog. It had to have an effect on behavior over time. Positive punishment can actually be difficult to employ successfully. The unpleasant stimulus must be applied at the right magnitude, with good timing, and consistently.

Even with these caveats, I have seen accidental positive punishment happen several ways.

Examples of Accidental Positive Punishment

It’s been a long time since I had to close my hand during “leave-it” practice with Zani

  1. Side effects of “leave it.” Many trainers begin the training of “leave it” (a.k.a. “it’s your choice” or “doggie Zen”) by holding a treat in their hand. Some start with the hand open; some start with the hand closed and work up to it being open. When the dog moves forward to take the treat, they close their hand. The goal of closing the hand is negative punishment. When the dog moves toward the treat (an appetitive stimulus), it disappears and becomes unavailable. If the training mechanics are good, lunging for the treat will decrease over time. But there is a danger of positive punishment here. If the dog is fast, then the trainer has to close her hand fast. (Most trainers recommend against pulling the hand away.) Suddenly closing your hand on a dog’s muzzle can be startling or unpleasant for the dog. If the behavior of lunging subsequently decreases, what happened? You may have used positive punishment rather than negative punishment.

    Black and white rat terrier being reached for to be picked up

    Cricket’s feelings about being reached for are pretty clear

  2. Side effects of timeouts. The goal of a timeout is also negative punishment. This technique is used on puppies or rowdy dogs. When the dog does something undesirable, such as nipping, the human removes herself or the dog from the interaction. That’s how the negative punishment works: the fun stops when the dog performs an undesirable behavior. (Sometimes the trainer will use a verbal marker to mark the naughty behavior so the relationship is clearer.) However, when one removes the dog, a couple of other things happen before the dog is away from the fun. The human either needs to pick the dog up or guide him by the collar to the timeout location. But both of those actions are potentially aversive. (A third option is to call the dog, but most trainers don’t want to call the dog to a negative consequence.) Many dogs don’t like to be picked up. Many don’t like to be grabbed by their collars. So what can happen in those situations is positive punishment: a “noxious” stimulus is added. If the dog’s undesirable behavior decreases, it could be through positive rather than negative punishment. This possibility is one of the several reasons it’s good to condition puppies to enjoy being picked up and having their collars handled.
  3. Side effects of “penalty yards.” One common technique for teaching loose-leash walking is often referred to as penalty yards. This method consists of instantly backing up when the dog begins to pull forward on the leash. (This move is usually paired with positively reinforcing the dog for walking by the trainer’s side.) The assumption behind this method is that forward motion is positively reinforcing (there is often a specific reinforcer ahead). So causing the dog to lose ground when they pull can constitute negative punishment: they get farther away from the exciting things up ahead. However, visualize the process. With negative punishment, as with all processes of operant learning, timing is important. What happens if you suddenly start walking backward when your dog is pulling forward? A jerk transmitted via the leash to the dog’s collar or harness. You will see experienced trainers use their arms as shock absorbers and seek to soften the change of direction. But they can’t go too slowly, or the contingency between the dog pulling forward and the handler moving backward will be lost. Less experienced trainers likely won’t realize how hard this can be on the dog, especially if the trainer has earlier experience with training that includes deliberate collar “corrections.” So if the dog’s behavior of pulling decreases, it may be because of the loss of progress toward a goal. But it could also be that their pulling is soon followed by a jarring jerk on their collar.

What’s the Fallout?

Positive punishment and negative reinforcement have fallout

The examples I gave above don’t involve scaring, hitting, or kicking the dog. They don’t sound as bad as that. A hand snapping shut, a collar grab, or a leash jerk. Not so terrible, right? Can even these milder-sounding aversive stimuli create fallout? Oh, yes. If you snap your hand shut on a puppy’s snout, or right next to it, you can cause the puppy to be wary of hands. A very unfortunate lesson for a pup. Likewise with collar grabs: if you do them without conditioning first, you will create a dog who dodges away from humans. And while some dogs habituate to leash jerks, your next dog might be the one who shuts down from the jerk you create by moving backward. My pressure-sensitive dog got positively punished when I charged up at her to “help” change a prop setup.

Of course, it’s not the theoretical change from “minus” to “plus” that creates a problem for the dog. It’s that when we set out to follow a training plan, we often fail to notice the dog’s response to different parts of it. We don’t see the dog saying, “Hey, you pinched my nose! I hate that!” We are probably concentrating on our own mechanics. So I could have written these cautions without any reference to learning theory and just said, “Watch the dog!” But then they would just be scattered incidents. Using learning theory helps me see the pattern so I can head off future problems.

Some people claim to train without the use of aversives. That’s a goal of mine, as well, but unless we are vigilant, they can sneak in anyway. Just wait until I write a similar post about negative reinforcement. Evil grin.

Have you ever used positive punishment by accident? I promise I won’t let anyone hassle you if you want to comment. These examples are super useful for all of us to be aware of.

Copyright 2018 Eileen Anderson


Posted in Aversive stimulus, Behavior Science, Punishment | Tagged , , | 16 Comments

My Dogs Do Know Sit! A Hint for Training the Sit Stay

Tan dog performing a sit stay in front of a woman standing right in front of her

Clara performing a sit stay. My stance is odd for a reason. Keep reading!

Turns out my dogs do know sit.

About two years ago, I wrote a post called “My Dogs Don’t Know Sit!” I described how my dogs couldn’t hold a sit stay when I stood still right in front of them. I analyzed the problem, and my conclusion was that part of the cue for them to stay was actually my walking away from them. This was probably because I added distance too soon when originally training the stay. I ended up with the perverse situation that my dogs would hold their stays if I walked around, jogged, dropped treats, or left the room, but not if I stood still. All three of them responded this way, so it was clear that I was the problem.

I kept letting it slide, because in real life, I’m almost always moving around when I need them to stay. But I’ve been rather embarrassed about our sit stay problem, and that little hole in our training has bugged me. So, the other day I decided to take the plunge.

I told my friend Marge about my dogs’ two-second sit stay. She said:

For duration behaviors, I pick a visual target for myself other than the dog’s face. If I make eye contact, I’m likely going to ask the dog to do something. That’s what they are ready for.

So try looking at the wall behind the dog. Not the dog.

Marge always gives good advice. I did a session that very day with both Zani and Clara. I tried out Marge’s suggestion, thinking I would be able to use it to at least jump-start work on the sit stay. But they both flawlessly held the sit stay for as long as I wanted the very first time, and I was standing still right in front of them! And that’s even though we were on a small rug, which is a strong environmental cue to do a down instead of a sit. I pointedly looked at the wall behind them (that’s why I look dopey in the photo), and they both held their sits like little statues.

Wow. It turns out looking at my dogs was a cue for them to offer behaviors. Who knew? (Besides Marge.) I was the problem, but for once there was an easy fix. Look somewhere else, Eileen!

I still agree with what I wrote in the previous blog. But it was incomplete. I realized at the time (two years ago, by the way) that moving away was part of the cue for them to stay. I didn’t realize that looking at their faces when I stood still was a cue for them to move! Even though that’s exactly what I do when shaping or in other situations when I want them to offer behavior.


Here’s a quick comparison with both Zani and Clara. The movie shows the differences in their behavior when I look at them versus when I look at the wall.


Link to the movie for email subscribers.

Stay Cue

Somebody is bound to mention that if I just added a separate verbal “stay” cue, I wouldn’t have this problem. Perhaps, but I’d rather just use my original verbal cue. There’s no reason a single positional cue can’t have a “stay” built into it. I already have good down stays, mat stays, and even stands trained that way. The problem I have with the sit stay is well analyzed (thanks to Marge) and fixable. I don’t want my sloppy training to be used as an argument for needing a stay cue. Lots of people do without it just fine. The difference: they are clear about criteria.

Future Criteria for the Sit/Stay

Two mixed breed dogs performing a sit stay in front of their trainer. They are looking up at her attentively.

Zani and Summer sitting

Speaking of criteria, I have a decision to make. Am I OK with the cue situation as it stands? I could say, “Yay, my dogs can sit, they have duration, I just need to remember not to stare at them.” Or I could decide that they need to be able to stay even when I look at them. There is no right or wrong answer to that, although I bet most professional trainers and dog sport competitors would choose the latter and “proof” their dogs to hold a stay even when being stared at. But it’s also valid to let the handler’s expectant look during duration work mean, “Please offer some behavior.” I get to decide and train accordingly. I need to remember to be fair to my dogs and be consistent with my criteria.

Since Marge knew about the visual target trick, I’m guessing there are others like me who are simultaneously asking their dogs to stay and cuing them to move. I hope this advice can help some others. It worked like a charm for me!

Related Posts

Copyright 2018 Eileen Anderson

Note: Reader Stacey G. points out correctly that just being stared at in and of itself can make dogs uncomfortable and want to move. It wasn’t likely a contributing factor for my dogs because of their reinforcement history for eye contact, but it’s probably a common cause for “breaking a stay.”

Posted in Cues, Dog Training, Making mistakes in dog training | Tagged , , | 5 Comments