17 Behavioral Cues that I Didn’t Train (But are Still For Real)

“Real” training with Cricket, 2009

When most of us think of cues, we think of the verbal ones we teach our dogs. “Sit,” “Down,” “Here!” Perhaps we have taught them some hand signals as well. To teach a cue we go through a set process that can be quite a bit of work. It involves foresight, planning, and decision making on our parts. And practice, practice, practice. I think that tends to limit our perception of the other ways cues can come to exist in our lives with our dogs.

There are cues going on all the time that we didn’t plan or teach, and some that we don’t even know about. I’m going to share 17 of these that I have noticed out of the thousands that my dogs probably do, and movies of two of the most interesting ones.

First let’s review the definition. A cue in behavior science is properly referred to as a discriminative stimulus. Such a mouthful. A discriminative stimulus signals that reinforcement is likely available for a certain behavior. (The term also applies to a stimulus that indicates that reinforcement is not available, but let’s leave that alone for now.*) Breaking it down a bit: What’s a stimulus? It is a physical event that the organism can sense. Discriminative? It means the stimulus lets the animal tell apart, or discriminate, situations in which a behavior will pay off from situations in which it won’t.

So in plainer English, and in the usage of dog training, a cue is a green light that tells the animal that there is a desirable consequence available if a certain behavior is performed. In real-life training, we need to make sure it is different enough from other stimuli that the animal knows which behavior is being indicated. You don’t want your “bow” cue to sound like your “down” cue (thanks, Kathy Sdao!), and if you are using colors as cues, you had better not use colors that look almost the same to a dog, like orange and red.

Note that a cue is not a “command” or an “order.” There is no force in the definition of cue.
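To make that “green light” idea concrete, here is a tiny simulation. It is entirely my own illustrative sketch (the numbers, names, and schedule are invented, not taken from the behavior-analysis literature): a cue is present on half the trials, and the behavior is reinforced, on an intermittent schedule, only while the cue is present.

```python
import random

def payoff_rates(trials=10000, p_reinforce_when_cued=0.3, seed=1):
    """Tally how often the behavior pays off with and without the cue.

    The cue (the "green light") is present on about half the trials.
    The behavior is reinforced intermittently when the cue is present,
    and never when it is absent.
    """
    rng = random.Random(seed)
    tries = {"cue": 0, "no cue": 0}
    wins = {"cue": 0, "no cue": 0}
    for _ in range(trials):
        condition = "cue" if rng.random() < 0.5 else "no cue"
        tries[condition] += 1
        if condition == "cue" and rng.random() < p_reinforce_when_cued:
            wins[condition] += 1
    return {c: wins[c] / tries[c] for c in tries}

rates = payoff_rates()
print(rates)  # roughly 0.3 with the cue, exactly 0.0 without it
```

Even though reinforcement isn’t guaranteed on any single cued trial, the learner’s reinforcement history makes the cue informative, which is all the definition requires.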

The Clever Cue Detector

What does this mean to Clara?

My dog Clara has a genius for observation of the tiniest details, perhaps in part a result of her feral background. Since she arrived in my dog household, I have noticed an increase in group behaviors by my dogs that are responses to events in their environment. In other words, they now notice all sorts of things, usually that I do, that likely predict good stuff. And Clara in particular has the ability to follow my behavior chains backwards, to find the earliest predictor that I might do something cool.

Cue #1 The first one that I noticed is that Clara responds when I reach for the top shelf of a particular cupboard in the morning as I am getting ready for work. Virtually the only time I reach up there is to get down the package of cookies that I typically dip into for the dogs when I get ready to leave. Clara gets a nice treat when she goes to her crate, and the others (who are separated in different parts of the house but not otherwise confined) get a small piece too.

If we put that in the language of behavior analysis, we have:

  • Antecedent: Eileen reaches for package of cookies on the top shelf
  • Behavior: Clara runs to her crate and waits inside
  • Consequence: Clara gets a nice chunk of cookie
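The three-term contingency reads naturally as a small data structure. The class below is purely my own illustrative sketch, not standard behavior-analysis notation or software:

```python
from dataclasses import dataclass

@dataclass
class Contingency:
    """A three-term (ABC) contingency: antecedent, behavior, consequence."""
    antecedent: str   # the cue / discriminative stimulus
    behavior: str     # what the animal does in its presence
    consequence: str  # the reinforcer that maintains the behavior

cookie_cue = Contingency(
    antecedent="Eileen reaches for the package of cookies on the top shelf",
    behavior="Clara runs to her crate and waits inside",
    consequence="Clara gets a nice chunk of cookie",
)
print(cookie_cue.antecedent)
```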

The interesting thing to me is how far back in time Clara has tracked this cue. Some dogs might not get in their place until verbally cued to do so; that’s the case with my other two dogs. Or a dog might wait until I was walking toward her crate, breaking the cookie into pieces, or rattling the package while getting the cookie out. But Clara has traced my behaviors backwards to the earliest consistent predictor of my leaving and her cookie: my reaching for the package. Also, I think it’s very cool that she runs away from the cookie to get the cookie.

In the movie, I show what happens when I reach into the cupboard and pull out something from a lower shelf. (Nothing! Even though it’s a noisy package, the dogs continue to watch, but don’t budge.) Then I show what happens when I reach for the special package of cookies. The sound is certainly part of the cue, but Clara doesn’t always wait for the sound. I have experimented, and she discriminates on the basis of what shelf I am reaching for.

Link to the cookie shelf cue movie.

Here are some more cues that I have come to notice. They are mostly Clara’s, but the other dogs have learned them now as well. I’m skipping past the more obvious ones like how all the dogs come running if they hear me preparing a meal, or opening the front door. Everybody’s dogs do that, right?

Cues, Cues, Everywhere!

The Computer

  • Cue #2 Setting: kitchen, in the morning. Cue: I close the lid on my laptop. Behavior: Clara runs to her crate. Why: I’m getting ready to leave for work, and she’ll get a good treat when I crate her. So actually, now that I think about it, she has traced the cookie cue even farther back in time than I realized.
  • Cue #3 Setting: kitchen, in the late evening. Cue: I close the lid on my laptop. Behavior: Clara runs to the bedroom. Why: I’m getting ready to go to bed, and she loves getting in the bed. (So in these two, the time of day is a part of the antecedent that allows her to discriminate.)
  • Cue #4 Setting: kitchen or office, the rest of the day. Cue: I close the lid on my laptop. Behavior: All dogs jump up or come running from other parts of the house to see what will happen. Why: Whatever I do next will likely be more interesting to them than my working on the computer.

OK, you get the idea: when I actually get off the computer, it’s a real event. In fact, my drawing a breath and reaching for the laptop cover is now becoming the cue.

A different computer cue:

  • Cue #5 Setting: office, early evening. Cue: I put my laptop in its cover. Behavior: Clara runs to her crate. Why: I’m likely going out (I carry my laptop around a lot).

The All-Important Ball

A tan dog with black muzzle and a red ball in her mouth is rushing toward a woman sitting down with a white plastic bowl in front of her. The woman is holding a similar red ball in her right hand, completely covered, and out of sight of the dog.
The Ball Game

As you can imagine, with a ball-crazy dog like Clara, she pays intense attention to any cue that might precede a game.

  • Cue #6 Setting: Afternoon in the yard. Cue: I clean up after the dogs and put the poop stuff away. Behavior: Clara runs up the steps eagerly, looking back over her shoulder to see if I am coming. Why: I might throw the ball.
  • Cue #7 Setting: Afternoon in the yard. Cue: I finish raking and put the rake away. Behavior: Clara runs up the steps eagerly, looking back over her shoulder to see if I am coming. Why: I might throw the ball.
  • Cue #8 Setting: Late afternoon in the house. Cue: I let the dogs out of their various areas after they eat their supper. Behavior: Clara runs to the back door, looking back over her shoulder to see if I am coming. Why: I might throw the ball.
  • Cue #9 Setting: Late afternoon in the house. Cue: I walk towards the back door. Behavior: Clara runs ahead of me, looking back over her shoulder to see if I am coming. Why: I might throw the ball.

OK, from the above four, you can see how important playing ball is to Clara! The other dogs usually come too, since there is fun stuff available for them as well.

Kitchen Stuff, Training Sessions, and Attention in General

  • Cue #10 Setting: Kitchen. Cue: I lean back in my chair after eating. Behavior: Clara comes running over and nuzzles my hands. Why: I am available to pay attention to her again.
  • Cue #11 Setting: Kitchen. Cue: I open the pill bottle for Summer’s thyroid medicine. Behavior: All dogs come running. Why: they all get a little peanut butter when I give Summer her pill. This one is especially interesting because it has been several years since I used to open the bottle for Summer’s pills twice a day. These days I only open it once a week because I cut up the pills and put them in a pill sorter. And I don’t always do that when it’s time to administer the pill. So it is no longer a perfect predictor. No matter; they still all come running. The power of a variable reinforcement schedule.
  • Cue #12 Setting: Anywhere in the house. Cue: I pick up the camera tripod. Behavior: All dogs come running. Why: Training session!
  • Cue #13 Setting: Anywhere in the house: Cue: I pick up one of the dogs’ mats. Behavior: All dogs come running and try to get on it even while it’s up in the air. Why: Training or mat session!
  • Cue #14 Setting: I am talking on the phone. Cue: I start making finishing remarks. My dogs can tell from my inflection that I am winding up the conversation even before I get to “Goodbye.” Dang, they are good! Behavior: All dogs gather around. Why: I will probably get up and do something.
  • Cue #15 Setting: Anywhere in house. Cue: A delivery truck comes by.  Behavior: Clara and Zani come running. Why: I have classically conditioned Summer’s barking to mean a shower of food, and it has morphed into a recall cue. However, Clara and Zani both learned what makes Summer bark, so they no longer wait for her to bark.
  • Cue #16 Setting: going outside. Another recall cue that I wrote a whole post about.


A small black and tan colored hound is looking up. She has flecks of snow all over her face
Zani in the snow

Cue #17 Here’s another one starring Zani. Back in 2011, when I was making this movie about negative and positive reinforcement, I trained Zani to run down my back steps on cue. I have not used that cue very much in our life together, since generally she goes down when she needs to and I don’t intervene if she thinks she doesn’t need to. Some of the training for that cue took place during some snow here, a relative rarity. Interestingly, the snow became a cue! See what happens.

Link to movie “A Snowy Antecedent”

There are three types of antecedents: cues, setting factors, and motivating operations. I discussed with some knowledgeable friends what kind of antecedent the snow likely was. Characteristics of the environment are often setting factors. However, the snow by itself is sufficient to get Zani to start running up and down the stairs. So I vote that it is an actual cue. 

What are some of your dogs’ more interesting cues? Planned or unplanned?


* Keller and Schoenfeld, Principles of Psychology, 1950, p. 118. A stimulus-delta is also a discriminative stimulus.

44 thoughts on “17 Behavioral Cues that I Didn’t Train (But are Still For Real)”

  1. I LOVE this one! Shared, and I’ll now go and beg friends to watch. There are opportunities to train your dogs all day long, little fun things to notice and capture. This post and video will show so many people that training can and does happen, even if an owner doesn’t have a large block of time to set aside for it each day. Thank you for this!

    1. Thanks, Ingrid! You know, I hadn’t thought about working this into the idea that “training is always happening” but you are right. It’s a great example of that. Who knows, might be another blog in there!

    1. Hah! I wanted to get a video of it, but of course, setting up the camera is a cue, too. I would have had to let it run for however long it took for the dogs to believe we weren’t about to train, to be able to capture the immediate leaps to attention that happen when I close the laptop. Thanks for the comment! Glad I’m not the only one!

  2. Another interesting and well written post! Very thought provoking. Thank you for the time you put into these!

    One tiny note on terminology: Most behaviourists would call these “context cues,” and the term has extended into clicker training. A few years ago Karen Pryor was on a panel that discussed how powerful context cues can be, and it’s available as a clickertraining.com podcast:


    Casey Lomonaco has written for many years about the importance of “fluency training” to overcome unintended context cues, so that the dog learns that it is the word Sit that matters, not where or when you say it. Sue Ailsby and other trainers have always stressed the importance of practicing behaviours in many situations “because dogs don’t generalise the way we do,” but Lomonaco looks at it in detail and says it’s not usually an issue of generalising but rather one of specificity: the dog thinks the cue to Sit is the additive combination of what’s being said, where, when, and how. They just don’t overweight the verbal until we teach them to do so. Change any one component, and many dogs won’t recognise the cue, because they don’t consider the verbal any more important than what shoes the handler is wearing. It’s up to us to teach them that the cue is more limited than they assume.

    Lomonaco’s article is pretty technical, but I especially wanted to call out this:


    “The first few behaviors you train to fluency will take longer than any subsequent behaviors. Once you have three or more behaviors well-trained to fluency and under stimulus control, you should find all future proofing for fluency speeds up.”

    If you do a lot of proofing, most dogs can eventually learn the general idea that when you put a behaviour on verbal cue, the verbal is what matters. Which is good and bad for us! Like your examples of Clara running to her crate, it can be amazing how convenient those context cues can be. 🙂

    1. Thanks for the comment and the great resources. I’ve been thinking that the obvious followup to this is a post about the times these cues and behaviors don’t work in our behavior, and I just might do it. I’ve written about it some before, but have some good new examples. I wasn’t familiar with the podcast and will be listening. Thanks again!

  3. PERFECT timing on this one Eileen! There is a little discussion going on one of my horse training lists asking “what is the difference between a cue and an aid?” I don’t think dog folks use the word ‘aid’…?? I will share this blog with them (trusting that is ok, right?) and give them some food for thought. As for me, again your post is such a breath of fresh air! You write so clearly; I love how easily I can understand. So huge thanks again for that! Riding a horse is so complicated for me when I think about how much -R I (seem to) need to use, and your post got me wondering about what’s going on when we train a behavior and get it on a cue, but we trained it with -R. I think I asked about this on the LLA course but it is still a bit foggy for me. An example: raising the whip is the cue to walk on – trained by first tapping with the whip to get them to move. Do we call raising the whip a cue? Even if the reason they move is because they are avoiding being annoyingly tapped? Thanks for any help with this. You probably get sick of me thanking you…:-) but your posts always brighten my day….:-) And thanks above to Robin J. for her comments. Great stuff!

    1. Thanks, Lyndsey! Of course I don’t get tired of being thanked! Yes, raising a whip in an R- scenario is an antecedent (cue). In R-, the aversive is the antecedent. The animal’s behavior causes the aversive to be removed. There are two kinds of R-: escape and avoidance. In the first stages of the training, the horse moved to escape the tapping, to get it to stop. After that cue is learned, the horse can avoid the tapping altogether by moving when the whip is first raised, to prevent the tapping from starting in the first place. It’s still a cue, and still R-.

      Does your training facility allow the use of R+? Treats? People used to think it wasn’t possible with such large animals in close quarters, but there are people doing wonders with it. Some of the first lessons are usually teaching the horse to give you space and not mug you for the food! As you can imagine, you might not want to teach a recall as the first behavior!

      Alexandra Kurland is one of the big names in R+ horse training, in case you don’t know about her. She does use R-, but a lot less than traditional horse trainers.

      Good luck with your horse riding and keep the questions coming!

      1. Thanks! Great explanation. OK, so just to check I understand how to word it accurately: if “a discriminative stimulus signals that reinforcement is available for a certain behavior,” then we say the raised whip is a cue (the SD), and the reinforcer is being able to avoid the aversive…?? Is there a better way to describe the reinforcer…? Having the choice of escape…??

        And just to let you know, I’ve been clicker training my horse for 15 years!! And yes I know Alex well and have been to many of her clinics. However, the horse world is sooo far behind the dog world and there is a tremendous amount of cultural fog and lay person conversation that can really muddy the waters. So…that’s another reason your posts are so welcome! I learn much better from seeing examples which is why I find your posts so powerful. The pics, explanations and video’s of real life situations are great. I would love to see this type of blog with horses. Your yard looks big enough for at least a mini….:-)

        1. Hah! Just what I need, a mini horse! (That would be ultra cool!) That’s wonderful that you are a long-term horse clicker trainer. Sorry to tell you what you already know! I always keep other readers in mind, who may well wonder why I’m calmly discussing the use of a whip, even for touching! But I think the horse world is catching up some, don’t you?

          Yes, the raised whip (in a certain placement or manner, I imagine) is an SD. The behavior is “The horse walks on.” The consequence is avoidance of contact with the whip (see last paragraph). You probably wouldn’t word it as a choice or ability of avoidance, since if the horse has already performed the behavior, the whip will not touch them. (Assuming consistency from the rider here.) The choice point is at the behavior step: The horse can choose to walk on, or choose not to and get tapped by the whip.

          There are whole areas of research just on avoidance, even to the point of there being some controversy about how exactly it works, so I’m a little out of my depth in writing extensively about it. It’s certainly harder to see than many other consequences. You can almost always see R+ consequences. And you can see the removal of aversives that are already physically happening. It’s harder to “see” physical avoidance. As Murray Sidman put it, “Successful avoidance means that something…did not happen, but how could something that did not happen be a reinforcer!” [Sidman, Avoidance at Columbia, The Behavior Analyst, 12, 191-195.]

          If you wanted the most proper wording possible for the consequence, I would say, “The whip tapping was avoided.” One thing that always sticks in my mind when writing about this stuff is that the A and the C must be stated in terms of the environment. So we wouldn’t want to say, “The horse avoided the whip” or “The horse was able to avoid the whip.” It sounds too much like second behavior from the horse.

          Hope that helps. Great questions!

          1. Ok my poor brain is spinning…:-) Thanks for taking the time Eileen! Unfortunately no I don’t think the horse world is catching up. There is movement for sure and hopefully at some stage it will be like the 100th monkey and will sweep the planet overnight. But…for the most part it’s all -R with lots of +P thrown in. That’s my perception anyway. I’m confident it will change for the better because there are so many wonderful and passionate people trying to make it happen. As Susan Friedman says, it’s so much about education.

            And right now, there are many lay people in the horse world chatting about behavior science but they just don’t have a good education. Hey it’s great that they are at least interested!!!! But I have found the dog world – people like yourself – to be a fantastic resource for clarity with the science. And I know many of my horsey friends are looking to the dog folks as well.

            I think that one of the reasons it’s hard for me to get my head around what is really happening with -R is just as you say – how can something that did not happen be a reinforcer. So saying for our ‘C’, “the whip tapping was avoided”, helps make it clearer. I’ll keep pondering this for a bit. Huge thanks again!

    2. I don’t know if it’s the same in the US, but I have friends in the UK who do horse training and there an “aid” is physical gear that helps when training specific postures: Chambon, de Gogue, Harbridge, etc. Very similar to the specialty harnesses used in dog training, like a Gentle Leader head harness or a Sensation no-jump harness.


      If that’s what we’re talking about, then from a behavioural psychology point of view the gear starts out as a means of delivering a mild aversive contingent on the movement we are trying to change. R- training. Pull too hard, or in the wrong direction, and the gear itself makes that movement more uncomfortable. NOT impossible. But more unpleasant.

      So as in Eileen’s discussion of the whip, the animal learns first to escape the unpleasant feeling by readjusting to the other posture and then eventually learns to avoid the unpleasant feeling by never going into the posture that causes it. That’s the theory, anyway!

      Over time, the gear itself does become a context cue for the more comfortable behaviour, as seeing the gear reminds the animal of the unpleasant feeling, theoretically increasing the motivation to remain in the comfortable position.

      So most aids of this type start out as R- delivery mechanisms and over time become context cues as the animal has learned how to avoid the aversive.

      There are a few types of gear that do NOT deliver an aversive when the animal assumes an undesired posture but instead prevent an undesired posture from ever being achieved. If the animal is comfortable throughout and the gear is just creating opportunities for R+ in the desired position, that’s a “modeling” aid, not an R- delivery mechanism.

      I don’t know of a horse example, but with dogs a classic is training a dog to walk nicely parallel to a wheelchair. This is often done by simply walking along a long wall or fence so there’s no room for the dog to swing his butt out of position. Lots of R+ along the way. Sometimes it only takes 3 or 4 passes before the dog starts staying parallel to the chair away from the wall as well. The wall isn’t an R- delivery mechanism, it’s just modeling the desired straight line so you can increase the rate of reinforcement in R+.

      Back to the harness-type gear… an alternative R+ approach comes from Dr. Ian Dunbar, who teaches loose-leash walking for dogs by beginning without a leash at all, using just R+ to reward a dog for staying in the right relative position. Taking the gear completely out of the equation vastly simplifies the process for both dog and person. But it does take consistent training over time.

      So: if by “aids” you mean gear that, like head harnesses for dogs, increases the aversive level of specific positions and are used for R- training, then they start out as an R- delivery mechanism and theoretically over time become a context cue. How effective that is depends on many individual factors.

      If by chance “aid” means something completely different in your context, hopefully this discussion is still of some interest.

      1. Wow thanks Robin! Great explanation. And yes this is just how I would use the word aid – for gear or response restriction (is that also a term for ‘modeling aid’?) to help capture the target behavior. I personally find many of those side reins and the pessoa too aversive but I use walls and lines a lot. My initial interest in Eileen’s above post on cues was in really defining a cue mostly because I realized it was still a bit fuzzy in my head what we could call a cue. Which came up when horse folks were trying to explain the difference between a cue and aid. I need to think about this more but off the top of my head, I see aids morphing into cues…?? (If we define aids as you have above.)

        Thanks so much for your time and yes it’s been a really interesting and helpful discussion!

        1. This is one of those cases where it’s easy to get confused between the technical use of a word, in this case “cue,” and the use of the word in everyday language.

          In behavioural psychology, a “cue” (that is, a “discriminative stimulus”) is something the animal has learned from past experience is a predictor of a possible reinforcement opportunity.

          I like to use the example of a hunter who sees fresh tracks in the woods. That doesn’t mean someone’s about to hand him a bowl of stew. It doesn’t mean he must start the hunt. It does mean that IF he chooses to start hunting that particular animal, he feels more confident that the hunting behaviour may pay off at this time.

          (For those who prefer a vegetarian example, if an experienced bird watcher is walking through the woods and hears a recognisable bird call, same situation. There’s no guarantee he’ll see the bird, and he can choose to ignore the call and just keep walking. But IF he gets out his binoculars and starts looking, he can feel more confident that his search will pay off at this time.)

          So: a “cue” provides information because of past experience. The information provided is that doing a specific behaviour at this time is likely (but not guaranteed) to provide reinforcement.

          That’s why a particular stimulus may only be a cue for one animal and not another: because they may have different past experiences, and so different associations to it. Those associations may come from unintended context cues or from intentional training.

          If you’ve ever seen a shepherd with a whistle working two dogs you get a great example of this. Each dog has been trained to recognise only a specific set of notes as relevant to her, and to ignore any other notes. The following is from an Irish sheepdog demonstration with two dogs:


          So a cue can be anything (sight, sound, smell, touch, taste) that a particular animal has learned from past experience to associate with a specific reinforcement opportunity.

          Like the tracks in the forest, the cue represents an opportunity for the learner to voluntarily produce a specific behaviour in hopes of a specific reinforcer. All based on knowing what worked in the past.

          We had horses when I was a kid. Me climbing the apple tree with a bucket was a cue for the horses to come put their heads over that side of the fence. Why? Because they had learned that might mean I would give them an apple on my way back up to the house. If I climbed the tree without a bucket, they knew they were likely out of luck, so they didn’t bother to come to that side of the fence.

          And one horse was either smarter or lazier: he didn’t run to get into position by the fence until I actually started walking towards the fence. Also interesting: he didn’t care about the bucket. Kid approaching that part of the fence might mean apple or sugar or carrot. Or just a kid climbing the fence to watch the horses, but he accepted that possibility. So he was cueing off something different than the others. They all had similar past experiences, but he had made a different association and so responded to a different cue. For most of the horses, the cue was “kid with bucket near the apple tree.” For him, the cue was “kid walking to the fence, with or without bucket.”

          Here’s the funny thing: the horse who ignored the bucket was more reinforcing to me! I felt like the other horses were focused on the apples, and he was waiting for me. So not infrequently I brought one cube of sugar in a pocket just for him. Even on bucket days. All the horses would get an apple; he’d get apple and sugar. My dad would say, “That horse has you trained well.” 🙂

          1. Oops! That video of the Irish sheep dogs isn’t very good quality. This one of the same dogs is much better. Again notice how each dog only pays attention to the notes they’ve been trained to–but I’ll bet both are using a lot of context cues as well!


          2. One more jargon note on “discriminative stimulus.” This is super technical, so if the academic side isn’t your thing, feel free to skip this note. For those who are interested, here goes. 😉


            First you need to understand a different term, “differential reinforcement.” That’s used to say that only a specific behaviour will be reinforced. You want your dog to lie on her mat. You give a treat when she lies on her mat. You don’t give a treat when she runs to the other side of the room and lies down there. That’s differential reinforcement. You may differentiate based on what she does, where she does it, when she does it, or what she did before doing it. Still all called “differential reinforcement.”

            There are a bunch of additional technical terms that go along with DR. This link has some good classroom examples:


            With dogs, housetraining is another good example. We reinforce based on where the dog urinates: outside, good. Inside, not so good. So again DR.


            That brings us to the big question from the learner’s point of view: how do they know which behaviours are likely to be reinforced when?

            Take High Five. Most of the time if my dog paws at me, I am not happy. But sometimes that’s exactly the behaviour that will be rewarded. So how does the dog know the difference?

            This is where the “discriminative stimulus” comes in. The discriminative stimulus allows the dog to “discriminate” between situations where the behaviour is likely to pay off and where it isn’t.

            In its purest form, like the High Five cue, the discriminative stimulus is indicating that this is the only time the behaviour may pay off. If a behaviour is under “stimulus control,” then the behaviour should only occur when the discriminative stimulus is presented.

            In another typical form like the hunter seeing tracks in the woods, the discriminative stimulus is simply indicating a higher probability of payoff. You can still hunt without seeing tracks.

            In dog training this second form applies to naturally occurring behaviours, like lying on the mat. Unlike pawing, we don’t care if the dog chooses to lie on the mat when we didn’t cue her to do so. But when she does get a mat cue, we will have trained her to have a higher expectation of a food reward for that same behaviour. So it’s still a discriminative stimulus, because like the animal tracks it lets her discriminate between a high payoff percentage situation and a low one.

            So a discriminative stimulus helps the animal distinguish times when a specific behaviour is particularly likely to pay off.


            The classic laboratory example is a light bulb above a lever. When the light is on, pressing the lever is likely to produce food, even if only once every 3 or 4 presses. When the light is off, you can press the lever all you like; no food is coming out. The light being on is the discriminative stimulus. But it only communicates the presence of the opportunity because of all the past experience the learner has had.

            So a “discriminative stimulus” allows the learner to discriminate between times when there is a higher probability of reinforcement for a specific behaviour and other times of lower probability based on that individual learner’s past experience. Or to use more precise jargon, that individual’s “reinforcement history.”

            Discriminative stimuli are really helpful to the learner in many situations. Take the lever in the lab. The learner has no way of knowing from the lever itself whether it’s likely to pay off. Even pressing it once may not tell you anything. But the light being on provides information that might normally require many lever presses to figure out. True, the learner originally had to do a lot of lever presses to figure out the association between the light and the food. But once that’s established, it’s a huge time and energy saver for future decisions.


            For the trainer in practical everyday situations, the learner's recognition of discriminative stimuli can also save time and energy because it means, from the trainer's point of view, that the learner is more reliable in producing desired behaviour.

            The family with a pet dog who sometimes says, "Down," sometimes, "Off," sometimes, "Lie down," sometimes, "Lay down," sometimes with a point, sometimes with two hands flapping, sometimes with one hand flat, has failed to give the dog a discriminative stimulus that consistently predicts reinforcement for lying down. Many such dogs just stop paying much attention to verbals and instead try to figure out for themselves a good discriminative stimulus, often the person's body language. Or they just guess and try 3 or 4 behaviours that have sometimes worked in the past.


            If we can remember that from the learner’s point of view the value of a cue is that it identifies times when reinforcement for a specific behaviour is more likely, then we as trainers can choose intentional cues that help elicit the behaviour we want to see. And we can take advantage of context cues that the animal is already using to make a specific behaviour even more reliable from the trainer’s point of view.

            Or we can reverse the process and intentionally break the association between a context cue and the expectation of reinforcement for a particular behaviour, which is what fluency training is all about. For example, just being in the training room usually means a high probability of treats. It's up to us as trainers to build that association in other environments as well.

            We improve reliability when we improve the learner’s ability to discriminate between high payoff situations and low payoff ones.

            Usually that means teaching the dog to treat a specific verbal cue as the critical discriminative stimulus for a specific behaviour. But it applies to any cue we want the dog to value.

            Again, all very technical, but that’s how differential reinforcement, reinforcement history, and discriminative stimuli work together. And why a stimulus which is neutral for one learner is full of information for another.

            1. I should also say that discriminative stimuli are just one kind of “cue” in psychology. Classical conditioning also uses “cue,” but in a different way because no particular behaviour is required of the learner before reinforcement occurs. Practical dog training usually uses “cue” to mean discriminative stimulus, but may also use it for a conditioned stimulus or even an environmental cue with an instinctive response.

      2. In riding there are “artificial” aids – gear – and “natural” aids – legs, hands, seat, and voice. All of those could become cues or signals as to what’s going to happen next.

        A leg “aid”, behind the girth, for example, could become the cue for a canter depart. Or it could be a cue for a leg yield or half pass or haunches in, depending on what OTHER aids/cues you’re also giving. Some people are very specific in where they use the leg aid (with a spur) to differentiate between those behaviors; Spot #1 for the canter depart, spot #2 for half pass, spot #3 for haunches in, and so on.

        Everything in riding is contextual – where did the last canter depart happen? It will likely happen there again. What other aids were used and how were they applied governs what behavior is being asked for.

        Just a couple of thoughts, not fully fleshed out. 🙂

  4. This actually doesn’t have to do with my dogs but actually my guinea pigs .. the second my alarm clock goes off they start ‘weeking’ that squeaky noise they make and run up to the top level of their cage and wait for their greens .. every morning they get veggies and the alarm has become a cue to get to the feeding station …haha

    1. How well I know that noise! That’s cool (I guess?!?) that they cue off your alarm. At least it’s not something that happens even earlier in the morning! Come to think of it, my guinea pig started in when she heard the refrigerator door open.

  5. George picked up on a habit I have of going to a kitchen cupboard to put lip salve on before getting ready for dog walks. When I go to a mirror and apply lipstick, he sighs and lies on his bed, knowing I'm off somewhere. Closing the laptop is a cue too, as you say, but he also knows the Windows shutdown jingle means time for a bedtime wee, and goes to the back door.

    1. That’s cool, Nicola, that he knows the two different lip treatments. And the shutdown noise is a good one, too. Mac user here, so we have a startup noise but no shutdown. So they have to wait for the lid to go down.

    2. My dog has figured out which drawer in the bathroom I open in preparation for going for a walk (hair comb), as opposed to the makeup drawer which means I am going out without them. He is certainly much more adept at noticing antecedents, but I never thought before that this might be connected to his background; prior to coming to live with me, he was rescued from living on the streets. Not feral, but certainly in a position where he had to pay more attention in order to survive.

      1. That’s cool, ClearlyKrystal. I love noticing the things that dogs notice. It certainly makes sense that the survivors from the streets or the woods are probably the ones who are good at that.

  6. Thank you for another good post! My friend and I walk our dogs together several times a week. Usually one of us calls the other to say we are leaving the house so we can meet up. Whenever I answer my phone while sitting at my desk, Max comes over to me and gives me that “look”. He also does it whenever I make a call. The funny thing is, the “ready to walk” phone calls are always brief, so if I am on the phone longer than a minute or two, he usually sighs and lies down. Is it possible he has determined that a longer conversation means it is not the “ready to walk” call? My friend has two dogs, and the one who really loves to walk responds to her phone calls the same way. The other dog pays no attention when her phone rings.

  7. Linda, LOL, Max is smart.

    Robin J, Thanks, I found your articles very interesting.

  8. I’ve noticed things like this with our dogs too. It’s interesting how quickly they learn a behavior and how long it lasts. Our adult dogs are still exhibiting behaviors that they learned as puppies and the puppies picked up on behaviors right away. I love it!

  9. Love this! I especially liked the part in the video where Clara and Zani pass each other in response to the “reaching for the cookie” cue: Zani into the kitchen and Clara (away from the cookie!) into her crate! Excellent and informative post, Eileen! (Loved the “snow cue” also.)

    My very favorite (new) classically conditioned cue is the predictive value of my husband’s phone ringing…..last summer, we finally were able to put in a long-desired lap pool (for us AND for dogs). I would regularly do my lap swimming first and then call Mike from the pool to “release the dogs” from the house so that they could all come and swim with me. After just a few repetitions of this……you guessed it, the ringing of Mike’s phone now causes all heck to break loose in our house – four dogs barking, spinning in circles, grabbing their pool toys, and racing for the doors. The pool has been closed for the winter for several months now…..but Mike still has to keep his phone on vibrate….. 🙂

    Thanks again Eileen – Love reading your posts! Linda

    1. Thanks, Linda! I got a kick out of that in the movie, too. Zani wasn’t supposed to be in it. But I thought their contrasting reactions were cool. (Zani stays in the same room as Clara (Clara is crated) when I am gone, but she hasn’t reached Clara’s level of efficiency yet!)

      So you did new cue/old cue with your phone! Clever dogs. Thanks so much for the comments.

  10. Interesting! I always thought that “cue” was just a word used to replace “command” due to the negative connotations.
    The blog does state
    “Note that a cue is not a “command” or an “order.” There is no force in the definition of cue.”
    I don’t see why there has to be. I use “command” and “cue” interchangeably, and to me that simply means “telling your dog which behaviour to perform”. When I say “sit”, for example, I am giving a command/order: I’m telling my dog what to do. There still isn’t any force involved as it was FF/+R trained, but it still means “do what I tell you right now” in a very literal sense. There just isn’t an “or else”.

    1. That’s great that your commands don’t have an “or else.” I believe you! With what I wrote, I was responding to the historical use of that word by some types of trainers where there definitely was, and I think there can be a really different mindset. (Can we really “command” a dog, like a king commands his subjects?) Since I’m a crossover trainer, it was quite a thing for me to learn that nobody, no matter what they say, can get 100% compliance, and that cues are not about that. With behavior, we are always dealing with probabilities and trying to stack the odds in our favor, whatever methods we use. Anyway, for me, the meanings of the two terms have always been very different. Thanks for commenting and giving your perspective!

  11. Pingback: unintended cues
  12. I like to take a glass of wine when I take our dogs out to walk our land so, yep, our dogs demonstrate an embarrassing enthusiasm for wine.


Copyright 2021 Eileen Anderson All Rights Reserved By accessing this site you agree to the Terms of Service.
Terms of Service: You may view and link to this content. You may share it by posting the URL. Scraping and/or copying and pasting content from this site on other sites or publications without written permission is forbidden.