Surprisingly, no. Not necessarily.
You could actually get my answer to this question by reading this other post: Only If The Behavior Decreases! But of course I’m writing some more anyway.
This issue was a big stumbling block for me when I first started studying operant learning. If negative reinforcement requires an aversive to be removed, then the aversive had to get there in the first place, right? That’s only logical. So whenever it appeared, there must have been punishment, right? I used to argue about this in my head all the time. But the answer is no: it doesn’t follow that there was necessarily punishment. There absolutely could have been, but it is not logically necessary after all. Here’s why.
Positive punishment: Something is added after a behavior, which results in the behavior happening less often.
Look at the second half of the definition. I’ve harped on this before, largely because I need a lot of reminding myself. Punishment (and reinforcement, too) is defined by its outcome, not by the stimulus alone. We can only know whether punishment has occurred by traveling to the future. If the behavior didn’t decrease, there was no punishment.
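If it helps to see that logic laid out mechanically, here is a minimal sketch in Python. It’s purely my own illustration (the function and labels are hypothetical, not anything from a textbook), and it shows that you cannot assign the label “punishment” until you supply the last piece of information: what the behavior did afterward.

```python
# A toy classifier for the four operant "quadrants."
# Hypothetical illustration only; the point is that the label depends on what
# the behavior does afterward, not on whether an aversive was present.

def classify_contingency(stimulus_change: str, behavior_trend: str) -> str:
    """stimulus_change: 'added' or 'removed'.
    behavior_trend: 'increases', 'decreases', or 'unchanged'."""
    if behavior_trend == "increases":
        return ("positive reinforcement" if stimulus_change == "added"
                else "negative reinforcement")
    if behavior_trend == "decreases":
        return ("positive punishment" if stimulus_change == "added"
                else "negative punishment")
    return "no reinforcement or punishment demonstrated"

# An aversive was added, but the behavior did not decrease: no punishment occurred.
print(classify_contingency("added", "unchanged"))    # no reinforcement or punishment demonstrated
print(classify_contingency("added", "decreases"))    # positive punishment
print(classify_contingency("removed", "increases"))  # negative reinforcement
```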
Aversives, Negative Reinforcement, and Positive Punishment
Employing an aversive and using negative reinforcement do not mean one is also positively punishing a behavior. I have this from the words of Susan Friedman, PhD, in one of her lectures in her Living and Learning with Animals Professional Course. For punishment to have occurred, a behavior must decrease in frequency. That’s the definition. And in a negative reinforcement scenario, the behavior the animal is performing at the onset of the aversive could very well be random, because that behavior is not the focus of the training. The animal may not always be doing the same thing when the aversive (e.g., shock, pinch, pressure, nagging, or the appearance of a scary monster) starts.
Examples

If I decided to teach my sensitive dog Zani to back up by consistently walking into her whenever we were both in a certain hallway (note: I don’t teach backing up that way), her behavior of coming into that hallway when I was there would definitely decrease. Coming into the hallway with me would have been positively punished.
But let’s say in another training scenario I play a mildly aversive noise whenever I want Zani to come get on her mat in the kitchen, and the noise stays on until she gets there. The mat is a “safe place” and getting on it turns off the noise. (Again, I would not do this.) Zani could be anywhere in the house when the noise starts. Her behavior at the onset is randomized, so nothing gets punished. But if I started doing it consistently when she was sitting in a particular chair, she would likely stop getting in that chair.
Since we humans fall into patterns so easily, it is very easy for positive punishment to start happening when we regularly use an aversive. But the point is that it does not have to happen.
Let’s face it, people use noxious stimuli all the time without behavior decreasing. That’s one of the many problems with trying to use positive punishment: unless the noxious stimulus is strong enough (and well timed enough, and several other criteria), the original behavior may maintain its strength.
Ramifications
So does this mean that negative reinforcement is OK? No. An aversive is an aversive. Just because there is no positive punishment going on doesn’t mean that the training is humane.
If you wanted to reword the question in the title, you could say, “But if you use negative reinforcement aren’t you also using an aversive, just like in positive punishment?” As long as it is recognized that the aversive is used in a different way, the answer is yes.
But the funny thing is, I’ve heard the relationship between the two processes that use aversives used to make two opposite claims, neither of them true in my opinion.
First is the claim implicit in the title. It usually goes like this:
If you use negative reinforcement, therefore you are using positive punishment, so neeter neeter neeter.
I have dealt with that one above.
The second claim goes like this:
Yes, you can have negative reinforcement without positive punishment. And that’s the GOOD kind of negative reinforcement. As long as there is not positive punishment going on, negative reinforcement can actually be kind of nice.
Excuse me? The existence or non-existence of concomitant positive punishment is irrelevant to how aversive a stimulus is. It is possible to train with a shock collar using a negative reinforcement protocol, for instance. Again, as long as the point at which the shock is turned on is fairly random with regard to the dog’s behavior, you may not necessarily see a decrease in a behavior as a result of the commencement of the shock.
Negative reinforcement is only one notch up from positive punishment on the Humane Hierarchy, and for good reason. It always involves an aversive, and employs escape and avoidance, pure and simple. So don’t anybody dare use my argument above showing that there is not necessarily positive punishment in order to say that negative reinforcement is OK.
Coming up:
Professor Rosales Ruiz (a frequent Clicker Expo presenter) and his animal behaviour students have done quite a bit of academic study on negative reinforcement: the good, the bad, and the ugly. You might find his work interesting.
http://www.clickertraining.com/node/280
In addition, both Alexandra Kurland and Katie Bartlett have done a huge amount of practical work on this with regard to training horses. Bartlett has a long article synthesizing current horse training thought on the topic. Again, I think you’ll find it interesting.
http://www.equineclickertraining.com/training/negative_reinforcement.html
As always, thanks for the great work you do on the blog!
Robin, thanks for the great resources. I was not familiar with the Bartlett article. There was an interesting bit of synchronicity, though. I follow several horse trainers who try to use as much positive reinforcement as possible. Recently I read something, possibly by Melissa Alexander, pointing out that negative reinforcement is unavoidable with horses because of the particular behaviors we want horses to perform: being ridden by a human rider and taking cues from the rider’s body. Ms. Bartlett says the same thing. I had never thought it all the way through before. That article is comprehensive and thoughtfully written and I’m so glad you linked it.
I think it’s also helpful to remember that, as with the horses being ridden, sometimes our ultimate goal is to countercondition the animal to something that, at the moment, this particular animal finds mildly aversive but that another animal might find pleasurable, and where our hope is that this animal will eventually change her opinion as well.
Some of Rosales Ruiz’s work, for example, deals with shy dogs in animal shelters who are initially terrified of people. A person moving close to them is highly aversive to them, but another dog in the same shelter might be actively soliciting passers-by to come closer, even craving a human connection. The same goes for eye contact, and eventually touch. If RR’s students reward the shy dog by moving farther away, just feeling in control may make the dog able to tolerate a slightly closer approach the next time.
This is a situation where the trainer is not introducing a new aversive–it’s an internal assessment by this dog. And for many animals, that assessment CAN change over time.
This type of approach is also common in horse training. Learning that a human presence by itself DOESN’T mean actual pain, and increasing the distance so the animal feels more in charge, can help change that assessment over time.
So when something is aversive not because it is objectively unpleasant, but because of that particular animal’s expectations of what will follow it, you can use negative reinforcement in a very humane and ethical manner simply by demonstrating over time that the expectation was false. A person coming close doesn’t have to mean pain. You prove that by moving away again. Changing the experience history.
Indeed, I would say many situations where the handler hopes to earn the trust of a wary animal often benefit from a judicious use of negative reinforcement as part of a counterconditioning protocol. Just ending the training session can be a form of negative reinforcement for an animal who is still wary.
I support it when it is simply the only feasible method, as with wild animals, or when less intrusive things have been tried in a thorough and skilled way, and failed. Susan Friedman uses two examples of situationally humane R-. One was with wild parrots coming through customs. Time was of the essence to get them handleable. Doing approach and retreat got the trainers close enough to drop a treat in a cup and then switch to R+. The other was with a giraffe in a zoo that was afraid to go through a certain gate (into an enclosure where the other giraffes were). They designed an elegant R- protocol involving slow and gentle herding by humans carrying sheets of canvas stretched between poles. It was well designed in that even though humans carried the structures, the protocol probably didn’t have a side effect of making the giraffe more wary of humans in general. It was so thoughtfully designed and executed that they only had to do it once. I think it’s interesting that both her examples involve wild animals.
Thanks for all the info you share Eileen! It is always interesting, informative, based on science and sound research!!
I am currently starting up my own Dog Training business (been doing it for free for years) having just completed my certificate in Professional Dog Training Science and Technology through Companion Animal Science Institute (CASI). One of the concepts that has been the hardest to get across to people is that we can actually train our dogs using only positive reinforcement (particularly with the people in my town!) so this post is so relevant to me right now!!
Thanks for the vote of confidence, Kathleen! I think using only positive reinforcement (and other ultra humane stuff like antecedent arrangement and classical conditioning) is a great goal. I do think if one lives with one’s animals it’s pretty hard to not have at least some negative punishment in the mix. But I’m with you all the way in not using aversives!
Should I respectfully disagree, or maybe I am just stuck on the “well, for an aversive to cease, you have to activate it first, right?” idea. I do understand what you are saying; obviously it makes a lot of sense. But is there “good” negative reinforcement at all? Murray Sidman would disagree. How can or should we ignore the consequences of using a quadrant that is based on avoidance? It is still avoidance, after all.
Also, when you say we would have to travel to the future to know if the behaviour has decreased, therefore confirming the P+ application, you are so right. Which then makes me think: shouldn’t we admit that P+ is there all the time? An aversive is an event/stimulus the animal wants to avoid; the animal does avoid it, thereby increasing the alternate behaviour that arises from the avoidance. So we could just say that if R- is working, then P+ worked as well, the difference being that one chooses to increase a behaviour instead of working on the decrease of a behaviour.
Thank you for making my wheels turn. I love your blog. From Portugal, Claudia!
Hi Claudia, so nice to hear from you! Do you know, you hit on two things I was thinking about but didn’t write about. First, “Is there good negative reinforcement at all?” I had a sentence in there saying that R- was “never nice.” I took it out, since once in a blue moon it might be the right choice, if less intrusive methods didn’t work and if it did solve the problem. In that case it could be “nice” if it made the animal’s life better with a minimum of trauma.
And your second paragraph! I got to wondering this morning about alternative behaviors. In an R+ scenario, when the non-reinforced behaviors decrease under DRA, it’s not from punishment but from extinction. I got to wondering, if you used differential negative reinforcement of an alternative behavior, whether the other behaviors would decrease from extinction or from punishment because of “leakage” of the aversive used. Probably the answer is, “It depends”!
I do think that someone who is using only aversives in training is probably going to have plenty of P+ even if it is not intended. Humans fall into patterns too easily!
Thanks for writing. Lots of food for thought here.
Let’s take the scenario of a parrot that won’t step off a hand. Some people squeeze the parrot’s toes to get it to step down. Now, if you look at it from the angle that stepping down increases, to escape the negative stimulus, then it is R-. But looking at the behavior from another angle, staying on the hand decreases, because that behavior is punished by the application of an aversive (the squeezed toe): P+. So both (R- and P+) seem to be happening in this scenario; it just depends on what you are looking at as the target behavior, stepping off the hand or staying on the hand.
Yes? No? (I am still trying to get my mind around the finer details of operant conditioning.)
I agree. I think you have a good example of when both P+ and R- are happening. The toe squeezing happens only when the parrot is on the hand. It’s not randomized, like in the examples where P+ may not be occurring. It happens consistently when the parrot is on the hand and has been cued to get off. You’ve got a behavior that is decreasing because of an aversive. And the aversive continues until the parrot gets off the hand. I vote yes. Great example!
Great blog and discussion! I am trying to think of an example of what you described above, Eileen, when you said “I got to wondering if you used differential negative reinforcement of an alternative behavior, whether the other behaviors would decrease from extinction or from punishment because of “leakage” of the aversive used.” If you have the time and inclination, can you think of an example for me…?? Thanks!
Let me think on that and get back to you. When I read the words, I couldn’t even trace my thought processes back right away. But I’ll make a note to take a look. It’s a great question.
OK I came up with an example. Don’t take any of this as gospel; I am thinking out loud.
Sorry to use a shock collar but it is the easiest purveyor of negative reinforcement to talk about. Let’s say that when visitors come the dog is supposed to go to his mat instead of jumping on people at the front door, rushing around, or barking. His owner has previously taught him a “go to mat” behavior with the shock collar by a duration shock that only ends when he gets on his mat. So he knows this cue in easy situations but not when people come to the door.
So when people come to the door, the owner cues him to go to his mat and turns the shock on. It stays on until he goes to his mat. So he might be barking, jumping, running, or just standing around when the shock first turns on. The question is whether during the learning process, these other behaviors are punished by the initiation of the shock or go extinct.
My thought is that since shock is such a potent aversive, the behaviors that are most commonly taking place when the shock first turns on will get punished. That’s what I meant by “leakage” of the aversive. So running to the door and jumping on people may well get punished. Other possible behaviors that aren’t as common, say, standing and looking out the front window, may never be taking place when the shock goes on. So those behaviors will go extinct in this scenario (in the context where “go to mat” has been cued), but they were not punished.
I am thinking out loud here, and if anyone has some more informed opinions about this, I’d love to hear them.
The part that I can’t get my mind around is that when we have DRA using positive reinforcement, we have different competing positive reinforcers. Barking is fun, but so is going to the mat and getting great food. But I don’t know what happens when we have a previously positively reinforced behavior (jumping) competing with a negatively reinforced behavior (go to your mat to avoid shock). If extinction is happening, is the extinction process different? Can anyone think of a situation where a negatively reinforced behavior is competing with a different, previously negatively reinforced behavior?
Thanks for a good question, Lyndsey!
Hi Eileen! I love your blog! It is a goldmine of information! Thanks so much for all the effort you put into this fabulous resource.
I have a question that relates to the relationship between punishment and avoidance learning.
Avoidance learning would be regarded for the most part as negatively reinforced. In your example of the mildly aversive noise being the prompt for the dog to go to her mat, you are describing the use of an unconditioned stimulus – the noise. When we are training animals with aversives, whether for the purposes of increasing or decreasing a behaviour, it’s usual for a warning signal to be given intentionally, or at the very least learned by the animal, provided it is used consistently.
When people are taught to ride horses using aversives for example (when they are taught properly by people who understand learning theory at any rate – which is rare!), it’s usual for a warning signal to be used intentionally before an aversive is applied. So most “aids” or commands are classically conditioned.
Riders are taught to touch the horse lightly on the sides with their legs as an initially neutral conditioned stimulus (and are often also taught to give a vocal warning such as a cluck or the words “walk on”), which is then followed by an unconditioned aversive stimulus (being kicked, having a spur applied, or being hit with a stick) until the horse moves forward, whereupon the aversive is removed.
The horse initially moves off in response to the US and learns that moving results in escape from the aversive because it stops when they move. Most would regard that learning as negatively reinforced. An aversive is applied to prompt movement and removed when the desired kind of movement occurs.
The horse then gradually learns that the voice or light leg touch predicts the onset of an aversive US and begins to react to the CS to avoid the aversive onset.
Avoidance learning like that is also considered to be a form of negative reinforcement. First the animal learns what to do to escape an aversive (operant conditioning), then learns that something predicts the aversive onset (classical conditioning), and then acts before aversive onset to avoid it (operant conditioning). And the animal will keep repeating that, because there is now a classically conditioned warning signal that predicts aversive onset for non-response.
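If it helps to make that two-process sequence concrete, here is a rough sketch in Python. All the numbers and update rules are made up (this is not a model from the literature); it just plays out the escape-then-avoidance logic in miniature.

```python
# A rough illustration of the two-process account described above.
# Hypothetical values throughout: escape trials strengthen the response and
# give the warning signal predictive value; once the warning predicts the
# aversive, responding to the warning alone is maintained by avoidance.
import random

random.seed(1)
response_to_warning = 0.05   # chance the horse responds to the warning signal alone
warning_predicts_us = 0.0    # how strongly the CS has come to predict the aversive US

for trial in range(1, 31):
    if random.random() < response_to_warning:
        # Avoidance: responding during the warning means no aversive is applied.
        # This only strengthens once the warning has acquired predictive value.
        response_to_warning = min(1.0, response_to_warning + 0.15 * warning_predicts_us)
        outcome = "avoided"
    else:
        # Escape: the aversive (US) is applied until the horse responds.
        # Escape negatively reinforces the response (operant), and the CS-US
        # pairing strengthens the warning's predictive value (classical).
        response_to_warning = min(1.0, response_to_warning + 0.05)
        warning_predicts_us = min(1.0, warning_predicts_us + 0.2)
        outcome = "escaped"
    print(f"trial {trial:2d}: {outcome:7s}  p(respond to warning)={response_to_warning:.2f}")
```

Early trials are almost all escape; as the warning acquires predictive value, the same response shifts over to avoidance and the aversive itself is rarely delivered.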
This is also true of any CS that is learned by association with aversives applied through the reins to the bit or headgear. Often the rider is taught to use a warning signal again either given by voice or by altering their position or movement in the saddle. A transition from walk to whoa might be preceded by the rider ceasing to move their body in time with the horse and then following that by pulling on the reins until the horse stops, or using a vocal cue to whoa, followed by pulling the reins to apply an aversive.
Turns to the right or left might be preceded by the rider turning their upper body and hips in that direction or altering the weight in their stirrups or seat bones (whichever system they follow) and then followed by a pull on the rein to go in that direction. Some people argue that those seat movements from the rider aren’t neutral either and they unbalance the horse to the extent that the horse moves in that direction to regain balance. But anyway….
So the general idea is to teach almost invisible “aids”, which are warning signals that an aversive will be applied if the horse does not respond in a particular way. And those aids seem to have to be regularly retrained (the horse has to be “reminded”) in many different contexts, with the aversive escalated regularly for non-response, primarily because horses are very prone to distraction and are rarely trained by riders for that at the same time. There’s little active desensitisation to other stimuli; adding or increasing an aversive is the tool of choice for regaining the attention of a horse who decides to do his own thing between aids.
My question really is a philosophical one. Many people regard these warning signals that precede aversive onset, and that are avoidance learned, as threats. They come to mean “do x when you see / feel / hear that particular CS or you will experience an aversive US until you do” signals (which is what these conditioned stimuli become).
So they are a bit like your dog on the chair. If whenever the person is around and the dog is on the chair, the aversive noise is made until the dog returns to the mat, then you would say that the dog being on the chair is punished because it decreases. But would you regard that also as avoidance learning? The dog now avoids being on the chair in the presence of a person who might make an aversive noise until they get off. So the presence of the person is the CS and the aversive noise is the US?
But equally with your random use of the aversive noise to call the dog to his mat, we might find that in the presence of the person, the dog spends more time on their mat, or goes to their mat when the person appears, so as to avoid experiencing this stimulus from the person?
It’s an interesting debate in the horse world where aversive use is the norm for forming and reinforcing behaviour but where the “better” trainers use classically (aversively) conditioned warning signals so as to avoid the need for constant aversive escalation. That includes classical riding trainers, natural horsemanship trainers and academic systems such as that promoted by the Equitation Science movement. The chief proponent of that even argues that the warning signals have no aversive salience to the horse – they are “just discriminative stimuli” even though they are conditioned by association with aversives.
But those signals are all aversively conditioned and they all predict aversive onset for non-response. They all actually mean “do x or an aversive will be applied until you do, and then removed”. We might, I think, reasonably refer to those as threats.
And most people think that this is psychologically better for horses because they know how to act to avoid an aversive US. But in the end all the aids that are used to influence their behaviour are fear-conditioned and so whether CS or US they are aversive to the horse psychologically if not physically. They still produce behaviour under aversive conditions but because of classical as well as operant learning.
I was thinking about a specific example. Let’s say we are riding a horse towards a road junction and we decide that we always want the horse to stop when we come to a road junction even if there is no traffic coming.
So we apply an aversive through the reins to the bit or headgear just before the road junction, and until the horse stops. To begin with the horse escapes the aversive by stopping at the road junction. Then he learns that road junctions predict aversive onset. Then he begins of his own accord to stop before road junctions to avoid aversive onset. Is his behaviour of stopping at road junctions negatively reinforced avoidance learning? Or evidence of punishment because carrying on walking at road junctions decreases?
That’s the conundrum for us when we use aversives with warning signals – and probably evidence that it doesn’t much matter – because the horse doesn’t care whether one behaviour is being negatively reinforced or another is being positively punished. He just wants to avoid aversives.
In the case of your justified use of aversives, to prevent behaviour that would lead to something worse for the animal (like getting hit by a car), or to produce behaviour that is in the best interests of the animal (like getting the horse into a trailer to go to a vet for life-saving treatment), in the end those are what we could call tactical uses of aversives: cases where we’ve either not trained something sufficiently well using +R or we’ve yet to desensitise and counter-condition to something sufficiently. So sometimes we have no option but to use aversives. But those are so often situations in which we’ve either not yet trained for that behaviour or for that distraction, or it’s something which is almost impossible to counter-condition, and they are a withdrawal from our relationship bank account that we have to repay later 🙂
Hi Max! I don’t know if I can do this wonderful comment justice, but here goes.
First, I rode as a kid. I had a quirt. I wasn’t taught to give a warning signal. But a predictor happened naturally. I learned very quickly that after using it a few times, all I had to do was swing it within the horse’s peripheral vision to get the same effect. Although that might also be a US for an animal that startles. But it definitely gained meaning and got conditioned when followed with a slap. (I think back on that with moderate horror—on behalf of the horse and myself. Let’s hand a whip to a 10 year old and let her go to town….)
Anyway, I mention that because I have some experience of what you are discussing.
By the way, I have a better post in the works about decoupling positive punishment and negative reinforcement. Not about the ways you are discussing in which they truly may be connected, but about the same idea I was going after in this post: that when there is an aversive present for R-, people assume there must necessarily be a behavior punished by that aversive stimulus.
In your thoughts about avoidance, it seems to me that you are asking the basic avoidance learning question: is it a two process system or a one process system? Do you know that terminology? I don’t know enough to have a valid opinion on that question myself, though.
You asked about threats—I do think in terms of threats, but I tend to avoid (haha) that terminology—or at least to be explicit about what the threat “descended from” in real life stimuli. “Threat” has some human cognitive connotations that may not bridge over to animals well. That reminds me—have you read Bringing Out the Best in People by Aubrey Daniels? It’s about workplace motivation by someone who understands behavior analysis. That book taught me so much about R-!
OK, this question of yours I think I can answer. “Is his behaviour of stopping at road junctions negatively reinforced avoidance learning? Or evidence of punishment because carrying on walking at road junctions decreases?” If I understand your scenario correctly, I would say that both things are happening. If stopping is increasing, and walking on without a cue to do so is decreasing, both processes are happening. And unless there are other mystery stimuli at work, one aversive stimulus is functioning to negatively reinforce one behavior and punish another. It’s just that not every incidence of negative reinforcement necessarily has such a pairing.
(Negative reinforcement is naturally so ubiquitous. Imagine if there was a behavior punished for every negatively reinforced behavior. Might we run out of things to do?)
In dog training, it’s common to use body pressure (walking them down) to get a dog back into a sit position if they break their stay. The dog going back into a sit is negatively reinforced. The pressure is released. But of course, we are teaching a stay and want popping up from a sit to decrease. And it may decrease. But it also may not. The dog may just learn that after it pops up, it can avoid pressure by sitting right down again. (I have personal experience with this. Walking into them is a good way to get them to sit back down again. It’s not always such a good way to get them not to break the stay in the first place.)
What a great discussion. I am still pondering some of your questions.
Thanks Eileen – it’s so good to have these discussions with folks who think deeply about them because they are questions that come up all the time when we are teaching operant and classical conditioning to others! I appreciate your reflections on the use of the word “threat” – because, yes it will produce an emotional response when we use it in conversation with others, but then I also find that the words “punishment” and “negative” do as well! Not to mention luring (some people associate that with bribing and bribery goes with corruption and by association luring is therefore corrupt!) Just goes to show that words are just classically conditioned and we can use them to intentionally trigger associations to make our point on the one hand while we also spend our time trying to change emotional associations to scientific ones on the other!
I am aware of the two-process theory of avoidance learning, and it’s that theory that makes me think of the warning as a threat, because it’s essentially a fear-conditioned stimulus. I actually find the Wikipedia description of it, which can be accessed via this entry on Operant Conditioning, to be quite helpful: https://en.wikipedia.org/wiki/Operant_conditioning
With horses, my observation is that avoidance learned signals are not as resistant to extinction as was found in other experiments, and that responses to avoidance learned signals are not always performed without evidence of fear. Sometimes they are – but that might be in situations where the animal is ambivalent about performing the behaviour, or has been very well trained – which is rare! Very often horses are being trained in situations where they are conflicted – wanting to avoid two or more aversives at the same time. So it’s rare to see someone who can apply a CS as a warning and not have to add an aversive to “remind” the horse what the warning means regularly. More sensitised, fearful horses tend to experience more rein pressure being added to make them slow or stop when seat or voice warnings fail, and more shut down, more confident horses seem to need more aversives to be added to “make” them want to move. This is why we rarely see the colder blooded less fearful horses in dressage. It’s not about athletic ability – it’s that they are generally less fearful and more stoical about pain. Of course we aren’t in lab conditions so there are very often multiple stimuli in the environment that the horse is dealing with at any one time.
I think what it might come down to is how the whole process is perceived by the animal. When we are actively applying aversives, or the means to do so is present (we or our equipment are in the environment), and the animal has experience of those aversives being applied, whether to prompt a behaviour we want so that we can attempt to negatively reinforce it, to enforce a behaviour we want by adding an aversive if it stops happening, or to attempt to reduce unwanted behaviour by adding an aversive when the unwanted behaviour happens, it’s all aversive training for the animal. I suppose that while we are discussing whether it’s negative reinforcement or positive punishment, the animal is just working out how to escape or avoid aversives in his life!
Sorry for the delay, but while writing my comment I realized I could just as well put it in a new post! I’ve got a pretty good story for you. So thank you so much for this conversation, Max, and stay tuned!
I almost put this in a blog but decided to write it here instead. First, here is a very good article on motivating operations that a friend studying ABA (in the human application field) gave me. It applies to some of the stuff we have been talking about.
Then, I have a story.
I work in an office with one other woman and she brings her dog, Kaci, a 16 year old rat terrier, to work. (My old rat terrier used to come to the office in her day as well.)
Kaci is losing some of her house training so both of the humans have modified their own behaviors in various ways, mostly involving taking her outside a lot and watching her like a hawk.
Kaci loves to eat and lunch is the high point of her office day. She knows I often go to the microwave just before eating lunch, so every time I get up from my chair and head that way in mid-morning she gets up and follows me or wanders around. But when she is up and about, rather than lying on her bed, she is more likely to have a urinary “accident.” In short, if I get up from my chair, Kaci is more likely to pee.
I adore Kaci and love having her at the office but her getting off her pillow has become aversive to me because it frequently is a precursor to her peeing on the carpet. Consequently I will work to avoid prompting her to get up. I try to minimize my getting up from my chair. If she gets up, this dog I love, I am awash with dismay; it’s an “oh crap!” moment.
Note that the direct consequence of my getting up is Kaci getting up, not Kaci peeing. So even the “threat” of peeing acts to punish my getting up. If Kaci suddenly stopped peeing, ever, in the office, her getting up would lose its aversive quality. But she only needs to pee every once in a while to maintain the aversiveness. It’s pretty funny to work in an office and be obsessed with minimizing getting out of one’s chair.
The threat of her peeing also acts to get behavior from me. Negative reinforcement. The threat of pee on the floor makes me do crazy stuff. If I do get up, I try to sneak past her. She’s blind but it never works. And of course if she starts to act like she is going to pee, that prompts a flurry of behavior from both of the humans, as we try to distract her long enough to take her outside, or minimize the mess if it’s too late for that.
My friend hazards that the threat of Kaci peeing is a reflexive conditioned motivating operation for me. See what you think from the article if you are interested.