When we bake cookies, some reinforcement is on a variable interval schedule.
What Is the Matching Law?
Have you heard trainers talking about the matching law? This post covers a bit of its history and the nuts and bolts of what it is about. I am providing this rather technical article because I want something to link to in some other written pieces about how the matching law has affected my own training of my dogs.
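For readers who want the equation itself: in its simplest (strict) form, Herrnstein's matching law says that the relative rate of a behavior matches the relative rate of reinforcement it earns. With two response alternatives:

```latex
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```

where \(B_1\) and \(B_2\) are the rates of the two behaviors and \(R_1\) and \(R_2\) are the rates of reinforcement each one produces.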
Answer: Almost always. One study is usually flimsy evidence. What we need to consider is the bulk of the research. I’ll explain.
Most of the online requests for studies I see are from people who want to support their points of view in online arguments. Others are investigating a health or behavior condition that has to do with their own dog. Some need references for a position paper on dog training or another aspect of care. There are also people who are delving deep into an issue for reasons of education or scholarship. But usually, these people don’t need that much help.
Requests are almost always couched as follows:
“Is there a study that shows XYZ?”
This is human. We believe something, either from a perspective of faith or a review of the evidence. We want to bolster our belief with stronger evidence. But thinking we can do this with one study is based on a misunderstanding of how science and research work. In order to find strong evidence, we need to view any study in the context of the other research related to that topic.
There are plenty of contradictory studies in the canon. You can often find one that supports your position even if it’s wrong. It’s only over time that the best evidence floats to the top. And it takes an expert to assess that evidence.
The most recent study is not necessarily definitive. In fact, recent studies should be treated with healthy skepticism. Even when they are building on previous research, there has not been time to replicate or contradict their findings. When they are on a new topic, they are even more likely to have problems, since research techniques on the issue may not have been honed yet.
All this leaves us with some problems and challenges.
What’s Better Than One Study?
Wouldn’t it be cool if there were a way to get an expert’s view of a study or a set of studies? To get an educated opinion about them? Well, there is a way. Experts tend to write books and articles. Here are three types of publications that will help the reader get a broad sense of a topic. Citing one of these publications is usually superior to picking out a single study.
Textbooks, depending on the level, cover a broad view of a field of study or topic. Good ones provide the standard research citations for every subtopic they discuss. They are almost always more appropriate for “winning an argument” than a single study. That’s because the author will cover all views and note which have the most supporting evidence. See Example 1 below.
Scholarly compilations are based on a large topic within a field of study. Usually, world experts are asked to contribute an article or chapter on one aspect of the topic. For example, the red book in the picture above is Operant Behavior: Areas of Research and Application and has chapters by Azrin, Sidman, and other heavy hitters. Some of the information has been superseded over time, but the book is still a great reference for the classic research.
Review articles summarize the research on a certain topic up to the current date. An example is James McGaugh’s article on memory consolidation: “Memory: A Century of Consolidation.” If you take a look at that on Google Scholar, you’ll see that it has been cited several thousand times by other authors.
These three types of publications provide the views of experts. They can tell us which studies have stood the test of time, been replicated, or been expanded on. They can tell us when the research took a wrong turn. They can tell us what new research to take a look at, and they do it without the sensationalist headlines we often get in blog posts.
Here are a couple of examples of what I learned on two different topics using textbooks (Example 1) or a personal review (Example 2). Oh yes, and a third example where my research had big holes in it.
Example 1: Punishment Intensity
In 2016, I wrote a post called Don’t Be Callous: How Punishment Can Go Wrong. In it, I talked about the pitfalls of using punishment. On the one hand, starting with too low an intensity allows the animal to habituate. On the other, starting with a high-level intensity risks fallout. There is nothing controversial about this finding. You can find information about it in any learning theory textbook.
A commenter claimed I had cherry-picked the studies I cited. But I hadn't. I had cracked multiple behavior science textbooks. All of them covered the topic of punishment intensity, and the studies they cited all overlapped. I cited them, too.
Textbooks are giant literature reviews created by experts in the field. They are generally way more helpful than a study or two.
Example 2: Dogs and Music
I keep track of studies on the purported effects of music on dogs. I am actually fairly qualified to assess some aspects of that literature, as I have master’s degrees in both music and in engineering science with an emphasis on acoustics. I keep a list of dogs and music studies and I presented on the topic at the 2021 Lemonade Conference.
It’s usually safe to quote Chance
This is a new field so you won’t find extensive coverage in textbooks. The research is still in what we might call an oscillating phase, with conflicting, back-and-forth results. Yet there is a burgeoning market of music products for dogs, most of which claim that research has “proven” that music is beneficial to dogs.
That’s a stretch. And it pays to know something about the literature before taking such claims at face value. For instance, you can buy recordings of music that is specially altered for dogs. A certain brand claims that their music has been clinically proven to relax dogs and allay their fears. The product’s website cites a study. One study. And their product isn’t even included in it.
But what about the bulk of the research? Is there more than that one study? Oh yes. And this company leaves out of its marketing materials the fact that its specific product has been tested three times in subsequent research studies. Guess what? In all three of those studies, the product was no more beneficial than regular "classical" music. Instead of mentioning that, the company just continues to cite the older article that shows benefits to dogs from generic classical music.
If we trace the current threads of research on dogs and music, we will see that a current hot topic is habituation. There are some studies that have shown that dogs habituate to music that is played regularly. Think about that one for a minute. Those tracks you play during every thunderstorm (if they ever did contribute to your dog’s relaxation) may have become so much background noise to your dog.
The lesson I have learned here is to always, always check the sources myself. Whether deliberately or through an oversight, product marketers, writers, and private individuals often cite studies that don’t actually support their claims. In some cases, they cite studies whose results are the opposite of their claims. One company referred me to a study that found their product to perform no better than a placebo!
Example 3: Research Blooper
In 2013 I wrote a blog post about errorless learning. I performed my standard research procedures and came up with Herb Terrace’s work starting in the early 60s with pigeons. My post was critical of applying his methods to dog training. The pigeons were food deprived and their training necessitated hundreds, even thousands of reps. Plus I disliked the absoluteness of the term “errorless” since even Terrace’s pigeons made errors.
I published my post and a friend whose parents trained with B.F. Skinner gently showed me Skinner’s work and his suggestions about setting up antecedents for errorless learning. Turns out my post on errorless learning had many errors! Several decades before Terrace, there was an important discussion regarding the role of errors. The topic was important in Skinner’s work. Skinner disagreed with Thorndike, who claimed that errors were necessary for learning. I could get behind Skinner’s claims, which centered on skills and planning used by the teacher/trainer to make the learning process as smooth, efficient, and stress-free for the learner as possible.
In my defense, most textbooks and scholarly discussions about errorless learning center on Terrace’s work, not Skinner’s. Terrace’s own references and credits to Skinner are skimpy. I’m just lucky I had a friend who could direct me to the right place. I published a second post on errorless learning with updated information and corrections. I left the first one published (with cautions for the reader and links to the second article) as an example of how easy it is to miss a research elephant in the room.
Who else has a personal “Oops” story? Did you get taken in by a popular article on a study that turned out to miss the point of the study? Did you go as far as I did and publish an article that didn’t cover the research well? (Not sure I can get any takers on this but it’s worth a try!)
My Behavior Science Go-To Resources
Here’s a list of the textbooks I use most often when researching a learning theory topic. Enjoy!
Chance, P. (2013). Learning and behavior. Nelson Education.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Pearson.
Domjan, M. (2014). The principles of learning and behavior. Nelson Education.
Domjan, M. (2000). The essentials of conditioning and learning. Wadsworth/Thomson Learning.
Goodwin, C. J. (2016). Research in psychology: Methods and design. John Wiley & Sons.
Honig, W. K. (1966). Operant behavior: areas of research and application. Appleton-Century-Crofts.
Keller, F. S., & Schoenfeld, W. N. (1950). Principles of psychology: A systematic text in the science of behavior (Vol. 2). Appleton Century Crofts.
Klein, S. B. (2011). Learning: Principles and applications. Sage Publications.
Mayer, G. R., Sulzer-Azaroff, B., & Wallace, M. (2018). Behavior analysis for lasting change. Sloan Publishing.
Miltenberger, R. G. (2008). Behavior modification: Principles and procedures. Wadsworth.
Schwartz, B. (1989). Psychology of learning and behavior. W. W. Norton & Co.
Shettleworth, S. J. (2010). Cognition, evolution, and behavior. Oxford University Press.
I want to share just how tricky this falsification stuff can be. In the last few weeks I’ve received two comments from readers that pushed me to rethink some things I’ve written. They were both presented very constructively, offering some ideas in the spirit of good dialogue and the search for truth. They included fascinating questions that… Continue reading “I Failed to Falsify—Twice! (Falsifiability Part 2)”
What’s your favorite color? Do you prefer pie or ice cream? Which shirt do you like better: the striped one or the solid green one?
Most of us have been asked our preferences since we were children. Sometimes we are being asked to make a choice: if we choose the striped shirt we won’t be wearing the green one also. If we are asked to choose enough times, our preferences often become clear.
What if we had to know our animal training theory and practice so well that we could easily tell someone what would disprove the hypotheses that inform our methods? That’s what scientists do. If we are going to claim to base our training methods on science, I think we should get with the program.
There’s a concept in science that is not much discussed in the world of dog training. The concept is falsifiability. Learning about it can save us a world of hurt in assessing statements about training methods. Focusing on how we would disprove our own methods may seem counterintuitive at first, but bear with me. Continue reading “Falsifiability or Falsehood in Dog Training? (Part 1)”
This post is not directly about dogs, but it’s about something we see happening in the dog world very frequently. That is the misunderstanding and misapplication of research results. This particular example caught my attention because it involves something I have a bit of expertise in: sound.
In the past few years there has been a rash of articles about how important silence can be in our lives. Many of them center on a campaign by the Finnish Tourist Board that promoted the restful silence of that country. I’ve been there, and it’s true!
The silence thing got my attention. I’m a fan. I’m an auditory person, musically trained. I’m very sensitive to my auditory environment and dislike unnecessary background noise, including music. When I have music, radio, or the television on, I am actively listening. When I’m done they go off. I need and enjoy quiet.
Likewise, I am quite attuned to the “background” sounds that are present even when it’s very quiet. I am sitting in my study now. I’m aware of traffic noises, neighborhood dogs, the occasional creak of the house, the furnace and refrigerator when they cycle on, my neighbor’s sump pump, and Clara snoring. She’s got a funny little whistle sound in her nose. Plus I can hear some of the common urban mashup of low frequency noises. The 60-cycle hum of power lines is audible, although we habituate to it. We can hear even lower frequencies generated by industrial equipment. Most of us city dwellers are unaware of these lower frequency, deeper noises, although sometimes we notice their absence if we get out “beyond the sidewalks,” especially at night. But even with all that going on, my environment right now definitely qualifies as quiet, if not exactly silent.
Frequency and magnitude breakdown (FFT) of the noise in my study
How different would it feel if ALL that noise were gone?
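As an aside for the technically curious: a frequency-and-magnitude breakdown like the FFT in the figure above can be computed in a few lines of numpy. This is a minimal sketch, not the actual analysis behind the figure; the "room noise" here is a synthesized 60 Hz mains hum plus random noise, standing in for a real recording.

```python
import numpy as np

# Synthesize one second of "room noise": a 60 Hz mains hum plus broadband
# background noise. (Illustrative signal only; a real analysis would load
# a recorded waveform instead.)
fs = 8000  # sample rate in Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.standard_normal(fs)

# Magnitude spectrum via the real-input FFT, with matching frequency bins.
magnitudes = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The strongest component should be the 60-cycle hum.
peak_hz = freqs[np.argmax(magnitudes)]
print(f"dominant frequency: {peak_hz:.0f} Hz")  # → dominant frequency: 60 Hz
```

With a real recording in place of the synthetic signal, the same two `rfft`/`rfftfreq` calls produce the kind of frequency/magnitude plot shown in the caption.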
Silence is Golden?
The articles I ran across praised the value of silence in our lives and cited a scientific study that had “proved” the value of silence.
In the study, the effects of different auditory stimuli were tested on mice with the goal of analyzing whether they affected the creation of new brain cells. The scientists were looking at adult neurogenesis in the hippocampus. They exposed the mice to five different acoustic conditions: the ambient sounds of the facility, white noise, some Mozart piano music (thoughtfully transposed to the normal hearing range of the mice), the calls of rat pups, and silence. Most mice were exposed to one of the auditory stimuli for two hours a day for three days inside an anechoic chamber. After one more day they were killed and their brains were studied. Some mice were exposed for seven days, then killed.
The Mozart music and the silence resulted in the largest increase in precursor cell proliferation after three days of exposure to the sounds. (Precursor cells are new, blank cells that can develop into different kinds of cells. For example, stem cells are one type of precursor cell.) And after seven days of exposure, only silence was associated with increased numbers of precursor cells. Edit 4/3/16: I deleted some incorrect comments I made about the control of the study.
Back to the articles. They claim, and cite this study to support, the idea that periods of quiet, perhaps “down time,” are beneficial to our brains. The articles evoke images of calm contemplation and taking breaks from mental activity. This is a potent meme in our sometimes noisy, frenetic lives.
Such periods probably are beneficial. The problem is that that is not what this study is about. The term “silence” in the study refers to a specific state that is virtually never replicated in normal life. And it was probably not a pleasant state for the experimental mice, despite the article title. Here’s what it really involved.
Walls of an anechoic chamber (photo source: Wikimedia Commons)
All of the mice experienced the sound exposure inside an anechoic chamber. Anechoic chambers are enclosed spaces in which the amount of reflected sound is reduced almost to zero. They are built of absorptive material installed in patterns designed to break up sound waves. They are also insulated from exterior noise. When there is sound being generated on the inside, as with the recordings used in the experiment, only the original sound wave reaches the organism’s ears. There are no reflections. This is an abnormal situation. In real life, we almost always perceive some reflected sound. Any sound heard without reflections seems “dead.”
This is a highly disturbing auditory situation if you don’t understand what is going on. I’ve been in an environment that approximates that. It makes your ears feel funny and you lose senses you didn’t even know you had. You can no longer sense where objects are in relation to your body (the rudimentary human equivalent of echolocation).
For mice, being trapped in an anechoic chamber and exposed to its unique qualities could well have stressed them out of their minds. We can’t explain it to them. So we need to get rid of the positive connotations of the word “silence” in the case of this study. This was not restful or calm. It was foreign and strange, something that no animal could be prepared for from previous life experience.
We should note that the mice who were exposed to other auditory stimuli were also placed in the anechoic chamber. There was doubtless also some strangeness for them. But since sound was being played, they would not experience the strangeness of absolute silence.
The Results
If you read far enough in the study, there is discussion about silence being a stressful state.
But of the tested paradigms, silence might be the most arousing, because it is highly atypical under wild conditions and must thus be perceived as alerting. Functional imaging studies indicate that trying to hear in silence activates the auditory cortex, putting “the sound of silence”, the absence of expected sound, at the same level with actual sounds. The alert elicited by such unnatural silence might stimulate neurogenesis as preparation for future cognitive challenges.–Kirste, Imke, et al. “Is silence golden? Effects of auditory stimuli and their absence on adult hippocampal neurogenesis.” Brain Structure and Function 220.2 (2015): 1221-1228.
No kidding. In other words, the level of silence was novel and probably uncomfortable and scary. The apparent increase in neurogenesis in the mice’s brains correlated with a time when they were suddenly thrust into an eerily quiet, unnatural environment and couldn’t escape. They weren’t in the equivalent of a pleasant, peaceful, mousie yoga studio.
A more accurate title for an article about this study might be, “Being trapped without the possibility of escape in a strange, frightening environment may help generate new brain cells.”
The Big Picture
I am not weighing in on the methods and results of the study. Neither am I arguing against the value of relative quiet in our noisy human lives. I am highlighting the way this study is being incorrectly referenced. The results of the study do not connect with the spin of the articles about it. And we can’t blame it only on the journalists. Note that the scientists themselves prompted this, in part, with the reference to “Silence is golden” in the title. Catchy, but misleading. (Also, to be fair, most of the articles cite other studies as well, studies that may support the claims about restful silence.)
Humans love to take mental shortcuts, and articles about the “value of quiet” are attractive in our noisy, hasty world. They resonate, if I may use another auditory figure of speech. But we need to be careful.
This particular example jumped out at me since I have a background in acoustics. I was curious about how the “silence” was created, and as soon as I saw the mention of an anechoic chamber, I was on the trail. But in this study, you don’t actually have to understand acoustics to see the problem, as long as you read the whole thing. The paragraph I quoted above is one of several in the “Discussion” part of the study where they make observations and theorize about the findings. The fact that the silence was a highly stressful condition is discussed in detail. But you have to read far enough to get there, and to drop your automatic warm fuzzy thoughts about silence and calm states.
I’d love to know whether anyone has been in an anechoic chamber or experienced other sensory deprivation. What was it like? When I was in graduate school we bought the materials to build a chamber and I messed around with the stuff, so I know what even a small exposure to the noise absorptive materials made my ears feel like. Creepy!
Reference
Kirste, I., Nicola, Z., Kronenberg, G., Walker, T. L., Liu, R. C., & Kempermann, G. (2015). Is silence golden? Effects of auditory stimuli and their absence on adult hippocampal neurogenesis. Brain Structure and Function, 220, 1221-1228.
Lots of us in the dog community read journal articles and scholarly books to learn more about the science behind behavior, even if our academic credentials lie elsewhere. And sooner or later we want to share what we’ve learned, out of the goodness of our hearts (grin), or, more likely, to try to win an argument, er, persuade someone of our position.
Some say you shouldn’t even cite research if you don’t have credentials in that field. I think that’s true to some extent, but I also think it is beneficial to read and try to assess research even if you don’t have those credentials. Delving into scholarly journals isn’t always easy, but it’s one of the best ways to expand your knowledge and learn about the dialectic nature of science. But you have to keep front and center in your mind that if you are reading about a discipline that you don’t have academic expertise in, you are at a huge disadvantage compared to the people who have a longstanding background in that area.
One of the first rules of citing research is that you must understand the context, both for your own benefit and to save your ass from embarrassment. And if you don’t know much of the context, you’d be well advised to start studying.
Let’s say you run across a quote that refers to some research. It supports a position that might be a little controversial or a minority view, but you are excited since you hold that view yourself. You are delighted and ready to quote it, both to impress your friends and show the other camp a thing or two. What should you do?
As someone whose credentials are in fields other than psychology or animal behavior, here are some guidelines I have developed.
What to Do Before You Quote the Article
Cherry picking is a rhetorical fallacy
Find the original source. If you read about the study in Newsweek or The New Yorker, get the author’s name and track down the original research article. An editorial mention is not peer-reviewed research. You may have to pay for the original piece or order it through a library if you don’t have university access. Another option is to send an email to the author. You’d be surprised how many times they’ll just send it to you. Be sure and thank them politely!
Read the article. The first time, don’t worry too much about all the stuff you don’t understand. Try to forge ahead and get a sense of the whole thing.
Read the article again.
Study the charts and graphics. What are they measuring? What’s on the x-axis and what’s on the y-axis of the charts? What statistical methods did they use?
Now look up the terms you don’t understand. Give yourself a crash course if you need to.
If there are still big sections that you don’t get, consult an expert in the field if you can.
Read the article again. Are you beginning to understand it?
If not, and if you have no way of doing so, stop right there. Don’t bother to quote it. If you think you understand it moderately well, proceed.
Find the quote that got you started in the first place.
Study the part in the article just before it. How was the experiment or problem set up?
Study the part just after it. Did they qualify the statement at all? If so, you are ethically bound to include that part if you plan to quote the study. “The new XYZ method works 95% of the time (YAY!), but only with orphaned voles raised with chipmunks and no other rodents (oh).”
Study the results section and the discussion section. These sections are where the authors summarize their results and make the case for their findings. But they are also bound to announce the limitations, and we should be just as attentive to those.
Think hard about applicability. If it is about behavior, are there big behavioral differences between the subject species and the one you want to apply it to? Is one a prey animal and another a predator? Have the researchers done something spectacular in the controlled condition of the lab that can’t possibly be replicated in real life? Or conversely, have they found a problem that rarely shows up in the real world because of the ways that good trainers know how to help animals generalize and practice behaviors? Tread carefully. Think it through. You’ll look silly if you announce a problem that real world experts have been aware of for ages and already know how to avoid.
Find out how many times the article has been cited. Google Scholar will give you a rough idea. If there are few citations, it generally means the work made very few ripples in the scientific world (usually a bad sign) unless it is brand new. If it has lots, keep that in mind for the later step about finding opponents of the work.
Start reading the citations. Did they show further research that replicated the results? Or did they yield different results and argue against the first conclusion? Sometimes you can tell from just the abstracts, but sometimes you’ll need to get the full text of those articles too. You may run across a review article of the whole topic. Read it!
Take note of the date of the article. If it was from 1975 and the thread of research continues through 1980, 1983, 1988, and 1992, you’d better read to the end. You’ll either bolster your case or save yourself some embarrassment.
Find a ranking for the journal that published the article. Here’s a journal-ranking site. Collection development librarians can also help you assess the comparative merit and ranking of journals and academic publishers. This is another area where you may save yourself some embarrassment. If the ranking is abysmal and the only other publications citing the article are from the same journal, you have a problem. And be careful about open-access “pay to publish” journals; they require even more careful assessment. Some are responsible. Others, not so much.
Search through the citations and find the major opponents of the work if there are any. Get the cheerleader out of your head and address the article critically. What do the opponents of the work say? What are the opposing hypotheses and results? Do they make sense? How many citations do they have? (Being heavily cited only shows that people paid attention to the article. A good start. But it might be because a bunch of future studies demolished the findings.)
Take a deep breath. Does your quote have merit? Is it a fair claim, given what else you have learned? Is it from a good source? Has it stood the test of time? Does it apply to your own topic? If so, go for it. Write your post, make your claim, but qualify it appropriately. Cite your source and be careful about Fair Use guidelines: give complete credit so that anybody could go find the very article and quote you are citing, but don’t quote huge chunks.
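The citation-count step above can be partially automated. Crossref’s public REST API reports a citation count for each DOI in a field named “is-referenced-by-count” in its /works responses. Here’s a sketch that pulls that number out of a response of that shape; the sample payload, its DOI, and the count of 123 are all invented for illustration, not taken from a real record.

```python
import json

# A trimmed, invented sample of the JSON shape returned by Crossref's
# /works/{DOI} endpoint (api.crossref.org). The field name
# "is-referenced-by-count" is Crossref's citation count for the work;
# the DOI and the count here are placeholders.
sample_response = """
{
  "message": {
    "DOI": "10.1234/example-doi",
    "title": ["An Example Article"],
    "is-referenced-by-count": 123
  }
}
"""

def citation_count(crossref_json: str) -> int:
    """Extract Crossref's citation count from a /works response body."""
    work = json.loads(crossref_json)["message"]
    return work["is-referenced-by-count"]

print(citation_count(sample_response))  # → 123
```

In practice you would fetch the live response for the article’s DOI and pass the body to `citation_count`; remember that Crossref and Google Scholar count citations differently, so treat either number as a rough signal, not a verdict.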
What does Chance say?
What To Expect Afterwards
Your friends will be proud of you. People who disagree may be irritated or outraged. But here is what to be ready for. There are virtually always people with better knowledge and credentials than you in a given field. If you are already in the hierarchy of academia, you are keenly aware of this.
So, those people may have something to say about what you wrote. Here are the main possible reactions:
They address you with criticism of your piece from the benefit of their broader knowledge. They may ask if you considered Joe Schmoe’s experiment from 2004. They may advise you that you made a beginner’s error and you forgot to account for the “Verporeg Effect.” They may tell you that you really need to start over because of the discrepancy between the metrics being used in the different studies. Make no mistake: This is a GREAT response to get from experts. Even if you personally feel ripped to shreds and devastated, get ahold of yourself. They took you seriously enough to make suggestions. They took time out of their day. Thank them (publicly if their critique was public) and go do as they suggested.
They argue in opposition to your piece. Now you have lots more work to do. They have an advantage. They know the field. They are probably right. But you can make lemonade. Go study their points. You wanted to learn about this, right? Now you have a chance to learn some more. This is still hard on the ego, but again, you got taken at least somewhat seriously, and you have an opportunity to learn. And if/when you find that they are probably right, be gracious.
But the worst: they ignore it. They took a look and decided that gracing it with a response would be a complete waste of their time. So you can either puff up your ego and decide that no one recognizes your genius, or go back on your own and study some more. Maybe you are that lone polymath who has connected the dots between some interdisciplinary stuff and people will recognize your genius later. More likely, you were just out of your depth. The people who make radical, startling discoveries are usually immersed in the field in which they make the discovery, or a closely related one.
But hey. You did your best. You probably learned a lot. Whatever the response to your claim, you must forever be ready to delve more deeply if someone comes up with a well-supported opposing point of view. Be a good sport. That’s how science works.
And by the way: I write from experience. I’ve made a variety of mistakes in citing resources and making claims. I thank the people who kindly helped me improve my understanding and make corrections.
Photo credits: Clara with mud on face and Summer “reading,” Eileen Anderson. Cherries, Wikimedia Commons. The circle and slash added by Eileen Anderson.