Answer: Almost always. One study is usually flimsy evidence. What we need to consider is the bulk of the research. I’ll explain.
Most of the online requests for studies I see are from people who want to support their points of view in online arguments. Others are investigating a health or behavior issue affecting their own dog. Some need references for a position paper on dog training or another aspect of care. There are also people who are delving deep into an issue for reasons of education or scholarship. But usually these people don’t need that much help.
Requests are almost always couched as follows:
“Is there a study that shows XYZ?”
This is human. We believe something, either from a perspective of faith or a review of the evidence. We want to bolster our belief with stronger evidence. But thinking we can do this with one study is based on a misunderstanding of how science and research work. In order to find strong evidence, we need to view any study in the context of the other research related to that topic.
There are plenty of contradictory studies in the canon. You can often find one that supports your position even if it’s wrong. It’s only over time that the best evidence floats to the top. And it takes an expert to assess that evidence.
The most recent study is not necessarily definitive. In fact, recent studies should be treated with healthy skepticism. Even when they are building on previous research, there has not been time to replicate or contradict their findings.
All this leaves us with some problems and challenges.
What’s Better Than One Study?
Wouldn’t it be cool if there were a way to get an expert’s view of a study or a set of studies? To get an educated opinion about them? Well, there is. Experts tend to write books and articles. Here are three types of publications that will help the reader get a broad sense of a topic. Citing one of these publications is usually superior to picking out a single study.
- Textbooks, depending on the level, cover a broad view of a field of study or topic. Good ones provide the standard research citations for every subtopic they discuss. They are almost always more appropriate for “winning an argument” than a single study. That’s because the author will cover all views and note which have the most supporting evidence. See Example 1 below.
- Scholarly compilations are based on a large topic within a field of study. Usually, world experts are asked to contribute an article or chapter on one aspect of the topic. For example, the red book in the picture above is Operant Behavior: Areas of Research and Application and has chapters by Azrin, Sidman, and other heavy hitters. Some of it has been superseded as time passed, but it is still a great reference for the classic research.
- Review articles summarize the research on a certain topic up to the date of publication. An example is James McGaugh’s article on memory consolidation: “Memory: A Century of Consolidation.” If you take a look at that on Google Scholar, you’ll see that it has been cited several thousand times by other authors.
These three types of publications provide the views of experts. They can tell us which studies have stood the test of time, been replicated, or been expanded on. They can tell us when the research took a wrong turn. They can tell us what new research to take a look at, and they do it without the sensationalist headlines we often get in blog posts.
Here are a couple of examples of what I learned on two different topics using textbooks (Example 1) or a personal review (Example 2). Oh yes, and a third example where my research had big holes in it.
Example 1: Punishment Intensity
Last year I wrote a post called Don’t Be Callous: How Punishment Can Go Wrong. In it, I talked about the pitfalls of using punishment. On the one hand, starting with too low an intensity allows the animal to habituate. On the other, starting with a high intensity risks fallout. There is nothing controversial about these findings. You can find information about them in any learning theory textbook.
A commenter claimed I had cherry-picked the studies I cited. But I hadn’t. I had cracked four learning theory textbooks. All of them covered the topic of punishment intensity. And they cited the same group of studies.
Textbooks are giant literature reviews created by experts in the field. They are generally way more helpful than a study or two.
Example 2: Dogs and Music
I keep track of studies on the purported effects of music on dogs, and I maintain a list of them. I am fairly qualified to assess some aspects of that literature: I have master’s degrees in both music and in engineering science with an emphasis on acoustics.
This is a new field so you won’t find extensive coverage in textbooks. The research is still in what we might call an oscillating phase, with conflicting, back-and-forth results. Yet there is a burgeoning market of music products for dogs, most of which claim that research has “proven” that music is beneficial to dogs.
That’s a stretch. And it pays to know something about the literature before taking such claims at face value. For instance, you can buy recordings of music that is specially altered for dogs. A certain brand claims that their music has been clinically proven to relax dogs and allay their fears. The product’s website cites a study. One study.
But what about the bulk of the research? Is there more than that one study? There sure is. And the marketing materials leave out the fact that this specific product has been tested twice in subsequent research studies. Guess what? In both studies, the product was no more beneficial than regular “classical” music. Instead of mentioning that, the company just continues to cite the older article that shows benefits to dogs from classical music.
If we trace the current threads of research on dogs and music, we will see that a current hot topic is habituation. There are some studies that have shown that dogs habituate to music that is played regularly. Think about that one for a minute. Those tracks you play during every thunderstorm (if they ever did contribute to your dog’s relaxation) may have become so much background noise to your dog.
The lesson I have learned here is to always, always check the sources myself. Whether deliberately or through an oversight, product marketers, writers, and private individuals often cite studies that don’t actually support their claims. In some cases, they cite studies whose results are the opposite of their claims. One company referred me to a study that found their product to perform no better than a placebo!
Example 3: Research Blooper
In 2013 I wrote a blog post about errorless learning. I performed my standard research procedures and came up with Herb Terrace’s work starting in the early 60s with pigeons. My post was critical of applying his methods to dog training. The pigeons were food deprived and their training necessitated hundreds, even thousands of reps. Plus I disliked the absoluteness of the term “errorless” since even Terrace’s pigeons made errors.
I published my post, and a friend whose parents trained with B.F. Skinner gently showed me Skinner’s work and his suggestions about setting up antecedents for errorless learning. Turns out my post on errorless learning had many errors! Several decades before Terrace, there had been an important discussion regarding the role of errors in learning. The topic figured prominently in Skinner’s work: he disagreed with Thorndike, who claimed that errors were necessary for learning. I could get behind Skinner’s claims, which centered on the skills and planning used by the teacher/trainer to make the learning process as smooth, efficient, and stress-free for the learner as possible.
In my defense, most textbooks and scholarly discussions about errorless learning center on Terrace’s work, not Skinner’s. Terrace’s own references and credits to Skinner are skimpy. I’m just lucky I had a friend who could direct me to the right place. I published a second post on errorless learning with updated information and corrections. I left the first one published (with cautions for the reader and links to the second article) as an example of how easy it is to miss a research elephant in the room.
Who else has a personal “Oops” story? Did you get taken in by a popular article on a study that turned out to miss the point of the study? Did you go as far as I did and publish an article that didn’t cover the research well? (Not sure I can get any takers on this but it’s worth a try!)
My Learning Theory Go-To Resources
Here’s a list of the textbooks I use most often when researching a learning theory topic. Enjoy!
- Chance, P. (2013). Learning and behavior. Nelson Education.
- Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Pearson.
- Domjan, M. (2014). The principles of learning and behavior. Nelson Education.
- Domjan, M. (2000). The essentials of conditioning and learning. Wadsworth/Thomson Learning.
- Goodwin, C. J. (2016). Research in psychology: Methods and design. John Wiley & Sons.
- Honig, W. K. (1966). Operant behavior: Areas of research and application. Appleton-Century-Crofts.
- Keller, F. S., & Schoenfeld, W. N. (1950). Principles of psychology: A systematic text in the science of behavior (Vol. 2). Appleton-Century-Crofts.
- Klein, S. B. (2011). Learning: Principles and applications. Sage Publications.
- Schwartz, B. (1989). Psychology of learning and behavior. W. W. Norton & Co.
- Shettleworth, S. J. (2010). Cognition, evolution, and behavior. Oxford University Press.
Copyright 2017 Eileen Anderson