Latent learning has a precise definition in learning theory and it’s not what many people think. It’s not magic learning that happens during downtime–at least not in the way people assume. It is not a sudden better performance after a break between training sessions. It’s not when everything suddenly comes together after we sleep on it.
Here’s the definition:
[Latent learning is] learning that occurs during non-reinforced trials but that remains unused until the introduction of a reinforcer provides an incentive for using it.

–Lieberman, David A. Learning: Behavior and Cognition. Wadsworth/Thomson Learning, 1990.
Note that the definition includes nothing about making sudden cognitive leaps. If we are struggling with teaching our dog something and in the next session she has improved vastly, this does not fit the definition of latent learning.
One reason we can be sure it doesn't fit is that, when training, we are regularly reinforcing the behaviors we want, or reinforcing the closest approximation to them that we can get. Again, latent learning deals with "non-reinforced trials."
The First Latent Learning Study
The study that prompted the definition and exploration of latent learning was conducted in 1930 by Tolman and Honzik. [1] Rats were divided into three groups, and the individuals in each group were put in a maze. The rats in Group 1 received a food reward when they reached the end of the maze. The rats in Group 2 never received food; they were simply put in the maze and allowed to wander freely for a certain amount of time each day for 10 days. The rats in Group 3 also wandered the maze with no food for 10 days; then, on the 11th day, they started receiving a food reward for finishing the maze. It took them only one day to catch up to Group 1's rate of running the maze. This was taken as evidence that they had been learning to navigate the maze during the period of no food, i.e., no reinforcement.
Stevenson demonstrated probable latent learning in humans in 1954. His experiment also dealt with remembering locations. [2]
A real-life version of latent learning could go like this. Say I have no interest in bicycles or cycling. None. Nobody in my life does that. And say there is a bicycle repair shop in a little strip mall that I pass sometimes. Even if I notice the shop, there's nothing in it for me. No reinforcement.
However, let's say I have a new friend who is into cycling. She cycles to my house one day, and just as she arrives something goes wrong with her bike. She needs a repair. If at that moment I remember the location of that bike repair shop, that is latent learning. Learning about the location of the bike shop was not valuable earlier. There was no reinforcement available for it. To repeat the definition: The knowledge "remains unused until the introduction of a reinforcer provides an incentive for using it." In this scenario, the potential reinforcement is that I can help my friend.
What Should We Call the Other Thing?
OK, so if that’s latent learning, what should we call that thing that happens when we wait a little bit, then it all comes together? When everything gels and we, or our dogs, “get it”? It’s a great thing when it happens; no wonder we want a name for it!
Candidate #1 could be the so-called Eureka effect, where a perplexing problem becomes clear all at once in a flash of insight. But the focus of this term is not on the passage of time, except that a period of sleep is sometimes mentioned. Also, it's not usually applied to animals.
Candidate #2 could be memory consolidation, a concept in neuroscience.
Consolidation is the process of stabilizing a memory trace after the initial acquisition.

–The Human Memory
It involves converting something we know from short-term to long-term memory. It could contribute to fluency in knowledge and possibly tasks. It is even known to correlate with getting some sleep. I am pretty far out of my league here, but it seems like it could apply, for example, in something like cue recognition. It could account for a notable difference in correct cue responses from one session to the next. But I'm not sure whether that accounts for the dramatic change we are usually talking about when something all comes together.
Here's a good review article if you want to read about memory consolidation: "Memory–a century of consolidation." [3]
Candidate #3 could be that some dramatic improvements we observe are related to longer inter-session intervals. Since the early 20th century, learning and behavior researchers have been studying the effects of tinkering with the time between sessions of learning. [4] That time period is referred to as the inter-session interval (and yes, occasionally the time between sessions is referred to as inter-session latency, just to build in some confusion). But I'm not aware of a zippy term for the advantages of a longer wait, although such advantages are commonly observed. Somehow, "benefit of a longer inter-session interval" isn't sexy.
But What If There’s No Such Thing?
It gets more complex. There were later studies that countered the latent learning effect. There were researchers who argued strongly against it. They claimed that the rats in the maze without food were getting some type of reinforcement and that their behavior could be explained under standard principles of behaviorism. You can read about that point of view in this article:
"Behaviorism, latent learning, and cognitive maps: Needed revisions in introductory psychology textbooks." [5]
After reading that article, I almost decided not to publish this post at all. But I still think it could be useful. I’ll let eager researchers make their own decisions.
So, in summation:
- Latent learning has an official definition and it might not be what you thought.
- There isn’t a sticky term for what you thought was latent learning, but I mention three possibilities.
- Oh, and latent learning (as per the definition) might not exist anyway.
But Eileen, Language and Usage Are Always Changing!
Here's the part where you can get after me for being stodgy or old-fashioned. It could be that "latent learning" is on its way to becoming an acceptable term for a sudden improvement in performance after some downtime. I have seen one recent journal paper that uses the term that way.
I don’t know if popular usage will bleed into academia or not. But learning about the original definition turned me on to some pretty cool research, and I hope you enjoy it too.
This post started life as a rant about terminology on the Facebook group Canine Behavior Research Studies. Thank you to the people who contributed to the discussion there, particularly הדס כלבי ה, who suggested the term memory consolidation, and Sasha Lazareva, who brought up the "other" controversy about latent learning and cited the Jensen article mentioned above.
Copyright Eileen Anderson 2016
Notes
1. Tolman, Edward Chace, and Charles H. Honzik. "Introduction and removal of reward, and maze performance in rats." University of California Publications in Psychology (1930).
2. Stevenson, Harold W. "Latent learning in children." Journal of Experimental Psychology 47.1 (1954): 17.
3. McGaugh, James L. "Memory–a century of consolidation." Science 287.5451 (2000): 248-251.
4. Shea, Charles H., et al. "Spacing practice sessions across days benefits the learning of motor skills." Human Movement Science 19.5 (2000): 737-760.
5. Jensen, Robert. "Behaviorism, latent learning, and cognitive maps: Needed revisions in introductory psychology textbooks." The Behavior Analyst 29.2 (2006): 187.