Floor tiles peel off the road and explode in the air like fireworks; the ground folds up against the buildings and flips 180 degrees, blotting out the sky; doors on different floors open onto bedrooms, or onto a beach… These are the absurd scenes of the movie “Inception”, yet the real world of dreams may be stranger still.
Where do these bizarre dreams come from? Scientists have long tried to “interpret” dreams with different hypotheses. Recently, one scientist “brainstormed” a new one: the brain produces these absurd dreams to pull us out of our everyday experience and avoid “overfitting” – in short, to keep us from “getting stuck in reality and going off the rails”.
This explanation is based on the dilemma of machine learning.
The Unexplained Dream
This new hypothesis comes from Erik Hoel, a neuroscientist at Tufts University, who recently published a paper in Patterns. The paper asks: since machine learning was inspired by the human brain, could the problems machine learning runs into, in turn, help explain the brain?
In the paper, Hoel highlights three properties of the dream experience.
Sparseness – dreams are usually less vivid than reality and lack detail. For example, it is often hard to read the words in a book in a dream, and in a dangerous dream we can never dial the three digits of the emergency number 110 correctly.
Hallucinatory quality – Dreams are often bizarre, with scenes that have never happened before and contain details that do not make sense.
Narrative property – Adult dreams are usually a series of events that can be woven into a story.
There are many theories of dreaming; among the more mainstream is the memory consolidation hypothesis: in dreams we revisit past memories to strengthen them or to integrate new memories into old ones. However, Hoel argues that this hypothesis cannot account for dreams that correspond to no memory at all.
Most dreams do not replay our experiences verbatim; rather, they relate to tasks we have trained on repeatedly – after a day of playing Tetris, for example, you are likely to dream about the game.
The “neural” network | Pixabay
This reminded Hoel of neural network models in machine learning, which also undergo repetitive training – and to keep a model from going off the rails during that training, researchers often have to build in safeguards against overfitting. Hoel argues that dreams may be more than a byproduct of the brain consolidating memories or regulating emotions; their very sparseness and weirdness may be the point, letting our brains step outside a single, repetitive daily experience and avoid overfitting.
Overfitting: “rote learning” from examples
In 1980, a team in the United States is said to have undertaken a machine learning project: training a neural network to identify tanks. They collected 200 photos of bushes, 100 of which had tanks hidden in them; the model was trained to extract patterns from these 200 sample photos and then to judge whether tanks were present in new ones.
After training, the model achieved 100% recognition accuracy. Eventually, however, the team discovered why the training had gone so well: the 100 photos with tanks had been taken on cloudy days, while the 100 without tanks had been taken on sunny days. The model hadn’t learned to identify tanks at all – it had simply learned to judge the weather from the light. It treated a coincidence as a general rule, and therefore lacked what is called generalization: show it a tank on a sunny day and it could no longer identify it.
Machine learning isn’t that “smart” either
While this story may be apocryphal, it is a vivid illustration of overfitting. Today’s common machine learning models are also prone to it – after all, neural networks are so powerful that they can pick up patterns invisible to most people, or mistake coincidence for law, and end up merely “memorizing”, able to handle only what matches the training sample. The key to avoiding overfitting is to find ways to let the model know which factors are accidental and can be ignored, and which patterns matter and should be learned carefully.
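For readers who want to see overfitting concretely, here is a minimal numpy sketch (the sine curve, sample sizes, and polynomial degrees are illustrative choices, not from Hoel’s paper). A model flexible enough to memorize every noisy training point fits the training data better – and the underlying law worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a simple underlying law: y = sin(x).
x_train = rng.uniform(0.0, 3.0, size=10)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=10)

# A large held-out test set from the same range, without noise.
x_test = np.linspace(0.0, 3.0, 200)
y_test = np.sin(x_test)

def fit_errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train_lo, test_lo = fit_errors(3)  # modest capacity: learns the trend
train_hi, test_hi = fit_errors(9)  # enough capacity to memorize all 10 points
```

The degree-9 fit, like the tank model, treats the noise in its 200-photo-sized world as law: its training error drops toward zero while its error on unseen points grows.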
One common approach is data augmentation: the same image is processed in various ways (flipped, randomly masked, contrast-adjusted, and so on) and the variants are fed to the model alongside the originals. Artificially adding random interference like this keeps the model from mistaking some chance phenomenon for a universal law.
For example, randomly changing brightness, contrast, and saturation – applied to the tank project above – would teach the model that lighting is not the point, alleviating the overfitting problem to some extent.
Randomly changing the brightness, contrast, and saturation of an image of a flower | TensorFlow
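In code, this kind of photometric augmentation is only a few lines. The sketch below is a bare-bones numpy stand-in for library transforms such as torchvision’s ColorJitter or tf.image.random_brightness; the jitter ranges are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(img, brightness=0.4, contrast=0.4):
    """Randomly perturb brightness and contrast of a float image in [0, 1]."""
    b = rng.uniform(1 - brightness, 1 + brightness)  # brightness factor
    c = rng.uniform(1 - contrast, 1 + contrast)      # contrast factor
    out = img * b                                    # scale overall intensity
    out = (out - out.mean()) * c + out.mean()        # stretch around the mean
    return np.clip(out, 0.0, 1.0)

photo = rng.random((32, 32, 3))   # a stand-in "photo"
variants = [jitter(photo) for _ in range(4)]
```

Each variant shows the same content under different lighting, so a model trained on all of them cannot use light level as a shortcut the way the tank classifier did.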
Take another example. Suppose we want to train a model to distinguish cats from dogs, but the cats in our collection all happen to sit in the top left of the frame, while the dogs sit in the bottom right. The model may learn “something in the top left” as “cat” and “something in the bottom right” as “dog”. To escape this trap, the model needs to learn that position is a matter of chance, not the point.
Flipping, rotating, scaling, and cropping the cat and dog images can resolve the “overfitting” caused by position. | kdnuggets.com
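Geometric augmentation works the same way. Here is a minimal numpy sketch of a random flip plus a random crop (the crop size is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_flip_crop(img, crop=28):
    """Flip horizontally with probability 0.5, then take a random crop.

    After this, "the cat is always in the top left" stops being a
    reliable shortcut: position no longer looks like a rule.
    """
    if rng.random() < 0.5:
        img = img[:, ::-1]                      # mirror left-right
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - crop + 1))    # random crop origin
    left = int(rng.integers(0, w - crop + 1))
    return img[top:top + crop, left:left + crop]

img = rng.random((32, 32, 3))
patch = random_flip_crop(img)
```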
Random masking is another data augmentation method.
Random masking | Reference
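A simplified sketch of random masking, in the spirit of the Random Erasing method of Zhong et al. (2020) cited below, is just as short (the published method also randomizes the patch’s aspect ratio and area, which this version omits):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_erase(img, area_frac=0.1):
    """Overwrite a random rectangle (~area_frac of the image) with noise."""
    out = img.copy()
    h, w = img.shape[:2]
    eh = max(1, int(round(h * np.sqrt(area_frac))))   # patch height
    ew = max(1, int(round(w * np.sqrt(area_frac))))   # patch width
    top = int(rng.integers(0, h - eh + 1))
    left = int(rng.integers(0, w - ew + 1))
    out[top:top + eh, left:left + ew] = rng.random((eh, ew) + img.shape[2:])
    return out

img = rng.random((32, 32, 3))
erased = random_erase(img)
```

Because any part of the object may be hidden on any given pass, the model is pushed to use the whole object rather than one accidental detail.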
To keep models from overfitting, researchers deliberately distort material drawn from everyday life… doesn’t that sound a bit like dreaming?
The human brain can also be “overfitted”
Neural network models in machine learning were originally inspired by the brain. Research published in 2020 suggests that a key learning mechanism in such models – backpropagation – may have a counterpart in the brain’s neural activity. Hence, Hoel argues, it may be possible for machine learning research to feed back into neuroscience.
There is a certain similarity between human experience in real life and the tasks machine learning handles. Our daily experiences are often repetitive and similar, and cannot cover every possible environment – the same problem machine learning faces: too few valid samples. Hoel argues that for our complex and powerful brains, too few samples may likewise lead to overfitting – we simply memorize the experiences rather than extract scalable, generalizable rules from them. For example, studies have shown that in a task called “texture discrimination”, subjects who are overtrained actually perform worse at test.
Can the brain be “overfitted”? | Pixabay
If the brain, too, can overfit, can the problem be solved by “artificially adding interference”, as in machine learning? Hoel believes dreams are exactly such interference. On this hypothesis, dreaming is a biological “noise injection” into everyday experience: bizarre dreams produce perceptual information that does not correspond to reality and deviates far from daily life, helping us avoid overfitting as we practice a task.
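The machine learning side of this analogy can be sketched directly. Continuing the polynomial example from earlier (again an illustrative toy, not Hoel’s experiment, with an arbitrary jitter scale standing in for “dream noise”), we inject random noise into the inputs before fitting a high-capacity model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A few noisy observations of y = sin(x), as before.
x = rng.uniform(0.0, 3.0, size=10)
y = np.sin(x) + rng.normal(scale=0.1, size=10)

# "Noise injection": replicate each sample many times with random
# jitter on the input before fitting.
reps = 20
x_noisy = np.repeat(x, reps) + rng.normal(scale=0.15, size=10 * reps)
y_noisy = np.repeat(y, reps)

plain = np.polyfit(x, y, 9)                 # memorizes the 10 points exactly
injected = np.polyfit(x_noisy, y_noisy, 9)  # same capacity, jittered inputs

x_test = np.linspace(0.3, 2.7, 200)         # evaluate inside the sampled range
mse = lambda coeffs: np.mean((np.polyval(coeffs, x_test) - np.sin(x_test)) ** 2)
```

Comparing `mse(plain)` and `mse(injected)` will generally show the jittered fit tracking the underlying curve more smoothly, though the size of the effect depends on the jitter scale – the same high-capacity model, fed deliberately corrupted experience, stops memorizing individual points.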
Hoel marshals a range of neuroscientific evidence for this, arguing that his theory explains these facts better than any existing alternative.
First, excessive, repetitive daytime training on a particular task is what most readily triggers dreams. After playing Tetris or a skiing simulator all day, for example, people dream of related scenes – but the dreams are not simple replays. On the hypothesis, this deviation reduces overfitting and improves later performance on the task.
Second, children’s dreams are almost always static and perceptual (“a bird is chirping”, “a puppy is standing there”), whereas adults’ dreams are usually narrative. Hoel argues that children’s perceptual systems have not yet solidified and so can still be reorganized with the help of perceptual dreams, while adults enhance performance on cognitive tasks through narrative dreams – which would also explain why adult dreams are narrative. Adults’ perceptual systems, by contrast, are already sufficiently trained to be free of overfitting, so sleep does not enhance adults’ perceptual abilities the way it does children’s.
Finally, previous studies have shown that sleep improves abstraction and reasoning on complex problems. On Hoel’s account, this is because dreaming enhances generalization, letting us generate new insight into complex problems and show more creativity on cognitive tasks. Here too, the overfitting hypothesis is more explanatory than the memory consolidation hypothesis.
Beyond neuroscience, Hoel also offers evidence from machine learning. Models called generative adversarial networks can generate new samples from existing ones, expanding the training material and improving performance across tasks – and the samples they generate can sometimes look distinctly dream-like. This connection between dreams and generative models lends the hypothesis further, if indirect, support.
Images generated by a generative adversarial network | arXiv:1903.05696
The evidence so far, however, comes from earlier empirical studies; whether the human brain really faces overfitting the way neural network models do remains to be verified. To that end, Hoel proposes a series of predictions – for example, that a reduction in dreaming time should weaken the ability to generalize more than the ability to memorize by rote. If such predictions survive experimental tests, the hypothesis gains credibility.
Hoel also suggests that fictional works of all kinds, novels and films included, can be seen as artificial dreams serving a similar function. Previous research has shown that reading or watching fiction improves our ability to understand others’ mental states – an extension and generalization “from understanding one’s own mind to understanding the minds of others”. In Hoel’s view, this broad “deviation from everyday experience” seems important for the further development of our cognitive functions, and deserves to be studied alongside dreaming.
References
Nielsen, T.A., and Stenstrom, P. (2005). What are the memory sources of dreaming? Nature 437, 1286–1289.
Lillicrap, T.P., Santoro, A., Marris, L., Akerman, C.J., and Hinton, G. (2020). Backpropagation and the brain. Nat. Rev. Neurosci., 1–12.
Stickgold, R., Malia, A., Maguire, D., Roddenberry, D., and O’Connor, M. (2000). Replaying the game: hypnagogic images in normals and amnesics. Science 290, 350–353.
Foulkes, D. (2009). Children’s Dreaming and the Development of Consciousness (Harvard University Press).
Kidd, D.C., and Castano, E. (2013). Reading literary fiction improves theory of mind. Science 342, 377–380.
Zhong, Z., et al. (2020). Random erasing data augmentation. Proceedings of the AAAI Conference on Artificial Intelligence 34(07).
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/sleeping-and-dreaming-is-preventing-you-from-getting-addicted-to-reality-and-going-off-the-rails/