
A new study by a Finnish research group has shown that speech can affect the development of neural networks before birth and have a positive effect on language acquisition. The study, published in the journal Proceedings of the National Academy of Sciences, shows that auditory stimuli during pregnancy can significantly improve an infant's ability to discriminate changes in speech sounds and could potentially compensate for genetic conditions such as dyslexia and language impairment.
Learning, the process of building knowledge or skills through experience or study, is the result of the continuous development of brain neurons and their interactions with each other. Several external stimuli can affect the learning process, which starts even before birth, while humans are still fetuses, possibly as early as week 29 of gestation. Newborns’ cry melody, for example, is influenced by the native language of the parents. Scientists know that, from the moment we are born, we have an innate preference for listening to speech rather than complex non-speech analogues. However, how newborns detect changes in auditory stimuli, and how the neuronal networks in the brain are affected, is still a mystery.
To investigate this, a group of researchers in Finland, led by Professor Minna Huotilainen, played the sound of a trisyllabic word (tatata) in three different ways to the fetuses of Finnish women from week 29 of pregnancy until birth. First, the word “tatata”, which does not mean anything in Finnish, was played as is, without any change; second, with a vowel change (tatota); and third, with a pitch change that slightly altered the pronunciation of the middle syllable (tatata). The selection of these modifications was not random: in Finnish, pitch changes are rare, whereas vowel changes are frequent. The research team could therefore evaluate how well the infants had been trained to discriminate these language changes.
After birth, the team compared newborns who had heard these sounds while in the womb (learning group) with newborns who had not (control group). They recorded neural responses with electroencephalography (EEG) while the newborns heard the trisyllabic word and its variations, as well as other “words” that were new to both groups.
The results were astonishing. While all babies could discriminate vowel changes, only newborns in the learning group were trained well enough to distinguish pitch changes. This advantage was also clearly evident for other words not included in the learning material.
“Once we learn a sound, if it’s repeated to us often enough, we form a memory of it, which is activated when we hear the sound again,” said Eino Partanen, first author of the study.
This is possibly attributable to neural memory traces generated during the gestational period, and it suggests that auditory experiences during the fetal period induce brain processes important for speech perception. What are the practical implications? Hearing a large variety of speech before birth may help develop a neural network that leads to improved and more accurate speech perception and language acquisition later in life. But we have to be cautious: the babies in these studies were less than 27 days old, and we are not sure whether they will retain this ability later in life. The study also raises awareness that sound stimuli perceived as noise by the fetus might have a severely negative effect on speech perception and learning, as has been shown earlier in mice.