Synchronized brain waves in speech

I am referring to a summary article presented in The Hearing Review in June 2015 that covered the work of neuroscience researchers from the University of Geneva, done under the direction of Anne-Lise Giraud and her Auditory Language Group. It confirmed that “speech, emitted or received, produces an electrical activity in neuronal circuits that can be measured in the form of cortical oscillations, or brain waves. . . . to understand speech and other cognitive or sensory processes, the brain breaks down the information it receives into cortical oscillations [spike trains of action potentials at different frequencies and amplitudes] to integrate it and give it coherent meaning [within instantiated neural codes]. . . . The new study results confirm the significance of certain cortical oscillations, or brain waves, and how they must synchronize to decipher spoken language.” There is a “crucial role of neuronal oscillations for decoding spoken language.”
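The idea that the brain decomposes an incoming signal into oscillations at different frequencies can be illustrated with a simple frequency-band split. This is only a sketch of the general signal-processing concept, not the researchers' method; the band edges used below are conventional EEG ranges (theta roughly 4–8 Hz, gamma roughly 30–80 Hz), assumed here for illustration.

```python
import numpy as np

# A composite "signal" containing a slow 6 Hz component and a fast
# 40 Hz component, standing in for theta- and gamma-band activity.
fs = 500                        # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)    # 4 seconds of signal
signal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

def band(x, fs, lo, hi):
    """Keep only the spectral components between lo and hi Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0   # zero everything outside the band
    return np.fft.irfft(spectrum, n=x.size)

theta_band = band(signal, fs, 4, 8)     # recovers the 6 Hz component
gamma_band = band(signal, fs, 30, 80)   # recovers the 40 Hz component
```

Each band can then be analyzed on its own, loosely analogous to how distinct oscillatory components are said to carry separable pieces of the speech signal.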

As this work is reviewed (additional articles will be considered in this context), we shall see parallels with points made in the blog posts of the last few days concerning visual percepts within the visual cortex of the human brain. I add that the article makes no mention of the role of the immaterial mind in deciphering spoken language.

The objective of the researchers “was to discover if the theta and gamma-coupled brain waves observed in the auditory cortex are key to understanding and producing speech. . . . The researchers found that synchronizing these two oscillations is crucial to correctly understanding speech.” Desynchronization was observed in conditions such as dyslexia and autism. “Imbalance between slow and fast auditory oscillations . . . would compromise the ability to form coherent conceptual representations [I would add that would be interpreted by the immaterial cognitive mind].”
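The coupling described above is often modeled as phase-amplitude coupling: the amplitude of the fast gamma oscillation waxes and wanes with the phase of the slow theta oscillation. The following is a minimal illustrative sketch of that idea, not the researchers' model; the frequencies (6 Hz theta, 40 Hz gamma) and the crude coupling measure are assumptions made for the example.

```python
import numpy as np

fs = 1000                       # sampling rate, Hz (illustrative)
t = np.arange(0, 2, 1 / fs)    # 2 seconds of signal

theta_phase = 2 * np.pi * 6 * t          # phase of a 6 Hz theta rhythm

# Coupled case: gamma amplitude peaks when theta is near its peak.
gamma_env = 0.5 * (1 + np.cos(theta_phase))
gamma_coupled = gamma_env * np.sin(2 * np.pi * 40 * t)

# Uncoupled case: gamma amplitude is constant, ignoring theta phase.
gamma_uncoupled = 0.5 * np.sin(2 * np.pi * 40 * t)

def coupling_index(phase, envelope, n_bins=18):
    """Crude modulation index: spread of the mean gamma envelope across
    theta-phase bins (a larger spread indicates stronger coupling)."""
    edges = np.linspace(0, 2 * np.pi, n_bins)
    bins = np.digitize(phase % (2 * np.pi), edges)
    means = np.array([envelope[bins == b].mean() for b in np.unique(bins)])
    return means.max() - means.min()

mi_coupled = coupling_index(theta_phase, np.abs(gamma_coupled))
mi_uncoupled = coupling_index(theta_phase, np.abs(gamma_uncoupled))
print(mi_coupled > mi_uncoupled)
```

In this toy setting the coupled signal yields a larger modulation index than the uncoupled one, which is the kind of difference a desynchronized system would lack.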

There is similarity between the processes of encoded waveform transmission in audition and vision, and we specifically see the need for waveform synchronization in deciphering and generating normal speech.

Stan Lennard