Timothée Proix and coauthors published an article entitled "Imagined speech can be decoded from low- and cross-frequency intracranial EEG features" in Nature Communications, 2022. As stated in the previous blog post, I include this article because it addresses imagined speech for use in brain-computer interfaces (BCIs). The authors state that "decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, hence difficult to decode by learning algorithms. . . . [The authors have found that] low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding."
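To make the paper's feature vocabulary concrete, here is a minimal Python sketch (not the authors' code) of the two kinds of features they highlight: low-frequency band power and cross-frequency (phase-amplitude) coupling. The sampling rate, band edges, and the mean-vector-length coupling measure are my own illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

FS = 512  # assumed intracranial sampling rate (Hz)

def band_power(x, lo, hi, fs=FS):
    """Mean Welch-PSD power of one channel within [lo, hi] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_amplitude_coupling(x, phase_band=(4, 8), amp_band=(70, 150), fs=FS):
    """Mean-vector-length estimate of coupling between low-frequency
    phase and high-frequency amplitude (one common PAC measure)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs=fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs=fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()

# Example on 2 s of simulated single-channel data
rng = np.random.default_rng(0)
x = rng.standard_normal(FS * 2)
print(band_power(x, 1, 4), band_power(x, 4, 8), phase_amplitude_coupling(x))
```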
The authors found that "the most effective approach so far to advance toward a real 'imagined speech' decoding system is based on electrocorticographic signals [as discussed in my previously posted blog]. . . . imagined speech is essentially an attenuated version of overt speech with a well-specified articulatory plan (much like imagined and actual finger movements share a similar spatial organization of neural activity). . . . neural activity at lower frequencies could be used to decode imagined speech with equivalent or even higher performance than overt speech."
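Continuing the sketch above, features of this kind could feed a simple cross-validated classifier, in the spirit of (though far simpler than) the decoding analyses in the paper. The trial counts, labels, and choice of linear discriminant analysis below are assumptions for illustration only, run here on random data just to show the pipeline.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def trial_features(trial, fs=FS):
    """Concatenate low-frequency powers and PAC across channels of one
    (n_channels, n_samples) trial into a single feature vector."""
    return np.concatenate([
        [band_power(ch, 1, 4, fs), band_power(ch, 4, 8, fs),
         phase_amplitude_coupling(ch, fs=fs)]
        for ch in trial
    ])

# Fake dataset: 40 trials, 8 channels, 2 s each, two imagined-word classes
rng = np.random.default_rng(1)
trials = rng.standard_normal((40, 8, FS * 2))
labels = np.repeat([0, 1], 20)

X = np.array([trial_features(t) for t in trials])
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

On real recordings, the interesting comparison would be exactly the one the authors draw: how much of the decodable information survives when only the low-frequency and cross-frequency features are kept.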
It is exciting that BCI technology is advancing to the point where patients with significant loss of speech capacity can once again communicate using imagined speech. We are learning that the immaterial cognitive mind has the capacity to generate waveforms that act on synaptic networks to produce electrocorticographic signals, which are transmitted to receptor devices and decoded algorithmically to enable a patient's desired activities. This is dualist interaction in action!