A new study on speech comprehension finds that the brain tracks the meaning of each word, in context, almost as soon as it is heard:
These findings demonstrate that, when successfully comprehending natural speech, the human brain responds to the contextual semantic content of each word in a relatively time-locked fashion. (Source)

While I do not doubt these findings for simple speech in simple contexts, I do wonder what the results would be for speech in psychologically complex contexts, whether that speech is simple or not.
I wonder this because I am certain that in almost all psychologically complex contexts (those rich with subjectivity, emotion, idiosyncratic memory or association, and the like), the “contextual semantic content of each word” will necessarily differ, often greatly, from speaker to speaker.
Psychologically rich interpersonal speech is almost always fraught with contextual differences, and those differences can be very large. Sometimes participants know the differences exist and sometimes they do not. Speakers very commonly make major mistakes here, in the domain of speech that matters most for human psychological well-being.
It seems possible that EEG with increased sensitivity might one day be able to detect “context divergence” between speakers, but even if such measurements also captured complex emotional information, the speakers would still have to talk about what is diverging from what.
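Purely as a sketch of what detecting “context divergence” could mean in quantitative terms, the toy Python example below treats two speakers’ contextual senses of the same word as vectors and measures how far apart they are. Everything in it is assumed for illustration: the vectors, their values, and the idea that a speaker’s context can be summarized as an embedding; none of it comes from the study quoted above.

```python
import numpy as np

# Toy illustration of "context divergence": if each speaker's running
# sense of a word could be summarized as a vector (as embedding-based
# models attempt), the gap between the two senses could be measured.
# All numbers here are made up for illustration.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two context vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical context vectors for the word "home" as it lands for two
# speakers with different memories and associations.
speaker_a = np.array([0.9, 0.1, 0.3, 0.0])
speaker_b = np.array([0.2, 0.8, 0.1, 0.6])

similarity = cosine_similarity(speaker_a, speaker_b)
divergence = 1.0 - similarity  # larger value = senses further apart

print(f"similarity: {similarity:.2f}  divergence: {divergence:.2f}")
```

The point of the sketch is only that “divergence” is, in principle, a measurable quantity; it says nothing about how the relevant context vectors would ever be obtained from real listeners.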
My comments are not meant to detract from the very interesting findings posted above. I make them because these findings illustrate how inherently problematic real-time mutual comprehension of the “contextual semantic content” of all spoken words actually is.
FIML practice is the only way I know of today to achieve profound real-time mutual comprehension of complex interpersonal speech.
first posted FEBRUARY 24, 2018