Thinking limits

In Mind Matters (February 21, 2025), there is a reference to an article at Quanta citing Anil Ananthaswamy, who states "that chatbot developers are beginning to face up to the fundamental limitations of their products." Their objective is to create technology that can "do anything that any human can do." Some in the field maintain that artificial general intelligence (AGI) "is already here." However, the article goes on to relate that "plain-vanilla [chatbots] will not lead to AGI because they do not understand the text they input and output or how this text relates to the real world. They consequently cannot distinguish between fact and fiction or between correlation and causation - let alone engage in critical thinking."

At the end of the article, an obstacle is identified that is "seldom discussed: most consequential real-world decisions involve uncertainty. [Chatbots] can't help when key decisions don't feature objectively correct probabilities but rather subjective probabilities that need interpretation."

The point made in this article is consistent with the concept of dualist interaction between the immaterial mind of humans and the material components of the synaptic networks of the brain. The cognitive human mind can deal with subjective probabilities and interpret them. Specified information (which is itself probabilistic), transmitted within neural codes, requires interpretation that is learned over a lifetime and archived in memory. Chatbots do not have this capability. I suggest that this is an article that should give one pause.

Stan Lennard