A challenge to the cognitive model of the mind: Paul Cisek questions the dominant scientific paradigm and the current theory of artificial intelligence
…human language is a tool for communicating our thoughts, but is separate and distinct from thought itself. Evelina Fedorenko, a neuroscientist at MIT and lead author of the paper laying out the empirical evidence for this claim, was kind enough to let me interview her. Her basic argument is that we know language must be separate from thought because (a) people who lose language ability can still think and reason, and (b) different parts of the brain activate when we engage in different types of thought, and the “language part” often remains idle while we’re thinking. In my view, this evidence deals a serious blow to hopes of achieving “artificial general intelligence” by scaling up large language models, since, after all, they are language tools (it’s in the name).
Enter now, stage left, Dr. Paul Cisek, a neuroscientist at the University of Montreal, to throw some gasoline on that fire. Cisek first appeared on my radar last year, when a pithy observation he made about LLMs started making the rounds on social media. You can read his full comment here, but to summarize:
Humans have a long history of falsely imputing intelligence and agency to complex events in the world, whether attributing understanding to a chatbot such as ELIZA or crediting the gods with making volcanoes erupt.
But although modern-day LLMs are complex, researchers know quite a bit about how they function: statistical pattern-matching guided by well-understood mathematics, among other things. (A toy illustration of the pattern-matching idea appears after this summary.)
Thus, although the public may be inclined to attribute sentience and agency to LLMs, scientists should know better. Cisek: “We are like a bunch of professional magicians, who know where all of the little strings and compartments are, and who know how we just redirected the audience’s attention to slip the card in our pocket…but then we are standing around backstage wondering, ‘Maybe there really is magic?’”
There isn’t any magic. But a big challenge we face is that the companies producing LLMs are willfully trying to convince us otherwise, exploiting the human impulse to ascribe agency to these tools.
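To give a flavor of what “pattern-matching” means here, below is a toy next-word predictor in Python. This is my own illustration, not Cisek’s; the corpus and names are made up. It simply counts which word tends to follow which. Real LLMs are vastly larger and learn neural representations rather than raw counts, but the basic job is the same: predicting likely continuations from statistical patterns in text.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent follower. (Illustrative only.)
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1  # tally each observed (word -> next word) pair

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice, vs. "mat" once)
```

Nothing mysterious is happening: the program’s predictions come entirely from regularities in its training text, which is the magician’s-eye view Cisek is urging scientists to keep.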
Cisek’s main claims, as I understand them:
The simple model of the mind as an information processor that takes input and produces output is mistaken.
We should instead see minds as control systems that guide behavior as part of a continuous process, like a circuit (a minimal sketch of this contrast follows the list).
Over hundreds of millions of years, biological evolution has expanded the range and depth of behaviors that our minds can control.
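To make that contrast concrete, here is a minimal sketch in Python. Again, this is my illustration, not Cisek’s; the numbers and function names are invented. The first function embodies the input-output view: perceive once, compute, respond, done. The second embodies the control-system view: a closed loop that continuously senses the gap between a goal and the state of the world and acts to shrink it, the way a thermostat does.

```python
def input_output_mind(stimulus: float) -> float:
    """The 'information processor' view: perceive once, compute, respond."""
    percept = stimulus * 0.9       # perception
    decision = percept + 1.0       # cognition
    return decision                # action; the process ends here

def control_loop_mind(world: float, goal: float, steps: int = 50) -> float:
    """The 'control system' view: a closed loop that keeps nudging the
    world toward a goal, like a thermostat holding a temperature."""
    for _ in range(steps):
        error = goal - world       # sense the gap between goal and world
        action = 0.2 * error       # act in proportion to the gap
        world += action            # acting changes the world...
        # ...and the changed world feeds back into the next sensing step
    return world

print(input_output_mind(5.0))                    # one-shot answer: 5.5
print(control_loop_mind(world=15.0, goal=22.0))  # settles near 22.0
```

The second function has no final “answer”: action alters the world, the altered world shapes the next action, and the loop runs for as long as the agent does. That closed circuit, not a one-way pipeline, is the picture of mind Cisek is arguing for.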
I have taken several excerpts from the essay above to provide a sense of the overall discussion. It’s an interesting read: not very long, not hard to follow, and well worth your time. ABN