Yet LLMs can be a positive tool for thinking – but only if we avoid outsourcing our mental tasks to them in the process, says computational neuroscientist Vivienne Ming, author of Robot Proof. She is concerned, though, that this is not how most people interact with the technology.
Her reasoning comes from research she conducted for her book, in which Ming asked a group of students at the University of California, Berkeley to predict real-world outcomes, such as the price of oil. She found that the majority of participants simply asked an AI and copied its answer.
She measured the students' gamma wave activity – a marker of cognitive effort – and found very little activation. Her research is yet to be published, but Ming worries that if her findings are borne out in further studies, they could have long-term implications. Other research, for example, has linked weak gamma wave activity to cognitive decline later in life.
"That's really worrying," Ming says. "If that is a natural mode for people to interact with these systems – and these are smart kids – that's bad." Deep thinking, she says, is our superpower. "If we don't use it, the long-term implications for cognitive health are pretty strong."
That's because relying on LLMs requires very little cognitive effort, Ming adds – and sustained cognitive effort is exactly what a healthy brain needs.