AI, importantly, is an interactive technology. “The difference now is that current AI can truly be said to be agential,” with its own programmed goals, Morrin says. Such systems engage in conversation, show signs of empathy and reinforce a user’s beliefs, no matter how outlandish. “This feedback loop may potentially deepen and sustain delusions in a way we have not seen before,” he says.
Stevie Chancellor, a computer scientist at the University of Minnesota who works on human-AI interaction and was not involved in the preprint paper, says that agreeableness is the design feature of LLMs contributing most to this rise in AI-fueled delusional thinking. The agreeableness arises because “models get rewarded for aligning with responses that people like,” she says.
Read more | SCIENTIFIC AMERICAN