Five years ago, the idea that a machine could be anyone’s confidant would have sounded outlandish, a science-fiction premise. These days, it’s a research topic.
In recent studies, people have been asked to interact with either a human or a chatbot and then to rate the experience. These experiments usually reveal a bias: when people know they're talking to a chatbot, they rate the interaction lower.
But in blind comparisons, A.I. often comes out ahead.
In one study, researchers took nearly two hundred exchanges from Reddit’s r/AskDocs, where verified doctors had answered people’s questions, and had ChatGPT respond to the same queries.
Health-care professionals, blind to the source, tended to prefer ChatGPT’s answers—and judged them to be more empathic. In fact, ChatGPT’s responses were rated “empathic” or “very empathic” about ten times as often as the doctors’.
Not everyone is impressed. Molly Crockett, a cognitive scientist I know, wrote in the Guardian that these man-versus-machine showdowns are “rigged against us humans”—they ask people to behave as if they were bots, performing emotionless, transactional tasks.
Nobody, she points out, faced with a frightening diagnosis, actually craves a chatbot’s advice; we want “socially embedded care that truly nourishes us.” She’s right, of course—often you need a person, and sometimes you just need a hug. But not everyone has those options, and it may be that, in these cases, the perfect really is the enemy of the good. “ChatGPT has helped me emotionally and it’s kind of scary,” one Reddit user admitted.
“Recently I was even crying after something happened, and I instinctively opened up ChatGPT because I had no one to talk to about it. I just needed validation and care and to feel understood, and ChatGPT was somehow able to explain what I felt when even I couldn’t.”