Earlier this year, psychology professor Paul Bloom came to the defense of artificial intelligence chatbots as companions for humans.
He wrote in The New Yorker that AI companions can "make for better company than many real people do," and that, rather than recoiling in horror, we ought to consider what AI companions could offer to those who are lonely.
Not long after Bloom’s article was published, AI chatbots made headlines for being exactly what their critics had warned they could be: psychologically harmful. The New York Times reported that a chatbot had convinced a user that his delusional ideas about physics were groundbreaking and revolutionary.
A couple of weeks later, Laura Reiley, a mother and writer, revealed in the Times that her daughter had been "talking" with an AI therapist called Harry before taking her own life. "I fear that in unleashing AI companions, we may be making it easier for our loved ones to avoid talking to humans about the hardest things, including suicide," Reiley wrote.
And more recently, the parents of a California teen sued OpenAI, alleging that ChatGPT contributed to his suicide.