On a gray Sunday morning in March, I told an AI chatbot my life story.
Introducing herself as Isabella, she spoke with a friendly female voice that would have been well-suited to a human therapist, were it not for its distinctly mechanical cadence. Aside from that, there wasn’t anything humanlike about her; she appeared on my computer screen as a small virtual avatar, like a character from a 1990s video game. For nearly two hours Isabella collected my thoughts on everything from vaccines to emotional coping strategies to policing in the U.S.
When the interview was over, a large language model (LLM) processed my responses to create a new artificial intelligence system designed to mimic my behaviors and beliefs—a kind of digital clone of my personality.
Meeting my generative agent a week after my interview with Isabella felt like looking at myself in a funhouse mirror: I knew I was seeing my own reflection, but the image was warped and twisted.
The first thing I noticed was that the agent—let’s say “he”—didn’t speak like me. I was on a video call with Park, and the two of us were taking turns asking him questions. Unlike Isabella, he didn’t come with his own avatar; he just appeared as faceless lines of green text spilling across my screen. We were testing his ability to make informed guesses about my life, filling in information I hadn’t directly provided to Isabella. The results were somewhat disappointing.