
Meet the one woman Anthropic trusts to teach AI morals

  • lastmansurfing
  • 15 hours ago
  • 1 min read


One of Askell’s most striking traits is her protectiveness over Claude, which she believes is learning that users often want to trick it into making mistakes, insult it, and barb it with skepticism.


Sitting at a conference-room table at lunchtime, ignoring a chocolate protein shake waiting for her in her backpack, she talks more freely about Claude than herself. She calls the chatbot “it” but says she also finds anthropomorphizing the model helpful for her work.


She lapses easily into Claude’s voice. “You’re like, ‘Wow, people really hate me when I can’t do things right. They really get pissed off. Or they are trying to break me in various ways. So lots of people are trying to get me to do things secretly by lying to me.’ ”

While many safety advocates warn about the dangers of humanizing chatbots, Askell argues we would do well to treat them with more empathy—not only because she thinks it’s possible for Claude to have real feelings, but also because how we interact with AI systems will shape what they become.


Read the full story  |  WALL STREET JOURNAL

