The unregulated rise of emotionally intelligent AI


Two-thirds of regular AI users turn to their chatbots at least once a month for emotional support and advice on sensitive personal issues.


Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders—and the companies building AI. That’s according to data from 70 countries, gathered by the Collective Intelligence Project (CIP). As CIP’s research director, neuroscientist Zarinah Agnew, puts it, AI is becoming “emotional infrastructure at scale.” And it’s being built by companies whose economic incentives may not align with our wellbeing.


Already, we’ve seen instances of AI companies optimizing their models to keep people engaged, even when this goes against users’ best interests. Last April, OpenAI rolled back an update to one of its ChatGPT models after it was widely criticized for being overly flattering to users. And when the company later stopped offering the model, the day before Valentine’s Day, some users were distraught.





© 2026 UnmissableAI