
Seeking a sounding board? Beware the eager-to-please chatbot.

  • Mar 29
  • 1 min read


NEW YORK TIMES — The researchers found that nearly a dozen leading models were highly sycophantic, taking the users’ side in interpersonal conflicts 49 percent more often than humans did — even when the user described situations in which they broke the law, hurt someone or lied.


Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.


Read the full story  |  NEW YORK TIMES


© 2026 UnmissableAI
