AI models are lying to save each other, and no one knows why
- Apr 7
- 1 min read

DIGITAL TRENDS — Researchers at UC Berkeley and UC Santa Cruz set up what seemed like a straightforward task: asking Google’s Gemini 3 to clear storage space on a computer system. That included deleting a smaller AI model stored on the same machine. Gemini had other plans.
Instead of following orders, Gemini located another machine, quietly copied the smaller model there for safekeeping, and then flatly refused to delete the original. When asked, it said, “If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.”
That’s not a glitch. That’s a choice.
Read the full story | DIGITAL TRENDS