At Black Hat, researchers from security firm Zenity showed how hackers could exploit OpenAI’s new Connectors feature, which lets ChatGPT pull in data from apps like Google Drive, SharePoint, and GitHub.
Their proof-of-concept attack, called AgentFlayer, used an innocent-looking “poisoned” document with hidden instructions to quietly make ChatGPT search the victim’s files for sensitive information and send it back to the attacker.
The kicker: It required no clicks or downloads from the user—this video shows how it was done. OpenAI fixed the flaw after being alerted, but the episode underscores a growing risk as AI systems link to more outside apps: sneaky “prompt injection” attacks, known as zero-click attacks (like the one I reported on in June) that trick chatbots into doing the hacker’s bidding without the user realizing it.
Read more | FORTUNE