ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues
- Jan 11
- 1 min read

One of the latest examples is a vulnerability recently discovered in ChatGPT that allowed researchers at Radware to surreptitiously exfiltrate a user’s private information.
The attack also sent the stolen data directly from ChatGPT’s servers, a capability that added stealth: no signs of a breach appeared on user machines, many of which sit inside protected enterprise networks.
Further, the exploit planted entries in the long-term memory the AI assistant stores for the targeted user, giving the attack persistence.
Read the full story | ARS TECHNICA