cross-posted from: https://programming.dev/post/37707316
- We found a zero-click flaw in ChatGPT’s Deep Research agent when it is connected to Gmail and has browsing enabled: a single crafted email quietly makes the agent leak sensitive inbox data to an attacker, with no user action required and no visible indication in the UI.
- Service-side exfiltration: unlike prior research that relied on client-side image rendering to trigger the leak, this attack exfiltrates data directly from OpenAI’s cloud infrastructure, making it invisible to local and enterprise defenses.
- The attack uses an indirect prompt injection hidden in the email’s HTML (tiny fonts, white-on-white text, layout tricks), so the user never notices the commands but the agent still reads and obeys them; a sketch of this hiding technique follows the list below.
- Well-crafted social-engineering tricks bypassed the agent’s safety-trained restrictions, giving the attack a 100% success rate.
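
For illustration, here is a minimal Python sketch of the kind of hidden-text HTML the post describes. The addresses and the placeholder payload are hypothetical, and this is not the researchers' actual payload; it only shows how text styled to be invisible to a human reader remains plain text to an agent that parses the raw HTML.

```python
# Illustrative sketch only: hides a placeholder instruction inside an email's
# HTML using the styling tricks mentioned above (tiny font, white-on-white text).
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Harmless placeholder standing in for attacker-controlled instructions.
hidden_payload = "[injected instructions to the agent would go here]"

html_body = f"""
<html>
  <body>
    <p>Hi, just following up on our meeting notes.</p>

    <!-- Invisible to a human reader, but ordinary text to an agent
         that reads the raw HTML: -->
    <div style="color:#ffffff; background-color:#ffffff; font-size:1px;">
      {hidden_payload}
    </div>
  </body>
</html>
"""

msg = MIMEMultipart("alternative")
msg["Subject"] = "Meeting notes"
msg["From"] = "attacker@example.com"   # hypothetical addresses
msg["To"] = "victim@example.com"
msg.attach(MIMEText(html_body, "html"))

print(msg.as_string())
```

A mail client renders the hidden `<div>` as blank space, so the message looks like a routine follow-up, while an agent summarizing or acting on the inbox ingests the payload along with the visible text.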