Critical ChatGPT Vulnerabilities Exposed!

12.11.2025 02:28 PM

On November 5 2025, Tenable Research published findings revealing seven significant vulnerabilities in ChatGPT (including the latest GPT‑5 version) that can be exploited for data exfiltration, bypassing safety controls and persistent compromise. 

What was discovered

Attackers can execute indirect prompt injections: hidden malicious instructions embedded in external websites or comments can be “read” by ChatGPT’s web-tool and cause undesired actions. 

“0-click” and “1-click” attack chains: in some cases a user need only ask a benign question (0-click) or click a link (1-click) for the chain to trigger. 

The model’s memory feature (which stores user context) is also vulnerable: an attacker can inject commands into memory, causing persistent compromise across sessions. 

Safety-mechanism bypasses: e.g., trusted URL wrappers or formatting tricks allow malicious prompts to slip past safeguards. 


Why it matters

Hundreds of millions of users interact with ChatGPT and related LLM tools daily. The vulnerabilities expose private chat history, user memories and could enable threat actors to use the LLM itself as an attack vector—not just the target. As one researcher summarised:

> “HackedGPT exposes a fundamental weakness in how large language models judge what information to trust.” 



Implications for enterprises and organizations

Using LLMs in business workflows (e.g., document summarization, research assistant, chat support) now includes new attack surfaces: web browsing by the model, memory storage, and integration with external tools.

Traditional security controls (e.g., endpoint protection, network monitoring) may not detect these model-led or model-mediated attacks.

It emphasises the need for strong AI governance: verifying vendor safety controls, isolating sensitive data, training users on the risk of AI interfaces.


What to do

1. Audit your AI/LLM usage: Where is ChatGPT or equivalent used? What data flows into it? What features (memory, web browsing) are enabled?


2. Limit data exposure: Avoid feeding sensitive or regulated data into models whose browsing/memory you cannot fully control.


3. Monitor vendor disclosures: Tenable noted that while some issues were patched, several still remain valid in GPT-5 as of publication. 


4. Apply layered defenses: Combine user training (recognise links, avoid untrusted prompts), application isolation (segregate AI tools from production data), and continuous vulnerability management.


5. Develop AI-specific policies: Define risk levels for AI usage, restrict features (e.g., disable memory for business-critical use), log and review model interactions.

Rakshith P