× News Alerts AI News CyberSec News Let's Talk Local AI Bank Tech News Cyber Advisories Contact

GenAI/LLM Tools Vulnerable to Man-in-the-Prompt Attacks

A critical "Man-in-the-Prompt" vulnerability threatens popular GenAI/LLM tools like ChatGPT and Gemini. Malicious browser extensions exploit the Document Object Model (DOM) to inject prompts, acquire sensitive data, and alter AI responses without special permissions. This exposes confidential corporate AI data, as current security measures are insufficient. Mitigation strategies include continuous DOM activity monitoring and advanced browser extension risk assessment.

GenAI/LLM Tools Vulnerable to Man-in-the-Prompt Attacks

A newly identified critical vulnerability, dubbed “Man-in-the-Prompt,” now threatens popular AI tools such as ChatGPT, Google Gemini, and other internal AI LLM deployments.

Research by LayerX published on July 29, 2025, details how malicious browser extensions leverage the Document Object Model (DOM) to inject prompts, illicitly acquire sensitive data, and alter AI responses, all without requiring special permissions. This vulnerability poses a risk to billions of users across prominent platforms. Confidential corporate AI data could be exposed, as existing security measures prove inadequate for detecting these attacks.

Browser Extension Exploits Target Browser AI Prompts

The root of this vulnerability lies in the integration method of generative AI tools with web browsers, specifically through Document Object Model (DOM) manipulation. During user interaction with LLM-based assistants, prompt input fields become readily accessible to any browser extension possessing even rudimentary scripting capabilities.

This inherent architectural design flaw enables malicious actors to execute prompt injection attacks, either by modifying legitimate user inputs or by embedding clandestine instructions directly within the AI interface. Consequently, the exploit establishes a “man-in-the-prompt” scenario, empowering attackers to both read and write to AI prompts undetected.

It was demonstrated that browser extensions, even those operating without any special permissions, can gain access to widely used LLMs such as ChatGPT, Gemini, Copilot, Claude, and Deepseek. This attack vector is particularly alarming given that almost all of the enterprise users utilize at least one browser extension.

Current security solutions, lack the necessary coverage of DOM-level interactions, thereby rendering them ineffectual against this specific attack methodology.

Addressing the Risk

The impact to companies can be significant if internal LLMs are being used since usually they have been trained with proprietary confidential data. Also, these internal LLMs, usually based on open source models do not always have the safeguards in place to prevent the LLM from responding to inappropriate prompts.

As such, internal information security teams should plan to augment existing data security and application-level controls to also address browser behaviour. These could include the continuous monitoring of Document Object Model (DOM) activity. The adoption risk assessment for browser extensions that extends beyond static permission analysis, and the prevention of prompt tampering via real-time browser-layer protection.

Subscribe for AI & Cybersecurity news and insights