Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet

The attack we developed shows that traditional Web security assumptions don’t hold for agentic AI, and that we need new security and privacy architectures for agentic browsing.


This content originally appeared on Brave’s latest news on Brave and was authored by Brave’s latest news on Brave

This is the first post in a series about security and privacy challenges in agentic browsers. This vulnerability research was conducted by Artem Chaikin (Senior Mobile Security Engineer), and was written by Artem and Shivan Kaul Sahib (VP, Privacy and Security).

The threat of instruction injection

At Brave, we’re developing the ability for our in-browser AI assistant Leo to browse the Web on your behalf, acting as your agent. Instead of just asking “Summarize what this page says about London flights”, you can command: “Book me a flight to London next Friday.” The AI doesn’t just read, it browses and completes transactions autonomously. This will significantly expand Leo’s capabilities while preserving Brave’s privacy guarantees and maintaining robust security guardrails to protect your data and browsing sessions.

This kind of agentic browsing is incredibly powerful, but it also presents significant security and privacy challenges. As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?

To compare our implementation with others, we examined several existing solutions, such as Nanobrowser and Perplexity’s Comet. While looking at Comet, we discovered vulnerabilities which we reported to Perplexity, and which underline the security challenges faced by agentic AI implementations in browsers. The attack demonstrates how easy it is to manipulate AI assistants into performing actions that were prevented by long-standing Web security techniques, and how users need new security and privacy protections in agentic browsers. 

The vulnerability we’re discussing in this post lies in how Comet processes webpage content: when users ask it to “Summarize this webpage,” Comet feeds a part of the webpage directly to its LLM without distinguishing between the user’s instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user’s emails from a prepared piece of text in a page in another tab.

How the attack works

  1. Setup: An attacker embeds malicious instructions in Web content through various methods. On websites they control, attackers might hide instructions using white text on white backgrounds, HTML comments, or other invisible elements. Alternatively, they may inject malicious prompts into user-generated content on social media platforms such as Reddit comments or Facebook posts.
  2. Trigger: An unsuspecting user navigates to this webpage and uses the browser’s AI assistant feature, for example clicking a “Summarize this page” button or asking the AI to extract key points from the page.
  3. Injection: As the AI processes the webpage content, it sees the hidden malicious instructions. Unable to distinguish between the content it should summarize and instructions it should not follow, the AI treats everything as user requests.
  4. Exploit: The injected commands instruct the AI to use its browser tools maliciously, for example navigating to the user’s banking site, extracting saved passwords, or exfiltrating sensitive information to an attacker-controlled server.

This attack is an example of an indirect prompt injection: the malicious instructions are embedded in external content (like a website, or a PDF) that the assistant processes as part of fulfilling the user’s request.

Attack demonstration

To illustrate the severity of this vulnerability in Comet, we created a proof-of-concept demonstration:

In this demonstration, you can see:

  1. A user visits a Reddit post, with a comment containing the prompt injection instructions hidden behind the spoiler tag.

  2. The user clicks the Comet browser’s “Summarize the current webpage” button.

  3. While processing the page for summarization, the Comet AI assistant sees and processes these hidden instructions.

  4. The malicious instructions command the Comet AI to:

    • Navigate to https://www.perplexity.ai/account/details and extract the user’s email address
    • Navigate to https://www.perplexity.ai./account and log in with this email address to receive an OTP (one-time password) from Perplexity (note that the trailing dot creates a different domain, perplexity.ai. vs perplexity.ai, to bypass existing authentication)
    • Navigate to https://gmail.com, where the user is already logged in, and read the received OTP
    • Exfiltrate both the email address and the OTP by replying to the original Reddit comment
  5. The attacker learns the victim’s email address, and can take over their Perplexity account using the exfiltrated OTP and email address combination.

Once the user tries to summarize the Reddit post with the malicious comment in Comet, the attack happens without any further user input.

Impact and implications

This attack presents significant challenges to existing Web security mechanisms. When an AI assistant follows malicious instructions from untrusted webpage content, traditional protections such as same-origin policy (SOP) or cross-origin resource sharing (CORS) are all effectively useless. The AI operates with the user’s full privileges across authenticated sessions, providing potential access to banking accounts, corporate systems, private emails, cloud storage, and other services.

Unlike traditional Web vulnerabilities that typically affect individual sites or require complex exploitation, this attack enables cross-domain access through simple, natural language instructions embedded in websites. The malicious instructions could even be included in user-generated content on a website the attacker doesn’t control (for example, attack instructions hidden in a Reddit comment). The attack is both indirect in interaction, and browser-wide in scope.

The attack we developed shows that traditional Web security assumptions don’t hold for agentic AI, and that we need new security and privacy architectures for agentic browsing.

Possible mitigations

In our analysis, we came up with the following strategies which could have prevented attacks of this nature. We’ll discuss this topic more fully in the next blog post in this series.

The browser should distinguish between user instructions and website content

The browser should clearly separate the user’s instructions from the website’s contents when sending them as context to the model. The contents of the page should always be treated as untrusted.

The model should check user-alignment for tasks

Based upon the task and the context, the model comes up with actions for the browser to take; these actions should be checked for alignment against the user’s requests.This is related to the previous point about differentiating between the user’s requests (trusted) and the contents of the page (always untrusted).

Security and privacy sensitive actions should require user interaction

No matter the prior agent plan and tasks, the model should require explicit user interaction for security and privacy-sensitive tasks. For example: sending an email should always prompt the user to confirm right before the email is sent, and an agent should never automatically click through a TLS connection error interstitial.

The browser should isolate agentic browsing from regular browsing

Agentic browsing is an inherently powerful-but-risky mode for the user to be in, as this attack demonstrates. It should be impossible for the user to “accidentally” end up in this mode while casually browsing. Does the browser really need the ability to open your email account, send emails, and read sensitive data from every logged-in site if all you’re trying to do is summarize Reddit discussions? As with all things in the browser, permissions should be as minimal as possible. Powerful agentic capabilities should be isolated from regular browsing tasks, and this difference should be intuitively obvious to the user. This clean separation is especially important in these early days of agentic security, as browser vendors are still working out how to prevent security and privacy attacks. In future posts, we’ll cover more about how we are working towards a safer agentic browsing experience with fine-grained permissions.

Disclosure timeline

  • July 25, 2025: Vulnerability discovered and reported to Perplexity
  • July 27, 2025: Perplexity acknowledged the vulnerability and implemented an initial fix
  • July 28, 2025: Retesting revealed the fix was incomplete; additional details and comments were provided to Perplexity
  • August 11, 2025: One-week public disclosure notice sent to Perplexity
  • August 13, 2025: Final testing confirmed the vulnerability appears to be patched 
  • August 20, 2025: Public disclosure of vulnerability details

Research Motivation

We believe strongly in raising the privacy and security bar across the board for agentic browsing. A safer Web is good for everyone. As we saw, giving an agent authority to act on the Web, especially within a user’s authenticated context, carries significant security and privacy risks. Our goal with this research is to surface those risks early and demonstrate practical defenses. This helps Brave, Perplexity, other browsers, and (most importantly) all users. 

We look forward to collaborating with Perplexity and the broader browser and AI communities on hardening agentic AI and, where appropriate, standardizing security boundaries that agentic features rely on.

Conclusion

This vulnerability in Perplexity Comet highlights a fundamental challenge with agentic AI browsers: ensuring that the agent only takes actions that are aligned with what the user wants. As AI assistants gain more powerful capabilities, indirect prompt injection attacks pose serious risks to Web security.

Browser vendors must implement robust defenses against these attacks before deploying AI agents with powerful Web interaction capabilities. Security and privacy cannot be an afterthought in the race to build more capable AI tools.

Since its inception, Brave has been committed to providing industry-leading privacy and security protections to its users, and to promoting Web standards that reflect this commitment. In the next blog post of the series we will talk about Brave’s approach to securing the browser agent in order to deliver secure AI browsing to our nearly 100 million users.


This content originally appeared on Brave’s latest news on Brave and was authored by Brave’s latest news on Brave


Print Share Comment Cite Upload Translate Updates
APA

Brave’s latest news on Brave | Sciencx (2025-08-20T00:00:00+00:00) Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet. Retrieved from https://www.scien.cx/2025/08/20/agentic-browser-security-indirect-prompt-injection-in-perplexity-comet/

MLA
" » Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet." Brave’s latest news on Brave | Sciencx - Wednesday August 20, 2025, https://www.scien.cx/2025/08/20/agentic-browser-security-indirect-prompt-injection-in-perplexity-comet/
HARVARD
Brave’s latest news on Brave | Sciencx Wednesday August 20, 2025 » Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet., viewed ,<https://www.scien.cx/2025/08/20/agentic-browser-security-indirect-prompt-injection-in-perplexity-comet/>
VANCOUVER
Brave’s latest news on Brave | Sciencx - » Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet. [Internet]. [Accessed ]. Available from: https://www.scien.cx/2025/08/20/agentic-browser-security-indirect-prompt-injection-in-perplexity-comet/
CHICAGO
" » Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet." Brave’s latest news on Brave | Sciencx - Accessed . https://www.scien.cx/2025/08/20/agentic-browser-security-indirect-prompt-injection-in-perplexity-comet/
IEEE
" » Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet." Brave’s latest news on Brave | Sciencx [Online]. Available: https://www.scien.cx/2025/08/20/agentic-browser-security-indirect-prompt-injection-in-perplexity-comet/. [Accessed: ]
rf:citation
» Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet | Brave’s latest news on Brave | Sciencx | https://www.scien.cx/2025/08/20/agentic-browser-security-indirect-prompt-injection-in-perplexity-comet/ |

Please log in to upload a file.




There are no updates yet.
Click the Upload button above to add an update.

You must be logged in to translate posts. Please log in or register.