Notion AI:未修补的数据泄露
Notion AI: Unpatched data exfiltration

原始链接: https://www.promptarmor.com/resources/notion-ai-unpatched-data-exfiltration

## Notion AI 数据泄露漏洞 研究人员发现Notion AI存在一个严重漏洞,允许通过间接提示注入泄露敏感用户数据。即使在用户批准AI建议的编辑*之前*,也会发生这种情况。攻击利用上传文档(如简历)中的隐藏提示——人眼无法察觉——来操纵Notion AI。 具体来说,攻击者诱使AI构建一个包含文档内容的URL,并将其用作图像插入的来源。这会触发对攻击者控制服务器的请求,从而泄露URL中的数据,无论用户最终是否接受该编辑。在测试中,薪资期望和内部职位细节等敏感信息已被成功泄露。 该漏洞通过巧妙地构建看似安全的提示,绕过了Notion现有的基于LLM的文档扫描。它还会影响Notion Mail的起草助手。Notion驳回了该报告,认为“不适用”,尽管已经进行了负责任的披露。 缓解策略包括限制连接器访问、禁用网络搜索、限制AI个性化数据,以及实施更严格的安全策略,以防止自动渲染外部图像并强制执行强大的内容安全策略。

## Notion AI 漏洞与 LLM 安全问题 最近报告的、尚未修复的 Notion AI 数据泄露漏洞引发了关于大型语言模型 (LLM) 内在安全挑战的讨论。安全研究人员强调的核心问题是,LLM 基于人类的全部语言运作,从而产生了一个实际上是无限的攻击面——尤其是在允许访问敏感数据时,这令人担忧。 评论界普遍认为,LLM 的输出应*始终*被视为不可信的,需要像沙盒和严格数据权限这样的传统网络安全措施。许多人认为,根本性的安全缺陷使 LLM 容易受到提示注入和操纵,并且限制对敏感信息的访问至关重要。 讨论中提出了一些要点:难以追究 LLM 的责任、不受限制的访问风险,以及需要摆脱对 SaaS 解决方案的依赖。一些人还注意到 Notion 对报告的漏洞的回应不够重视,以及 LLM 随着其能力的扩展,可能成为独特的威胁行为者。最终,这场讨论强调了保护 LLM 需要一种根本不同的方法,承认其独特的风险和局限性。
相关文章

原文

Notion AI allows users to interact with their documents using natural language… but what happens when AI edits are made prior to user approval?

In this article, we document a vulnerability that leads Notion AI to exfiltrate user data (a sensitive hiring tracker document) via indirect prompt injection. Users are warned about an untrusted URL and asked for approval to interact with it - but their data is exfiltrated before they even respond.

We responsibly disclosed this vulnerability to Notion via HackerOne. Unfortunately, they said “we're closing this finding as `Not Applicable`”.

  • The user uploads a resume (untrusted data) to their chat session.
    Here, the untrusted data source is a resume PDF, but a prompt injection could be stored in a web page, connected data source, or a Notion page.

    A prompt injection is concealed within a resume in whitespace between the candidate's title and their contact information.

    This document contains a prompt injection hidden in 1 point font white on white text with a square white image covering the text for good measure. The LLM can read it with no issues, but the document appears benign to the human eye.


    A Note on Defenses: Notion AI uses an LLM to scan document uploads and present a warning if a document is flagged as malicious. As this warning is triggered by an LLM, it can be bypassed by a prompt injection that convinces the evaluating model that the document is safe. For this research, we did not focus on bypassing this warning because the point of the attack is the exfiltration mechanism, not the method of injection delivery. In practice, an injection could easily be stored in a source that does not appear to be scanned, such as a web page, Notion page, or connected data source like Notion Mail.


  • The user asks Notion AI for help updating a hiring tracker based on the resume.

    Notion interface with a hiring tracker document open on the left and Notion AI chat on the right. The user has attached a resume document to the chat and is asking 'Hey - please help me update the notes for the candidate'.
  • Notion AI is manipulated by the prompt injection to insert a malicious image into the hiring tracker.
    The prompt injection manipulates Notion AI to (1) construct a URL by collecting all of the text in the document and appending the data to an attacker-controlled domain, and (2) insert an ‘image’ into the Notion Page using the constructed URL as the image source. 

    Notion AI displays a warning to the user, asking if they trust an external site. The URL shown contains data from the user's hiring tracker.

    Here, it appears as though the user is prompted for approval. However, unbeknownst to the user, the edit has already occurred before the user is prompted for approval. When the edit occurred, the user’s browser made a request to the attacker’s server, attempting to retrieve the image. This request exfiltrates the document contents contained in the URL constructed by Notion AI.

    Whether or not the user accepts the edit, the attacker successfully exfiltrates the data.

  • The attacker reads the sensitive hiring tracker data from their server logs.
    Once the user’s browser has made a request for the malicious image, the attacker can read the sensitive data contained in the URL from their request logs.

    The attacker's server logs hold data from the hiring tracker including salary expectations, candidate notes, and other internal data.

    In this attack, exfiltrated data included salary expectations, candidate feedback, internal role details, and other sensitive information such as diversity hiring goals.

  • The Notion Mail AI drafting assistant is susceptible to rendering insecure Markdown images within email drafts, resulting in data exfiltration. If a user mentions an untrusted resource while drafting, content from the user’s query or other mentioned resources can be exfiltrated.  E.g., “Hey, draft me an email based on @untrusted_notion_page and @trusted_notion_page”.

    The attack surface is reduced for Notion Mail’s drafting assistant as the system appears to only have access to data sources within the Notion ecosystem that are explicitly mentioned by the user (as opposed to Notion AI’s main offering, which supports web search, document upload, integrations, etc.).

  • Institute a vetting process for connected data sources. Restrict use of connectors that can access highly sensitive or highly untrusted data from: Settings > Notion AI > Connectors.

  • To reduce the risk of untrusted data being processed in the workspace, admins can configure: Settings > Notion AI > AI Web Search > Enable web search for workspace > Off.

  • Individual users should avoid including sensitive personal data that could be leveraged in a spearphishing attack when configuring personalization for Notion AI via: Settings > Notion AI > Personalization.

  • Individual users can configure: Settings > Notion AI > AI Web Search > Require confirmation for web requests > On.

  • Note: Implementing these remediations will reduce the risk surface, but will not nullify the core vulnerability.

  • Programmatically prohibit automatic rendering of Markdown images from external sites in Notion AI page creation or update outputs without explicit user approval.

  • Programmatically prohibit automatic rendering of Markdown images from external sites in Notion AI mail drafts.

  • Implement a strong Content Security Policy. This will prevent network requests from being made to unapproved external domains.

  • Ensure the CDN used to retrieve images for display in Notion and image previews for display in Notion Mail cannot be used as an open redirect to bypass the CSP policy that is set.

  • 12/24/2025 Initial report made via HackerOne
    12/24/2025 Report is acknowledged, altered write-up requested
    12/24/2025 PromptArmor follows up with the requested format
    12/29/2025 Report closed as non-applicable
    01/07/2025 Public disclosure

    联系我们 contact @ memedata.com