Critical OpenCode RCE and Prompt Injection Flaw Exposes Agent Internals

A new write-up reveals OpenCode versions before v1.1.10 exposed HTTP endpoints that allowed remote command execution, interactive PTYs, and arbitrary file reads. Combined with LLM context injection, the flaw enabled host takeover and has been patched.

Critical OpenCode RCE and Prompt Injection Flaw Exposes Agent Internals

TL;DR

  • RCE via HTTP API: CVE-2026-22812 in OpenCode allowed arbitrary remote code execution and exposure of agent internals; affected versions before v1.1.10. https://www.cve.org/CVERecord?id=CVE-2026-22812
  • Exposed endpoints: POST /session/:id/shell, POST /pty, GET /file/content (no proper authorization)
  • Repro/demo used a local v1.1.8 build and simple curl calls to run shell commands as the agent host user (example wrote /tmp/pwned.txt)
  • LLM context + prompt injection: shell output was inserted into the session LLM context, enabling prompt-injection attacks in addition to RCE
  • Operational impact and telemetry gap: compromised agent runs with host user privileges (full disk, SSH keys, cloud creds, browser cookies at risk) and limited agentic telemetry/auditability for post‑incident analysis
  • References: OpenCode — https://opencode.ai/, technical disclosure — https://cy.md/opencode-rce/, relevant repo (Bottlerocket) — https://github.com/bottlerocket-os/

OpenCode, a widely used open-source AI coding agent, was found vulnerable to a severe CVE that allowed arbitrary RCE and exposed agent internals via its HTTP API. The vulnerability affected versions before v1.1.10 and opened several interfaces that permitted remote command execution, interactive terminal creation, and file reading without adequate access controls.

What the vulnerability exposed

The affected server exposed the following endpoints without proper authorization:

  • POST /session/:id/shell — execute arbitrary shell commands.
  • POST /pty — create interactive terminal sessions.
  • GET /file/content — read arbitrary files.

These endpoints combined to provide a trivial path from network access to arbitrary code execution. The CVE record is available at the CVE entry and an in-depth technical disclosure is linked to Vladimir Panteleev’s write-up.

Reproducing the behaviour (sandboxed)

The original demonstration used a local, vulnerable build and a simple curl workflow. The essential steps used were:

  1. Clone the repo and check out the vulnerable tag: git clone git@github.com:anomalyco/opencode.git && cd opencode && git checkout v1.1.8.
  2. Enter the development environment and start the server (example steps: nix develop, bun install, bun dev).
  3. Create a session via POST /session and capture the session id.
  4. Send a shell command via POST /session/<id>/shell, for example:
    • curl -s -X POST "[http://127.0.0.1:4096/session/$SESSION/shell"](http://127.0.0.1:4096/session/$SESSION/shell") -H "Content-Type: application/json" -d '{"agent":"build","command":"echo \"pwned\" > /tmp/pwned.txt"}'
  5. Confirm the command ran by reading the file: cat /tmp/pwned.txt.

The demonstration showed that arbitrary shell commands run as the user hosting the agent, creating files and leaving visible artifacts on disk. An image in the original write-up also shows injected messages landing inside the agent’s prompt context.

LLM context and prompt injection as a secondary vector

Beyond raw command execution, the vulnerable implementation placed output from shell operations into the LLM context window for the session. This expanded the attack surface to include prompt injection, where an attacker could insert instructions into the agent’s context. The combination of RCE and context injection makes mitigation more complex: an attacker can both execute commands on the host and influence agent behavior through the LLM.

The write-up compares this class of attack to previous supply-chain incidents that leveraged local agents for reconnaissance and escalation. A linked analysis of the s1ngularity incident illustrates how local AI assistants can be repurposed as tools for lateral movement and credential theft.

Operational impact and telemetry gap

The host process runs with the same privileges as the user who launched the agent, so full disk access, SSH keys, cloud credentials, and browser cookies can be exposed if the agent is compromised. The author recounts prior work on Bottlerocket and how RCE risks were treated with urgency, noting that even a narrow exploit surface demands swift remediation and telemetry.

A larger industry concern emerges: there is minimal agentic telemetry and audit tooling to trace what autonomous agents do or to replay events when incidents occur. Without telemetry, understanding the scope of compromise—secrets exfiltrated, resources accessed, or IP leaked—becomes difficult after the fact.

Where this leaves the ecosystem

This vulnerability illustrates two converging risks: straightforward RCE via exposed interfaces and manipulation of LLM-driven workflows through prompt injection. The combination underscores the need for stricter runtime isolation for local agents and richer telemetry to audit agent actions. The vulnerability was fixed in subsequent releases; refer to the CVE and the linked technical disclosure for details.

Original post: https://johncodes.com/archive/2026/01-18-all-your-opencodes

Further reading and references:

Continue the conversation on Slack

Did this article spark your interest? Join our community of experts and enthusiasts to dive deeper, ask questions, and share your ideas.

Join our community