Scion is an experimental multi-agent orchestration testbed from GoogleCloudPlatform that tackles a practical problem many AI-assisted coding setups still struggle with: running multiple “deep agents” at the same time without turning a repo into a race condition. The project’s core idea is straightforward and developer-friendly: each agent runs as an isolated container process with its own git worktree and credentials, enabling parallel work across a single codebase while reducing accidental interference.
Instead of pushing a single, rigid workflow for agent coordination, Scion positions itself as a lighter orchestration layer where models learn a CLI tool and decide how to coordinate. The intent is to make it easier to prototype multi-agent patterns through natural language prompting, rather than wiring up an elaborate framework first.
What Scion orchestrates (and how)
Scion is harness-agnostic, designed to run multiple agent harnesses—including Claude Code, Gemini CLI, Codex, and others—as isolated, concurrent processes. That isolation is central: agents get separate containers, separate config and credentials, and a dedicated git worktree, aimed at avoiding the “two agents edited the same file” class of conflicts.
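The worktree half of that isolation can be illustrated with plain git. This is a sketch of the idea only, not Scion's actual internals: each hypothetical agent gets its own branch checked out in its own directory, so concurrent edits never touch the same working copy.

```shell
# Sketch only: per-agent isolation via git worktrees (not Scion's real code).
set -e
tmp=$(mktemp -d) && cd "$tmp"       # scratch area so nothing real is touched
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
# One worktree (separate directory + branch) per hypothetical agent:
git worktree add -q ../agent-a -b agent-a
git worktree add -q ../agent-b -b agent-b
git worktree list   # three checkouts: the main repo plus one per agent
```

Because each checkout is a full working directory on its own branch, "two agents edited the same file" becomes an ordinary merge at integration time rather than a live conflict on disk.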
Agents can run locally, on remote VMs, or across Kubernetes clusters, with Scion supporting multiple runtimes via named profiles—Docker, Podman, Apple containers, and Kubernetes are all called out.
Deep agents, but with operations in mind
A notable design choice is that agents run in tmux sessions, which enables a clean attach/detach workflow: let an agent run in the background, attach for interactive work, and send messages while it’s detached. In practice, this reads like an attempt to make agent swarms feel more like durable processes than chat tabs.
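The mechanics are easy to try with tmux directly. The session name and the harness command below are placeholders, and Scion's own wrapping of tmux may differ:

```shell
# Generic tmux workflow (illustrative; not Scion-specific commands).
tmux new-session -d -s agent-1 'long-running-harness'   # start detached
tmux send-keys -t agent-1 'status please' Enter         # nudge it while detached
tmux attach -t agent-1                                  # attach for interactive work
# detach again with the default prefix: Ctrl-b, then d
```

The durable-process feel comes from exactly this: the agent's terminal session outlives any particular attachment to it.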
Scion also includes normalized OTEL telemetry for logging and metrics “across agent swarms,” aiming to make multi-agent work observable rather than opaque.
Templates, groves, and an optional Hub
Scion’s model is built around a handful of concrete concepts:
- Agent: a containerized process running a deep-agent harness
- Grove: a project namespace that commonly maps 1:1 with a git repo
- Template: an agent blueprint combining a system prompt with skills
- Runtime: Docker, Podman, Apple Container, or Kubernetes
- Hub: an optional control plane for multi-machine orchestration
- Runtime Broker: a machine (laptop or VM) that offers its runtimes to a Hub
Templates are where specialization shows up: roles like “Security Auditor” or “QA Tester” can be defined with custom prompts and skill sets (with documentation linked from the repo).
Quick start: build-first, then Go install
Scion doesn’t currently provide pre-built binaries or containers; container images must be built first (build instructions are linked from the docs). Installation is from source and requires a Go toolchain:
```shell
go install github.com/GoogleCloudPlatform/scion/cmd/scion@latest
```
From there, a repo is initialized as a “grove” via scion init, which creates the .scion directory for agent config. The project offers a practical tip: add .scion/agents to .gitignore to avoid issues with nested git worktrees. Scion also auto-detects the OS to pick a default runtime (Docker on Linux and Windows, Apple Container on macOS), with overrides available in .scion/settings.json.
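As a sketch, the setup steps above might look like the following. The "runtime" key is an assumption about .scion/settings.json's schema (the article names the file but not its fields), so check it against the docs before relying on it:

```shell
# Hypothetical setup sketch; the settings key name is an assumption.
set -e
tmp=$(mktemp -d) && cd "$tmp"       # scratch dir so nothing real is touched
mkdir -p .scion                     # normally created by `scion init`
cat > .scion/settings.json <<'EOF'
{
  "runtime": "podman"
}
EOF
# The project's own tip: keep nested agent worktrees out of version control.
echo ".scion/agents" >> .gitignore
```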
Starting an agent is done via scion start ... --attach, and ongoing management includes commands like list (ps), attach, message (msg), logs, stop, resume, and delete.
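Put together, a management session might look like this. Only the command names come from the article; arguments it elides are left elided rather than guessed:

```shell
# Illustrative only: command names per the docs; elided arguments not filled in.
scion start ... --attach   # launch an agent and attach to its tmux session
scion list                 # alias: ps
scion attach ...           # reattach to a running agent
scion message ...          # alias: msg; send a message while detached
scion logs ...
scion stop ...
scion resume ...
scion delete ...
```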
Status and support expectations
Scion is explicitly labeled early and experimental. The repo characterizes its components as follows:
- Local mode as relatively stable
- Hub-based workflows as “~80% verified”
- Kubernetes runtime as early, with known rough edges
It’s also clearly marked as not an officially supported Google product, and it isn’t eligible for Google’s Open Source Software Vulnerability Rewards Program. The project is licensed under Apache 2.0.
For the broader design framing, Scion links to a philosophy page as well as a full documentation site covering installation, concepts, CLI reference, templates, tmux usage, and Kubernetes runtime guidance.