OpenAI Skills Now Available in OpenAI Responses API

OpenAI is introducing first-class skill support to the API, allowing developers to upload and version reusable skill bundles that models can mount into shell environments on demand.

TL;DR

  • SKILL.md manifest required: when a bundle is mounted, the runtime surfaces its name and description in hidden prompt context; the model reads the skill's files and runs its scripts via the shell tool when invoked.
  • Appropriate for repeatable, versioned procedures: stable report generators, conditional workflows, local scripts/templates; not for one-off or live-data tasks.
  • Packaging essentials: frontmatter must include name and description, and each bundle contains exactly one manifest. Key limits: 50 MB max zip, 500 files per version, 25 MB max uncompressed file.
  • Upload & API: POST /v1/skills accepts multipart file uploads or a single zip archive (recommended); successful uploads return identifiers and version pointers.
  • Mounting & runtime: attach via tools[].environment.skills; reference by skill_id (optionally version-pinned) or pass an inline base64 zip; runs in hosted (environment.type="container_auto") or local (environment.type="local") shells.
  • Best practices: make skills discoverable, prefer zip uploads, version-pin skills/models, design tiny CLIs with deterministic outputs and explicit paths, restrict network access.

OpenAI is adding native skill support to the API, enabling developers to upload, version, and attach reusable skill bundles that models can mount into shell environments (hosted or local). Skills give models controlled access to code, files, and procedural logic only when invocation is appropriate, creating a clean middle layer between system prompts and tools.

This makes skills a practical way to package repeatable workflows with code and assets, without bloating prompts or overloading tool schemas.

How skills integrate with model execution

A skill is distributed as a folder bundle with a required SKILL.md manifest. When a skill is attached to a shell environment, the runtime mounts the bundle and exposes the manifest’s metadata (name and description) to the model for discovery and routing.
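For illustration, a minimal bundle might carry a manifest like the one below. The name and description frontmatter keys are the required fields; the skill name, body text, and script paths shown here are hypothetical:

```markdown
---
name: quarterly-report
description: Generates the quarterly revenue report from a CSV export using scripts/build_report.py.
---

# Quarterly report skill

1. Validate the input CSV with `scripts/validate.py <input.csv>`.
2. Run `scripts/build_report.py <input.csv> --out report.md`.
3. Return the path of the generated report.
```

The description doubles as routing metadata, so it should state plainly when the skill applies.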

At runtime:

  • The model can see which skills are available and what they do.
  • When a skill is selected, the model can read its files and execute scripts via the shell tool.
  • File access and command execution are scoped strictly to the skill’s mounted path.

In hosted shell mode (environment.type="container_auto"), OpenAI’s infrastructure:

  • uploads and unpacks the skill into the container,
  • injects the skill metadata into hidden prompt context,
  • allows execution only within the skill directory.

Local shell mode behaves the same conceptually, but execution is handled by your local runtime.
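A hedged sketch of the two configurations, assuming the field names this article describes (environment.type and environment.skills) rather than a verified schema, with a placeholder skill ID:

```python
# Illustrative shell-tool configurations; field names follow this article's
# description, not a verified schema, and the skill ID is a placeholder.
hosted_shell = {
    "type": "shell",
    "environment": {
        "type": "container_auto",  # hosted: OpenAI's infrastructure unpacks the skill and executes commands
        "skills": [{"skill_id": "skill_abc123"}],
    },
}

local_shell = {
    "type": "shell",
    "environment": {
        "type": "local",  # local: your own runtime performs the same steps
        "skills": [{"skill_id": "skill_abc123"}],
    },
}
```

Only the executor differs between the two; the discovery metadata and path scoping described above apply in both cases.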

When skill support matters

Skills are designed for stable, reusable procedures that benefit from versioning and bundled code. They are a good fit when:

  • workflows include branching logic, retries, or validation,
  • code, templates, or fixtures must live alongside instructions,
  • the same procedure is reused across prompts, agents, or teams.

They are not intended for highly dynamic, one-off logic or workflows that primarily exist to call live external APIs—those remain better handled with tools.

Packaging and constraints

Each skill version includes:

  • exactly one SKILL.md manifest (used for discovery and routing),
  • optional scripts, dependencies, assets, and helpers.

Important limits:

  • 50 MB max zip size
  • 500 files per skill version
  • 25 MB max uncompressed file size

The manifest frontmatter (name and description) is critical; it determines whether and how the model discovers and invokes the skill.
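These limits are easy to check before uploading. A minimal pre-flight sketch in Python (the limit values come from this article; the helper itself is illustrative):

```python
import os
import zipfile

MAX_ZIP_BYTES = 50 * 1024 * 1024           # 50 MB max zip size
MAX_FILE_COUNT = 500                       # 500 files per skill version
MAX_UNCOMPRESSED_BYTES = 25 * 1024 * 1024  # 25 MB max uncompressed file

def check_skill_zip(path: str) -> list[str]:
    """Return a list of limit violations for a skill zip (empty list = OK)."""
    problems = []
    if os.path.getsize(path) > MAX_ZIP_BYTES:
        problems.append("zip exceeds 50 MB")
    with zipfile.ZipFile(path) as zf:
        infos = zf.infolist()
        if len(infos) > MAX_FILE_COUNT:
            problems.append("more than 500 files")
        for info in infos:
            if info.file_size > MAX_UNCOMPRESSED_BYTES:
                problems.append(f"{info.filename} exceeds 25 MB uncompressed")
        # Exactly one SKILL.md manifest is required per bundle.
        names = [i.filename for i in infos]
        if names.count("SKILL.md") != 1:
            problems.append("bundle must contain exactly one SKILL.md")
    return problems
```

Running this locally before POST /v1/skills turns an opaque upload failure into an actionable message.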

Creating and attaching skills

Skills are uploaded using POST /v1/skills. You can upload:

  • multiple files via multipart form data, or
  • a single zip archive (recommended for reproducibility and fewer upload failures).

Once uploaded, skills are referenced by ID and version (latest, default, or a pinned version) when attached to a shell tool in the Responses API. For ephemeral use, skills can also be provided inline as a base64-encoded zip without creating a hosted resource.
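As a non-authoritative sketch, the upload and the two attachment styles could look roughly like this. The endpoint and the skill_id / pinned-version / inline-base64 options come from this article; the exact field names, headers, and response shape are assumptions to verify against the API reference:

```python
import base64

API_BASE = "https://api.openai.com/v1"

def build_upload_request(zip_bytes: bytes, api_key: str) -> dict:
    """Assemble a request for POST /v1/skills (single-zip upload, as recommended)."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/skills",
        "headers": {"Authorization": f"Bearer {api_key}"},
        # "file" is an assumed multipart field name, not a verified one.
        "files": {"file": ("skill.zip", zip_bytes, "application/zip")},
    }

def shell_tool_with_skill(skill_id: str, version=None) -> dict:
    """Attach a hosted skill by ID, optionally pinned to a specific version."""
    ref = {"skill_id": skill_id}
    if version is not None:
        ref["version"] = version  # pin instead of tracking latest
    return {"type": "shell", "environment": {"type": "container_auto", "skills": [ref]}}

def shell_tool_with_inline_skill(zip_bytes: bytes) -> dict:
    """Ephemeral use: pass the bundle inline as a base64-encoded zip."""
    inline = base64.b64encode(zip_bytes).decode("ascii")
    # "data" is an assumed key for the inline payload.
    return {"type": "shell", "environment": {"type": "container_auto", "skills": [{"data": inline}]}}
```

Pinning a version keeps agent behavior stable across skill updates; the inline path trades that durability for zero hosted state.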

Execution model

When a skill is attached:

  • the model decides whether to invoke it,
  • reads the manifest and relevant files,
  • executes commands via the shell tool,
  • writes outputs to known paths inside the skill directory.

This encourages a “tiny CLI” design: deterministic inputs, explicit outputs, and clear failure modes.
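A hypothetical script bundled as, say, scripts/word_count.py illustrates that design: one explicit input, one explicit output path, a deterministic result, and a clear failure mode:

```python
#!/usr/bin/env python3
"""Hypothetical tiny CLI for a skill bundle: deterministic input -> explicit output path."""
import argparse
import sys
from pathlib import Path

def count_words(text: str) -> int:
    # Deterministic: the same input text always yields the same count.
    return len(text.split())

def main(argv=None) -> int:
    parser = argparse.ArgumentParser(description="Count words in a text file")
    parser.add_argument("input", help="path to the input text file")
    parser.add_argument("--out", required=True, help="explicit output path")
    args = parser.parse_args(argv)

    src = Path(args.input)
    if not src.is_file():
        # Clear failure mode: message on stderr, non-zero exit code.
        print(f"error: no such file: {src}", file=sys.stderr)
        return 1
    Path(args.out).write_text(f"{count_words(src.read_text())}\n")
    return 0
```

In the bundle the script would end with the usual `if __name__ == "__main__": sys.exit(main())` guard so the model can invoke it directly through the shell tool and branch on the exit code.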

Why this matters

Native skill support formalizes a long-standing pattern: keeping complex procedures and code out of prompts while still making them available to the model. Skills sit cleanly between system prompts (always on, static) and tools (narrow, external), enabling reusable, versioned execution logic with minimal prompt overhead.

For full examples and API details, see the official documentation: https://developers.openai.com/cookbook/examples/skills_in_api
