OpenAI launches GPT-5.5: faster agents, fewer tokens, double the price

OpenAI has just rolled out GPT-5.5, with its biggest gains in agentic coding, knowledge work, and computer use while matching GPT-5.4 latency. It’s arriving first in ChatGPT and Codex, with API access “very soon” and new pricing tiers.


TL;DR

  • GPT‑5.5: New flagship; longer, messier tasks with GPT‑5.4 per-token latency in real-world serving
  • Rollout (Apr 23, 2026): ChatGPT and Codex for Plus/Pro/Business/Enterprise; API “very soon”; GPT‑5.5 Pro for Pro/Business/Enterprise
  • Agentic coding gains: Terminal‑Bench 2.0 82.7%; SWE‑Bench Pro 58.6%; Expert‑SWE 73.1%; fewer tokens on Codex tasks
  • Knowledge work + computer use: Better docs/spreadsheets/slides; GDPval 84.9%; OSWorld‑Verified 78.7%; Tau2‑bench Telecom 98.0%
  • Scientific research: Improved GeneBench; leads BixBench; internal Ramsey proof example verified in Lean
  • API pricing: gpt-5.5 $5/M input, $30/M output, 1M context; gpt-5.5-pro $30/M input, $180/M output; Batch/Flex half, Priority 2.5x

OpenAI has announced GPT‑5.5, a new flagship model positioned as a step up for longer, messier tasks—while keeping the same per-token latency as GPT‑5.4 in “real-world serving,” according to the company.

The headline claim: GPT‑5.5 is built to carry more of a task end-to-end, including planning, tool use, checking work, and continuing through ambiguity. OpenAI says the biggest gains show up in agentic coding, computer use, knowledge work, and early scientific research, while also using “significantly fewer tokens” for the same Codex tasks.

Rollout: ChatGPT and Codex first, API “very soon”

As of April 23, 2026, OpenAI says GPT‑5.5 is rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT and Codex.

GPT‑5.5 Pro is rolling out to Pro, Business, and Enterprise users in ChatGPT. OpenAI says API deployments require different safeguards and that GPT‑5.5 and GPT‑5.5 Pro will come to the API “very soon.”

Coding: higher scores, fewer tokens

OpenAI frames GPT‑5.5 as its strongest “agentic coding” model to date, emphasizing command-line and long-horizon software work.

Reported benchmark highlights include:

  • Terminal-Bench 2.0: 82.7% (vs. 75.1% for GPT‑5.4)
  • SWE-Bench Pro (Public): 58.6% (vs. 57.7% for GPT‑5.4)
  • Expert-SWE (Internal): 73.1% (vs. 68.5% for GPT‑5.4)

OpenAI also points to qualitative feedback from early testers. Every CEO Dan Shipper called it “the first coding model I’ve used that has serious conceptual clarity,” while MagicPath CEO Pietro Schirano described GPT‑5.5 successfully merging a large, conflicting frontend refactor branch into a changed main branch “in one shot.”

Knowledge work and computer use: documents, spreadsheets, and OSWorld

OpenAI says GPT‑5.5 improves document, spreadsheet, and slide generation in Codex, and pairs that with “computer use skills” like navigating interfaces across tools.

On benchmarks tied to knowledge work and operating real environments, OpenAI highlights:

  • GDPval (wins or ties): 84.9%
  • OSWorld-Verified: 78.7%
  • Tau2-bench Telecom: 98.0% (run without prompt tuning)

Internally, OpenAI says more than 85% of the company uses Codex weekly, citing examples across Comms (building a scoring and risk framework for speaking requests), Finance (reviewing 24,771 K-1 forms totaling 71,637 pages while excluding personal information), and Go-to-Market (automating weekly reports to save 5–10 hours per week).

Scientific research: GeneBench, BixBench, and a Ramsey proof example

OpenAI says GPT‑5.5 performs better across multi-stage scientific workflows, pointing to improved results on GeneBench (genetics and quantitative biology) and leading performance on BixBench (bioinformatics and data analysis).

The company also describes an internal GPT‑5.5 setup that helped produce “a new proof” related to off-diagonal Ramsey numbers, later verified in Lean.

Safety and cybersecurity: stricter classifiers, trusted access

OpenAI says GPT‑5.5 ships with its “strongest set of safeguards to date,” including evaluations across safety and preparedness frameworks, internal and external red-teaming, and targeted testing for advanced cybersecurity and biology capabilities. It also says feedback was collected from nearly 200 trusted early-access partners.

On cybersecurity specifically, OpenAI says it is deploying “stricter classifiers for potential cyber risk,” and is expanding “Trusted Access for Cyber,” starting with Codex. Under its Preparedness Framework, OpenAI classifies GPT‑5.5’s biological/chemical and cybersecurity capabilities as High, noting the model “didn’t reach Critical” for cybersecurity.

A GPT‑5.5 system card is available at deploymentsafety.openai.com/gpt-5-5.

Pricing: API rates disclosed ahead of launch

OpenAI says gpt-5.5 will be priced in the API at $5 per 1M input tokens and $30 per 1M output tokens—representing a 100% price increase over GPT-5.4—with a 1M context window, across the Responses and Chat Completions APIs. Batch and Flex pricing will be offered at half the standard rate, while Priority processing is priced at 2.5x the standard rate.

It also says gpt-5.5-pro will be priced at $30 per 1M input tokens and $180 per 1M output tokens.
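To make the rate structure concrete, here is a minimal sketch of a per-request cost estimate at the reported rates. The dollar figures come from the article; applying the Batch/Flex (0.5x) and Priority (2.5x) multipliers uniformly to both input and output rates is an assumption, and the function name `request_cost` is purely illustrative.

```python
# Reported USD rates per 1M tokens: (input, output)
RATES_PER_M = {
    "gpt-5.5":     (5.0, 30.0),
    "gpt-5.5-pro": (30.0, 180.0),
}

# Assumed tier multipliers: Batch/Flex at half rate, Priority at 2.5x
TIER_MULTIPLIER = {"standard": 1.0, "batch": 0.5, "flex": 0.5, "priority": 2.5}

def request_cost(model, input_tokens, output_tokens, tier="standard"):
    """Estimated USD cost of one request at the reported per-token rates."""
    in_rate, out_rate = RATES_PER_M[model]
    multiplier = TIER_MULTIPLIER[tier]
    return multiplier * (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 200k-input / 10k-output call on gpt-5.5, standard tier
print(round(request_cost("gpt-5.5", 200_000, 10_000), 2))  # 1.3
```

At these rates, a full 1M-token context fill on gpt-5.5 costs $5 before any output is generated, and the same request on gpt-5.5-pro costs $30.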

OpenAI’s full details are listed on its API pricing page.


Source: OpenAI — “Introducing GPT‑5.5”
