OpenAI has published a new GPT‑5.5 System Card detailing the company’s latest model release and how it was evaluated ahead of deployment. The document positions GPT‑5.5 as a model built for “complex, real-world work,” with an emphasis on end-to-end task completion across multiple tools.
What OpenAI says GPT‑5.5 is optimized to do
In the introduction, OpenAI describes GPT‑5.5 as designed for a range of work tasks, including:
- writing code
- researching online
- analyzing information
- creating documents and spreadsheets
- moving across tools to complete multi-step work
Compared with earlier models, OpenAI says GPT‑5.5 "understands the task earlier," needs "less guidance," uses tools more effectively, and "checks its work and keeps going until it's done."
Safety evaluations and safeguarded release
OpenAI says GPT‑5.5 went through the company’s “full suite of predeployment safety evaluations” as well as its Preparedness Framework. The company also reports targeted red-teaming focused on “advanced cybersecurity and biology capabilities,” alongside feedback gathered from “nearly 200 early-access partners” before release.
Based on those results, OpenAI says GPT‑5.5 is shipping with its “strongest set of safeguards to date,” with the stated goal of reducing misuse while preserving legitimate uses of the model’s capabilities.
How GPT‑5.5 relates to GPT‑5.5 Pro
The system card also addresses GPT‑5.5 Pro, which OpenAI describes as the same underlying model running with a setting that uses “parallel test time compute.” OpenAI says it generally treats GPT‑5.5’s safety results as “strong proxies” for GPT‑5.5 Pro, but notes it will separately evaluate GPT‑5.5 Pro in some cases where that setting could affect risk or the appropriate safeguards posture.
OpenAI adds that, unless otherwise noted, the evaluations described in its system cards are run in an offline setting.