GPT-5.3-Codex-Spark just got a notable speed bump, at least according to a brief update from Tibo Sottiaux on X. In a post on Friday, Sottiaux said GPT-5.3-Codex-Spark is now about 30% faster and is serving at over 1200 tokens per second, with “more to come on speed across the board.”
For anyone building with AI-assisted coding tools, that kind of throughput changes the feel of an interaction—even if it doesn’t automatically change the quality of the output.
What was announced (and what wasn’t)
The update itself is narrowly scoped:
- GPT-5.3-Codex-Spark is ~30% faster
- Throughput is now over 1200 tokens per second
- There’s an implied roadmap of broader speed work (“across the board”)
No details were shared on how the improvement was achieved, whether it affects all tiers, or what the rollout looks like across plans.
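Still, the two stated numbers support some back-of-envelope arithmetic. A quick sketch of what the announced figures imply in wall-clock terms (the response sizes below are illustrative assumptions, not figures from the announcement):

```python
# Back-of-envelope arithmetic from the announced figures.
# Assumption: "~30% faster" means new throughput = old throughput * 1.3.
new_tps = 1200              # tokens/second, from the announcement
old_tps = new_tps / 1.3     # implied prior throughput, roughly 920 tok/s

# Illustrative response sizes (assumed for the example).
for label, tokens in [("short function (~300 tokens)", 300),
                      ("large diff (~3000 tokens)", 3000)]:
    before = tokens / old_tps   # seconds at the implied old speed
    after = tokens / new_tps    # seconds at the announced new speed
    print(f"{label}: {before:.2f}s -> {after:.2f}s")
```

At these speeds a multi-thousand-token diff lands in a few seconds rather than tens of seconds, which is why the replies (below) shift quickly from generation latency to review workflow.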
The immediate developer reaction: speed vs capability
The replies quickly landed on a familiar tradeoff: raw speed is great, but it’s only useful if the model remains dependable at coding tasks.
Several developers described Spark as fast but less capable than non-Spark variants, with one calling it “watered down” and another saying it feels “lossy vs normal codex.” Others asked directly whether there is a “dip” in intelligence when using Spark and requested clarity on what tradeoffs are being made; one reply mentioned encountering “weird refusals” when using Spark.
Context window, compaction, and workflow friction
A second theme showed up repeatedly: context window size. Multiple replies asked for a larger context window, with some saying the current limit makes the tool “almost unusable,” and another reporting that context compaction fails and can make Spark slower in practice than GPT-5.3.
At least one commenter framed the new speeds as shifting the dynamic from “prompt and wait” to “steer as it writes,” while another pointed out the practical bottleneck becomes human review—calling for better diffing, provenance, and rollback when output arrives that quickly.
Availability questions keep piling up
A number of replies asked about access: availability for Plus and Team plans, broader enterprise rollout, and interest in “the full 5.3” rather than Spark. None of those questions received answers in the thread itself.