Anthropic used its Code with Claude Opening Keynote to sketch a familiar message: model capabilities are moving faster than most businesses are adapting, and the company wants its products to narrow that gap without unveiling a new model onstage.
Chief Product Officer Ami Bora opened the event by arguing that AI use is moving from linear adoption to an exponential curve, while many organizations remain on slower trajectories. Bora cited examples as evidence of the technology’s reach, including a Stripe team that reportedly cut a Scala-to-Java migration from an estimated 10 engineering weeks to four days, and Binti’s use of the Claude API to reduce foster-family licensing time by 20 days. He also claimed Anthropic’s API volume is up nearly 17x year over year and that the average Claude Code developer now spends 20 hours a week running Claude.
Bora’s biggest product announcement was a capacity update. Anthropic said it is increasing rate limits for developers on Claude Code and the Claude Platform, doubling Claude Code’s five-hour rate limits for Pro, Max, Team and seat-based enterprise plans, while also raising API limits for Claude Opus. The company said it is making that possible through expanded compute partnerships, including a partnership with SpaceX to use capacity from the Colossus One data center.
The keynote then shifted to the model layer, where research leader Diane described a series of releases across the Claude model family, including Haiku, Sonnet, Opus and Mythos. She said Anthropic has shipped eight frontier models in the last 12 months and argued that the next phase involves “higher judgment,” larger context windows paired with memory, and multi-agent coordination. In her view, the useful measure is “task horizon”: how long a model can work autonomously while still improving its output. She also pointed to examples from AMP, Rakuten and Intuisel as evidence that newer Claude variants can support longer, more complex coding and planning tasks.
On the platform side, Angela and Caitlin focused on what they described as two business problems: getting better outcomes from models and shipping at scale. They introduced an “advisor strategy” that splits execution from advising, allowing a smaller model to call on a larger one when needed. They also revisited Claude Managed Agents and announced three additions: multi-agent orchestration, outcomes, and dreaming. In a live demo built around a fictional company called Lumara, the pair showed agents coordinating across separate context windows, a rubric-based grader for success criteria, and a “dream” function that reviews prior runs and writes learnings to memory.
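The keynote did not show code for the advisor strategy, but the idea as described can be sketched in a few lines: a cheap executor model attempts the task and escalates to a larger advisor model only when it is unsure. Everything here is illustrative; the function names, confidence threshold, and stub "models" are assumptions, not Anthropic's actual interface.

```python
# Hypothetical sketch of the "advisor" pattern described onstage: a small
# executor handles a task, escalating to a larger advisor when unsure.
# The model calls are stand-ins; no real API or model names are implied.

from dataclasses import dataclass


@dataclass
class Result:
    answer: str
    confidence: float  # 0.0-1.0, self-reported by the executor


def small_executor(task: str) -> Result:
    # Stand-in for a fast, cheap model. A real system would parse a
    # confidence signal out of the model's response.
    if "migration" in task:
        return Result(answer="plan drafted", confidence=0.9)
    return Result(answer="unsure", confidence=0.3)


def large_advisor(task: str) -> str:
    # Stand-in for a slower, more capable model consulted on demand.
    return f"advisor answer for: {task}"


def run(task: str, threshold: float = 0.7) -> str:
    result = small_executor(task)
    if result.confidence >= threshold:
        return result.answer  # cheap path: executor was confident
    return large_advisor(task)  # escalate: consult the larger model


print(run("plan the Scala-to-Java migration"))  # executor handles it
print(run("ambiguous edge case"))  # escalated to the advisor
```

The appeal of the split is economic: the expensive model is only invoked on the fraction of calls where the cheap one flags low confidence.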
Claude Code, meanwhile, is getting more automation features. Kat described new surfaces including terminal, IDE and desktop, and said Claude Code is being used more in auto mode, with developers delegating more of the work before reviewing a PR. Boris Cherney demonstrated routines, which can run on a schedule or be triggered by webhooks or API calls, and CI autofix, which can respond to flaky tests, review comments and merge conflicts. He also argued that the product is moving software work toward more asynchronous patterns, where Claude can check its own work and return later with a ready-to-merge result.
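The routines described onstage share one property worth making concrete: the same routine can fire from a schedule, a webhook, or an API call, because the trigger is decoupled from the routine itself. The sketch below is a minimal, hypothetical registry showing that shape; the decorator, routine names, and payload fields are invented for illustration and are not Claude Code's actual interface.

```python
# Hypothetical sketch of trigger-agnostic routines: one registry, many
# entry points (scheduler tick, webhook, API call). All names illustrative.

from typing import Callable

routines: dict[str, Callable[[dict], str]] = {}


def routine(name: str):
    """Register a function under a routine name."""
    def register(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        routines[name] = fn
        return fn
    return register


@routine("ci-autofix")
def ci_autofix(payload: dict) -> str:
    # e.g. react to a flaky test or merge conflict reported by CI
    return f"autofix started for {payload.get('event', 'unknown')}"


def handle_trigger(name: str, payload: dict) -> str:
    # A webhook handler, an API endpoint, and a cron-style scheduler
    # would all funnel into this same dispatch function.
    return routines[name](payload)


print(handle_trigger("ci-autofix", {"event": "flaky-test"}))
```

Because dispatch is a plain function call, adding a new trigger source means adding one caller, not rewriting the routines.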
The event closed with a broad claim about where the company sees the gap: model capability is already there, but product and infrastructure layers determine how much of it reaches users and businesses. Anthropic’s pitch, as presented onstage, was that Claude’s next phase is less about a headline model launch and more about turning existing capability into deployable systems.
Source: Code with Claude Opening Keynote