Anthropic has shipped Claude Opus 4.7 as a generally available update, positioning it as a meaningful step up from Opus 4.6 for advanced software engineering—especially the kinds of long-running, multi-step tasks where agentic workflows tend to wobble. The release also lands alongside platform changes that touch effort controls, token budgeting, and Claude Code review workflows, continuing Anthropic's recent pattern of smoothing out day-to-day “AI pair” operations, seen earlier in Claude Code defaults and model rollouts such as Opus 4.6 1M.
What’s new in Opus 4.7: coding rigor, longer runs, better self-checking
Anthropic frames Opus 4.7 as a jump over Opus 4.6 on the hard end of software engineering: complex tasks that run for a while, require consistent instruction-following, and benefit from verification steps. The company says users are increasingly comfortable handing off harder coding work with less supervision, helped by a model that “devises ways to verify its own outputs before reporting back.”
Early-access feedback highlighted improvements that will sound familiar to anyone building with agents: stronger behavior in async workflows, fewer “plausible” answers when information is missing, and better resilience in long tool-driven runs. Several evaluations are cited, including:
- A reported 13% lift on a 93-task coding benchmark compared with Opus 4.6 (including tasks neither Opus 4.6 nor Sonnet 4.6 could solve).
- A CursorBench jump (70% vs 58% for Opus 4.6).
- A large improvement on Rakuten-SWE-Bench (described as resolving 3× more production tasks than Opus 4.6, with gains in code and test quality).
Vision upgrades: higher-resolution images, more practical multimodal work
Opus 4.7 also gets “substantially better vision,” with support for images up to 2,576 pixels on the long edge (~3.75 MP)—more than three times the limit of prior Claude models, according to Anthropic. The pitch here is less about novelty and more about practicality: dense screenshots, complex technical diagrams, and pixel-sensitive references in workflows like computer-use agents or diagram-based extraction.
Anthropic notes this is a model-level change rather than an API parameter. The tradeoff is predictable: higher-resolution images consume more tokens, and Anthropic suggests downsampling images when extra detail isn’t needed.
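That downsampling advice is simple arithmetic. The sketch below computes dimensions that keep an image's long edge within the 2,576-pixel figure from the announcement; the helper itself is illustrative, not part of any SDK.

```python
# Cap from the Opus 4.7 announcement; everything else here is illustrative.
MAX_LONG_EDGE = 2576

def fit_long_edge(width: int, height: int, max_long_edge: int = MAX_LONG_EDGE) -> tuple[int, int]:
    """Return (width, height) scaled so the long edge fits; no-op if already within the cap."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

# With an imaging library such as Pillow, you would then resize, e.g.:
#   img.resize(fit_long_edge(*img.size), Image.LANCZOS)
```

Downsampling before upload trades detail for tokens, so it makes sense for images where pixel-level fidelity isn't the point.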
Cybersecurity safeguards and the Cyber Verification Program
The Opus 4.7 launch is tied to Anthropic’s earlier Project Glasswing announcement about cybersecurity risk. The company reiterates that Claude Mythos Preview will remain limited while safeguards are tested first on less capable models. Opus 4.7 is the first model released under that plan, with automatic detection and blocking for requests that indicate prohibited or high-risk cybersecurity uses.
For legitimate security work—vulnerability research, penetration testing, and red-teaming—Anthropic is directing professionals to its new Cyber Verification Program.
Platform and product updates landing alongside the model
Opus 4.7 comes with a cluster of practical developer-facing changes:
More effort control (including xhigh)
A new xhigh effort level sits between high and max, giving finer control over the reasoning/latency tradeoff. Anthropic also says Claude Code’s default effort level has been raised to xhigh for all plans, and recommends starting with high or xhigh for coding and agentic use cases.
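The guidance above can be encoded as a tiny selection helper. The level names come from the announcement, but how effort is actually set on a request (parameter name, SDK surface) is not specified here, so treat this as a sketch of the stated recommendation only.

```python
# Effort levels named in the announcement, ordered from less to more reasoning.
# How a level is passed to the API is an assumption left out here; this only
# encodes Anthropic's stated guidance for choosing one.
EFFORT_LEVELS = ["high", "xhigh", "max"]  # xhigh sits between high and max

def recommended_effort(use_case: str) -> str:
    """Stated guidance: start with high or xhigh for coding and agentic work."""
    return "xhigh" if use_case in {"coding", "agentic"} else "high"
```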
Task budgets (public beta) on the API
On the Claude Platform, Anthropic is also introducing task budgets in public beta, intended to help guide token spend across longer runs.
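The announcement doesn't detail the beta's API surface, but the underlying idea—capping token spend across a long run—can be illustrated with a client-side sketch. This is not the real task-budget parameter; it only shows the bookkeeping the feature aims to handle for you.

```python
# Client-side illustration of the *idea* behind task budgets: cap token spend
# across a long run and stop dispatching work once the cap is hit. Not the
# actual beta API, whose parameters the announcement does not describe.
class TokenBudget:
    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.used = 0

    def record(self, tokens: int) -> None:
        """Charge one turn's input + output tokens against the budget."""
        self.used += tokens

    @property
    def remaining(self) -> int:
        return max(self.limit - self.used, 0)

    def exhausted(self) -> bool:
        return self.used >= self.limit
```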
Claude Code: /ultrareview and expanded auto mode
Claude Code adds a new /ultrareview slash command for a dedicated review session that reads through changes and flags bugs and design issues “a careful reviewer would catch.” Pro and Max users get three free ultrareviews to try it. Anthropic also says it has extended auto mode to Max users.
Availability, pricing, and the migration gotchas
Opus 4.7 is available today across Claude products and the API, plus Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can target claude-opus-4-7 via the Claude API.
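A minimal request sketch follows. The model string comes from the announcement; max_tokens and the message content are placeholder values, and an actual call would go through an official SDK with an API key configured.

```python
# Minimal request sketch targeting the new model id. Only the model string is
# from the announcement; max_tokens and the message are placeholders.
payload = {
    "model": "claude-opus-4-7",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Review this function for off-by-one errors."}
    ],
}
# With the official Python SDK this would be sent as, roughly:
#   client.messages.create(**payload)
```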
Anthropic flags two migration considerations that can affect token usage:
- An updated tokenizer, where the same input may map to more tokens (roughly 1.0–1.35×, depending on content type).
- More “thinking” at higher effort levels, particularly in later turns for agentic settings, which can increase output tokens.
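The tokenizer change is easy to budget for with a worst-case projection. The sketch below applies the stated 1.0–1.35× range to an existing input-token count and prices it at the unchanged $5-per-million input rate; the helper is illustrative arithmetic, not an official estimator.

```python
import math

# USD per million input tokens (unchanged from Opus 4.6, per the announcement).
INPUT_PRICE_PER_MTOK = 5.00

def projected_input_cost(opus46_tokens: int, multiplier: float = 1.35) -> float:
    """Worst-case input cost if the new tokenizer expands counts by `multiplier`
    (the announcement's stated range is roughly 1.0-1.35x, content-dependent)."""
    projected = math.ceil(opus46_tokens * multiplier)
    return projected * INPUT_PRICE_PER_MTOK / 1_000_000
```

For example, a 2M-token prompt history that cost $10 in input tokens on Opus 4.6 could cost up to $13.50 after migration, before any increase in output-side "thinking" tokens.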
Anthropic has published a dedicated migration guide covering both changes. For deeper alignment and safety detail, it points to the Claude Opus 4.7 System Card.