Claude Opus 4.7 gets auto mode, recaps, and effort control

Anthropic’s Boris Cherny shared new workflow tweaks for Claude Opus 4.7 focused on longer, less interruption-prone coding runs. Highlights include auto mode for safer auto-approvals, recaps and focus mode, and verification tools like /go for end-to-end tests.


TL;DR

  • Auto mode (Opus 4.7): Model-based classifier auto-approves “safe” commands; positioned as alternative to --dangerously-skip-permissions
  • Auto mode availability: Max, Teams, Enterprise; enable via Shift-Tab (CLI) or dropdown in Desktop/VSCode
  • /fewer-permission-prompts: Flags recurring “safe but annoying” bash/MCP prompts; recommends allowlist entries to reduce interruptions
  • Recaps + CLI /focus: Recaps summarize progress/next steps; focus mode hides intermediate work to emphasize final output
  • /effort replaces thinking budgets: Lower effort boosts speed and reduces tokens; xhigh default; max for hardest tasks, session-only
  • Verification loops + /go: Encourage end-to-end tests via bash/browser/computer use; /go bundles testing, /simplify, and PR creation

Anthropic’s Boris Cherny laid out a set of workflow tweaks for Opus 4.7 in a detailed thread on X, after “dogfooding” the model for several weeks and reporting higher day-to-day productivity. The tips focus on reducing interruption-heavy approvals, making long-running sessions easier to pick up again, and pushing Claude Code toward more reliable end-to-end execution.

Cherny’s throughline is straightforward: Opus 4.7 is designed to run longer and take on more complex tasks—deep research, refactoring, feature-building, and iteration toward performance benchmarks—so the surrounding tooling needs to be tuned for fewer interruptions and better feedback loops.

Auto mode aims to remove permission babysitting

The headline change is “auto mode,” which Cherny describes as a safer alternative to --dangerously-skip-permissions. Instead of prompting repeatedly for approvals during long runs, permission prompts are routed to a model-based classifier that decides whether a command is safe; safe commands are auto-approved.

Cherny argues this doesn’t just reduce babysitting—it also enables more parallel work, since multiple “Claudes” can run while attention moves elsewhere.

Auto mode is available for Opus 4.7 to Max, Teams, and Enterprise users, and can be enabled via Shift-Tab in the CLI or from a dropdown in Desktop or VSCode.

A new skill flags “safe but annoying” prompts

For sessions that still rely on manual approvals, Cherny also pointed to a new /fewer-permission-prompts skill. It scans session history to identify recurring bash and MCP commands that are likely safe but repeatedly trigger permission prompts, then recommends commands to add to an allowlist.

Cherny positioned it as a way to “tune up” permissions and cut down on unnecessary interruptions, particularly outside auto mode. (Link shared in-thread: https://t.co/VOjwuW0FJx)
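Allowlist entries of this kind live in Claude Code's settings files (for example a project's `.claude/settings.json`). The rules the skill would actually suggest depend on your own session history; the entries below are hypothetical examples following the documented permission-rule format:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Bash(git diff:*)"
    ]
  }
}
```

The `:*` suffix matches a command prefix, so a single rule can cover a family of invocations (here, any `npm run test` variant) without approving arbitrary shell commands.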

Recaps and focus mode reduce “session overhead”

Two other additions are about readability rather than capability.

  • Recaps: short summaries of what an agent did and what comes next, intended for returning to long-running sessions after minutes or hours. Cherny said recaps shipped earlier in the week “to prep for Opus 4.7.” (Link shared in-thread: https://t.co/zN4zAp91y4)
  • Focus mode (CLI): hides intermediate work to emphasize the final result, toggled with /focus. Cherny said trust in the model’s command execution and edits has risen enough that reviewing final output often suffices.

“Effort” replaces thinking budgets

Opus 4.7 uses “adaptive thinking” rather than “thinking budgets,” according to Cherny, and the primary tuning knob is effort. Lower effort trades intelligence/capability for speed and lower token usage; higher effort prioritizes deeper work.

Cherny said xhigh effort is used for most tasks, with max effort reserved for the hardest ones. Max applies only to the current session, while other levels persist across sessions. Effort is adjusted with the /effort command.

Verification, /go, and the push toward reliable loops

The final tip is about building verification into the workflow so the model can prove its own work. Cherny said this has “always” been a way to get 2–3x more out of Claude, and matters more as tasks run longer.

Examples cited:

  • Backend: ensure Claude knows how to start a server/service and test end-to-end.
  • Frontend: use the Claude Chromium extension to give Claude a way to control a browser.
  • Desktop apps: use “computer use.”
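The backend pattern above — start a service, exercise it end-to-end, assert on the observed behavior — can be sketched as a self-contained script. The server, route, and response here are hypothetical stand-ins for a real project's stack:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal health-check route for the verification loop to hit.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the check's output quiet

# 1) Start the service in the background (port 0 = any free port).
server = HTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# 2) Exercise it end-to-end, the way an agent's verification step would.
url = f"http://127.0.0.1:{server.server_port}/ping"
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode()

# 3) Assert on observed behavior, then tear down.
assert resp.status == 200 and body == "ok"
server.shutdown()
print("end-to-end check passed")
```

The point is not this particular script but the shape of the loop: the model needs a repeatable command it can run itself, whose pass/fail outcome tells it whether the work is actually done.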

Cherny also highlighted a /go skill he uses in prompts, which bundles: (1) end-to-end testing via bash/browser/computer use, (2) running /simplify, and (3) putting up a PR.
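Cherny didn't publish the skill's contents. Purely as an illustrative sketch — every name and step below is assumed, not taken from the thread — a custom skill of this shape is defined in a `SKILL.md` file with YAML frontmatter, which Claude Code loads by name:

```markdown
---
name: go
description: Finish the current task end-to-end - test, simplify, open a PR
---

1. Verify the change end-to-end: run the test suite via bash; for
   user-facing changes, exercise the UI via browser or computer use.
2. Run /simplify on the code that was touched.
3. Put up a PR summarizing the change and the verification evidence.
```

The appeal of bundling is that the closing steps become a single command, so long runs end in a consistent, reviewable state instead of wherever the session happened to stop.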

Light analysis: fewer interruptions, more autonomy—and new friction points

Taken together, these changes read like an attempt to make “agentic” coding less about one-shot generation and more about sustained execution with guardrails: auto-approvals where safe, less noisy permission management, and built-in mechanisms for resurfacing context and validating results. At the same time, replies show the balance remains delicate—some users praised verification and reduced prompts, while others raised concerns about slowness, restrictions, and workflow questions like planning with Opus and executing with Sonnet.

Source: https://x.com/bcherny/status/2044847848035156457
