Opus 4.7 makes Claude Code more autonomous with auto mode

Anthropic’s Opus 4.7 is pushing Claude Code toward longer, more agentic workflows. Boris Cherny details auto mode for fewer permission interruptions, plus recaps, focus mode, effort tuning, and a /go verification loop that can end in a PR.


TL;DR

  • Auto mode (Opus 4.7): Model-based classifier auto-approves safe commands; reduces prompt loops; enables parallel “Claudes”
  • Availability: Auto mode for Max, Teams, Enterprise; enable via Shift-tab (CLI) or Desktop/VSCode dropdown
  • /fewer-permission-prompts: Scans history for repeated bash/MCP prompts; recommends permissions allowlist entries https://t.co/VOjwuW0FJx
  • Recaps: Short summaries of what happened and what’s next for returning after long gaps https://t.co/zN4zAp91y4
  • CLI focus mode: /focus hides intermediate work; emphasizes final results for review-oriented workflows
  • Effort + verification: /effort sets adaptive thinking; /go runs E2E tests, /simplify, then opens a PR

Anthropic’s developer tooling continues to tilt toward longer-running, more autonomous workflows, and a recent thread from Boris Cherny frames what that looks like in practice with Opus 4.7. After dogfooding the model for a few weeks, Cherny outlined a set of small workflow tweaks—mostly in Claude Code and adjacent surfaces—that aim to make “agentic” sessions feel less like a demo and more like something that fits into day-to-day engineering.

Auto mode: fewer interruptions, more parallel work

The biggest practical change in the thread is auto mode, positioned as an answer to a familiar pain point: getting stuck in a loop of permission prompts during complex tasks like deep research, refactoring, feature work, and performance iteration.

Previously, long-running sessions often meant either babysitting or using --dangerously-skip-permissions. Auto mode instead routes permission prompts to a model-based classifier that decides whether a command is safe; if so, it’s auto-approved. The obvious win is uninterrupted execution, but Cherny calls out a second-order effect: more parallelism, with multiple “Claudes” running while attention shifts elsewhere.

Auto mode is available for Opus 4.7 for Max, Teams, and Enterprise users, and can be activated via Shift-tab in the CLI, or via a dropdown in Desktop or VSCode.

A more surgical approach: /fewer-permission-prompts

For cases where auto mode isn’t desirable (or where a team wants tighter control), Cherny points to /fewer-permission-prompts. The skill scans session history, identifies repeated bash and MCP commands that triggered prompts, and then recommends entries to add to a permissions allowlist.

That framing is notable: instead of treating permission prompts as pure friction, it treats them as a signal that the allowlist isn’t yet tuned to the actual workflow.
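
As a rough illustration (the specific command patterns below are placeholders, not examples from the thread), the recommended entries typically land in a Claude Code settings file along these lines:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(git diff:*)",
      "mcp__github__list_pull_requests"
    ]
  }
}
```

Once an entry is on the allowlist, matching commands run without prompting, so repeated approvals for the same safe operations disappear while everything else still asks.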

Link: https://t.co/VOjwuW0FJx

Recaps for long-running sessions

Cherny also highlighted recaps, shipped earlier in the week “to prep for Opus 4.7.” They’re described as short summaries of what an agent did and what’s next, aimed at making it easier to return to a session after minutes—or hours—away.

Link: https://t.co/zN4zAp91y4

Focus mode in the CLI

Another CLI-oriented addition is focus mode, which hides intermediate work to emphasize the final output. Cherny describes it as a better fit now that the model can generally be trusted to run appropriate commands and edits, with the workflow shifting toward reviewing results rather than watching the entire process.

Command: /focus toggles it on/off.

Effort levels replace “thinking budgets” tuning

On configuration, Cherny notes that Opus 4.7 uses adaptive thinking rather than “thinking budgets,” and suggests tuning effort to trade speed and token usage against capability. In Cherny's own workflow: xhigh for most tasks, max for the hardest ones. The max level applies only to the current session, while other effort levels persist.

Command: /effort sets the effort level.

Verification as a first-class habit: /go pipelines testing and a PR

The final tip is less about a single feature and more about elevating the feedback loop: giving Claude a way to verify its work. Cherny argues this can deliver a 2–3× improvement in outcomes and is “more important than ever” with 4.7, especially with long-running work where confidence matters when returning later.

Verification differs by target:

  • Backend: ensure Claude knows how to start the service and test end-to-end
  • Frontend: use the Claude Chromium extension so it can control the browser
  • Desktop apps: use computer use
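
One lightweight way to make the backend case concrete (the commands below are hypothetical placeholders, not from the thread) is to document the start and test commands in the project's CLAUDE.md so the agent knows how to verify its own work:

```markdown
## Running and verifying

- Start the service locally: `make dev` (serves on http://localhost:8080)
- Run end-to-end tests: `make e2e`
- A change is only done when `make e2e` passes against the running service.
```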

Cherny also mentions a prompting pattern that ends with /go, described as a skill that has Claude:

  1. test end-to-end using bash, browser, or computer use
  2. run /simplify
  3. put up a PR
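
Cherny doesn't share the skill's contents, but a minimal sketch of a custom skill along those lines (the file wording and structure here are assumptions based on how Claude Code skills are generally defined, not the actual skill) might look like:

```markdown
---
name: go
description: Verify the current change end-to-end, simplify it, and open a PR
---

1. Test the change end-to-end: use bash for backend services, the browser
   for frontend changes, or computer use for desktop apps.
2. Run /simplify to reduce the diff to its essentials.
3. Create a branch, commit, and open a pull request summarizing what
   was verified and how.
```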

The common thread across all of these tips is less “prompt engineering” and more workflow engineering: reducing approval friction, summarizing progress, hiding noise, tuning compute, and closing the loop with systematic verification. In other words, the kinds of changes that make longer, more agentic sessions feel sustainable instead of fragile.

For more Claude Code workflow ideas in a similar vein, see: Claude Code power tips: loops, hooks, and remote workflows.

Original source: https://x.com/bcherny/status/2044847848035156457
