Claude Code adds GitHub PR reviews with @claude review mentions

Anthropic’s Claude Code now lets teams tag a GitHub PR with @claude review for automated, line-level feedback. The research preview is available on Teams and Enterprise plans, supports REVIEW.md and CLAUDE.md for custom rules, and bills per token, typically $15–$25 per review.

TL;DR

  • GitHub PR workflow: Tag with @claude review for automated, line-level inline comments with severity and reasoning
  • Availability limits: Research preview for Teams and Enterprise; unavailable for orgs with Zero Data Retention
  • Trigger modes: After PR creation, after every push, or manual; @claude review also opts the PR into push-triggered reruns
  • Review pipeline: Parallel specialized agents + verification to reduce false positives; deduplicate/rank; ~20 minutes average
  • Findings and focus: Normal/Nit/Pre-existing; defaults to correctness issues, not formatting or test coverage
  • Setup, permissions, cost: Install Claude GitHub App (Contents/Issues/Pull requests); token-billed extra usage, ~$15–$25; spend caps and analytics dashboards available

Code Review in Claude Code adds a familiar workflow to AI-assisted coding: tag a PR with @claude review and get automated, line-level feedback directly inside GitHub. The feature is currently in research preview for Teams and Enterprise subscriptions, and it’s explicitly not available to organizations with Zero Data Retention enabled.

Rather than trying to replace existing review gates, Claude’s approach is to operate alongside them. Reviews don’t approve or block merges; instead, findings land as inline comments on the exact lines that triggered an issue, with severity tags and an expandable reasoning section.

How Claude Code reviews run (and what happens in GitHub)

Once an org admin enables Code Review and selects repos, reviews can be configured to trigger:

  • Once after PR creation (when opened or marked ready)
  • After every push (continuous feedback as commits land)
  • Manual (only after a @claude review comment)

Regardless of the chosen mode, writing @claude review as a top-level PR comment can start a review. That same comment also opts the PR into push-triggered reviews going forward, meaning subsequent pushes will automatically re-run the reviewer for that PR.
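Since @claude review is just a top-level PR comment, it can also be posted from the command line with the GitHub CLI. A minimal sketch, assuming gh is installed and authenticated; the PR number 123 is a placeholder:

```shell
# Placeholder PR number -- replace with your own PR.
PR=123
BODY="@claude review"

# Posts a top-level comment on the PR, which starts a review and also
# opts this PR into push-triggered reruns going forward:
command -v gh >/dev/null && gh pr comment "$PR" --body "$BODY"
echo "Would comment on PR #$PR: $BODY"
```

Note that the same opt-in applies however the comment is posted: once @claude review lands on a PR, later pushes re-run the reviewer.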

Under the hood, the system runs multiple specialized agents in parallel against the diff and surrounding context, then performs a verification step intended to filter false positives. Claude then deduplicates and ranks results before posting them back to the PR. Anthropic says reviews finish in about 20 minutes on average, with cost scaling by PR size and complexity.

Severity levels and default review focus

Findings are categorized into three buckets:

  • Normal: a bug to fix before merging
  • Nit: a minor issue worth fixing but not blocking
  • Pre-existing: an issue that already existed, not introduced by the PR

By default, Code Review mostly targets correctness issues that could break production, not formatting preferences or missing test coverage—though that scope can be expanded with repo guidance.

Customizing what gets flagged with CLAUDE.md and REVIEW.md

Claude Code reads two optional files to tune review behavior:

  • CLAUDE.md: general project instructions used across Claude Code tasks (not just reviews). Newly introduced violations are treated as nit-level findings. Claude can also flag when code changes make the doc itself outdated. Notably, CLAUDE.md can exist at multiple directory levels, applying rules by subtree.
  • REVIEW.md: review-only rules, discovered automatically at the repository root. This is for things like team conventions, “always flag” requirements (for example, integration tests for new API routes), or explicit exclusions (like generated code directories).
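To make the split concrete, a hypothetical REVIEW.md at the repository root might look like this (the specific rules below are illustrative, not taken from Anthropic’s documentation):

```markdown
# Review rules

## Always flag
- New API routes without a corresponding integration test
- SQL queries built by string concatenation

## Exclude
- Anything under `generated/`
- Formatting-only changes
```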

This split is pragmatic: REVIEW.md can stay narrowly focused on what matters during PR scrutiny without cluttering broader Claude Code usage rules.

Setup, permissions, and usage tracking

Setup runs through Claude Code admin settings at claude.ai/admin-settings/claude-code and installs the Claude GitHub App, requesting read/write access for Contents, Issues, and Pull requests. After enabling repos, admins can verify things are working by opening a test PR and looking for a check run named Claude Code Review.
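That verification step can also be scripted. A sketch using the GitHub CLI, assuming gh is available and 123 is a placeholder test-PR number:

```shell
# List the test PR's check runs and look for the Code Review check.
# The check run name "Claude Code Review" comes from the setup docs;
# the PR number is a placeholder.
EXPECTED="Claude Code Review"
command -v gh >/dev/null && gh pr checks 123 2>/dev/null | grep "$EXPECTED"
echo "expected check run name: $EXPECTED"
```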

Admins can also monitor activity via an analytics dashboard at claude.ai/analytics/code-review, including PRs reviewed, weekly cost, auto-resolved feedback counts, and per-repo breakdowns.

Pricing and cost controls

Code Review is billed on token usage and is handled as extra usage, separate from any plan-included usage. Reviews typically cost $15–$25, though pricing scales with PR size, overall codebase complexity, and verification workload. Trigger choice matters, too: “after every push” can multiply total spend quickly, and even in Manual mode, commenting @claude review effectively enables push-triggered reviews for that PR.
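A back-of-envelope sketch shows why trigger choice dominates spend. All numbers below are assumptions for illustration (hypothetical team volume and rerun rate), not Anthropic pricing, except the $15–$25 per-review range from the docs:

```python
# Illustrative monthly-spend estimate for the two extreme trigger modes.
avg_cost_per_review = 20.0  # midpoint of the documented $15-$25 range
prs_per_week = 10           # assumed team volume (hypothetical)
reviews_per_pr = 4          # assumed reruns per PR in "after every push" mode
weeks_per_month = 4

# One review per PR (creation-triggered or a single manual @claude review):
once_mode_monthly = avg_cost_per_review * prs_per_week * weeks_per_month

# A rerun on every push multiplies that by the reruns per PR:
push_mode_monthly = once_mode_monthly * reviews_per_pr

print(f"once-per-PR: ${once_mode_monthly:,.0f}/month")
print(f"every-push:  ${push_mode_monthly:,.0f}/month")
```

Under these assumptions the every-push mode costs four times as much, which is why the spend caps below matter.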

Monthly spend caps can be set under claude.ai/admin-settings/usage, and spend can be monitored via analytics and per-repo average cost data in admin settings.

For the full documentation, see the original source: https://code.claude.com/docs/en/code-review#set-up-code-review.
