“How to Kill the Code Review” makes a blunt case: traditional PR review is already struggling, and the AI era is turning that strain into a structural mismatch. The post’s core premise is that the volume and size of changes are scaling faster than human attention, especially as AI adoption boosts throughput—and with it, backlogs of diffs that still expect careful, line-by-line review.
Why “read every diff” stops scaling
The argument starts from a familiar failure mode in modern engineering orgs: PRs linger, approvals become performative, and large diffs get skimmed because reviewers still have their own work. The piece points to data suggesting that teams with high AI adoption ship more—more tasks completed and far more PRs merged—but also see review time increase sharply, creating a bottleneck around the very gate meant to protect quality.
That framing sets up the central claim: manual code review can’t sustainably be the primary quality gate when code can be produced faster than it can be read.
A shift from reviewing code to reviewing intent
Instead of betting on ever-better AI review tools bolted onto the same PR workflow, the post pushes a shift upstream: make specs, plans, constraints, and acceptance criteria the artifacts humans scrutinize. In that model, specs become the source of truth and code becomes an output to be verified against explicit rules, not a document to be audited by eyeballing diffs.
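To make that model concrete, here is a minimal sketch of "spec as source of truth": the acceptance criteria are written first as executable rules, and any generated implementation is verified against them rather than read line by line. The `SPEC` constants, `verify_against_spec`, and the refund domain are illustrative assumptions, not details from the original post.

```python
# Hypothetical example: the spec, not the diff, is what a human reviews.
SPEC = {
    "max_refund_days": 30,   # constraint fixed in the spec, not the code
    "requires_receipt": True,
}

def verify_against_spec(refund_impl):
    """Run the spec's rules against a candidate implementation."""
    # Given a 10-day-old purchase with a receipt, a refund is allowed.
    assert refund_impl(days_since_purchase=10, has_receipt=True) is True
    # Given a purchase older than max_refund_days, it is rejected.
    assert refund_impl(days_since_purchase=SPEC["max_refund_days"] + 1,
                       has_receipt=True) is False
    # Given no receipt when one is required, it is rejected.
    assert refund_impl(days_since_purchase=5, has_receipt=False) is False
    return True

# A candidate implementation (in this model, likely AI-generated):
def refund(days_since_purchase, has_receipt):
    return days_since_purchase <= SPEC["max_refund_days"] and has_receipt

print(verify_against_spec(refund))  # True
```

The point of the sketch is the inversion of effort: the human-authored artifact is `SPEC` and the assertions, while `refund` is an output that either clears the rules or gets regenerated.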
The “layered trust” alternative (without spoiling the whole blueprint)
A big chunk of the essay is spent outlining a layered replacement for code review—less “approval UI,” more stacked verification. Among the layers it highlights:
- Multiple competing AI attempts, selected by objective signals (for example, verification coverage or smaller diffs)
- Deterministic guardrails like tests, type checks, linters, and contract checks defined ahead of time
- Human-authored acceptance criteria (with a nod to BDD-style specs)
- Fine-grained permissions that constrain what an agent can touch, with escalation triggers for sensitive areas
- Adversarial verification, where separate agents implement and attempt to break changes
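Two of those layers, deterministic guardrails and selection among competing attempts by objective signals, compose naturally. A minimal sketch, assuming a hypothetical `Attempt` record per candidate (the field names and selection rule are my illustration, not the post's blueprint):

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    branch: str
    diff_lines: int      # objective signal: smaller diffs score better
    checks_passed: bool  # deterministic guardrails: tests, types, lint

def select_attempt(attempts):
    """Only attempts that clear every deterministic guardrail are
    eligible; among those, the smallest diff wins. No human reads
    the losing diffs at all."""
    eligible = [a for a in attempts if a.checks_passed]
    if not eligible:
        return None  # nothing merges; escalate to a human instead
    return min(eligible, key=lambda a: a.diff_lines)

attempts = [
    Attempt("agent-a", diff_lines=340, checks_passed=True),
    Attempt("agent-b", diff_lines=120, checks_passed=True),
    Attempt("agent-c", diff_lines=45, checks_passed=False),
]
print(select_attempt(attempts).branch)  # agent-b
```

Note the design choice embedded here: the smallest diff is never allowed to win on size alone; failing any ahead-of-time guardrail disqualifies it outright, which is what makes the gate deterministic rather than a judgment call.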
It’s a strongly opinionated read, especially in how it reframes human value as problem definition and constraints, not post-hoc inspection.
Original source: https://www.latent.space/p/reviews-dead