A recurring theme in AI-assisted coding is that small prompt structures can have outsized effects on agent behavior. In a recent post, “Rules and gates,” the author digs into a deceptively simple pattern: the “gate,” an explicit condition that must be satisfied before an agent is allowed to proceed.
The framing is practical. Instead of piling on more “be careful” instructions, it treats a prompt like a workflow with checkable state—something an agent can evaluate objectively rather than “remember” to do.
Rules vs. gates: the opt-out problem
The key distinction is that a rule often has an implicit escape hatch. Even if the rule sounds strict, it lives as an intention: easy to rationalize past in the moment.
A gate, by contrast, is written so the next step is blocked until a concrete condition is met. It’s not just “do X,” but “do X, then verify Y, then continue.” The post captures it as a difference in execution:
- A rule can turn into “I’ll do it later.”
- A gate becomes “I can’t proceed because the condition isn’t satisfied.”
This matters because agent failures often aren’t about ignorance—they’re about momentum. If the agent can keep going, it frequently will.
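The post stays at the prompt level, but the rule/gate distinction maps naturally onto checkable state in code. Here is a minimal sketch under assumed names (`GatedStep`, `tests_passed`, the deploy example are all illustrative, not from the post): the step's `run` method cannot reach the action until the gate condition evaluates true, so momentum alone cannot carry the agent past it.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class GatedStep:
    """A workflow step that cannot run until its gate condition holds."""
    name: str
    gate: Callable[[Dict[str, Any]], bool]    # objective check on state
    action: Callable[[Dict[str, Any]], str]   # the work the step performs

    def run(self, state: Dict[str, Any]) -> str:
        if not self.gate(state):
            # "I can't proceed because the condition isn't satisfied,"
            # not "I'll do it later."
            return f"BLOCKED: gate for '{self.name}' not satisfied"
        return self.action(state)

# Hypothetical example: a deploy step gated on tests actually having passed.
deploy = GatedStep(
    name="deploy",
    gate=lambda s: s.get("tests_passed", False),
    action=lambda s: "deployed",
)
```

The point of the sketch is that the gate is evaluated, not remembered: the rule version would live only in the action's intent, while here the blocked branch is the default.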
A concrete prompting pattern: make the sequence explicit
The post’s example is a familiar one for anyone building agentic systems: verification before asserting facts.
A rule version might read: “verify claims with web research before asserting them.” It’s well-intentioned, but easy for an agent to “mentally check off” without actually doing the work—especially when the answer “feels” correct.
The gate version turns verification into an explicit sequence: when a claim about what exists (or doesn’t) is forming → web search happens → URLs are in hand → then the agent speaks.
The critical mechanism here is evaluability: the gate turns “did I verify this?” into “do I have URLs?” That’s a concrete question with a hard, inspectable answer. Until the answer is “yes,” the allowed output collapses to something like “this is based on training data and is unverified.”
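That collapse of allowed output can be sketched directly; the helper name `gated_claim` and its signature are assumptions for illustration, not anything the post specifies. The gate question is literally “do I have URLs?”—an empty or missing list forces the hedged fallback string.

```python
from typing import List, Optional

# Until the gate condition is met, this is the only permitted output.
UNVERIFIED = "This is based on training data and is unverified."

def gated_claim(claim: str, urls: Optional[List[str]] = None) -> str:
    """Emit a claim only when concrete evidence (URLs) is in hand."""
    if not urls:
        # Gate not satisfied: no URLs, no assertion.
        return UNVERIFIED
    return f"{claim} (verified against: {', '.join(urls)})"
```

Note that the check is inspectable from outside the agent’s “intent”: a harness or a reviewer can test the same condition the gate tests.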
Where hooks fit: enforcement from the outside
The post also distinguishes gates from “hooks,” common in coding agent harnesses. Hooks are external triggers—classical software that runs when certain actions occur.
Using a street-crossing analogy:
- Rule: Don’t cross without looking.
- Gate: A step-by-step checklist (look left, confirm zero vehicles; look right, confirm zero vehicles; etc.) that must be completed before crossing.
- Hook: A crossing guard that physically stops movement into traffic.
In other words: rules are internal guidance, gates are internal state checks, hooks are external enforcement. The trio maps neatly onto how modern agent stacks are built: prompt constraints, structured workflows, and tool-level guardrails.
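The external-enforcement layer can be sketched too—this is a generic wrapper pattern, not the hook API of any particular harness, and the names (`with_hook`, `run_shell`, the `rm` check) are illustrative assumptions. The key property is that the check runs as classical software outside the model, so the blocked action simply never executes.

```python
from typing import Any, Callable, Dict

class HookBlocked(Exception):
    """Raised by the harness, not the agent: enforcement from the outside."""

def with_hook(tool: Callable[..., str],
              check: Callable[[Dict[str, Any]], bool]) -> Callable[..., str]:
    """Wrap a tool so a check fires before every invocation.

    Unlike a gate (internal state the agent evaluates), the hook is the
    crossing guard: it physically stops the call.
    """
    def wrapped(**kwargs: Any) -> str:
        if not check(kwargs):
            raise HookBlocked(f"hook refused call with args {kwargs}")
        return tool(**kwargs)
    return wrapped

# Hypothetical tool: a shell runner the hook shields from destructive commands.
def run_shell(cmd: str) -> str:
    return f"ran: {cmd}"

safe_shell = with_hook(run_shell, check=lambda a: "rm " not in a.get("cmd", ""))
```

In a real stack all three layers coexist: the prompt states the rule, the workflow encodes the gate, and the harness registers the hook.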