Why AI prompting works better with “gates” than rules

A recent post by the author breaks down “gates” in AI prompts—explicit conditions that must be met before an agent can move on. Unlike rules, which can be hand-waved, gates force checkable steps (such as having URLs in hand) and pair well with external “hooks.”

TL;DR

  • Gates: Explicit conditions that must be satisfied before an agent can proceed
  • Rules vs. gates: Rules enable opt-outs (“later”); gates block progress until conditions are met
  • Agent failures often stem from momentum, not ignorance; continuing is easier than stopping
  • Prompt pattern: Verification becomes a sequence—web search → URLs obtained → then respond
  • Evaluability: Replace “did I verify?” with “do I have URLs?”; otherwise label output unverified
  • Hooks: External enforcement; rules = guidance, gates = internal checks, hooks = tool-level guardrails

A recurring theme in AI-assisted coding is that small prompt structures can have outsized effects on agent behavior. In a recent post, “Rules and gates,” the author digs into a deceptively simple pattern: the “gate,” an explicit condition that must be satisfied before an agent is allowed to proceed.

The framing is practical. Instead of piling on more “be careful” instructions, it treats a prompt like a workflow with checkable state—something an agent can evaluate objectively rather than “remember” to do.

Rules vs. gates: the opt-out problem

The key distinction is that a rule often has an implicit escape hatch. Even if the rule sounds strict, it lives as an intention: easy to rationalize past in the moment.

A gate, by contrast, is written so the next step is blocked until a concrete condition is met. It’s not just “do X,” but “do X, then verify Y, then continue.” The post captures it as a difference in execution:

  • A rule can turn into “I’ll do it later.”
  • A gate becomes “I can’t proceed because the condition isn’t satisfied.”

This matters because agent failures often aren’t about ignorance—they’re about momentum. If the agent can keep going, it frequently will.

A concrete prompting pattern: make the sequence explicit

The post’s example is a familiar one for anyone building agentic systems: verification before asserting facts.

A rule version might be: verify claims with web research before asserting them. It’s well-intentioned, but easy for an agent to “mentally check off” without actually doing the work—especially when the answer “feels” correct.

The gate version turns verification into an explicit sequence: when a claim about what exists (or doesn’t) is forming → web search happens → URLs are in hand → then the agent speaks.

The critical mechanism here is evaluability: the gate turns “did I verify this?” into “do I have URLs?” That’s a concrete question with a hard, inspectable answer. Until the answer is “yes,” the allowed output collapses to something like “this is based on training data and is unverified.”
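The evaluability idea can be made concrete with a small sketch. The function below is an illustrative assumption (the post describes the pattern in prose, not code): the gate reduces “did I verify?” to an inspectable check on whether URLs are present, and the fallback output is forced when they aren’t.

```python
# Minimal sketch of the URL gate described above. The function name and
# structure are hypothetical, not taken from the original post.

def gate_verified_claim(claim: str, urls: list[str]) -> str:
    """Block an asserted claim until concrete evidence (URLs) is in hand."""
    if not urls:
        # Gate not satisfied: the allowed output collapses to the
        # hedged, clearly-labeled fallback.
        return f"[unverified, based on training data] {claim}"
    # Gate satisfied: the claim may be asserted, with sources attached.
    return f"{claim} (sources: {', '.join(urls)})"

print(gate_verified_claim("The library supports streaming.", []))
print(gate_verified_claim("The library supports streaming.",
                          ["https://example.com/docs"]))
```

The point of the design is that `urls` is hard state: either the list is non-empty or it isn’t, with no room for the agent to “mentally check off” the verification step.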

Where hooks fit: enforcement from the outside

The post also distinguishes gates from “hooks,” common in coding agent harnesses. Hooks are external triggers—classical software that runs when certain actions occur.

Using a street-crossing analogy:

  • Rule: Don’t cross without looking.
  • Gate: A step-by-step checklist (look left, confirm zero vehicles; look right, confirm zero vehicles; etc.) that must be completed before crossing.
  • Hook: A crossing guard that physically stops movement into traffic.

In other words: rules are internal guidance, gates are internal state checks, hooks are external enforcement. The trio maps neatly onto how modern agent stacks are built: prompt constraints, structured workflows, and tool-level guardrails.
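The gate/hook split from the analogy can be sketched in code. This is a toy illustration under assumed names, not the post’s implementation: the gate is an internal checklist inside the action, while the hook is external code wrapped around it that can block the call no matter what the agent decided internally.

```python
# Illustrative sketch of the street-crossing analogy. All names here
# are assumptions for the example, not from the original post.

def crossing_guard(action):
    # Hook: classical software that runs on every call and can
    # physically stop the action from the outside.
    def guarded(*args, traffic_clear=False):
        if not traffic_clear:
            return "blocked by crossing guard"
        return action(*args)
    return guarded

@crossing_guard
def cross_street(looked_left, looked_right):
    # Gate: an internal checklist the agent must complete first.
    if not (looked_left and looked_right):
        return "gate failed: checklist incomplete"
    return "crossed"

print(cross_street(True, True))                      # hook blocks: traffic not confirmed
print(cross_street(True, True, traffic_clear=True))  # prints "crossed"
```

Note that the hook fires before the gate is ever consulted, which is exactly the layering the post describes: external enforcement backstops internal checks.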
