AI coding assistants may create cognitive debt beyond technical debt

In a new post, Margaret-Anne Storey argues AI-crafted code can still leave teams “lost,” as shared understanding erodes faster than the codebase. She outlines why cognitive debt is different—and what habits help prevent it. Source: https://margaretstorey.com/blog/2026/02/09/cognitive-debt/

TL;DR

  • Cognitive debt vs technical debt: Understanding erodes in people’s heads, even when code looks clean
  • “Program is a theory”: Shared mental models matter; source code alone cannot preserve intent
  • Example: student project stalled when changes broke; root cause was lost rationale and component fit
  • AI agents amplify coordination: More implicit decisions, higher overhead, and working-memory overload despite summarization
  • Debt-reducing practices: Pair programming, refactoring, test-driven development, and deliberate review rituals
  • AI guardrail: Require one human fully understands changes before shipping; document what changed and why

Margaret-Anne Storey’s latest post, “How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt,” makes a clean distinction that’s likely to resonate with teams leaning harder on AI-assisted coding: technical debt tends to be framed as something embedded in code and architecture, while cognitive debt accumulates in developers’ heads as shared understanding quietly erodes.

From “messy code” to “lost the plot”

The core idea is deceptively simple: even if an agent produces code that’s readable or “well-structured,” projects can still grind to a halt when humans can’t explain what the system is supposed to do, why certain choices were made, or how to safely change it. Storey ties this to Peter Naur’s view that a program is a theory—a mental model distributed across individuals and teams, not merely a repository of source files.

A concrete example anchors the argument: a student team moving quickly over a semester hit a point where even small changes caused surprising breakage. The initial diagnosis was technical debt, but the deeper issue was cognitive: nobody could articulate the intent behind key decisions or how components were meant to fit together. The most painful debt wasn’t in the codebase—it was in the fragmented understanding around it.

AI agents can amplify coordination and hidden decisions

Storey also pulls in an old but sturdy warning from Fred Brooks: adding more “agents” (human or otherwise) increases coordination overhead and the number of implicit decisions that never fully land as shared knowledge. AI can help summarize and explain changes, but the post argues that the broader push for speed can still overwhelm human working memory—especially when teams skip the work of making “hard changes easy.”

Practices that target cognitive debt—not just code quality

Rather than pitching new tooling, the post emphasizes proven team practices—pair programming, refactoring, test-driven development, and deliberate review rituals—as ways to reduce both technical and cognitive debt. It also suggests a pragmatic guardrail for AI-generated code: requiring at least one human to fully understand a change before it ships, plus lightweight documentation of not only what changed but why.
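One way to make that guardrail concrete—a minimal sketch, not something Storey’s post prescribes—is a commit-message check that rejects changes lacking a rationale line and a named human who vouches for understanding the change. The `Why:` and `Understood-by:` trailer names here are illustrative assumptions, not an established convention:

```python
# Hypothetical guardrail sketch: flag commits that are missing a "Why:"
# rationale line or an "Understood-by:" human sign-off. The trailer names
# are assumptions chosen for this example, not from Storey's post.

REQUIRED_TRAILERS = ("Why:", "Understood-by:")

def check_commit_message(message: str) -> list[str]:
    """Return the required trailers that are missing from a commit message."""
    lines = [line.strip() for line in message.splitlines()]
    return [
        trailer
        for trailer in REQUIRED_TRAILERS
        if not any(line.startswith(trailer) for line in lines)
    ]

# A change that documents what changed, why, and who understands it:
ok = check_commit_message(
    "Refactor session cache\n"
    "\n"
    "Why: the old eviction policy caused stale reads under load\n"
    "Understood-by: Jane Doe <jane@example.com>\n"
)

# A bare summary with no rationale or sign-off:
missing = check_commit_message("Refactor session cache")
```

A script like this could run as a pre-receive hook or CI step, so the “at least one human fully understands this” rule becomes a visible gate rather than an informal norm.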

Storey closes by pointing toward open research questions—how cognitive debt might be measured and caught early, and how it scales in distributed and open-source environments—alongside plans to explore the topic further in an upcoming ICSE Technical Debt Conference keynote and panel.

Source: How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt
