Margaret-Anne Storey’s latest post, “How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt,” makes a clean distinction that’s likely to resonate with teams leaning harder on AI-assisted coding: technical debt tends to be framed as something embedded in code and architecture, while cognitive debt accumulates in developers’ heads as shared understanding quietly erodes.
From “messy code” to “lost the plot”
The core idea is deceptively simple: even if an agent produces code that's readable or "well-structured," projects can still grind to a halt when humans can't explain what the system is supposed to do, why certain choices were made, or how to safely change it. Storey ties this to Peter Naur's argument in "Programming as Theory Building" that a program is a theory: a mental model distributed across individuals and teams, not merely a repository of source files.
A concrete example anchors the argument: a student team moving quickly over a semester hit a point where even small changes caused surprising breakage. The initial diagnosis was technical debt, but the deeper issue was cognitive: nobody could articulate the intent behind key decisions or how components were meant to fit together. The most painful debt wasn’t in the codebase—it was in the fragmented understanding around it.
AI agents can amplify coordination and hidden decisions
Storey also pulls in an old but sturdy warning from Brooks: adding more “agents” (human or otherwise) increases coordination overhead and the number of implicit decisions that never fully land as shared knowledge. AI can help summarize and explain changes, but the post argues that the broader push for speed can still overwhelm human working memory—especially when teams skip the work of making “hard changes easy.”
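Brooks's point in The Mythical Man-Month is often quantified as pairwise communication channels: among n contributors there are n(n-1)/2 potential channels, so coordination overhead grows quadratically while capacity grows only linearly. A minimal sketch of that arithmetic (the function name is ours, not from the post):

```python
# Brooks's observation: with n contributors (human or AI agent),
# potential pairwise communication channels grow as n*(n-1)/2,
# so each added "agent" raises coordination overhead faster than
# it adds capacity.

def communication_channels(n: int) -> int:
    """Number of pairwise channels among n contributors."""
    return n * (n - 1) // 2

for team_size in (3, 5, 10, 20):
    print(team_size, communication_channels(team_size))
# 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190
```

Each of those channels is a place where an implicit decision can fail to become shared knowledge, which is the cognitive-debt mechanism the post describes.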
Practices that target cognitive debt—not just code quality
Rather than pitching new tooling, the post emphasizes proven team practices—pair programming, refactoring, test-driven development, and deliberate review rituals—as ways to reduce both technical and cognitive debt. It also suggests a pragmatic guardrail for AI-generated code: requiring at least one human to fully understand a change before it ships, plus lightweight documentation of not only what changed but why.
Storey closes by pointing toward open research questions—how cognitive debt might be measured and caught early, and how it scales in distributed and open-source environments—alongside plans to explore the topic further in an upcoming keynote and panel at the International Conference on Technical Debt (TechDebt), co-located with ICSE.
Source: How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt