Aaron Levie: AI agents outpace enterprise reality due to context gaps

Aaron Levie argues coding agents are winning because repos are clean, digitized context—while enterprise work is scattered across systems, meetings, and permissions. The real blocker is making org knowledge legible (and safe) for agents. Read the thread on X.

TL;DR

  • Capability overhang: Coding agents advancing faster than deployment across broader knowledge work
  • Cause: Enterprise messiness—scattered systems, fragmented data, tacit workflows outside systems of record
  • Context advantage in coding: Repos are compact, digitized context; other work depends on meetings, handoffs, decisions
  • Primary bottleneck: Context gap—organizational “collective brain” not legible to agents across tools
  • Key constraints: Uneven permissions, action admissibility, and runtime security (retry storms, probing, unexpected velocity)
  • Implementation reality: Professional services and change management are needed; implementation labor, not model capability, is the limiting factor

Aaron Levie’s latest thread on “capability overhang in AI” puts a clean frame around something that’s been easy to feel but harder to name: agent capability—especially in coding—has outpaced real deployment in the rest of knowledge work.

In Levie’s telling, the mismatch isn’t coming from a lack of model intelligence. It’s coming from the messy realities of enterprises: scattered systems, fragmented data, and the uncomfortable truth that “how work gets done” often lives outside any system of record. That’s why coding agents appear to be “winning” first—the repo is already a compact, digitized bundle of context, while most other workflows run through meetings, handoffs, and tacit decision-making.

The “context gap” as the bottleneck

Levie argues the main deployment blocker is getting the right context to agents. He points to a broader enterprise problem: the organization’s “collective brain” isn’t legible in a way agents can reliably use. That context is often:

  • Locked in legacy and on-prem systems
  • Spread across unstructured content repositories
  • Fragmented across tools that don’t easily interoperate
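As a loose illustration of what "making the collective brain legible" might involve, here is a minimal sketch that normalizes records from disparate systems into one schema an agent could query, while mirroring each source's permissions. All system names, fields, and functions here are hypothetical, not anything Levie or Box describes:

```python
from dataclasses import dataclass

# Hypothetical normalized record an agent could search across systems.
@dataclass(frozen=True)
class ContextItem:
    source: str        # which system the record came from
    doc_id: str        # stable identifier within that system
    text: str          # the content, flattened to plain text
    acl: frozenset     # who may see this item (mirrors source permissions)

def from_wiki(page: dict) -> ContextItem:
    """Adapter for a hypothetical wiki export."""
    return ContextItem("wiki", page["slug"], page["body"],
                       frozenset(page["viewers"]))

def from_crm(note: dict) -> ContextItem:
    """Adapter for a hypothetical CRM notes API."""
    return ContextItem("crm", str(note["id"]), note["note_text"],
                       frozenset(note["shared_with"]))

def visible_to(items: list, user: str) -> list:
    """Filter the unified index by the caller's permissions."""
    return [item for item in items if user in item.acl]

# Stitch two scattered sources into one agent-queryable index.
index = [
    from_wiki({"slug": "onboarding", "body": "New-hire checklist...",
               "viewers": ["alice", "bob"]}),
    from_crm({"id": 42, "note_text": "Renewal discussed in Q3 call.",
              "shared_with": ["alice"]}),
]
```

The per-source adapter pattern is the point: each legacy system needs its own translation into a shared, permission-aware schema before any agent can use it reliably.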

Replies in the thread sharpen the point further. One response notes that context isn't merely messy—it can be effectively invisible until an attempt is made to automate a workflow. Another adds that context portability is "the whole game," echoing the idea that the real work is stitching together systems and process knowledge, not writing better prompts.

Permissions, “admissibility,” and operational risk

A second theme is access control. Levie highlights how permissions in real organizations are intentionally uneven, which gets more complex when agents execute far more actions than humans. Several replies extend that into adjacent concerns: not just whether an agent can see something, but whether it should be allowed to act “right now,” under what constraints, and with what safeguards. There’s also a callout for runtime security—even with correct access, agents can misbehave operationally (retry storms, probing, unexpected velocity).

Why services and change management keep showing up

Levie also ties this to the growing professional-services layer around enterprise AI: mapping workflows, modernizing data and systems, and running change management. Multiple replies agree that capability isn’t the limiting factor; implementation labor is—the “blocking and tackling” of making agentic work fit how organizations operate.

The full thread is worth reading for the framing and the range of reactions from builders and operators: Aaron Levie on X.
