Analysis: AI Coding Agents Can Consume Over 1 kWh Per Workday

A fresh analysis by Simon P. Couch estimates electricity use per Claude Code session from usage logs and energy-per-token proxies. The write-up cites a median session near 41 Wh and a typical multi-instance workday around 1.3 kWh.

TL;DR

  • Uses per-token energy estimates informed by Epoch AI and Anthropic pricing, scaled against Claude Code session logs
  • Median Claude Code session ≈ 41 Wh; median workday with multiple agent instances ≈ 1,300 Wh (1.3 kWh)
  • Higher energy from very long system prompts and tool descriptions, frequent tool calls that generate more model input, and large 100k–200k token contexts plus cache reads/writes
  • Estimates derived by back-solving per-token Wh from blended values and billing ratios, with public proxies where lab telemetry is unavailable
  • Analysis notes grid mix (renewables vs fossil) changes environmental impact; full token breakdowns, pricing assumptions, and session graphs at https://www.simonpcouch.com/blog/2026-01-20-cc-impact/

Simon P. Couch published an analysis of energy consumption for AI coding agents, which derives per-session electricity estimates from Claude Code usage logs and public energy-per-token proxies.

What the analysis does and the headline numbers

The investigation combines per-token energy estimates (informed by work from Epoch AI and model pricing data from Anthropic) with real Claude Code session logs to scale from a “median query” to the much heavier workloads typical of coding agents. The most notable figures: a median Claude Code session is estimated at about 41 Wh (roughly 138× a typical single chat prompt), and a median workday running multiple coding-agent instances comes to roughly 1,300 Wh (1.3 kWh).
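As a quick sanity check, the headline figures imply both a per-prompt energy and a sessions-per-day count. The arithmetic below uses only the numbers quoted above; the derived values (per-prompt Wh, sessions per day) are implications of those figures, not numbers stated in this summary:

```python
# Back-of-the-envelope check of the headline figures from the analysis.
median_session_wh = 41.0    # median Claude Code session (from the write-up)
session_vs_prompt = 138     # reported multiple vs. a single chat prompt
workday_wh = 1300.0         # median multi-instance workday (1.3 kWh)

# Implied energy of a single chat prompt (~0.3 Wh)
prompt_wh = median_session_wh / session_vs_prompt

# Implied number of median-sized sessions in a median workday (~32)
sessions_per_day = workday_wh / median_session_wh

print(f"per-prompt: {prompt_wh:.2f} Wh, sessions/day: {sessions_per_day:.1f}")
```

In other words, one agent session costs about as much as 138 ordinary chat prompts, and a median workday packs in roughly 30 such sessions' worth of compute.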

Those numbers give a compact sense of scale; the full analysis shows how token types (input, output, cache reads/writes) and model context lengths drive the totals.

Why coding agents cost more energy

Several technical factors multiply token usage in coding-agent workflows:

  • Very long system prompts and tool descriptions — Claude Code sessions often start with tens of thousands of tokens of context that are processed repeatedly.
  • Frequent tool calls — a single user instruction can trigger multiple tool executions, each returning substantial text that becomes additional input for the model.
  • Large context windows — models with 100k–200k token contexts amplify the compute needed per token, and caching behavior adds more reads and writes to account for.

Taken together, these characteristics turn what looks like a handful of messages into hundreds of heavy queries.
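The structure of that accounting can be sketched as a sum over token types. The per-token energy values and token counts below are hypothetical placeholders chosen only for illustration; only the shape of the calculation mirrors the methodology described:

```python
# Illustrative sketch: how token mix drives per-session energy.
# The per-token Wh values are HYPOTHETICAL, not figures from the
# analysis; only the structure (summing over token types) is real.

WH_PER_TOKEN = {
    "input": 1e-4,       # hypothetical Wh per input token
    "output": 5e-4,      # hypothetical Wh per output token
    "cache_read": 1e-5,  # hypothetical Wh per cached token read
    "cache_write": 2e-4, # hypothetical Wh per cached token written
}

def session_energy_wh(token_counts: dict) -> float:
    """Sum energy across token types for one agent session."""
    return sum(WH_PER_TOKEN[kind] * n for kind, n in token_counts.items())

# A session with a long system prompt, many tool-call results, and
# heavy cache traffic (counts are illustrative):
example = {"input": 150_000, "output": 20_000,
           "cache_read": 1_000_000, "cache_write": 50_000}
print(f"{session_energy_wh(example):.1f} Wh")
```

The point of the sketch is that cache reads, though cheap per token, occur in such volume in agent workflows that they can rival the input and output terms.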

Caveats and context

The estimates rely on several assumptions: per-token energy is back-solved from blended Wh values and billing ratios, and public proxies are used where lab-level telemetry is unavailable. Energy impact also crucially depends on the grid mix powering the compute — renewables versus fossil sources change the environmental implications. The analysis includes these caveats and suggests broader strategies (such as supporting clean-energy efforts) for addressing systemic impact.
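The back-solving step can be illustrated with simple algebra: if a blended per-query energy figure is known, and per-token energy is assumed to scale like per-token price, the billing ratio lets you split the blend into input and output components. All numbers here are hypothetical, chosen only to show the algebra:

```python
# Sketch of the back-solving step: given a blended per-query energy
# estimate and an input:output billing ratio, recover per-token
# energies. ASSUMPTION (as in the analysis's proxy approach): energy
# per token is proportional to price per token. All values hypothetical.

blended_wh = 0.30        # hypothetical blended energy for one "median query"
n_in, n_out = 500, 250   # hypothetical input/output token counts for it
ratio = 5.0              # output billed at 5x input (illustrative)

# blended_wh = e_in * n_in + (ratio * e_in) * n_out  =>  solve for e_in
e_in = blended_wh / (n_in + ratio * n_out)
e_out = ratio * e_in

print(f"e_in = {e_in:.2e} Wh/token, e_out = {e_out:.2e} Wh/token")
```

Once `e_in` and `e_out` are in hand, they can be scaled against the much larger token counts seen in real session logs, which is how the analysis moves from a median query to a median coding-agent session.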

For readers interested in the detailed token breakdowns, pricing assumptions, and session-level graphs, the full write-up contains the complete data and methodology: https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
