Tim Kellogg breaks down agent memory: files, blocks, skills

A new post from Tim Kellogg maps three practical patterns for giving AI agents mutable memory—files, memory blocks, and skills—and explains when “memory” is the wrong tool entirely. It also shares tactics for keeping agent workflows observable and manageable.

TL;DR

  • **Three mutable memory patterns:** files, memory blocks, skills; memory unnecessary if agent needn’t “learn”
  • **Files as knowledge store:** agent explores/reads/writes via `ls`, `grep`, `cat`; supports long text and hierarchical paths
  • **Non-literal files allowed:** database records or S3 blobs work if they mimic paths and text storage
  • **Memory blocks:** learnable system prompt for behavior, preferences, identity; **WriteBlock** plus optional read/list tools; keep blocks small
  • **Skills:** “indexed files” mixing structured prompts with docs/scripts/data; progressive disclosure; editable skills as experience cache
  • **Operational guidance:** observability, Git versioning, issue trackers as work queues, append-only logs; avoid SQL-backed writable knowledge graphs

Tim Kellogg’s new “Agent Memory Patterns” post lays out three common ways to give AI agents mutable memory: files, memory blocks, and skills. The article argues that if an agent does not need to “learn,” memory may be the wrong tool entirely, but for coding agents and similar systems, the patterns can still be useful.

The first section treats files as a place for data and knowledge, describing them as something agents should be able to explore, read, and write with familiar tools like `ls`, `grep`, and `cat`. The post also notes that files do not need to be literal files; database records or S3 blobs can work as long as they provide hierarchical paths and room for long text.
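The file pattern can be sketched as a tiny Python toolset. The `FileMemory` class and its method names below are illustrative, not from the post, but each tool maps onto the shell command it imitates so an agent can explore, read, and write its knowledge store:

```python
from pathlib import Path

class FileMemory:
    """Illustrative file-based memory: hierarchical paths plus long text."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def ls(self, path: str = ".") -> list[str]:
        """List entries under a path, like `ls`."""
        return sorted(p.name for p in (self.root / path).iterdir())

    def cat(self, path: str) -> str:
        """Read a whole file, like `cat` -- long text is fine."""
        return (self.root / path).read_text()

    def grep(self, pattern: str) -> list[str]:
        """Find files containing a substring, like a simple `grep -r`."""
        hits = []
        for p in self.root.rglob("*"):
            if p.is_file() and pattern in p.read_text():
                hits.append(str(p.relative_to(self.root)))
        return sorted(hits)

    def write(self, path: str, text: str) -> None:
        """Create or overwrite a file, creating parent dirs as needed."""
        target = self.root / path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(text)
```

Backing the same four tools with database rows or S3 objects keeps the interface identical, which is the post's point about non-literal files.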

Memory blocks get a separate treatment as a kind of learnable system prompt, with the article recommending that behavior, preferences, identity, and similar material live there. It also walks through the practical details, including `WriteBlock`, optional read and list tools, and the idea that blocks should stay small enough to avoid confusing the agent.
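A minimal sketch of memory blocks as a learnable system prompt might look like the following. `WriteBlock` is the tool the post names; the size cap, block labels, and `render_system_prompt` helper are assumptions for illustration:

```python
# Assumed cap to keep blocks small enough not to confuse the agent.
MAX_BLOCK_CHARS = 2000

_blocks: dict[str, str] = {}

def write_block(label: str, text: str) -> None:
    """WriteBlock: replace the contents of a named block."""
    if len(text) > MAX_BLOCK_CHARS:
        raise ValueError(f"block '{label}' too large ({len(text)} chars)")
    _blocks[label] = text

def read_block(label: str) -> str:
    """Optional read tool."""
    return _blocks[label]

def list_blocks() -> list[str]:
    """Optional list tool."""
    return sorted(_blocks)

def render_system_prompt(base: str) -> str:
    """Splice every block into the system prompt on each turn."""
    sections = [f"<{label}>\n{_blocks[label]}\n</{label}>"
                for label in sorted(_blocks)]
    return "\n\n".join([base, *sections])
```

Because the blocks are re-rendered into the prompt on every turn, editing a block is effectively how the agent "learns" new behavior, preferences, or identity.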

Skills, meanwhile, are presented as “indexed files” that combine structured prompts with supporting documents, scripts, and data. The post highlights progressive disclosure, frames editable skills as an “experience cache,” and treats a harness-side `Skill(name)` tool as optional rather than essential. It also touches on observability, versioning memory through Git, and why writable knowledge graphs and other SQL-backed models tend to be a poor fit.
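One way to sketch skills as indexed files with progressive disclosure: only each skill's name and description stay in context, and the full body loads on demand. The `SkillLibrary` class, the one-file-per-skill layout, and the `---` separator are all illustrative assumptions:

```python
from pathlib import Path

class SkillLibrary:
    """Illustrative skill store: cheap index in context, full body on demand."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def add(self, name: str, description: str, body: str) -> None:
        # Editable skills double as an "experience cache": the agent can
        # rewrite the body after discovering a better procedure.
        (self.root / f"{name}.md").write_text(f"{description}\n---\n{body}")

    def index(self) -> dict[str, str]:
        """Name -> description summary the harness keeps in every prompt."""
        return {p.stem: p.read_text().split("\n---\n", 1)[0]
                for p in sorted(self.root.glob("*.md"))}

    def skill(self, name: str) -> str:
        """The optional Skill(name) tool: load the full body when needed."""
        return (self.root / f"{name}.md").read_text().split("\n---\n", 1)[1]
```

Since skills live as files, the same `ls`/`grep`/`cat` tools from the file pattern can serve as the fallback when no dedicated `Skill(name)` tool exists.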

The closing sections point to a few unusual but apparently useful patterns, including issue trackers for searchable work queues and append-only logs for grounding an agent’s account of what actually happened. The full post is worth a look for anyone designing agent workflows, especially since it also includes practical advice on monitoring memory use and keeping the system from becoming unwieldy.
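The append-only log idea fits in a few lines. The JSON-lines format and field names below are assumptions; the invariant that matters is that entries are only ever appended, never edited, so the record of what actually happened stays trustworthy:

```python
import json
import time

def append_event(log_path: str, actor: str, event: str) -> None:
    """Append one timestamped entry; existing entries are never rewritten."""
    with open(log_path, "a") as f:
        f.write(json.dumps({"ts": time.time(),
                            "actor": actor,
                            "event": event}) + "\n")

def read_log(log_path: str) -> list[dict]:
    """Replay the log in order to ground the agent's account of events."""
    with open(log_path) as f:
        return [json.loads(line) for line in f]
```

An agent asked "what did you do?" can then answer from the log rather than from its own possibly confabulated recollection.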

Source: Tim Kellogg
