Tim Kellogg’s new “Agent Memory Patterns” post lays out three common ways to give AI agents mutable memory: files, memory blocks, and skills. The article argues that if an agent does not need to “learn,” memory may be the wrong tool entirely, but for coding agents and similar systems, the patterns can still be useful.
The first section treats files as a place for data and knowledge, describing them as something agents should be able to explore, read, and write with familiar tools like `ls`, `grep`, and `cat`. The post also notes that files do not need to be literal files; database records or S3 blobs can work as long as they provide hierarchical paths and room for long text.
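The file pattern described above can be sketched as a small set of tools over a memory directory. This is a minimal illustration, not code from the post: the `MEMORY_ROOT` path and the tool names are assumptions, chosen to mirror the `ls`/`grep`/`cat` workflow the article mentions.

```python
from pathlib import Path

# Hypothetical memory root; in practice this could be a real directory,
# database records, or S3 blobs behind the same hierarchical interface.
MEMORY_ROOT = Path("memory")

def ls(rel_path: str = ".") -> list[str]:
    """List entries under a memory directory (like `ls`)."""
    return sorted(p.name for p in (MEMORY_ROOT / rel_path).iterdir())

def cat(rel_path: str) -> str:
    """Read one memory file in full (like `cat`)."""
    return (MEMORY_ROOT / rel_path).read_text()

def grep(pattern: str) -> list[str]:
    """Search all memory files for a substring (like `grep -r`)."""
    hits = []
    for p in sorted(MEMORY_ROOT.rglob("*")):
        if p.is_file():
            for i, line in enumerate(p.read_text().splitlines(), 1):
                if pattern in line:
                    hits.append(f"{p.relative_to(MEMORY_ROOT)}:{i}: {line}")
    return hits

def write(rel_path: str, text: str) -> None:
    """Create or overwrite a memory file, making parent dirs as needed."""
    dest = MEMORY_ROOT / rel_path
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(text)
```

Exposing these four functions as agent tools gives the model the explore/read/write loop the post describes, without committing to any particular storage backend.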
Memory blocks get a separate treatment as a kind of learnable system prompt, with the article recommending that behavior, preferences, identity, and similar material live there. It also walks through the practical details, including `WriteBlock`, optional read and list tools, and the idea that blocks should stay small enough to avoid confusing the agent.
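A minimal sketch of the block mechanics might look like the following. The `WriteBlock` tool name comes from the post; the size cap, the optional list tool, and the way blocks are rendered into the system prompt are assumptions made for illustration.

```python
# Keep blocks small so they don't swamp or confuse the agent; the exact
# limit here is an assumption, not a number from the post.
MAX_BLOCK_CHARS = 2000

blocks: dict[str, str] = {}

def write_block(name: str, text: str) -> str:
    """The WriteBlock tool: create or replace a named block, enforcing the cap."""
    if len(text) > MAX_BLOCK_CHARS:
        return f"error: block '{name}' exceeds {MAX_BLOCK_CHARS} chars"
    blocks[name] = text
    return f"ok: wrote block '{name}'"

def list_blocks() -> list[str]:
    """Optional list tool: names only, since contents already sit in the prompt."""
    return sorted(blocks)

def render_system_prompt(base: str) -> str:
    """Inline every block below the static instructions, making the
    system prompt effectively learnable."""
    sections = [f"## {name}\n{text}" for name, text in sorted(blocks.items())]
    return "\n\n".join([base, *sections])
```

The key property is that writes change what the agent sees on every subsequent turn, which is what makes blocks a good home for behavior, preferences, and identity rather than bulk knowledge.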
Skills, meanwhile, are presented as “indexed files” that combine structured prompts with supporting documents, scripts, and data. The post highlights progressive disclosure, editable skills as an “experience cache,” and a harness-side `Skill(name)` tool that it treats as optional rather than essential. It also touches on observability, versioning through Git, and why writable knowledge graphs and other SQL-backed models may be a poor fit.
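Progressive disclosure for skills can be sketched as a two-level lookup: a cheap one-line index lives in the prompt, and the full body loads only on request. The `Skill(name)` tool name matches the post; the data structure and the example skill are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str  # always visible in the index
    body: str         # disclosed only when the agent asks

# Illustrative skill; real skills would bundle prompts, docs, and scripts.
SKILLS = {
    "release": Skill(
        name="release",
        description="How to cut and tag a release",
        body="1. Bump the version\n2. Tag in Git\n3. Push the tag",
    ),
}

def skill_index() -> str:
    """The cheap part: one line per skill, kept in the system prompt."""
    return "\n".join(f"- {s.name}: {s.description}" for s in SKILLS.values())

def load_skill(name: str) -> str:
    """The Skill(name) tool: return the full body on demand."""
    skill = SKILLS.get(name)
    return skill.body if skill else f"error: no skill named '{name}'"
```

Making `SKILLS` writable is what turns this into the “experience cache” the post describes: the agent can revise a skill body after learning a better procedure, and versioning the files in Git keeps those revisions auditable.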
The closing sections point to a few unusual but apparently useful patterns, including issue trackers for searchable work queues and append-only logs for grounding an agent’s account of what actually happened. The full post is worth a look for anyone designing agent workflows, especially since it also includes practical advice on monitoring memory use and keeping the system from becoming unwieldy.
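The append-only log pattern is simple enough to sketch in a few lines: the agent records events as they happen and later replays the log instead of trusting its own recollection. The file name and record shape below are assumptions, not details from the post.

```python
import json
import time

# Hypothetical log file; JSON Lines keeps each event independently parseable.
LOG_PATH = "agent_events.jsonl"

def log_event(kind: str, detail: str) -> None:
    """Append one event; earlier entries are never rewritten."""
    record = {"ts": time.time(), "kind": kind, "detail": detail}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def replay() -> list[dict]:
    """Read the log back in order to ground 'what actually happened'."""
    with open(LOG_PATH) as f:
        return [json.loads(line) for line in f]
```

Because the log is append-only, it doubles as an audit trail: any summary the agent produces can be checked against the recorded sequence of events.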
Source: Tim Kellogg