Agent frameworks may be sabotaging prefix caching and inference speed
In an X thread, Chayenne Zhao argues that many agent frameworks structure prompts in ways that waste tokens and undercut key inference optimizations like prefix caching, hurting cost and throughput in long sessions. The takeaway: better agent–inference co-design may unlock big efficiency gains.
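To make the caching argument concrete, here is a minimal sketch of the failure mode. Prefix caching in inference servers (e.g. vLLM's automatic prefix caching) reuses the KV cache for the longest token prefix seen before, so a prompt only benefits up to the first token that differs. The simulation below is an illustrative toy, not any framework's real code: `cached_prefix_len`, the `SYSTEM` token list, and the `t=...` timestamp token are all hypothetical, standing in for a framework that injects per-request dynamic content either before or after a stable system prompt.

```python
import hashlib

def cached_prefix_len(cache: set, tokens: list) -> int:
    """Count how many leading tokens hit a simulated prefix cache.

    Each prefix is identified by a rolling hash of its tokens; a hit
    means that exact prefix was seen on an earlier request. Once one
    token differs, every longer prefix differs too, mirroring how KV
    reuse stops at the first mismatched token."""
    hits = 0
    h = hashlib.sha256()
    for tok in tokens:
        h.update(tok.encode())
        digest = h.hexdigest()
        if digest in cache:
            hits += 1
        else:
            cache.add(digest)
    return hits

SYSTEM = ["You", "are", "a", "helpful", "agent", "."]

# Anti-pattern: dynamic content (a per-request timestamp) injected
# *before* the stable system prompt. Token 0 changes every turn, so
# no prefix is ever reused and every request recomputes from scratch.
bad_cache: set = set()
for turn in range(3):
    prompt = [f"t={turn}"] + SYSTEM + ["user", f"msg{turn}"]
    print("bad  hits:", cached_prefix_len(bad_cache, prompt))

# Cache-friendly ordering: the stable system prompt comes first and
# dynamic content is appended, so turns after the first reuse the
# cached system-prompt prefix.
good_cache: set = set()
for turn in range(3):
    prompt = SYSTEM + [f"t={turn}", "user", f"msg{turn}"]
    print("good hits:", cached_prefix_len(good_cache, prompt))
```

In this toy run the timestamp-first ordering gets zero cache hits on every turn, while the stable-prefix ordering reuses the full system prompt from the second turn on, which is the kind of co-design win the thread points at.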