

OpenAI rolls out GPT-5.4 Thinking and Pro to ChatGPT
OpenAI has just rolled out GPT-5.4 Thinking and GPT-5.4 Pro in ChatGPT, while also bringing GPT-5.4 to the API and Codex. The company highlights better reasoning and coding, improved efficiency, and a new mid-response “steering” feature on web and Android.
Qwen launches Qwen3.5 Small models: 0.8B to 9B
With the launch of Qwen3.5 Small, Qwen is betting on more capability with less compute across four sizes, including Base models for fine-tuning. The lineup hits Ollama on day one with tool calling, “thinking,” and multimodal support for local-first builds.
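Since the lineup lands on Ollama on day one, local tool calling is reachable through Ollama's standard /api/chat endpoint. Below is a minimal TypeScript sketch; the model tag qwen3.5:4b and the get_weather tool are illustrative assumptions, so check ollama list for the tags actually published.

```ts
// Minimal sketch: tool calling against a local model via Ollama's /api/chat.
// The tag "qwen3.5:4b" is an assumption; substitute whatever tag ships.

type ToolCall = { function: { name: string; arguments: Record<string, unknown> } };

const weatherTool = {
  type: "function",
  function: {
    name: "get_weather", // hypothetical tool for illustration
    description: "Look up the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

async function ask(prompt: string) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen3.5:4b", // assumed tag
      messages: [{ role: "user", content: prompt }],
      tools: [weatherTool],
      stream: false,
    }),
  });
  const data = await res.json();
  // If the model decided to call a tool, the calls arrive on the message
  // object instead of (or alongside) plain text content.
  const calls: ToolCall[] = data.message.tool_calls ?? [];
  return calls.length ? calls : data.message.content;
}

ask("What's the weather in Berlin?").then(console.log);
```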
Anthropic opens Claude Max 20x to open-source maintainers for free
Anthropic has just rolled out its Claude for Open Source Program, bringing 6 months of free Claude Max 20x to qualifying maintainers and contributors. Applications are reviewed on a rolling basis, with up to 10,000 spots available.


Benchmark: Claude Code defaults to DIY, not SaaS tools
Amplifying benchmarked 2,430 Claude Code runs to see which tools it picks when left unprompted. In 12 of 20 categories, it most often builds a Custom/DIY solution rather than reaching for a SaaS product. When it does choose vendors, defaults dominate: GitHub Actions, Stripe, shadcn/ui, Vercel.
OpenAI Codex CLI v0.105 boosts syntax highlighting and multi-agent control
OpenAI has just rolled out Codex CLI v0.105, bringing syntax highlighting, faster prompt input, and improved multi-agent workflows. Voice dictation via spacebar is the buzziest addition, though early testers report gaps across setups and platforms.


Claude Code gains auto-memory to persist context across sessions
Claude Code has just rolled out auto-memory, letting it retain project context, debugging patterns, and preferred approaches between sessions. The update also introduces MEMORY.md alongside CLAUDE.md, with a /memory toggle to disable it.


Vercel warns agentic coding tools blur security boundaries
A recent write-up by Vercel takes a closer look at how many AI coding agents run generated code with the same access as real credentials. It outlines practical boundary patterns—plus what still fails—to reduce prompt-injection and secret-leak risk.
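The write-up's patterns are worth reading in full, but the core move is simple to sketch: keep real credentials on the trusted side of a boundary and hand agent-generated code only a narrowed capability. The TypeScript below is my own illustration of that idea, not Vercel's code; the allowlisted host and API_TOKEN name are placeholders.

```ts
// Sketch of a capability boundary: generated code never holds the secret.
// It receives a scoped fetch function; the token is injected on the trusted
// side, and only after the target host passes an allowlist check.

const ALLOWED_HOSTS = new Set(["api.example.com"]); // placeholder host

export function makeScopedFetch(secret: string) {
  return async (url: string, init: RequestInit = {}) => {
    const host = new URL(url).host;
    if (!ALLOWED_HOSTS.has(host)) {
      throw new Error(`blocked: ${host} is not on the allowlist`);
    }
    const headers = new Headers(init.headers);
    headers.set("Authorization", `Bearer ${secret}`); // attached here, not in agent code
    return fetch(url, { ...init, headers });
  };
}

// The trusted harness wires in the secret; the agent's generated code is only
// ever given scopedFetch, so a prompt injection can't exfiltrate the raw token.
const scopedFetch = makeScopedFetch(process.env.API_TOKEN ?? "");
```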
Featured Videos
Deep-dive videos for AI developers
Ralph: Autonomous Coding Loops for Claude
Autonomous coding loops can move fast—but without visibility and control, they can become hard to trust (and easy to run too long). This video walks through how Ralph Loop and the Ralph TUI add structure to long-running agent workflows, so you can track progress and intervene when needed.
Key takeaways
- Covers what Ralph Loop is and how continuous iteration differs from a single-pass run in Claude Code.
- Breaks down why a task tracker and TUI matter as projects grow, including live task status and output streaming.
- Walks through setup: choosing a tracker (e.g., a local PRD JSON file), selecting an agent (Claude Code or OpenCode), and setting iteration limits; a minimal sketch follows this list.
- Demonstrates generating a PRD, turning it into a task list, and running sub-agents with pause/resume and session persistence.
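To make the tracker idea concrete, here is a minimal TypeScript sketch of the loop shape the video describes. The prd.json file name, task schema, and runAgent stub are assumptions for illustration, not the actual Ralph implementation.

```ts
// Toy Ralph-style loop: one agent pass per pending task, state persisted to
// a local PRD JSON file, and a hard iteration cap so runs can't go forever.

import { readFileSync, writeFileSync } from "node:fs";

type Task = { id: string; title: string; status: "pending" | "done" };

const TRACKER = "prd.json"; // assumed tracker file
const MAX_ITERATIONS = 10;  // iteration limit, per the video's setup step

// Stand-in for invoking Claude Code or OpenCode on a single task.
async function runAgent(task: Task): Promise<boolean> {
  console.log(`agent pass: ${task.title}`);
  return true; // pretend the agent finished the task
}

async function ralphLoop() {
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const tasks: Task[] = JSON.parse(readFileSync(TRACKER, "utf8"));
    const next = tasks.find((t) => t.status === "pending");
    if (!next) return console.log("all tasks done");
    if (await runAgent(next)) next.status = "done";
    // Writing state after every pass is what makes pause/resume cheap:
    // killing the loop loses at most one in-flight task.
    writeFileSync(TRACKER, JSON.stringify(tasks, null, 2));
  }
  console.log("iteration limit reached; resume to continue");
}

ralphLoop();
```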
Open-source Kimi K2.5 just dropped
Open-source weights are back—but for professionals, the real question is whether the latest drop meaningfully improves day-to-day coding, vision work, and agent workflows. This video walks through what Kimi K2.5 claims to deliver, where it benchmarks well, and what it looks like in hands-on demos.
Key takeaways
- Breaks down Kimi K2.5’s focus areas: coding, vision tasks, and “self-directed” agent swarms
- Covers benchmark results across agentic, coding, and vision/video evaluations, plus cost vs. performance claims
- Shows practical examples like generating front-end websites and recreating a site from screenshots (no code provided)
- Demonstrates tool-using behavior, including a web-based price comparison and discussion of local runtime/VRAM needs
From Vibe Coding To Vibe Engineering
Frontend teams have always ridden hype cycles—but LLMs change the day-to-day work: you can “accept” code fast, and just as quickly land in the wrong abstraction. This talk reframes “vibe coding” into “vibe engineering,” focusing on how professionals can collaborate with AI without losing control of quality, context, and maintainability.
Key takeaways
- Breaks down what “vibe coding” means in practice and why the definition keeps shifting
- Contrasts hands-off prompting with “vibe engineering” using agents—plus why you should stay skeptical of generated code
- Shares tactics the speaker uses (e.g., voice-to-code, starting from solid primitives, and supplying rules/docs/memory)
- Covers when vibing is appropriate (one-off scripts, simple features) and when it’s risky for teams and juniors
Researchers solved the Context Window Limit
Context windows cap what you can reliably ask an LLM to reason over—and as inputs grow, “context rot” can make quality drop fast. This video breaks down an MIT paper proposing recursive language models: a way to process arbitrarily long prompts at inference time without changing the core model.
Key takeaways
- Covers why stuffing more tokens into a prompt can degrade retrieval and reasoning, even before hitting the physical limit.
- Walks through the RLM setup: storing the long prompt in a Python/REPL environment and giving the model tools to search it (a toy sketch follows this list).
- Explains the “recursive” step—re-querying relevant sections to go deeper without summarization or compression.
- Reviews how the approach is evaluated on long-context tasks (e.g., BrowseComp+, Oolong, code repository understanding) and what tradeoffs show up in cost variance.
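As a toy TypeScript rendering of the idea (the paper's setup uses a Python REPL, and the names here are mine), the key shape is that the root call never ingests the full text, only tool results and the answers of sub-queries:

```ts
// Toy RLM sketch: the long prompt is held as data the model can search via
// a tool, and a second ("recursive") call zooms into just the relevant hits.

type LLM = (prompt: string) => Promise<string>;

async function rlmAnswer(llm: LLM, longText: string, question: string) {
  // Tool: cheap search over the stored prompt; it is never loaded wholesale.
  const grep = (needle: string) =>
    longText.split("\n").filter((line) => line.includes(needle));

  // Root call sees only the question plus a tiny preview of the text.
  const keyword = await llm(
    `Question: ${question}\nPreview: ${longText.slice(0, 500)}\n` +
      `Reply with one search keyword that would locate the answer.`
  );

  const hits = grep(keyword.trim()).slice(0, 50).join("\n");

  // Recursive step: a fresh sub-query over just the matching sections.
  // A real RLM can recurse again if this window is still too coarse.
  return llm(`Question: ${question}\nRelevant excerpts:\n${hits}`);
}
```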
Building Cursor Composer
Building agentic coding systems often fails on a familiar constraint: you can make them fast, or you can make them smart—but professionals need both to stay in flow. This talk walks through how Cursor built Composer, focusing on the infrastructure, training setup, and evaluations behind a low-latency coding agent model.
Key takeaways
- Breaks down the “fast vs. smart” trade-off and why token-generation efficiency matters in real workflows
- Explains the rollout-based RL setup, including how tool calls (read/edit/search/lint/shell) are used and scored (see the toy scorer after this list)
- Covers scaling challenges like bursty compute, consistency between training and production, and load balancing for uneven rollouts
- Shows why matching the production environment—and integrating semantic search—shapes stronger agent behavior (e.g., better search/read before editing)
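As a deliberately toy illustration of what scoring tool calls can mean (my example, not Cursor's reward function), a rollout's reward can mix an outcome term with shaping terms for the behaviors the talk highlights, like searching or reading before editing:

```ts
// Toy rollout scorer: reward the outcome (lints pass), shape toward
// gathering context before editing, and mildly penalize long rollouts.

type Tool = "read" | "edit" | "search" | "lint" | "shell";

function scoreRollout(calls: Tool[], lintsPass: boolean): number {
  let score = lintsPass ? 1 : -1; // outcome term: did the change hold up?
  const firstEdit = calls.indexOf("edit");
  const beforeEdit = calls.slice(0, firstEdit === -1 ? calls.length : firstEdit);
  if (beforeEdit.some((t) => t === "read" || t === "search")) {
    score += 0.5; // shaping term: looked at the code before touching it
  }
  return score - 0.01 * calls.length; // small cost on rollout length
}

console.log(scoreRollout(["search", "read", "edit", "lint"], true)); // 1.46
```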
Spec-Driven Development: Sharpening your AI toolbox
AI coding tools are powerful—but without a solid spec process, delivery can become hard to reproduce and hard to trust. This talk walks through spec-driven development in Kiro and shows how structured artifacts can bring more control and reliability into an AI-assisted workflow.
Key takeaways
- Covers how Kiro turns a prompt into requirements (with acceptance criteria), design, and a task list you can execute.
- Breaks down the EARS format (Easy Approach to Requirements Syntax) and why structured natural language matters for later automation.
- Explains how requirements can be translated into correctness properties for property-based testing, tying specs to code behavior (see the sketch after this list).
- Shows how to use MCP servers across requirements, design, and implementation—and how to customize artifacts (e.g., wireframes, explicit test cases).
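To illustrate the requirements-to-properties step, here is a small sketch using fast-check in TypeScript. The EARS-style requirement, the finalPrice function, and the discount rule are hypothetical examples of mine, not Kiro output.

```ts
// EARS requirement (hypothetical): "When the cart total exceeds 100, the
// system shall apply a 10% discount." The trigger becomes the input
// generator's range; the "shall" clause becomes the property's assertion.

import fc from "fast-check";

// Hypothetical function under test.
function finalPrice(cartTotal: number): number {
  return cartTotal > 100 ? cartTotal * 0.9 : cartTotal;
}

fc.assert(
  fc.property(fc.double({ min: 100.01, max: 1e6, noNaN: true }), (total) => {
    // Property derived from the requirement, not from one hand-picked case.
    return Math.abs(finalPrice(total) - total * 0.9) < 1e-9;
  })
);
```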
Continue the conversation on Slack
Did this article spark your interest? Join our community of experts and enthusiasts to dive deeper, ask questions, and share your ideas.