A practical guide to Claude Code 2.0 and getting more from coding agents
This guide synthesizes a detailed walkthrough of Claude Code 2.0—Anthropic’s CLI-based coding agent—covering user-facing refinements, workflow patterns, and the engineering ideas that make agentic workflows practical. It begins with motivation and model-era context, moves through tangible quality-of-life improvements in CC 2.0, and then dives into commands, sub-agents, the Task tool schema, and context engineering techniques that matter for long-running agent loops.
Why this matters now
Claude Code has become a reference point for CLI-first agent workflows. The product-level harnessing around models such as Opus 4.5 illustrates how layered prompts, tools, and UI work together to create a more predictable agentic experience. Learning these primitives transfers to other agent systems and IDE/CLI integrations, since many modern tooling patterns (commands, skills, sub-agents, hooks) are shared across several platforms.
Quick pointers for the technically-lite
- Context window: refers to the input tokens available to the model. Opus 4.5 has a 200K context window (roughly 150k words); other models cited include GPT-5.2 (400K) and Gemini 3 Pro (1M).
- Tool calling: tools are engineer-defined functions that the LLM can invoke (web search, read file, run bash). Both a tool call and its result become part of the conversation context (see the loop sketch after this list).
- Agent vs. model: an agent is an LLM that can proactively run tools and manage multi-step tasks. The harness and system prompts shape agent behavior.
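To ground the tool-calling and statelessness points, here is a minimal Python sketch of an agent loop. Everything in it (fake_model, list_dir, the message format) is an illustrative stand-in, not Claude Code internals: the model proposes a tool call, the harness runs it, and both the call and its result get appended to the messages that are re-sent on the next turn.

```python
import os

def list_dir(path: str) -> str:
    """A trivial example tool: list directory entries as text."""
    return "\n".join(os.listdir(path))

TOOLS = {"list_dir": list_dir}

def fake_model(messages: list[dict]) -> dict:
    """Stand-in for the LLM call; a real harness would call the model API here."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_dir", "args": {"path": "."}}
    return {"answer": "Summarized the directory contents."}

def agent_loop(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        if "tool" not in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        # Both the call and its (possibly huge) result are re-sent on every turn,
        # because the model itself keeps no state between calls.
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "tool", "content": result})

print(agent_loop("What files are in this project?"))
```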
Evolution and notable QoL improvements in CC 2.0
Several incremental changes in Claude Code 2.0 materially improve day-to-day usage:
- Syntax highlighting (2.0.71) in diffs and code reviews makes CLI-based review less choppy.
- Checkpointing via Esc+Esc or /rewind allows rewinding both conversation and code state.
- New commands like /context, /usage, and /stats surface runtime and token usage; /context is useful for monitoring context saturation.
- Prompt suggestions and prompt-history search (Ctrl+R) speed up repeated tasks.
- LSP support and integrations (Slack, Claude Web, Chrome extension) broaden where the agent can be launched.
These changes emphasize faster feedback loops and lower friction for iterative development with agents.
Feature deep dive
Commands
- Slash commands (/) are predefined prompts appended to the current conversation. Commands can be project-local (.claude/commands/) or global (~/.claude/commands/).
- Custom commands are appropriate when instructions are repetitive and can be made deterministic. Commands are also how the main agent can record handoffs, compactions, or structured summaries before starting a new conversation; a minimal command file is sketched after this list.
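As a concrete but hypothetical example, the snippet below writes a project-local /handoff command. The frontmatter shown is a sketch of commonly used fields rather than a guaranteed schema, so verify the exact format against the Claude Code documentation.

```python
from pathlib import Path

# Hypothetical project-local command: the file name becomes the slash command
# name (/handoff). The frontmatter is a sketch, not an exhaustive schema.
HANDOFF_COMMAND = """\
---
description: Summarize state before starting a fresh conversation
---
Summarize the current task for a handoff: what was attempted, which files
changed, open questions, and the next three concrete steps. Write it to
HANDOFF.md so a new conversation can pick up from there.
"""

path = Path(".claude/commands/handoff.md")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(HANDOFF_COMMAND, encoding="utf-8")
print(f"Wrote {path}; invoke it as /handoff")
```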
Sub-agents
- Sub-agents are separate Claude instances spawned by the main agent via the Task tool. They can be created automatically by the system or defined at .claude/agents/your-agent-name.md; a minimal definition is sketched after this list.
- The Explore sub-agent is a specialized, read-only file-search agent designed to use glob, grep, read, and limited bash for analysis. Explore is explicitly prohibited from modifying files and is optimized for fast codebase searches.
- Other sub-agent types include general-purpose (full tool access), Plan (architect-level planning), and claude-code-guide (documentation/FAQ lookups). General-purpose and Plan inherit the full conversation context; Explore typically starts with a fresh slate.
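A minimal, hypothetical sub-agent definition might look like the file written below. The frontmatter fields (name, description, tools, model) mirror what Claude Code documents for custom agents, but treat the exact field set and tool names as assumptions to confirm locally.

```python
from pathlib import Path

# Hypothetical read-only search agent; frontmatter fields and tool names are
# assumptions to verify against the Claude Code sub-agent documentation.
SEARCH_AGENT = """\
---
name: repo-searcher
description: Read-only agent that locates code and reports back findings
tools: Glob, Grep, Read
model: haiku
---
You search the codebase and report file paths plus short summaries of what
you found. Never edit files; return findings as a concise bulleted list.
"""

path = Path(".claude/agents/repo-searcher.md")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(SEARCH_AGENT, encoding="utf-8")
```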
Task tool schema
- The Task tool schema requires description, prompt, and subagent_type, with options to set model (sonnet, opus, haiku), resume, and run_in_background. The main agent uses this schema to decide when and how to spawn sub-agents; example payloads follow this list.
- Background agents (run_in_background) are useful for long-running tasks or for monitoring outputs asynchronously.
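The payloads below illustrate the schema just described. The field names come from the post; the values (including the exact subagent_type strings) and the small validation helper are hypothetical.

```python
# Example Task tool payloads; values are illustrative only.
explore_call = {
    "description": "Find retry logic",
    "prompt": "Locate where HTTP retries are implemented and list the relevant files.",
    "subagent_type": "Explore",
    "model": "haiku",                 # cheap, fast model for read-only search
}

background_call = {
    "description": "Watch the test suite",
    "prompt": "Run the test suite and summarize any failures when it finishes.",
    "subagent_type": "general-purpose",
    "run_in_background": True,        # don't block the main agent loop
}

REQUIRED = {"description", "prompt", "subagent_type"}

def validate(call: dict) -> None:
    missing = REQUIRED - call.keys()
    if missing:
        raise ValueError(f"Task call missing required fields: {sorted(missing)}")

for call in (explore_call, background_call):
    validate(call)
```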
Context engineering: the hard part
Agents are token guzzlers
Every tool call and its returned text consumes tokens and fills the running conversation. A sample agent loop shows tool results quickly becoming thousands of tokens — both the call and output are needed because models are stateless.
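A rough back-of-the-envelope calculation makes the point. The ~4 characters-per-token heuristic and the specific output sizes below are assumptions, not measurements; they only show how quickly a 200K window fills up.

```python
CONTEXT_WINDOW = 200_000  # tokens, per the Opus 4.5 figure above

def approx_tokens(num_chars: int) -> int:
    return num_chars // 4  # crude heuristic, not a real tokenizer

tool_result_chars = {
    "read src/server.py": 60_000,
    "read src/models.py": 45_000,
    "grep -r 'retry' src/": 8_000,
    "pytest (full output)": 120_000,
}

used = sum(approx_tokens(chars) for chars in tool_result_chars.values())
print(f"~{used:,} tokens from four tool results "
      f"({used / CONTEXT_WINDOW:.0%} of a 200K window), before any prose.")
```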
Principles of context engineering
- Optimize token utility: include the minimum tokens necessary for correct behavior.
- Recitation / reminders: repeatedly injecting concise objectives (todo lists, plan recaps) into the tail of the context helps prevent goal drift and keeps the plan within recent attention.
- Compaction and checkpoints: start fresh conversations or compact state before the context becomes counterproductive. The author tends to compact or hand off at roughly 60% context usage for complex tasks. Both habits are sketched after this list.
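The sketch below shows both habits in miniature: a plan recap re-injected at the tail of the context (recitation) and a usage check against a compaction threshold. Only the 60% figure reflects the habit described above; everything else is hypothetical scaffolding.

```python
COMPACT_THRESHOLD = 0.60
CONTEXT_WINDOW = 200_000

def with_plan_recap(messages: list[str], plan: list[str]) -> list[str]:
    """Recitation: append a concise plan recap so it sits in recent attention."""
    recap = "Current plan:\n" + "\n".join(f"- {step}" for step in plan)
    return messages + [recap]

def should_compact(used_tokens: int) -> bool:
    """Checkpoint rule of thumb: compact or hand off past the threshold."""
    return used_tokens / CONTEXT_WINDOW >= COMPACT_THRESHOLD

messages = with_plan_recap(["...earlier conversation..."],
                           ["Fix the failing test", "Refactor the retry helper"])
if should_compact(used_tokens=130_000):
    print("Write a handoff summary, then start a fresh conversation.")
```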
MCP and code execution
- MCP servers expose tools and filesystem features to a host client; however, tool definitions loaded upfront can bloat context. An alternative pattern is to expose code APIs and give the model a sandboxed execution environment so it can write code that calls those APIs, which reduces the upfront token load of many tool definitions. The two approaches are contrasted in the sketch below.
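The contrast might look like the following. The CRM class, its methods, and the inline "model-written" script are hypothetical stand-ins rather than MCP or Claude Code APIs; the point is only where the tokens go in each approach.

```python
# Approach A: many tool definitions loaded up front, resident in context for
# every model call whether or not the task needs them.
TOOL_DEFINITIONS = [
    {"name": "crm_list_contacts", "description": "List CRM contacts", "parameters": {}},
    {"name": "crm_update_contact", "description": "Update one contact", "parameters": {}},
    # ...dozens more entries, each costing tokens on every request
]

# Approach B: expose a small code API; only its short docstring needs to sit
# in context, and the model writes a script against it inside a sandbox.
class CRM:
    """crm.list_contacts() -> list[dict]; crm.update_contact(id, **fields) -> None"""

    def list_contacts(self) -> list[dict]:
        return [{"id": 1, "email": ""}, {"id": 2, "email": "a@example.com"}]

    def update_contact(self, contact_id: int, **fields) -> None:
        pass

model_written_script = """
stale = [c for c in crm.list_contacts() if not c["email"]]
print(f"{len(stale)} contact(s) missing an email address")
"""
exec(model_written_script, {"crm": CRM()})  # a real setup would sandbox this
```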
Skills, plugins, and hooks
- Skills are on-demand domain packs (a SKILL.md and supporting scripts) that load metadata into the system prompt when relevant. This supports a smaller global instruction set and domain-specific knowledge loaded only when needed.
- Plugins package skills, commands, sub-agents, and hooks for distribution across projects.
- Hooks provide lifecycle triggers (Stop, UserPromptSubmit, etc.) to run scripts before/after stages of the agent loop. Hooks can automate reminders, post-process outputs, or trigger notifications; a minimal hook script is sketched after this list. Combining hooks with skills and recurring reminders yields robust, maintainable agent loops.
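A hook is ultimately just a command the harness runs at a lifecycle event, so a small script like the one below can serve as one. It assumes the hook receives event details as JSON on stdin and that, for some events, stdout is surfaced back to the session; verify both behaviors and the field names against the Claude Code hooks documentation.

```python
#!/usr/bin/env python3
# Hypothetical hook script, e.g. registered for the Stop event in settings.
# It assumes event details arrive as JSON on stdin; field names are assumptions.
import json
import sys
from datetime import datetime

event = json.load(sys.stdin)  # e.g. {"hook_event_name": "Stop", ...}

with open("agent-events.log", "a", encoding="utf-8") as log:
    log.write(f"{datetime.now().isoformat()} {json.dumps(event)}\n")

# For some hook events, stdout is surfaced back to the session, which makes
# this a convenient place for a recurring reminder.
print("Reminder: re-check the todo list in PLAN.md before continuing.")
```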
Workflow and review patterns
- A common pattern: use Claude (Opus 4.5) for execution and iterative development (fast, communicative pair-programming); use a codex/GPT-series model for critical code review and bug discovery (the author notes GPT-5.2-Codex excels at review and severity classification).
- For complex new features, a throw-away first draft followed by a sharper second pass informed by the model's errors produces better outcomes. Micro-management during execution helps maintain correctness while iterating quickly.
Closing notes
Claude Code exemplifies a product approach to agent harnessing: system prompts, tool schemas, sub-agents, and UI details together shape a predictable developer experience. The practical takeaway is to treat context as a limited, valuable resource and to design workflows that load the right information at the right time: use sub-agents for targeted tasks, skills/plugins for on-demand domain expertise, hooks for lifecycle automation, and compaction/checkpoints to prevent context rot.
Original post: https://sankalp.bearblog.dev/my-experience-with-claude-code-20-and-how-to-get-better-at-using-coding-agents/

