Zed’s new Zeta2 model boosts edit acceptance by 30%

Zed has just rolled out Zeta2 as its default edit prediction model, claiming a 30% higher acceptance rate than Zeta1. The core architecture stays the same, but a rebuilt training pipeline and LSP-based context retrieval deliver faster, smarter predictions.


TL;DR

  • Zeta2 default in Zed: Reported 30% higher acceptance rate than Zeta1
  • Rebuilt training pipeline: Improved context construction, evaluation, and feedback loops; reported lower latency predictions
  • Training scale change: From ~500 hand-curated examples to nearly 100,000 opt-in examples from open-source licensed repos
  • LSP-based context retrieval: Pulls types and symbol definitions via go-to-definition system; shipped in 0.222.2
  • Open-weight release: Inspectable, self-hostable, fine-tunable; available as zed-industries/zeta-2 on Hugging Face
  • Roadmap and providers: Supports Mercury Coder, Copilot Next-Edit; exploring “jumps” and DPO for tuning

Zeta2 is now the default edit prediction model in Zed, and Zed says it’s delivering a 30% higher acceptance rate than Zeta1. The interesting part isn’t that the core architecture changed—it didn’t—but that Zed rebuilt much of the surrounding machinery: context construction, training, evaluation before rollout, and feedback loops after deployment.

Same architecture, very different pipeline

Zeta1 was effectively a handcrafted effort, trained on a hand-curated set of roughly 500 examples. Zeta2 moves to something closer to an industrial workflow: it’s trained on nearly 100,000 examples, collected on an opt-in basis from Zed users working in open-source licensed repositories.

That scale required a new end-to-end system to collect, process, and orchestrate training and evaluation. Zed notes that some of this work is already visible in open source, including the edit_prediction_cli tooling in the Zed repository. A practical side effect of the pipeline work is lower latency, which Zed says translates into faster predictions in the editor.

Context gets smarter via LSP

Edit prediction quality tends to hinge on context: if a model only sees a few lines around the cursor, it has to guess at types, symbol definitions, and cross-module behavior. Zed’s recent shift to LSP-based context retrieval means Zeta2 can pull in the surrounding types and definitions of symbols—using the same underlying system that powers go-to-definition.

That LSP integration shipped as part of the 0.222.2 release, and Zed positions it as a key reason Zeta2 can avoid certain “blind” guesses during edits.
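To make the idea concrete, here is a minimal sketch of what LSP-assisted context construction looks like in principle. This is not Zed's implementation: the in-memory `DEFINITIONS` table is a hypothetical stand-in for what real `textDocument/definition` lookups would return, and the function simply prepends resolved definitions to the local editing window before it reaches the model.

```python
# Illustrative sketch (assumption: a real system would query the language
# server; here a dict stands in for those responses).
DEFINITIONS = {
    "User": "struct User { id: u64, name: String }",
    "find_user": "fn find_user(id: u64) -> Option<User>",
}

def build_context(window: str, symbols_near_cursor: list[str]) -> str:
    """Combine resolved symbol definitions with the local editing window,
    so the model sees types instead of having to guess them."""
    resolved = [DEFINITIONS[s] for s in symbols_near_cursor if s in DEFINITIONS]
    header = "\n".join(f"// definition: {d}" for d in resolved)
    return f"{header}\n{window}" if header else window

context = build_context(
    window="let u = find_user(42);",
    symbols_near_cursor=["find_user", "User", "unknown_symbol"],
)
print(context)
```

Unknown symbols are silently skipped, mirroring the fact that a language server simply returns no definition for them.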

Open-weight, and trained on opt-in open source data

Zed is also making a point about governance and control. Zeta2 is trained entirely on open source code, and the latest iteration is open-weight, meaning it can be inspected, run on private infrastructure, or fine-tuned. The model is available on Hugging Face as zed-industries/zeta-2.

There’s an important nuance: while the training samples are derived from opt-in use in open-source repositories, Zed isn’t publishing the dataset at this scale. Users opted into collection and training, not broad public redistribution of the collected examples.

Providers, “jumps,” and preference-tuned iteration

Zed’s edit predictions aren’t limited to Zeta. The editor also supports Mercury Coder, Copilot Next-Edit, and other providers, with the overall approach described in Choose Your Edit Prediction Provider. Zed frames the choice largely around latency characteristics and how different models map to language and editing style.

On the roadmap, Zed calls out “jumps,” a feature meant to connect the dots across multi-site edits—like updating call sites after a function signature change—by using language server errors as anchors for suggested fixes elsewhere in the codebase.
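The mechanism described above can be sketched as a toy pipeline, with the caveat that this is a hypothetical illustration of the idea, not Zed's design: after an edit, diagnostics reported by the language server point at other locations that likely need follow-up edits, and those locations become candidate "jump" targets.

```python
from dataclasses import dataclass

@dataclass
class Diagnostic:
    """A simplified language-server diagnostic (path, line, message)."""
    path: str
    line: int
    message: str

def jump_targets(diagnostics: list[Diagnostic]) -> list[tuple[str, int]]:
    """Toy heuristic: treat diagnostics about arity or type mismatches
    (the kind a signature change produces) as next-edit suggestions."""
    keywords = ("arguments", "mismatched types")
    return [(d.path, d.line) for d in diagnostics
            if any(k in d.message for k in keywords)]

diags = [
    Diagnostic("src/api.rs", 12,
               "this function takes 2 arguments but 1 was supplied"),
    Diagnostic("src/api.rs", 40, "unused variable: `tmp`"),
]
targets = jump_targets(diags)
```

A real implementation would rank candidates and feed each location back to the prediction model; the filtering step here only shows why diagnostics make natural anchors.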

On the model-training side, Zed says it’s experimenting with Direct Preference Optimization (DPO) to better align predictions with what gets accepted versus dismissed, and also testing different prompt formats aimed at efficiency and accuracy.
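For readers unfamiliar with DPO, the standard objective is easy to state: given a pair of completions where one was accepted and one dismissed, the policy is pushed to prefer the accepted one more strongly than a frozen reference model does. The sketch below implements that textbook loss for a single pair; Zed's actual training setup is not public, so treat this only as background on the technique.

```python
import math

def dpo_loss(policy_logp_accepted: float, policy_logp_dismissed: float,
             ref_logp_accepted: float, ref_logp_dismissed: float,
             beta: float = 0.1) -> float:
    """Standard DPO objective for one preference pair:
    -log sigmoid(beta * margin), where the margin compares how much the
    policy favors the accepted edit relative to the reference model."""
    margin = ((policy_logp_accepted - ref_logp_accepted)
              - (policy_logp_dismissed - ref_logp_dismissed))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The loss is lower when the policy assigns more probability to the
# accepted edit than to the dismissed one (relative to the reference).
good = dpo_loss(-1.0, -2.0, -1.5, -1.5)
bad = dpo_loss(-2.0, -1.0, -1.5, -1.5)
```

At a margin of zero the loss is log 2 ≈ 0.693; it falls toward zero as the policy's preference for accepted edits grows.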

Zeta2 is already the default model in Zed, with edit predictions configurable in settings and documented at zed.dev/docs/ai/edit-prediction. Zed also notes that training-data collection can be enabled from the edit prediction status menu, and when enabled it only collects predictions made in open-source repositories.
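For reference, provider selection lives in Zed's settings.json. The snippet below reflects the `edit_prediction_provider` key described in Zed's documentation; verify the exact key names against the docs at zed.dev/docs/ai/edit-prediction for your Zed version.

```json
{
  // Use Zed's own model (currently Zeta2) for edit predictions.
  "features": {
    "edit_prediction_provider": "zed"
  }
}
```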

