Mistral Medium 3.5 brings cloud coding agents to Vibe

Mistral has just rolled out Mistral Medium 3.5, a 128B model with a 256K context window and open weights. It powers new cloud-based coding agents in Vibe plus a Work mode preview in Le Chat for multi-step tasks across connected tools.


TL;DR

  • **Mistral Medium 3.5:** 128B dense model with a **256K context window**; combines instruction-following, reasoning, and coding; the new default for Vibe and Le Chat
  • **Open weights:** Modified MIT license, self-hostable on as few as **four GPUs**, configurable reasoning effort per request
  • **Vision encoder:** Trained from scratch for variable image sizes and aspect ratios
  • **Cloud remote coding agents (Vibe):** Parallel sessions, CLI/Le Chat start, “teleport” local sessions with history, state, approvals
  • **Workflow integrations:** GitHub, Linear, Jira, Sentry, Slack, Teams; isolated sandboxes; PR creation and notifications
  • **Work mode (Le Chat preview):** Multi-step, multi-tool tasks with connectors enabled; visible tool calls and reasoning; approval for sensitive actions

Mistral has introduced Mistral Medium 3.5, a model the company describes as its new default for Mistral Vibe and Le Chat, alongside remote coding agents in Vibe and a new Work mode in Le Chat for multi-step tasks.

The company states that Mistral Medium 3.5 combines instruction-following, reasoning and coding in a single 128B dense model with a 256K context window. Mistral also claims the model is available as open weights under a modified MIT license, can be self-hosted on as few as four GPUs, and supports configurable reasoning effort per request. The model’s vision encoder was trained from scratch to handle variable image sizes and aspect ratios, according to the release.
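The release does not spell out the exact API shape for per-request reasoning effort, so the following is only a sketch: the endpoint URL, model identifier, `reasoning_effort` field name, and its accepted values are all assumptions for illustration.

```python
# Hedged sketch: build a chat-completions request payload with a
# per-request reasoning-effort setting. The field name "reasoning_effort",
# the model id, and the endpoint are assumptions, not confirmed API details.

MISTRAL_CHAT_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Return a request payload; `effort` is one of low/medium/high (assumed values)."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown reasoning effort: {effort}")
    return {
        "model": "mistral-medium-3.5",      # assumed model identifier
        "reasoning_effort": effort,         # assumed parameter name
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize this stack trace.", effort="high")
print(payload["reasoning_effort"])  # high
```

The point is the shape, not the names: effort is a per-request knob rather than a deployment-wide setting, so a client can spend more reasoning on hard requests and less on routine ones.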

Mistral cites benchmark results including 77.6% on SWE-Bench Verified and 91.4 on τ³-Telecom. As is often the case with vendor-reported scores, those figures should be read as the company’s own assessment rather than an independent verdict.

Remote coding agents move to the cloud

The bigger product change is the move of Vibe coding sessions into the cloud. The company says sessions can now run in parallel, letting coding tasks continue while the developer steps away from their local machine.

Sessions can be started from the Mistral Vibe CLI or from Le Chat. While they run, Mistral says the interface can surface file diffs, tool calls, progress states and questions. Existing local CLI sessions can also be “teleported” to the cloud, carrying session history, task state and approvals with them.

Mistral says Vibe connects to tools already used in systems engineering workflows, including GitHub, Linear, Jira, Sentry, Slack and Teams. Each coding session runs in an isolated sandbox, and when work is complete, the agent can open a pull request on GitHub and send a notification.

The company positions the setup for repetitive engineering work such as module refactors, test generation, dependency upgrades, CI investigations and bug fixes.

Mistral also says Workflows orchestrated in Mistral Studio are being used to bring Vibe into Le Chat. The same remote runtime is used whether a task starts in the CLI or in the web app, with the result returning as a finished branch or a draft PR.

Work mode in Le Chat

Mistral’s new Work mode in Le Chat is presented as a preview feature for more complex projects. The company says the agent behind it can read and write, use multiple tools at once and continue through multi-step tasks until completion.

The release lists several current uses for Work mode:

1. Cross-tool workflows, such as catching up across email, messages and calendar, or preparing for a meeting with attendee context, recent news and talking points.
2. Research and synthesis across the web, internal docs and connected tools, with output in the form of a structured brief or report.
3. Inbox triage, drafting replies, creating Jira issues from team or customer discussions, and sending summaries in Slack.

Mistral says Work mode sessions persist longer than a typical chat reply and that connectors are enabled by default so the agent can access documents, mailboxes, calendars and other systems when needed. It also states that each action is visible, with tool calls and the model’s reasoning rationale exposed in the interface. For sensitive actions such as sending a message, writing a document or modifying data, Le Chat will ask for explicit approval based on the user’s permissions.
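Mistral does not document how that approval gate is implemented. As a rough illustration of the described behavior, a client-side loop might classify tool calls as sensitive and pause for confirmation; every name below is hypothetical.

```python
# Illustrative only: a minimal approval gate for sensitive agent actions.
# The action categories and function names are hypothetical, not Mistral's API.

SENSITIVE_ACTIONS = {"send_message", "write_document", "modify_data"}

def execute_with_approval(action: str, args: dict, ask_user) -> str:
    """Run `action`, pausing for explicit user approval when it is sensitive."""
    if action in SENSITIVE_ACTIONS and not ask_user(action, args):
        return "skipped: user declined"
    # Non-sensitive actions (or approved sensitive ones) run directly.
    return f"executed {action}"

# A read-only action runs without any prompt, even with a deny-all callback:
result = execute_with_approval("search_docs", {"query": "Q3 roadmap"},
                               ask_user=lambda action, args: False)
print(result)  # executed search_docs
```

The design choice this mirrors is that approval is tied to the action category, so reads proceed unattended while writes and outbound messages block on the user, consistent with the permission-based gating the release describes.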

Availability and pricing

Mistral Medium 3.5 is available in Mistral Vibe and Le Chat, and the company says it powers remote coding agents and Work mode on Pro, Team and Enterprise plans.

Through the API, Mistral lists pricing at $1.50 per million input tokens and $7.50 per million output tokens. The company also says open weights are available on Hugging Face under a modified MIT license.
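At those listed rates, API cost scales linearly with token counts, and output tokens cost five times as much as input tokens. A quick back-of-the-envelope helper:

```python
# Cost estimate from Mistral's listed API pricing:
# $1.50 per million input tokens, $7.50 per million output tokens.
INPUT_PRICE = 1.50 / 1_000_000   # USD per input token
OUTPUT_PRICE = 7.50 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request or a batch."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. one long-context call: 200K tokens in, 4K tokens out
print(round(estimate_cost(200_000, 4_000), 4))  # 0.33
```

So a single request near the top of the 256K context window costs well under a dollar at list price, with the input side dominating for long-context, short-answer workloads.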

For prototyping, Mistral says the model is available on NVIDIA GPU-accelerated endpoints at build.nvidia.com and as a containerized inference microservice through NVIDIA NIM.

Mistral adds that it is hiring across research, engineering and product, with open roles listed on its careers page.

Source: Mistral
