OpenClaw Unlocks Adjustable Reasoning for Mistral Small 4 — Here's How the Thinking Levels Map

April 7, 2026 · 4 min read

Mistral Small 4 — the compact model that Mistral AI positions as its go-to for fast, cost-efficient inference — just gained a new capability inside OpenClaw: adjustable reasoning. Users can now control how much “thinking” the model does before responding, using the same thinking-level controls that already work with other reasoning-capable models on the platform.

What Changed

OpenClaw's model catalog now marks mistral-small-latest (the API identifier for Mistral Small 4) as reasoning-capable. That single flag unlocks the platform's thinking-level selector for the model, which previously only worked with dedicated reasoning models like Mistral's own Magistral family.

Behind the scenes, OpenClaw translates its internal thinking-level settings into Mistral's reasoning_effort API parameter. The mapping is deliberately simple:

OpenClaw Thinking Level                       | Mistral reasoning_effort
Off / Minimal                                 | none
Low / Medium / High / Extra-High / Adaptive   | high

In practice, this gives users a binary toggle: reasoning off, or reasoning on. Mistral's API doesn't currently expose granular effort levels for Small 4 the way some competing models do, so OpenClaw collapses everything above “minimal” into a single high value. It's honest engineering — no fake granularity.
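The mapping described above can be sketched as a small helper. This is an illustration of the collapse the article describes, not OpenClaw's actual code; the function name and level strings are assumptions.

```python
def to_reasoning_effort(thinking_level: str) -> str:
    """Map an OpenClaw thinking level to Mistral's reasoning_effort value."""
    # "Off" and "Minimal" disable reasoning entirely; every other level
    # collapses to "high", since Mistral's API exposes no finer
    # granularity for Small 4.
    if thinking_level.strip().lower() in ("off", "minimal"):
        return "none"
    return "high"
```

Note that "Adaptive" gets no special treatment here: with only two effort values available, there is nothing for an adaptive policy to choose between.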

Why It Matters

Adjustable reasoning is one of those features that sounds incremental until you're paying the bill. Reasoning tokens cost money and add latency. For tasks that don't need chain-of-thought — simple Q&A, message formatting, content moderation — turning reasoning off can cut response times and token costs significantly.

Conversely, flipping reasoning on for the same model lets you tackle harder tasks without switching to a larger, more expensive model. It's the kind of per-request flexibility that makes a mid-tier model genuinely versatile.

Mistral Small 4 vs. Magistral

A natural question: if Mistral Small 4 now supports reasoning in OpenClaw, why would anyone use Mistral's dedicated Magistral reasoning models?

The answer is depth. Magistral models are built from the ground up for extended chain-of-thought reasoning — they're trained to think longer, explore more branches, and self-correct. Mistral Small 4's reasoning mode is more of a lightweight boost: useful for moderately complex tasks, but not a substitute for a purpose-built reasoning model on the hardest problems.

Think of it as the difference between a Swiss Army knife and a dedicated tool. Small 4 with reasoning on is the Swiss Army knife — good enough for most jobs, and you don't have to switch models to get it.

How to Use It

If you're already running Mistral Small 4 (mistral-small-latest) through OpenClaw, the thinking-level control should now appear in your agent configuration. Set it to “Off” for fast, no-frills responses, or any level from “Low” upward to enable reasoning.

No configuration changes are needed on the Mistral side. The reasoning_effort parameter is part of Mistral's standard Chat Completions API, and OpenClaw handles the mapping automatically.
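For readers calling the API directly, a request with reasoning enabled might be shaped like the following. Only reasoning_effort is the parameter named in this article; the surrounding model and messages fields follow the common Chat Completions request shape and should be checked against Mistral's current API documentation.

```python
def build_chat_request(prompt: str, reasoning: bool) -> dict:
    """Sketch of a Chat Completions request body with reasoning toggled."""
    # reasoning_effort is the parameter OpenClaw sets behind the scenes;
    # "none" skips chain-of-thought, "high" enables it.
    return {
        "model": "mistral-small-latest",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": "high" if reasoning else "none",
    }
```

A moderation or formatting task would pass reasoning=False to keep latency and token spend low, while a harder analytical prompt would pass reasoning=True on the same model.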

Testing adjustable reasoning on Mistral Small 4? We'd love to hear how it performs for your use case on X @DeployClawHQ.

Ready to deploy OpenClaw?

Get started in under 5 minutes with DeployClaw.