OpenClaw Drops GLM-4.7 Flash, Makes Google's Gemma 4 the Default Ollama Model
If you're running OpenClaw with Ollama and spin up a fresh instance today, the first model you'll be nudged toward is no longer GLM-4.7 Flash. It's Google's Gemma 4. The change quietly rolled out this week as part of a broader refresh to OpenClaw's Ollama onboarding defaults, and it signals a shift in which open-weight models the project considers production-ready for local inference.
Why Gemma 4?
GLM-4.7 Flash has served as OpenClaw's default local model for several months, but the landscape has moved. Google's Gemma 4 has emerged as one of the strongest open-weight models in its class, delivering competitive performance across reasoning, instruction-following, and multilingual tasks — all while running efficiently on consumer hardware through Ollama.
For new users going through OpenClaw's setup wizard, the change is simple: where the documentation and onboarding flow previously suggested running `ollama pull glm-4.7-flash`, it now reads `ollama pull gemma4`. Existing installations aren't affected — if you've already configured a different model, nothing changes on your end.
Cloud Defaults Get a Version Bump Too
The update isn't limited to local models. OpenClaw's suggested cloud models for Ollama's cloud-routing feature have also been refreshed:
- GLM 5 → GLM 5.1 — the latest iteration of Zhipu AI's flagship model, with improved reasoning and a larger training corpus
- MiniMax M2.5 → MiniMax M2.7 — a point-release bump that brings better instruction adherence and reduced hallucination rates
Kimi K2.5 remains in the suggested cloud lineup unchanged. The overall effect is a cleaner set of defaults that better reflects where model quality stands in April 2026.
A Small Change, a Meaningful Signal
Default model selections matter more than they might seem. For most self-hosted users, the onboarding default is the first model they'll interact with — and first impressions drive retention. By promoting Gemma 4 to the top of the list, OpenClaw is effectively endorsing it as the best all-around local model for the platform right now.
It's also a reminder of how fast the open-weight model ecosystem is evolving. A model that was the sensible default three months ago can lose that position overnight when a stronger alternative ships. OpenClaw's willingness to rotate defaults keeps its onboarding experience from going stale — something that matters a lot for a project competing on ease of deployment.
What You Need to Do
Nothing, if you're already running OpenClaw with a model you like. But if you're setting up a new instance or haven't tried Gemma 4 yet, it's worth pulling it down:
```
ollama pull gemma4
```

Then set it as your primary model in OpenClaw's configuration, or let the new onboarding flow handle it for you.
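If you prefer to set the model by hand rather than rerun onboarding, the change amounts to pointing your Ollama provider entry at the new model name. This is a hypothetical sketch — the key names (`provider`, `model`) are illustrative and may not match OpenClaw's actual configuration schema, so check your installation's docs for the exact keys:

```yaml
# Hypothetical OpenClaw config sketch — key names are illustrative,
# not taken from OpenClaw's actual schema.
provider: ollama
model: gemma4        # previously: glm-4.7-flash
```

The model name must match what `ollama list` reports after the pull, or inference requests will fail to resolve.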
Running Gemma 4 on OpenClaw? Let us know how it compares to your previous default on X @DeployClawHQ.