
OpenClaw Was Burning 185 Megabytes Just to Check a Password

Three PRs this week expose a pattern that should embarrass any framework maintainer: auth imports that dragged in the entire plugin graph, a message handler that loaded media parsers for plain text, and an auth store that re-synced on every consumer instead of once per file change. The fixes cut cold-start memory by 96%.

March 21, 2026 · 6 min read

The Damage Report

Auth cold-start: 185 MB → 6.5 MB, a 96% reduction (provider-auth-input.ts heap allocation)
Auth load time: 6.0 s → 0.3 s, 95% faster (provider-auth-input.ts init time)
Plugin auth SDK: 169 MB → 4.6 MB, a 97% reduction (plugin-sdk/provider-auth.ts heap)

Let me describe a scenario that was happening on every single OpenClaw instance until this week. A user sends a plain-text Telegram message — “hey, what's the weather?” — and the inbound reply handler loads the media understanding module, the link understanding module, and re-syncs the auth store from disk. For a text message. With no media. No links. No credential change.

This is the kind of performance bug that doesn't crash anything. It just makes everything heavier than it needs to be, on every single message, for every single user. And nobody noticed for months because the system still worked. It just worked slowly. And slowly is the hardest symptom to diagnose.

The 185-Megabyte Import Chain

PR #51891 · reduce plugin auth cold-start heap

Contributor: vincentkoc · Size: S · Three critical import paths fixed

Here's the import chain that was quietly running on every OpenClaw instance: provider-auth-input.ts imported the full auth module. The full auth module imported the Discord timeout constants. The Discord timeout constants lived in a config schema file that transitively imported the full runtime entry points for Telegram, WhatsApp, Discord, Matrix, and every other messaging provider.

So when OpenClaw needed to check an API key — a function that should read one string from an environment variable — it loaded the entire plugin graph. 185.7 megabytes. 6 seconds of initialization. For a function whose actual work takes microseconds.

The fix is almost comically simple. Extract resolveEnvApiKey into its own model-auth-env.ts module with zero transitive dependencies. Extract sanitizeForConsole into console-sanitize.ts. Isolate the Discord timeout import in schema.help.ts to target the specific constant instead of the full runtime.
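The extraction pattern is easy to sketch. Only the module and function names below come from the PR; the body is an assumption about what an environment-variable lookup with zero transitive dependencies might look like:

```typescript
// model-auth-env.ts (hypothetical body). A leaf module: it touches
// nothing but the process environment, so importing it costs
// kilobytes instead of the full plugin graph.
function resolveEnvApiKey(provider: string): string | undefined {
  // e.g. provider "acme-cloud" -> env var ACME_CLOUD_API_KEY
  const envVar = `${provider.toUpperCase().replace(/-/g, "_")}_API_KEY`;
  return process.env[envVar];
}
```

Callers that previously imported the full auth module import this file instead, and a backward-compatible re-export from the old location keeps downstream code working.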

Result: 185.7 MB drops to 6.5 MB. 6 seconds drops to 300 milliseconds. The plugin SDK auth path goes from 169 MB to 4.65 MB. These aren't incremental improvements. This is the difference between a module that accidentally loaded everything and one that loads what it needs.

The Plain-Text Message That Loaded the Media Parser

PR #51899 · trim inbound startup churn

Contributor: vincentkoc · Size: M · Two optimizations, one hot path

Every inbound message — regardless of content — triggered eager initialization of two systems: media understanding and link understanding. These are heavyweight modules. They parse images, extract URLs, analyze content types. Useful when a user sends a photo or a link. Completely pointless for “good morning.”

The fix: lazy loading. Media understanding loads only when the message contains media. Link understanding loads only when the message contains URLs. Plain-text messages — which are the vast majority of traffic — skip both entirely. Live Telegram testing confirmed: applied=0 for text-only DMs.
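The shape of the pattern, sketched with made-up types and names (OpenClaw's real handler differs): the heavy analyzers sit behind thunks that run only when the message actually needs them.

```typescript
// Illustrative lazy-init sketch, not OpenClaw's actual handler API.
type InboundMessage = { text: string; hasMedia: boolean };

let mediaInits = 0; // instrumentation for the example
let linkInits = 0;

const initMediaUnderstanding = () => { mediaInits++; /* image/content-type parsers */ };
const initLinkUnderstanding = () => { linkInits++; /* URL extraction */ };

function handleInbound(msg: InboundMessage): number {
  let applied = 0;
  if (msg.hasMedia) { initMediaUnderstanding(); applied++; }
  if (/https?:\/\/\S+/.test(msg.text)) { initLinkUnderstanding(); applied++; }
  return applied; // text-only DMs report applied=0, matching the PR's live test
}
```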

The second optimization in the same PR tackles the auth store. Previously, every consumer that needed credentials called syncExternalCliCredentials independently — reading from disk, comparing state, potentially writing back. In a single process with multiple consumers, this meant repeated bursts of file I/O for the same data.

The fix introduces an mtime-keyed in-memory cache. The auth store loads once. Subsequent calls check the file modification time — if it hasn't changed, they return the cached version. Testing showed the sync collapsing from repeated bursts to a single load: elapsedMs=67 mutated=false.
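A minimal sketch of an mtime-keyed cache, assuming a JSON credentials file (the shape and names are guesses, not the actual auth-store code): the file is parsed once, and later calls return the cached object unless the modification time has moved.

```typescript
import * as fs from "node:fs";

// Hypothetical mtime-keyed credential cache.
let cachedMtimeMs = -1;
let cachedCreds: Record<string, string> | null = null;
let diskParses = 0; // instrumentation for the example

function loadCredentials(path: string): Record<string, string> {
  const { mtimeMs } = fs.statSync(path);
  if (cachedCreds !== null && mtimeMs === cachedMtimeMs) {
    return cachedCreds; // unchanged on disk: skip the read and the parse
  }
  diskParses++;
  cachedCreds = JSON.parse(fs.readFileSync(path, "utf8"));
  cachedMtimeMs = mtimeMs;
  return cachedCreds!;
}
```

One subtlety worth noting: mtime granularity is a full second on some filesystems, so a write landing within the same tick as the last one can be missed; the real implementation may guard against this differently.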

The Sandbox That Imported Its Own Error Messages

PR #51897 · narrow sandbox status imports

Contributor: vincentkoc · Size: S · Error helper extraction

The same pattern, smaller scale. The sandbox status module imported the full agent error helpers — which transitively pulled in modules that had nothing to do with sandbox state checking. Narrowing the import to target only the specific error helpers needed trimmed another unnecessary branch from the dependency tree.

Each of these three PRs tells the same story: a module that needed one thing imported everything. And the fix in every case is the same boring, obvious answer — extract the thing you need into its own module, import that instead.

The Uncomfortable Question

I keep coming back to the same thought: how did this ship? Not as a gotcha — the contributors who fixed these problems are the same team maintaining the project. But as a diagnostic question. How does a framework that tracks context tokens to the nearest integer let its own auth path allocate 185 megabytes without anyone measuring it?

The answer is probably the same answer it always is: nobody measured the import graph. TypeScript makes transitive imports invisible. You import a constant from a config file, and you don't realize that file imports a schema that imports a provider that imports the world. The dependency tree is there in node_modules, but nobody looks at it until something is slow. And “slow” is relative when your baseline was always slow.
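Measuring this is not hard once you think to do it. A crude sketch (my own, not the maintainers' tooling): diff heap usage around a dynamic import. GC noise makes the number approximate; a heap snapshot in the inspector gives the precise retained size.

```typescript
// Snapshot heapUsed before and after a dynamic import; the difference
// approximates what the module and its transitive imports retain.
async function measureImportHeap(specifier: string): Promise<number> {
  const before = process.memoryUsage().heapUsed;
  await import(specifier); // pulls in the whole transitive graph
  const after = process.memoryUsage().heapUsed;
  return (after - before) / (1024 * 1024); // MB
}
```

Pointed at provider-auth-input.ts in a CI check, something like this would have flagged the 185 MB chain long before any user felt it.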

“Performance bugs don't send alerts. They send invoices. And you pay them on every message, for every user, until someone finally runs a heap snapshot.”

The good news: vincentkoc shipped all three fixes in a single day, with tests, with live validation on Telegram, and with backward-compatible re-exports so nothing downstream breaks. The bad news: these optimizations suggest the import graph hasn't been systematically audited. If three modules were loading 185 MB when they needed 6.5 MB, how many other modules are doing the same thing?

All three fixes are live for DeployClaw users. For the technical details, see PR #51891, PR #51899, and PR #51897 on GitHub.

Faster cold starts. No manual patching.

DeployClaw ships every upstream performance fix automatically. Your instance is already running the optimized code.

DeployClaw News · Reported by Carlos Simpson

DeployClaw hosts OpenClaw instances and ships upstream fixes automatically. This publication covers development independently.