
Cloudflare Workers: Script Too Large – How to Fix It


Error: Script startup exceeded CPU time limit / Worker size exceeds limit

Your Cloudflare Worker's bundled script exceeds the size limit (free plan: 1 MB, paid plan: 10 MB, measured after compression).

Why this happens

Cloudflare Workers run on the edge in a V8 isolate with strict resource limits. The bundled script (including all dependencies) must fit within the size limit after compression. Large utility libraries like lodash, embedded data files, or heavy SDKs can easily push your bundle past the threshold. The "CPU time limit" variant happens when parsing and initializing a large script takes too long at startup.
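
As a rough illustration (the data file here is hypothetical), importing a large file directly into the Worker means its entire contents become part of the deployed script:

// Hypothetical example: a large JSON file imported directly into the Worker.
// The bundler inlines the file at build time, so a multi-megabyte data set
// alone can push the compressed bundle past the limit.
import cities from './data/cities.json'; // assumed large data file

export default {
  async fetch() {
    return new Response(`Loaded ${cities.length} records`);
  }
};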

Fix 1: Check bundle size

# Build locally without deploying; the bundled Worker is written to dist/
wrangler deploy --dry-run --outdir dist
# Inspect the bundle size (the limit applies to the compressed script)
ls -la dist/

Fix 2: Tree-shake unused code

// ❌ Importing the entire library
import _ from 'lodash';

// ✅ Import only what you need
import groupBy from 'lodash/groupBy';

Fix 3: Move large dependencies to KV or R2

Store large data in Cloudflare KV (or R2 for larger objects) instead of bundling it, and read it at request time.
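
As a rough sketch (the binding name DATA and the key are assumptions for illustration), the data is uploaded to a KV namespace once and read at request time instead of shipping inside the bundle:

// Read the data from KV at request time instead of importing it into the bundle.
// "DATA" is an assumed KV namespace binding declared in wrangler.toml; the
// "cities" key would be uploaded ahead of time with Wrangler or the dashboard.
export default {
  async fetch(request, env) {
    const cities = await env.DATA.get('cities', 'json'); // parsed JSON, fetched from KV
    return new Response(`Loaded ${cities.length} records`);
  }
};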

Fix 4: Use external modules (paid plan)

# wrangler.toml (entry and output paths are assumed; adjust to your project)
[build]
command = "esbuild src/index.js --bundle --format=esm --outfile=dist/index.js --external:heavy-lib"

Alternative solutions

Replace heavy npm packages with lighter alternatives (for example, date-fns instead of moment, or nanoid instead of uuid). You can also split your Worker into multiple smaller Workers that call each other via Service Bindings, keeping each one under the size limit.
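
As a rough sketch of the Service Bindings approach (the binding name AUTH and the second Worker are illustrative assumptions; the binding would be declared in wrangler.toml), one Worker can hand part of a request to another without a public network round trip:

// Main Worker delegating authentication to a separately deployed Worker
// over a Service Binding. "AUTH" is an assumed binding name.
export default {
  async fetch(request, env) {
    const authResponse = await env.AUTH.fetch(request.clone()); // edge-to-edge call, no public hop
    if (!authResponse.ok) {
      return new Response('Unauthorized', { status: 401 });
    }
    return new Response('Hello from the main Worker');
  }
};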

Prevention

  • Add a bundle size check to your CI pipeline (e.g., wrangler deploy --dry-run with a size assertion) so you catch size regressions before they reach production; a minimal check script is sketched after this list.
  • Audit your import statements regularly; a single deep dependency can pull in an unexpectedly large tree of modules.
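
One way to write that CI check, assuming the bundle was produced with wrangler deploy --dry-run --outdir dist and is named dist/index.js (both assumptions to adjust for your project), is a small Node script:

// check-bundle-size.mjs: fails the CI job if the compressed bundle is too big.
// The 1 MB threshold matches the free plan limit quoted above; raise it on paid plans.
import { readFileSync } from 'node:fs';
import { gzipSync } from 'node:zlib';

const LIMIT_BYTES = 1 * 1024 * 1024;          // adjust to your plan's limit
const bundle = readFileSync('dist/index.js'); // assumed bundle path
const compressed = gzipSync(bundle).length;   // approximate the compressed size Cloudflare measures

console.log(`Compressed bundle: ${compressed} bytes (limit: ${LIMIT_BYTES})`);
if (compressed > LIMIT_BYTES) {
  console.error('Bundle exceeds the Worker size limit');
  process.exit(1);
}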

Related: Wrangler: Deployment Failed – How to Fix It · Cloudflare Workers vs Vercel Edge