What is Serverless? A Simple Explanation for Developers
Serverless doesn't mean "no servers." It means you don't manage servers. You write a function, deploy it, and the cloud provider runs it for you. You pay only when it executes.
```javascript
// This is a complete serverless function (AWS Lambda)
export async function handler(event) {
  const name = event.queryStringParameters?.name || 'World';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}
```
Deploy this, and it handles 0 to 10,000 requests per second automatically. No servers to configure, no scaling to think about, no patches to install.
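Before deploying, you can exercise the handler locally by calling it with a mock event. The event below mimics the API Gateway proxy shape used above, but real events carry many more fields (a sketch, not the full schema):

```javascript
// The same handler as above, minus the `export` keyword so the
// snippet runs standalone.
async function handler(event) {
  const name = event.queryStringParameters?.name || 'World';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}

// Invoke locally with a mock API Gateway-style event.
handler({ queryStringParameters: { name: 'Ada' } }).then((response) => {
  console.log(response.statusCode);               // 200
  console.log(JSON.parse(response.body).message); // Hello, Ada!
});
```

This is also how unit tests for serverless functions typically work: the handler is just a function, so you pass it events and assert on the returned object.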
How it works
- You write a function
- You deploy it to a cloud provider
- A request comes in → the provider spins up your function → runs it → returns the response
- No requests? Nothing runs. You pay nothing.
The provider handles: servers, operating systems, scaling, load balancing, security patches, availability.
Serverless providers
| Provider | Service | Free tier |
|---|---|---|
| AWS | Lambda | 1M requests/month |
| Google Cloud | Cloud Functions | 2M invocations/month |
| Cloudflare | Workers | 100K requests/day |
| Vercel | Serverless Functions | Generous hobby tier |
| Netlify | Functions | 125K invocations/month |
When to use serverless
Good fit:
- APIs and webhooks
- Cron jobs / scheduled tasks
- Image processing, file handling
- Low-traffic or spiky-traffic apps
- MVPs and side projects (free tier is generous)
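As an illustration of the webhook use case, here is a minimal sketch of a function that receives a JSON payload and acknowledges it. The event and response shapes follow the Lambda proxy convention used above; the `orderId` field is hypothetical:

```javascript
// Minimal webhook receiver: parse the JSON body and acknowledge.
// (In a deployed Lambda this would be `export async function`;
// the `orderId` payload field is hypothetical.)
async function webhookHandler(event) {
  let payload;
  try {
    payload = JSON.parse(event.body || '{}');
  } catch {
    return { statusCode: 400, body: JSON.stringify({ error: 'invalid JSON' }) };
  }
  // A real webhook would verify a signature header here before processing.
  return {
    statusCode: 200,
    body: JSON.stringify({ received: true, id: payload.orderId ?? null }),
  };
}
```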
Not ideal:
- Long-running processes (execution time is capped; HTTP-triggered functions often time out in 10-30 seconds, and even AWS Lambda caps runs at 15 minutes)
- WebSocket connections (stateless by design)
- Apps that need persistent in-memory state
- High-throughput, consistent-traffic apps (a regular server is cheaper)
Serverless vs. traditional server
| | Serverless | Traditional server |
|---|---|---|
| Scaling | Automatic | Manual or auto-scaling config |
| Cost at zero traffic | $0 | $5-50/month minimum |
| Cost at high traffic | Can get expensive | Predictable |
| Cold starts | Yes (first request is slower) | No |
| Deployment | Push a function | Deploy to a server |
| Maintenance | None | OS updates, security patches |
The cold start problem
When a serverless function hasn't been called recently, the provider needs to spin up a new instance. This "cold start" adds 100ms-2s of latency to the first request. Subsequent requests are fast.
Mitigation: keep functions small, use lightweight runtimes (Node.js, Python), or use providers with minimal cold starts (Cloudflare Workers).
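Another mitigation lives inside the function itself: do expensive setup at module scope, outside the handler, so it runs once per cold start and is reused by every warm invocation. Sketched below with a counter standing in for a real database client:

```javascript
// Module-scope code runs once per cold start; the handler runs per request.
let initCount = 0;

function createExpensiveClient() {
  // Stand-in for opening a DB connection or loading config.
  initCount += 1;
  return { query: async (q) => `result for ${q}` };
}

// Initialized once, when the instance cold-starts.
const client = createExpensiveClient();

async function handler(event) {
  // Warm invocations reuse `client` instead of reconnecting.
  const result = await client.query(event.q ?? 'ping');
  return { statusCode: 200, body: JSON.stringify({ result, initCount }) };
}
```

Calling the handler repeatedly leaves `initCount` at 1: the costly setup is paid once per instance, not once per request.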
For related concepts, see "What is edge computing?" and "Vercel vs Railway vs Fly.io" for a comparison of modern deployment platforms.
FAQ
How much does serverless cost at scale?
At low to moderate traffic, serverless is extremely cheap (often free). At very high, consistent traffic (millions of requests per hour), a dedicated server can be more cost-effective. The break-even point varies, but serverless excels for spiky or unpredictable workloads.
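To make the break-even intuition concrete, here is a back-of-the-envelope estimate. The rates below are illustrative (roughly in line with published AWS Lambda pricing at the time of writing, but they vary by region and change over time, so check current pricing):

```javascript
// Back-of-the-envelope serverless cost estimate. Rates are illustrative.
const PRICE_PER_MILLION_REQUESTS = 0.20;  // USD, illustrative
const PRICE_PER_GB_SECOND = 0.0000166667; // USD, illustrative

function estimateMonthlyCost({ requestsPerMonth, avgDurationMs, memoryGb }) {
  const requestCost = (requestsPerMonth / 1e6) * PRICE_PER_MILLION_REQUESTS;
  // Compute is billed in GB-seconds: duration × memory, summed over requests.
  const gbSeconds = requestsPerMonth * (avgDurationMs / 1000) * memoryGb;
  const computeCost = gbSeconds * PRICE_PER_GB_SECOND;
  return requestCost + computeCost;
}

// 5M requests/month, 100 ms average duration, 128 MB memory:
const monthly = estimateMonthlyCost({
  requestsPerMonth: 5e6,
  avgDurationMs: 100,
  memoryGb: 0.125,
});
console.log(monthly.toFixed(2)); // ~2.04 (dollars, before free-tier credits)
```

The same formula shows where the curve crosses: at 500M requests/month with these rates you are near $200/month, which is the territory where a fixed-price dedicated server starts to look attractive.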
Can I run a full web app on serverless?
Yes. Frameworks like Next.js, Remix, and SvelteKit can deploy entirely on serverless functions. Each route becomes a function. However, features requiring persistent connections (like WebSockets) need additional services.
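For instance, a Next.js App Router route handler is just a function the platform can deploy as a serverless function. A sketch following the App Router convention (the file path is assumed; the real file would `export` the function):

```javascript
// app/api/hello/route.js (path assumed) -- each exported HTTP-method
// function becomes a serverless function on platforms like Vercel.
// The `export` keyword is omitted here so the snippet runs standalone.
async function GET(request) {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get('name') ?? 'World';
  // Response.json is available in Node 18+ and in edge/serverless runtimes.
  return Response.json({ message: `Hello, ${name}!` });
}
```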
Whatβs the difference between serverless and edge computing?
Serverless runs your code on-demand without managing servers, typically in one or a few regions. Edge computing runs your code in many locations worldwide (close to users) for lower latency. Many platforms now combine both: serverless functions deployed to the edge.
See also: What is CI/CD? | AWS CLI cheat sheet