Serverless doesn’t mean “no servers.” It means you don’t manage servers. You write a function, deploy it, and the cloud provider runs it for you. You pay only when it executes.
```javascript
// A complete serverless function (AWS Lambda, Node.js runtime)
export async function handler(event) {
  const name = event.queryStringParameters?.name || 'World';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}
```
Deploy this, and it scales automatically, from zero to thousands of requests per second. No servers to configure, no scaling to think about, no patches to install.
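Before deploying, you can exercise the handler locally by handing it a fake event object shaped like what the provider would send. A minimal sketch (the handler is repeated here, without the `export`, so the snippet is self-contained):

```javascript
// Handler repeated for a self-contained sketch; in a real project
// you would import it from the module you deploy.
async function handler(event) {
  const name = event.queryStringParameters?.name || 'World';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}

// Simulate the event a provider would pass in for GET /?name=Ada
handler({ queryStringParameters: { name: 'Ada' } }).then((response) => {
  console.log(response.statusCode); // 200
  console.log(response.body);       // {"message":"Hello, Ada!"}
});
```

This is the whole programming model: the provider constructs the event, your function returns a response object, and everything in between is the provider's problem.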
How it works
- You write a function
- You deploy it to a cloud provider
- A request comes in → the provider spins up your function → runs it → returns the response
- No requests? Nothing runs. You pay nothing.
The provider handles: servers, operating systems, scaling, load balancing, security patches, availability.
Serverless providers
| Provider | Service | Free tier |
|---|---|---|
| AWS | Lambda | 1M requests/month |
| Google Cloud | Cloud Functions | 2M invocations/month |
| Cloudflare | Workers | 100K requests/day |
| Vercel | Serverless Functions | Generous hobby tier |
| Netlify | Functions | 125K invocations/month |
When to use serverless
Good fit:
- APIs and webhooks
- Cron jobs / scheduled tasks
- Image processing, file handling
- Low-traffic or spiky-traffic apps
- MVPs and side projects (free tier is generous)
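Webhooks are a particularly natural fit: traffic is spiky and each request is independent. A hedged sketch of a webhook receiver that checks an HMAC-SHA256 signature; the `x-signature` header name and hex encoding are assumptions, so check your webhook provider's docs for the real contract:

```javascript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Hypothetical secret; in practice, load this from an environment variable.
const SECRET = 'shared-webhook-secret';

function verify(body, signature) {
  const expected = createHmac('sha256', SECRET).update(body).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Length check first: timingSafeEqual throws on unequal lengths.
  return a.length === b.length && timingSafeEqual(a, b);
}

async function handler(event) {
  const signature = event.headers?.['x-signature'] ?? '';
  if (!verify(event.body ?? '', signature)) {
    return { statusCode: 401, body: 'bad signature' };
  }
  // Process the payload here (enqueue it, write to a DB, etc.).
  return { statusCode: 200, body: 'ok' };
}
```

Since the function only runs when a webhook actually fires, a low-volume integration like this often costs nothing at all.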
Not ideal:
- Long-running processes (execution time is capped: HTTP-triggered functions are often limited to around 30 seconds, and even AWS Lambda tops out at 15 minutes)
- Long-lived WebSocket connections (functions are stateless and short-lived by design)
- Apps that need persistent in-memory state
- High-throughput, consistent-traffic apps (a regular server is cheaper)
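The in-memory-state limitation is worth seeing concretely. A sketch of the anti-pattern (the `hits` counter is hypothetical):

```javascript
// Anti-pattern: module-level state looks persistent but isn't.
// Each instance of the function gets its OWN copy of `hits`, and that
// copy vanishes when the instance is recycled, so the count is neither
// shared across concurrent instances nor durable.
let hits = 0;

async function handler() {
  hits += 1; // counts invocations of THIS instance only
  return { statusCode: 200, body: JSON.stringify({ hits }) };
}
```

Under load, ten concurrent instances each report their own count; after a quiet period, the count silently resets. Anything that must survive a request belongs in a database or cache, not in the function.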
Serverless vs. traditional server
| | Serverless | Traditional server |
|---|---|---|
| Scaling | Automatic | Manual or auto-scaling config |
| Cost at zero traffic | $0 | $5-50/month minimum |
| Cost at high traffic | Can get expensive | Predictable |
| Cold starts | Yes (first request is slower) | No |
| Deployment | Push a function | Deploy to a server |
| Maintenance | None | OS updates, security patches |
The cold start problem
When a serverless function hasn’t been called recently, the provider needs to spin up a new instance. This “cold start” adds 100ms-2s of latency to the first request. Subsequent requests are fast.
Mitigation: keep functions and their dependencies small, use lightweight runtimes (Node.js, Python), pay for pre-warmed capacity where offered (e.g. AWS Lambda provisioned concurrency), or pick a provider with near-zero cold starts (Cloudflare Workers run V8 isolates rather than containers).
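In code, the standard mitigation is to hoist one-time setup out of the handler: it runs once per cold start, and warm invocations of the same instance reuse it. A minimal sketch, where `startedAt` and `config` stand in for expensive initialization such as SDK clients or database connections:

```javascript
// Runs once per cold start; reused by every warm invocation of
// this instance.
const startedAt = Date.now();          // stands in for expensive init work
const config = { greeting: 'Hello' };  // e.g. parsed config or secrets

async function handler(event) {
  // Only cheap, per-request work happens inside the handler.
  const name = event.queryStringParameters?.name || 'World';
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: `${config.greeting}, ${name}!`,
      instanceAgeMs: Date.now() - startedAt, // grows across warm invocations
    }),
  };
}
```

On a cold start `instanceAgeMs` is near zero; on warm requests it keeps growing, which is an easy way to observe instance reuse in your logs.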
See also: What is CI/CD? | AWS CLI cheat sheet