The False Dichotomy
Engineering teams often treat serverless vs containers as an architectural religion. It is not. They are tools with different cost profiles, operational models, and performance characteristics. The decision should be made per workload, not per organization.
Where Serverless Wins
Event-driven, spiky workloads
A function that processes incoming webhooks gets 0 invocations at 3am and 10,000 at 9am. With Lambda, you pay for exactly what you use and scale to zero automatically. Running a container for this workload means paying for idle capacity 24/7.
Triggered automation
Image resizing on S3 upload, sending emails on database events, scheduled data exports — these are textbook serverless use cases. The event triggers the function; the function runs; it terminates.
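In Python, a minimal Lambda handler for the S3-upload case might look like the sketch below. The handler name and the resize step are illustrative; a real function would fetch and write the object with boto3, which is omitted here to keep the example self-contained:

```python
import urllib.parse

def handler(event, context):
    """Hypothetical entry point for an S3 ObjectCreated event."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys arrive URL-encoded (spaces become '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # A real function would download the object, resize it, and
        # upload the result; here we just record the work item.
        results.append({"bucket": bucket, "key": key})
    return {"processed": len(results), "items": results}
```

The function is invoked per event, does its work, and terminates — there is no server to keep warm between uploads.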
Edge compute
Cloudflare Workers and Lambda@Edge run your code at 200+ global PoPs; Workers' V8-isolate model keeps cold starts under 10ms. No container orchestration gets you this latency profile without significant infrastructure investment.
Cost at low scale
For a startup's first API, Lambda's free tier (1M requests/month) and pay-per-invocation pricing are substantially cheaper than running even a single small ECS task 24/7.
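A back-of-envelope sketch of that comparison (prices are illustrative rather than a quote; the free-tier figures are the 1M requests and 400,000 GB-seconds AWS advertises):

```python
def lambda_monthly_cost(requests, avg_ms=100, mem_gb=0.128):
    """Estimated monthly Lambda bill (illustrative x86 rates)."""
    gb_seconds = requests * (avg_ms / 1000) * mem_gb
    request_cost = max(0, requests - 1_000_000) / 1_000_000 * 0.20
    compute_cost = max(0, gb_seconds - 400_000) * 0.0000167
    return request_cost + compute_cost

def container_monthly_cost(hourly=0.012):
    """A small always-on task (roughly 0.25 vCPU / 0.5 GB class)."""
    return hourly * 24 * 30

# A 500k-request/month API never leaves the Lambda free tier,
# while the small container costs ~$8-9/month just for existing.
print(lambda_monthly_cost(500_000))   # 0 -- inside the free tier
print(container_monthly_cost())
```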
Where Containers Win
Long-running processes
WebSocket servers, job queue workers, stream processors — these need to run continuously. Lambda's 15-minute maximum execution time and connection limitations make it a poor fit. A container running indefinitely is the right model.
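As a sketch of that model, a container-style worker is just a loop that runs until it is told to stop. The in-memory queue below stands in for SQS or Redis, and the None sentinel stands in for a SIGTERM handler:

```python
import queue

def run_worker(jobs: "queue.Queue") -> list:
    """Container-style worker: runs indefinitely, draining a queue."""
    done = []
    while True:
        job = jobs.get()          # in production: SQS long-poll / Redis BLPOP
        if job is None:           # shutdown signal (SIGTERM in practice)
            break
        done.append(job.upper())  # stand-in for the real work
    return done

q = queue.Queue()
for item in ("resize:a.png", "resize:b.png", None):
    q.put(item)
print(run_worker(q))  # ['RESIZE:A.PNG', 'RESIZE:B.PNG']
```

The loop has no 15-minute ceiling and holds its connections open for as long as the process lives — exactly what Lambda's execution model cannot offer.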
Predictable, sustained traffic
If your API handles 500 requests/second continuously, Lambda's per-invocation cost at that scale exceeds the cost of reserved container capacity. The math flips around 100–200 requests/second sustained.
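That crossover is easy to sanity-check with illustrative numbers ($0.20 per 1M requests and ~$0.0000167 per GB-second for Lambda, ignoring the free tier at this scale; an assumed ~$150/month of reserved container capacity able to serve the load):

```python
def lambda_monthly(rps, avg_ms=50, mem_gb=0.256):
    """Monthly Lambda cost at a sustained request rate (illustrative)."""
    requests = rps * 86_400 * 30
    gb_seconds = requests * (avg_ms / 1000) * mem_gb
    return requests / 1e6 * 0.20 + gb_seconds * 0.0000167

CONTAINER_MONTHLY = 150.0  # assumed reserved capacity for this workload

for rps in (50, 100, 200, 500):
    cheaper = "lambda" if lambda_monthly(rps) < CONTAINER_MONTHLY else "containers"
    print(f"{rps:>3} rps: ${lambda_monthly(rps):8.2f}/mo -> {cheaper}")
```

With these assumptions the flip lands near 140 req/s; your own memory size, duration, and committed-use discounts move it, which is why a range is quoted rather than a single number.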
Complex runtime dependencies
Custom binaries, GPU workloads, specific Linux library versions, large ML model loading — containers give you full control over the execution environment. Lambda's layers system works, but its 250MB unzipped deployment limit rules out large dependencies.
Sub-100ms cold starts are critical
Lambda cold starts range from roughly 100ms (Node.js) to 1–2s (JVM). For latency-sensitive, user-facing APIs, a warm container pool is more predictable.
The Hybrid Architecture
The best production systems use both:
User Request
│
▼
CloudFront (CDN edge cache)
│
▼
API Gateway + Lambda ← Bursty endpoints, webhooks, lightweight APIs
│
▼
ECS/EKS Containers ← Core application, WebSocket servers, workers
│
▼
Lambda (async) ← Background jobs triggered by SQS/EventBridge
The Decision Framework
| Factor | Serverless | Containers |
|---|---|---|
| Traffic pattern | Spiky / bursty | Steady / sustained |
| Execution duration | <15 minutes | Long-running |
| Cold start tolerance | Required | Not required |
| Traffic volume | Low-medium | High sustained |
| State requirements | Stateless | Stateful possible |
| Cost model | Per invocation | Per provisioned hour |
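The table can be encoded as a toy routing function. The thresholds are the article's rules of thumb, not hard platform limits, and the function name is hypothetical:

```python
def pick_platform(spiky: bool, max_runtime_min: float,
                  needs_state: bool, cold_start_ok: bool,
                  sustained_rps: float) -> str:
    """Toy encoding of the decision table above."""
    if max_runtime_min > 15:          # Lambda's hard execution cap
        return "containers"
    if needs_state or not cold_start_ok:
        return "containers"
    if sustained_rps >= 150 and not spiky:  # ~100-200 rps cost crossover
        return "containers"
    return "serverless"

# A bursty webhook endpoint lands on serverless:
print(pick_platform(spiky=True, max_runtime_min=1, needs_state=False,
                    cold_start_ok=True, sustained_rps=5))  # serverless
```

Applied per workload, a function like this naturally produces the hybrid architecture sketched above rather than an all-or-nothing choice.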
Operational Reality
Serverless reduces infrastructure operations but increases observability complexity. Distributed traces across 50 Lambda functions are harder to debug than a monolith in a container. Invest in structured logging and distributed tracing (AWS X-Ray, Honeycomb) before you need it.
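A minimal version of that structured-logging habit is one JSON object per line, tied together by a shared correlation id (here a hypothetical request_id; X-Ray would give you a trace id) so events from different functions can be stitched back into one request:

```python
import json
import sys
import time
import uuid

def log_event(level: str, message: str, **fields) -> str:
    """Emit one JSON object per log line; flat keys keep it queryable."""
    entry = {"ts": time.time(), "level": level, "msg": message, **fields}
    line = json.dumps(entry)
    print(line, file=sys.stdout)
    return line

request_id = str(uuid.uuid4())  # in practice, propagated via headers
log_event("INFO", "webhook received", request_id=request_id, source="github")
log_event("INFO", "resize job enqueued", request_id=request_id, queue="images")
```

CloudWatch Logs Insights, Honeycomb, and similar tools can then filter and group on `request_id` instead of grepping free-form text across 50 log groups.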
Containers require more upfront setup but offer more control. Kubernetes in particular has a steep learning curve — if your team does not have Kubernetes experience, consider ECS on Fargate as a stepping stone.