Redis Is Not Just a Cache
Teams that only use Redis for caching are leaving significant capability on the table. Redis is a multi-model data structure server — and its less-discussed features solve distributed systems problems that would otherwise require separate infrastructure: a message broker, a job queue, and a rate limiting service.
Pub/Sub: Real-Time Messaging Without Kafka Overhead
Redis Pub/Sub delivers messages to all currently connected subscribers on a channel with minimal latency. It is not durable (messages published while a subscriber is disconnected are lost), but for real-time use cases where only the latest data matters — live dashboards, notification broadcasting, presence systems — it is ideal.
import { createClient } from 'redis'

// Publisher
const publisher = createClient()
await publisher.connect()
await publisher.publish('order:updates', JSON.stringify({ orderId: '123', status: 'shipped' }))

// Subscriber: node-redis requires a dedicated connection while in subscribe mode
const subscriber = publisher.duplicate()
await subscriber.connect()
await subscriber.subscribe('order:updates', (message) => {
  const event = JSON.parse(message)
  broadcastToWebSocket(event)
})

For durable messaging where every message must be processed, use Redis Streams instead — they provide consumer groups, message acknowledgement, and replay from any offset. This is the lightweight alternative to Kafka for teams that do not need Kafka's scale.
BullMQ: Production Job Queues on Redis
BullMQ is the standard job queue library for Node.js applications, built on Redis. It supports priorities, delayed jobs, retries with backoff, rate-limited workers, and job lifecycle events.
import { Queue, Worker } from 'bullmq'

// Add a job (assumes a shared redisConnection config object defined elsewhere)
const emailQueue = new Queue('emails', { connection: redisConnection })
await emailQueue.add('send-welcome', { userId: '456', email: '[email protected]' }, {
  attempts: 3,
  backoff: { type: 'exponential', delay: 2000 },
})

// Process jobs
const worker = new Worker('emails', async (job) => {
  await sendEmail(job.data)
}, { connection: redisConnection, concurrency: 10 })

Key production patterns:
- Separate queues by priority: critical (payments), standard (emails), low (analytics)
- Set job TTL: completed jobs accumulate and consume memory
- Monitor with Bull Board: a UI for inspecting queue state, failed jobs, and throughput
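The tiering and TTL patterns above can be encoded as per-queue default job options. A minimal sketch: the field names (`priority`, `removeOnComplete`, `removeOnFail`) are real BullMQ `JobsOptions` fields, while the tier names and values are illustrative, not prescriptive.

```typescript
// Retention policy shape, mirroring BullMQ's KeepJobs option:
// `true` = delete immediately, or keep by max age (seconds) / max count.
type RetentionPolicy = boolean | { age?: number; count?: number }

interface TierOptions {
  priority: number // lower number = higher priority in BullMQ
  removeOnComplete: RetentionPolicy
  removeOnFail?: RetentionPolicy
}

const tiers: Record<string, TierOptions> = {
  // Critical: process first, keep failures a full day for debugging
  payments: { priority: 1, removeOnComplete: { age: 3600, count: 1000 }, removeOnFail: { age: 86400 } },
  // Standard: expire completed jobs after an hour
  emails: { priority: 5, removeOnComplete: { age: 3600 } },
  // Low priority: drop completed jobs immediately
  analytics: { priority: 10, removeOnComplete: true },
}

// Usage: new Queue('payments', { connection, defaultJobOptions: tiers.payments })
```

Setting retention via `defaultJobOptions` at queue construction beats per-job options: it guarantees no code path forgets it, which is how completed jobs quietly eat memory.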
Rate Limiting: The Fixed Window Counter
The simplest robust rate-limiting pattern is a fixed window counter, built on Redis's atomic increment and expiry:
async function isRateLimited(identifier: string, limit: number, windowSeconds: number): Promise<boolean> {
  // Bucket requests into fixed windows; the key changes each window
  const key = `rl:${identifier}:${Math.floor(Date.now() / (windowSeconds * 1000))}`
  const current = await redis.incr(key)
  // Set expiry on first increment (2x the window as a safety margin)
  if (current === 1) await redis.expire(key, windowSeconds * 2)
  return current > limit
}

This fixed window approach handles distributed deployments correctly because all instances share the same Redis counter; a per-instance in-memory counter would let a client exceed the limit by spreading requests across servers. Its one caveat is boundary bursts: a client can send up to 2x the limit in a short span straddling two windows. A true sliding window (sorted-set timestamps, or weighting the previous window's count) closes that gap.
For more sophisticated patterns (token bucket, leaky bucket), use redis-rate-limit or implement with Redis Lua scripts for atomicity guarantees.
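As an illustration of the Lua-script route, here is a minimal token bucket sketch. The script refills tokens based on elapsed time and runs atomically inside Redis, so concurrent callers cannot over-spend. The `EvalClient` interface stands in for node-redis v4's `eval(script, { keys, arguments })` call, and the capacity/refill numbers are placeholders:

```typescript
// Minimal client interface; node-redis v4's eval() matches this shape.
interface EvalClient {
  eval(script: string, opts: { keys: string[]; arguments: string[] }): Promise<unknown>
}

// Lua runs atomically in Redis: refill by elapsed time, then try to take one token.
const TOKEN_BUCKET = `
local capacity = tonumber(ARGV[1])
local refillPerSec = tonumber(ARGV[2])
local now = tonumber(ARGV[3])

local bucket = redis.call('HMGET', KEYS[1], 'tokens', 'ts')
local tokens = tonumber(bucket[1]) or capacity
local ts = tonumber(bucket[2]) or now

tokens = math.min(capacity, tokens + (now - ts) * refillPerSec)
local allowed = 0
if tokens >= 1 then
  tokens = tokens - 1
  allowed = 1
end
redis.call('HSET', KEYS[1], 'tokens', tokens, 'ts', now)
redis.call('EXPIRE', KEYS[1], math.ceil(capacity / refillPerSec) * 2)
return allowed
`

async function allowRequest(redis: EvalClient, id: string): Promise<boolean> {
  const allowed = await redis.eval(TOKEN_BUCKET, {
    keys: [`tb:${id}`],
    // capacity 10, refill 1 token/sec — tune to your traffic
    arguments: ['10', '1', String(Math.floor(Date.now() / 1000))],
  })
  return allowed === 1
}
```

Unlike the fixed window counter, a token bucket smooths bursts: a client that has been idle accumulates capacity up to the cap, then drains it at the refill rate.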
Redis Streams for Event Sourcing Lite
// Append to stream
await redis.xAdd('user:events', '*', {
  type: 'purchase',
  userId: '789',
  amount: '99.00',
  timestamp: Date.now().toString(),
})
// Consumer group — multiple workers process the stream with at-least-once delivery
await redis.xGroupCreate('user:events', 'analytics-workers', '0', { MKSTREAM: true })
const messages = await redis.xReadGroup('analytics-workers', 'worker-1', [{ key: 'user:events', id: '>' }], { COUNT: 10 })

Streams are ideal for audit logs, analytics pipelines, and event-driven microservice communication where you want durable, replayable events without the operational overhead of Kafka.
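At-least-once delivery only holds if workers acknowledge each entry with XACK after processing; unacknowledged entries stay in the group's pending list and can be reclaimed if a worker dies mid-batch. A sketch of that worker-side loop, with plain types mirroring the shape node-redis returns from xReadGroup (the `ack` and `handle` callbacks are placeholders for `redis.xAck(...)` and your processing logic):

```typescript
// Shapes matching node-redis v4's xReadGroup reply (a sketch).
interface StreamMessage { id: string; message: Record<string, string> }
interface StreamReply { name: string; messages: StreamMessage[] }

// Process a batch, acknowledging each entry only after it succeeds.
// On a crash, unacked entries remain pending and another worker can claim them.
async function processBatch(
  replies: StreamReply[] | null, // xReadGroup returns null when no entries are ready
  ack: (id: string) => Promise<void>,
  handle: (fields: Record<string, string>) => Promise<void>,
): Promise<number> {
  let processed = 0
  for (const stream of replies ?? []) {
    for (const msg of stream.messages) {
      await handle(msg.message) // if this throws, the entry is never acked
      await ack(msg.id)
      processed++
    }
  }
  return processed
}
```

Acknowledging after the handler (not before) is what turns "read" into "processed": the ordering of those two awaits is the whole at-least-once guarantee.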
When Redis Is Not Enough
Redis executes commands on a single thread per shard. For extremely high write throughput (above roughly 500k ops/sec), shard the keyspace with Redis Cluster. For true multi-region active-active messaging, Kafka or Pulsar are the right tools. But for the common case of millions of events per day rather than per second, a single Redis instance handles the load comfortably.