NATS JetStream — Lightweight Messaging That Scales From Laptop to Cloud

By Sanjeev Sharma (@webcoderspeed1)
Introduction
NATS is a publish-subscribe messaging system that ships as a single binary. JetStream adds persistence (streams) and stateful consumers. Unlike Kafka (complex, heavy, storage-intensive), NATS JetStream runs on a laptop and scales to millions of messages per second. This post covers streams, consumers, durability semantics, and production patterns.
- NATS vs Kafka vs SQS: When NATS Wins
- JetStream Streams and Consumers
- Push vs Pull Consumers
- Consumer Durability and Ack Policies
- Key-Value Store (Like Redis But Built-in)
- Object Store for Large Payloads
- NATS Clustering for HA
- Node.js NATS Client Patterns
- Subject-Based Routing vs Kafka Topics
- Checklist
- Conclusion
NATS vs Kafka vs SQS: When NATS Wins
Kafka dominates when you need:
- Long-term retention (weeks/months)
- Rebalancing semantics
- Consumer lag tracking at scale
- Ecosystem of connectors
SQS wins when you want:
- AWS-managed (no operations)
- Simple queue semantics
- Automatic scaling
- Pay-per-request pricing
NATS JetStream is optimal when you need:
- Low operational overhead (single binary)
- Fast message delivery (< 10ms)
- Clustering without ZooKeeper
- Subject-based routing (not just topics)
- Mixed patterns (queues + pub/sub)
// Decision matrix
const platforms = {
  kafka: {
    throughput: '1M+ msg/s',
    latency: '50-100ms',
    retention: 'weeks/months',
    complexity: 'high',
    operationalCost: 'high',
  },
  nats: {
    throughput: '1M+ msg/s',
    latency: '< 10ms',
    retention: 'hours/days',
    complexity: 'low',
    operationalCost: 'low',
  },
  sqs: {
    throughput: '100k msg/s',
    latency: '50-200ms',
    retention: '14 days fixed',
    complexity: 'none (managed)',
    operationalCost: 'per-request',
  },
};
Use NATS for microservice communication, event streaming, and task distribution. Use Kafka for data warehousing and compliance (immutable audit logs).
JetStream Streams and Consumers
A Stream is a durable log of messages on a subject. Multiple publishers write to one stream; multiple consumers read independently.
A Consumer represents a reader's position in a stream. Push consumers have messages delivered to them; pull consumers request messages explicitly, which gives natural backpressure. Consumers track acknowledgments, enabling at-least-once delivery.
// nats-setup.ts - Initialize stream and consumer
import { connect, StringCodec, AckPolicy, DeliverPolicy, StorageType, DiscardPolicy } from 'nats';

export async function setupJetStream() {
  const nc = await connect({
    servers: ['localhost:4222'],
  });
  const js = nc.jetstream();
  // Stream and consumer administration lives on the manager, not on js
  const jsm = await nc.jetstreamManager();
  const sc = StringCodec();

  // Create stream: captures all messages on the listed subjects
  await jsm.streams.add({
    name: 'orders-stream',
    subjects: ['orders.created', 'orders.updated', 'orders.cancelled'],
    max_age: 24 * 60 * 60 * 1_000_000_000, // 24 hours in nanoseconds
    storage: StorageType.File, // persisted to disk
    discard: DiscardPolicy.Old, // delete oldest messages when limits are hit
  });

  // Create durable push consumer
  await jsm.consumers.add('orders-stream', {
    durable_name: 'order-processor',
    deliver_subject: 'consumer.orders',
    ack_policy: AckPolicy.Explicit, // require explicit ack
    deliver_policy: DeliverPolicy.All, // start from the beginning of the stream
    flow_control: true,
    idle_heartbeat: 5_000_000_000, // 5 seconds in nanoseconds
    max_ack_pending: 1000,
  });

  return { nc, js, sc };
}

export async function publishOrder(js: any, orderId: string, data: object) {
  const sc = StringCodec();
  await js.publish('orders.created', sc.encode(JSON.stringify({
    orderId,
    ...data,
    timestamp: new Date().toISOString(),
  })));
}
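Note that JetStream config durations (max_age, idle_heartbeat) are expressed in nanoseconds, which is easy to get wrong. A tiny conversion helper keeps the units straight (hypothetical helper names, not part of the nats package):

```typescript
// Hypothetical helpers: JetStream config durations are nanoseconds.
const NS_PER_MS = 1_000_000;

export function msToNs(ms: number): number {
  return ms * NS_PER_MS;
}

export function hoursToNs(hours: number): number {
  return msToNs(hours * 60 * 60 * 1000);
}

// msToNs(5_000) === 5_000_000_000 (the 5s idle heartbeat above);
// hoursToNs(24) === 86_400_000_000_000 (the 24h max_age above).
```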
Push vs Pull Consumers
Push consumers (server delivers messages):
- Lower latency
- Server controls pace
- Harder to implement backpressure
Pull consumers (client requests messages):
- Client controls pace
- Explicit batch size
- Better for variable processing speed
// push-consumer.ts - Server delivers messages
import { connect, AckPolicy } from 'nats';

export async function pushConsumer() {
  const nc = await connect();
  const js = nc.jetstream();

  // Bind a durable push subscription with a deliver subject
  const psub = await js.subscribe('orders.created', {
    mack: true, // manual acks: we call ack()/nak() ourselves
    config: {
      durable_name: 'push-processor',
      deliver_subject: 'processor.in',
      ack_policy: AckPolicy.Explicit,
    },
  });

  for await (const msg of psub) {
    try {
      const order = JSON.parse(new TextDecoder().decode(msg.data));
      console.log(`Processing order ${order.orderId}`);
      await processOrder(order);
      msg.ack(); // acknowledge after successful processing
    } catch (err) {
      console.error('Failed to process order:', err);
      msg.nak(); // negative ack triggers redelivery
    }
  }
}

async function processOrder(order: any) {
  // Production: validate, store, emit events
  return new Promise(resolve => setTimeout(resolve, 100));
}
// pull-consumer.ts - Client requests batches
import { connect, AckPolicy } from 'nats';

export async function pullConsumer() {
  const nc = await connect();
  const js = nc.jetstream();

  // Create pull consumer (durable, no deliver subject)
  const psub = await js.pullSubscribe('orders.created', {
    config: {
      durable_name: 'batch-processor',
      ack_policy: AckPolicy.Explicit,
    },
  });

  // Request up to 50 messages, wait max 5 seconds.
  // Call pull() again when the batch drains to keep consuming.
  psub.pull({ batch: 50, expires: 5000 });

  for await (const msg of psub) {
    try {
      const order = JSON.parse(new TextDecoder().decode(msg.data));
      console.log(`Batch processing order ${order.orderId}`);
      await processOrder(order);
      msg.ack();
    } catch (err) {
      console.error('Error:', err);
      msg.nak();
    }
  }
}
Consumer Durability and Ack Policies
Durability guarantees depend on the consumer's ack policy:
- None: Fire and forget. Fast, but messages are never redelivered.
- All: Acking a message acknowledges it and everything delivered before it (cumulative).
- Explicit: Every message must be acked individually; unacked messages are redelivered. Best for reliability.
// consumer-ack-patterns.ts - Different reliability levels
import { connect, AckPolicy } from 'nats';

export async function explicitAckExample() {
  const nc = await connect();
  const js = nc.jetstream();

  const psub = await js.subscribe('orders.created', {
    mack: true, // manual acks
    config: {
      durable_name: 'explicit-ack-consumer',
      ack_policy: AckPolicy.Explicit,
      max_ack_pending: 100, // allow 100 unacked messages in flight
    },
  });

  for await (const msg of psub) {
    try {
      const order = JSON.parse(new TextDecoder().decode(msg.data));
      // Process with a timeout guard
      const timeout = new Promise((_, reject) =>
        setTimeout(() => reject(new Error('Processing timeout')), 30000)
      );
      await Promise.race([processOrder(order), timeout]);
      await msg.ackAck(); // ack and wait for server confirmation
    } catch (err) {
      console.error('Failed:', err);
      // Dead letter handling: give up after N redeliveries
      if (msg.info.redeliveryCount > 10) {
        console.error('Message exceeded retry limit, terminating');
        msg.term(); // tell the server to stop redelivering
      } else {
        msg.nak(); // negative ack schedules a redelivery
      }
    }
  }
}

async function processOrder(order: any) {
  // Simulate work
  await new Promise(r => setTimeout(r, 100));
}
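The retry-limit branch above can be factored into a pure decision function that is trivial to unit test (an illustrative sketch; the `AckDecision` type and the default limit of 10 are assumptions, not part of the nats package):

```typescript
// Hypothetical helper: decide how to respond to a processing outcome.
export type AckDecision = 'ack' | 'nak' | 'term';

export function decideAck(
  succeeded: boolean,
  redeliveryCount: number,
  maxRetries: number = 10, // assumed retry budget
): AckDecision {
  if (succeeded) return 'ack';                     // done: remove from pending
  if (redeliveryCount > maxRetries) return 'term'; // give up: stop redelivery
  return 'nak';                                    // retry: schedule redelivery
}
```

In the consumer loop you would map 'ack' to msg.ack(), 'nak' to msg.nak(), and 'term' to msg.term().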
Key-Value Store (Like Redis But Built-in)
JetStream provides a KV bucket API (similar to Redis) backed by streams. No extra service to manage.
// kv-store.ts - Session storage using JetStream KV
import { connect } from 'nats';

export async function kvExample() {
  const nc = await connect();
  const js = nc.jetstream();

  // Create KV bucket (backed by a stream); ttl is in milliseconds
  const kv = await js.views.kv('sessions', {
    ttl: 24 * 60 * 60 * 1000, // entries expire after 24 hours
  });

  // Set session (KV keys allow letters, digits, '-', '_', '/', '=', '.'; no ':')
  await kv.put('session.user123', JSON.stringify({
    userId: 'user123',
    token: 'jwt-token',
    expires: Date.now() + 3600000,
  }));

  // Get session
  const entry = await kv.get('session.user123');
  if (entry) {
    const session = JSON.parse(new TextDecoder().decode(entry.value));
    console.log('Session for user123:', session);
  }

  // Watch for changes (entries report PUT, DEL, or PURGE operations)
  const w = await kv.watch();
  for await (const e of w) {
    if (e.operation === 'PUT') {
      console.log(`Session updated: ${e.key}`);
    } else if (e.operation === 'DEL') {
      console.log(`Session deleted: ${e.key}`);
    }
  }
}
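Since the stored value carries its own expires field in this example, reads should validate it rather than trust the bucket TTL alone. A small guard makes the check explicit (hypothetical helper, matching the session shape used above):

```typescript
// Hypothetical guard: a session is valid only before its expiry timestamp.
export interface Session {
  userId: string;
  token: string;
  expires: number; // epoch millis, as written by kv.put above
}

export function isSessionValid(session: Session, now: number = Date.now()): boolean {
  return session.expires > now;
}
```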
Object Store for Large Payloads
For files and large binary data, use the Object Store API.
// object-store.ts - Upload documents
import { connect } from 'nats';
import { readFile } from 'fs/promises';

export async function objectStoreExample() {
  const nc = await connect();
  const js = nc.jetstream();

  // Create object store; ttl is in milliseconds
  const os = await js.views.os('documents', {
    ttl: 30 * 24 * 60 * 60 * 1000, // 30 days
  });

  // Upload file: put() takes a ReadableStream of Uint8Array chunks
  const fileData = await readFile('./contract.pdf');
  await os.put(
    { name: 'contract-2026-03.pdf', description: 'Signed contract for user123' },
    new ReadableStream({
      start(controller) {
        controller.enqueue(new Uint8Array(fileData));
        controller.close();
      },
    }),
  );

  // Download file: obj.data is a ReadableStream; drain it into chunks
  const obj = await os.get('contract-2026-03.pdf');
  if (obj) {
    const chunks: Uint8Array[] = [];
    const reader = obj.data.getReader();
    for (let r = await reader.read(); !r.done; r = await reader.read()) {
      chunks.push(r.value);
    }
    const size = chunks.reduce((n, c) => n + c.length, 0);
    console.log(`Downloaded ${size} bytes`);
  }

  // List all objects
  for (const info of await os.list()) {
    console.log(`File: ${info.name}, Size: ${info.size}`);
  }
}
NATS Clustering for HA
Run three or more NATS servers in a cluster. Streams created with replicas (e.g., num_replicas: 3) replicate automatically; if one server dies, the others continue serving.
# nats-cluster.conf - Production clustering
server_name: "nats-1"  # must be unique per server in a JetStream cluster

cluster {
  name: "nats-cluster"
  listen: 0.0.0.0:6222
  routes: [
    "nats://nats-1.internal:6222",
    "nats://nats-2.internal:6222",
    "nats://nats-3.internal:6222",
  ]
}

jetstream {
  domain: "prod"
  store_dir: "/data/jetstream"
  max_memory_store: 128GB
  max_file_store: 1TB
  # Replication for durability is set per stream (num_replicas: 3), not here
  max_outstanding_catchup: 33MB
}

# HTTP monitoring endpoint (varz, jsz, connz, etc.)
http_port: 8222
Node.js NATS Client Patterns
// nats-client.ts - Production patterns
import { connect, NatsConnection, StringCodec, AckPolicy } from 'nats';

export class NatsClient {
  private nc!: NatsConnection;
  private sc = StringCodec();

  async connect(servers: string[] = ['localhost:4222']) {
    this.nc = await connect({
      servers,
      maxReconnectAttempts: -1, // keep retrying forever
      reconnectTimeWait: 1000,
    });

    // nats.js surfaces connection events as an async iterator, not listeners
    (async () => {
      for await (const status of this.nc.status()) {
        console.log(`NATS ${status.type}`); // e.g. reconnect, disconnect
      }
    })();
  }

  async publishEvent(subject: string, data: any) {
    try {
      // Core publish is fire-and-forget (returns void, not a promise)
      this.nc.publish(subject, this.sc.encode(JSON.stringify(data)));
    } catch (err) {
      console.error(`Publish failed for ${subject}:`, err);
      throw err;
    }
  }

  async subscribeStream(subject: string, consumer: string, handler: (msg: any) => Promise<void>) {
    const js = this.nc.jetstream();
    const sub = await js.subscribe(subject, {
      mack: true,
      config: {
        durable_name: consumer,
        ack_policy: AckPolicy.Explicit,
      },
    });
    for await (const msg of sub) {
      try {
        const data = JSON.parse(this.sc.decode(msg.data));
        await handler(data);
        msg.ack();
      } catch (err) {
        console.error('Handler error:', err);
        msg.nak();
      }
    }
  }

  async disconnect() {
    await this.nc.drain(); // flush pending messages, then close
  }
}
Subject-Based Routing vs Kafka Topics
NATS subjects use hierarchical routing (e.g., orders.created, orders.cancelled, invoices.created). One stream can capture all subjects matching a pattern.
Kafka has flat topics; you need separate topics for different message types.
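As a mental model for how `*` (exactly one token) and `>` (one or more trailing tokens) match, here is a small reimplementation of the subject-matching semantics (illustrative only; the server does this matching natively):

```typescript
// Illustrative reimplementation of NATS subject wildcard matching.
// '*' matches exactly one token; '>' matches one or more trailing tokens.
export function subjectMatches(pattern: string, subject: string): boolean {
  const p = pattern.split('.');
  const s = subject.split('.');
  for (let i = 0; i < p.length; i++) {
    if (p[i] === '>') return i < s.length; // '>' must consume at least one token
    if (i >= s.length) return false;       // subject ran out of tokens
    if (p[i] !== '*' && p[i] !== s[i]) return false; // literal token mismatch
  }
  return p.length === s.length; // no trailing unmatched tokens
}

// subjectMatches('orders.>', 'orders.created')    -> true
// subjectMatches('orders.*', 'orders.created')    -> true
// subjectMatches('orders.*', 'orders.eu.created') -> false ('*' is one token)
// subjectMatches('orders.>', 'orders')            -> false ('>' needs a token)
```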
// subject-routing.ts - Hierarchical patterns
import { connect, AckPolicy } from 'nats';

export async function subjectRoutingExample() {
  const nc = await connect();
  const jsm = await nc.jetstreamManager();

  // Stream captures every order, invoice, and shipment subject
  await jsm.streams.add({
    name: 'ecommerce-stream',
    subjects: ['orders.>', 'invoices.>', 'shipments.>'], // wildcard patterns
  });

  // Consumer 1: only order events
  await jsm.consumers.add('ecommerce-stream', {
    durable_name: 'order-consumer',
    ack_policy: AckPolicy.Explicit,
    filter_subject: 'orders.>', // deliver order subjects only
  });

  // Consumer 2: only financial events
  await jsm.consumers.add('ecommerce-stream', {
    durable_name: 'finance-consumer',
    ack_policy: AckPolicy.Explicit,
    filter_subject: 'invoices.>',
  });
}
Checklist
- Choose NATS for sub-10ms messaging; Kafka for long-term retention
- Start with single server; scale to cluster with 3+ nodes for HA
- Use explicit ack policy for guaranteed delivery
- Implement dead letter handling after N retries
- Use KV bucket for sessions and feature flags
- Monitor consumer lag with the JetStream info API (e.g., the nats consumer info CLI)
- Configure JetStream retention based on disk/memory capacity
- Use pull consumers for backpressure-sensitive workloads
- Subject naming: use hierarchical patterns (e.g., domain.entity.action)
- Test failover: kill a server, verify consumers auto-reconnect
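For the failover drill, reconnect cadence matters when many clients restart at once: a fixed reconnectTimeWait can cause a thundering herd. The nats client accepts a custom reconnectDelayHandler option; a jittered exponential backoff for it can be sketched as follows (the helper name and the base/cap values are assumptions):

```typescript
// Hypothetical jittered backoff: delay doubles per attempt up to a cap,
// with random jitter in the upper half of the window to spread reconnects.
export function reconnectDelay(
  attempt: number,
  baseMs: number = 1000,   // assumed starting delay
  capMs: number = 30_000,  // assumed ceiling
): number {
  const exp = Math.min(capMs, baseMs * 2 ** Math.min(attempt, 10));
  return exp / 2 + Math.random() * (exp / 2); // result lies in [exp/2, exp)
}
```

You would track the attempt count yourself and return reconnectDelay(attempt) from the handler.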
Conclusion
NATS JetStream fills the gap between simple queues (Redis) and heavy message brokers (Kafka). For microservice communication, event streaming, and task distribution, NATS offers low latency, easy operations, and linear scalability. Use it as your default messaging system; graduate to Kafka only when you outgrow NATS or need Kafka-specific features.