Until NATS 2.11, every message in a JetStream stream shared one fate. You set
MaxAge on the stream, and that was it — sessions, cache entries, order events
— they all expired on the same schedule. If a session token needed 30 seconds
and an order needed 24 hours, you needed separate streams or external cleanup
logic.
Per-message TTL changes that. Each message now carries its own expiration. And this one capability quietly unlocks an entire class of architecture patterns that previously required a separate system, custom cron jobs, or awkward workarounds.
Enable it in your stream config:
```yaml
name: "ORDERS"
subjects: ["orders.>"]
max_age: "24h"
allow_msg_ttl: true
```

Then set the Nats-TTL header on any published message:
```shell
nats pub orders.session "session-abc" --header "Nats-TTL:30s"
nats pub orders.cache "product-42" --header "Nats-TTL:1h"
nats pub orders.event "order-7890"   # no TTL — uses stream MaxAge
```

The server calculates each message’s deadline independently. When a message
expires, it’s removed. Messages without a Nats-TTL header fall back to the
stream’s MaxAge. And if you set Nats-TTL: never, the message ignores
MaxAge entirely and lives indefinitely.
That’s the entire API surface. One header. No client-side timers, no expiration queues, no polling loops.
NATS Key-Value is built on JetStream, so per-message TTL surfaces there too. The
client libraries wrap it with idiomatic APIs — for example, the Go client uses
KeyTTL():
```go
kv.Create(ctx, "session.temp", data, jetstream.KeyTTL(30*time.Second))
kv.Create(ctx, "cache.price", data, jetstream.KeyTTL(30*time.Second))
kv.Create(ctx, "config.app", data, jetstream.KeyTTL(1*time.Hour))
```

Under the hood, this sets the same Nats-TTL header — so any client that can
set message headers gets per-key TTL. The API names vary by language (KeyTTL
in Go, ttl parameter in Python, direct header access in JS/Rust), but the
protocol is the same.
Each key expires on its own schedule. When a key expires, the server places a
short-lived delete marker so watchers know what happened. You control how
long that marker sticks around with LimitMarkerTTL (or the equivalent bucket
config in your client).
This means watchers get real-time notifications of key expirations — with NATS’s built-in streaming and replay guarantees.
Per-message TTL isn’t just a convenience feature. It fundamentally changes what NATS can do as a stateful system. Here are the architecture patterns that are now practical with NATS alone:
**Expiring tokens and sessions.** Each item can expire on its own, so session tokens, password reset links, and device pairing codes don’t need separate buckets or cleanup jobs.
Example: A web app stores login sessions with a 30-minute TTL and password reset tokens with a 10-minute TTL in the same KV bucket. When a session expires, a watcher notifies the auth service in real time. No cron jobs, no extra infrastructure.
**Distributed locks and leases.** A lock can expire automatically if the holder dies.
Example: A job runner claims a task by writing a key with a 20-second
KeyTTL and renews it on a heartbeat while healthy. If the process crashes, the key expires and
another worker picks up the task. No lock manager, no deadlock recovery logic.
**Presence and liveness.** “Online” exists only while updates keep arriving.
Example: IoT gateways mark a device online with a 15-second TTL and refresh continuously. If refreshes stop, the key expires and a watcher fires the “device offline” event automatically. No polling interval to tune, no stale state to clean up.
The common thread across all these patterns: you no longer need a separate system for ephemeral state.
Before per-message TTL, using NATS for these patterns meant either running a separate stream per expiration schedule or bolting on external cleanup logic: cron jobs, client-side timers, polling loops.

Now NATS handles it natively. And because it’s NATS, the same ephemeral state comes with streaming, replay, and real-time watcher notifications built in.
The concise version: per-message TTL makes NATS a natural fit for ephemeral state and cache patterns, especially when you also want watches, streams, and event-driven behavior in the same platform.
Here’s a minimal stream demo that publishes three messages with different TTLs and watches them expire independently:
```go
stream, _ := js.CreateStream(ctx, jetstream.StreamConfig{
	Name:        "ORDERS",
	Subjects:    []string{"orders.>"},
	MaxAge:      25 * time.Second,
	AllowMsgTTL: true,
})

// Each message gets its own lifetime
msgs := []struct {
	subject string
	ttl     string
}{
	{"orders.session", "5s"},
	{"orders.cache", "10s"},
	{"orders.event", ""}, // falls back to MaxAge
}

for _, m := range msgs {
	msg := &nats.Msg{Subject: m.subject, Data: []byte("data")}
	if m.ttl != "" {
		msg.Header = nats.Header{}
		msg.Header.Set("Nats-TTL", m.ttl)
	}
	js.PublishMsg(ctx, msg)
}
```

And the KV equivalent with per-key TTL:
```go
kv, _ := js.CreateKeyValue(ctx, jetstream.KeyValueConfig{
	Bucket:         "TTL_DEMO",
	Storage:        jetstream.MemoryStorage,
	LimitMarkerTTL: 1 * time.Second,
})

kv.Create(ctx, "session.temp", []byte("data"), jetstream.KeyTTL(5*time.Second))
kv.Create(ctx, "cache.warm", []byte("data"), jetstream.KeyTTL(10*time.Second))
kv.Create(ctx, "config.stable", []byte("data"), jetstream.KeyTTL(15*time.Second))

// Watch keys expire individually
watcher, _ := kv.WatchAll(ctx)
for entry := range watcher.Updates() {
	if entry != nil && entry.Operation() == jetstream.KeyValueDelete {
		fmt.Printf("expired: %s\n", entry.Key())
	}
}
```

Session expires at ~5s. Cache at ~10s. Config at ~15s. Each on its own schedule.
Per-message TTL is available in NATS Server 2.11+. It works with all JetStream
client libraries — set the Nats-TTL header and you’re done.
If you need keys that expire on their own schedule alongside messaging and streams, NATS now handles all of it natively.
Per-message TTL is one of many features in the NATS 2.11 release. Check out the full release notes for everything that’s new.