A stream configuration change means one or more configuration fields on a JetStream stream — such as replicas, retention, or limits — changed between consecutive epochs. This check provides an audit trail for stream configuration drift, catching both intentional tuning and accidental modifications that can silently alter data retention, replication, and consumer behavior.
Stream configuration defines the contract between your NATS infrastructure and every application that publishes to or consumes from that stream. Retention policy determines when messages are deleted. Max bytes and max messages control how much data the stream holds. Replica count determines fault tolerance. Changing any of these values has immediate, sometimes irreversible, consequences.
The most dangerous changes are limit reductions. If you lower max_bytes on a stream that currently holds more data than the new limit, the server immediately purges messages to comply. There’s no confirmation prompt, no grace period — data is deleted the moment the new configuration is applied. Similarly, changing the retention policy from limits to interest or workqueue alters the fundamental lifecycle of every message in the stream, potentially causing messages to be deleted before consumers have processed them.
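The purge risk can be reasoned about numerically before applying a change. The sketch below uses a hypothetical helper (not part of any NATS client) to compute how much data an immediate `max_bytes` reduction would discard, given the stream's current size:

```python
def bytes_purged_by_limit_change(current_bytes: int, new_max_bytes: int) -> int:
    """Bytes the server would immediately discard if max_bytes were
    lowered to new_max_bytes. A value of -1 means the limit is disabled."""
    if new_max_bytes < 0:  # -1 disables the byte limit entirely
        return 0
    return max(0, current_bytes - new_max_bytes)

# A stream holding 10 GiB, reduced to a 4 GiB limit, loses 6 GiB on the spot.
gib = 1024 ** 3
print(bytes_purged_by_limit_change(10 * gib, 4 * gib))  # → 6442450944
```

Running this kind of check against `nats stream info --json` output before editing a stream turns a silent purge into a reviewable decision.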
Replica count changes trigger data movement across the cluster. Increasing replicas from R1 to R3 forces the server to copy all existing data to two additional servers, consuming network bandwidth, disk I/O, and CPU. Decreasing replicas removes data from servers, which is fine operationally but reduces fault tolerance. Without change tracking, these shifts in data placement and durability go unnoticed until a failure exposes the reduced resilience.
Manual tuning by an operator. Someone runs nats stream edit to adjust limits, retention, or replica count. This is the most common cause — an operator responding to a capacity issue, a cost optimization effort, or a new requirement. The change itself may be correct, but it needs to be tracked.
CI/CD pipeline applying configuration. Automated deployment tools that manage stream configuration push updated values. Template changes, environment variable differences, or unintended diffs between environments can modify stream configuration without explicit operator intent.
Capacity response. A stream approaching its limits (see JETSTREAM_003) triggers an operator to increase max_bytes or max_age. Or a cost review leads to reducing limits on underutilized streams. Both are valid, but downstream consumers may depend on the old retention behavior.
Replica count adjustment. Scaling replicas up for higher durability or down for cost savings. Replica changes cause data rebalancing across the cluster and temporarily increase network and disk I/O.
Source or mirror reconfiguration. Adding, removing, or modifying stream sources or mirror configuration changes where data flows from. This can affect data availability in other clusters or accounts.
Accidental modification. An operator edits the wrong stream, a script targets the wrong environment, or a configuration template applies settings intended for a different stream. Without change detection, the mistake persists until its effects surface as data loss or consumer failures.
View the full configuration of the affected stream:
```shell
nats stream info <stream-name>
```

Review the Configuration section for current values of retention, limits, replicas, and storage type.
If you manage stream configuration as code, diff the current state against your source of truth:
```shell
# Export current config
nats stream info <stream-name> --json | jq '.config' > current.json

# Compare with your expected config
diff expected.json current.json
```

NATS emits advisories when stream configuration changes. Subscribe to the advisory subject to see recent changes:
```shell
nats subscribe '$JS.EVENT.ADVISORY.STREAM.UPDATED.>'
```

Each advisory includes the account, stream name, and the updated configuration. For historical events, check your advisory consumer if you have one configured.
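To turn two configuration snapshots (an advisory payload, or two exported `.config` JSON files) into a readable audit entry, you can diff them field by field. A minimal sketch, assuming the configs have already been parsed into dicts; `diff_stream_config` is a hypothetical helper, not a NATS API:

```python
def diff_stream_config(old: dict, new: dict) -> dict:
    """Map each changed field name to its (old_value, new_value) pair."""
    keys = old.keys() | new.keys()
    return {k: (old.get(k), new.get(k))
            for k in keys if old.get(k) != new.get(k)}

before = {"max_bytes": 10_737_418_240, "num_replicas": 3, "retention": "limits"}
after = {"max_bytes": 4_294_967_296, "num_replicas": 3, "retention": "limits"}
print(diff_stream_config(before, after))
# → {'max_bytes': (10737418240, 4294967296)}
```

Logging this diff alongside the advisory timestamp gives you a change record that answers "what changed" at a glance, not just "something changed".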
JetStream API requests come from authenticated client connections. If your deployment uses distinct credentials per operator or service, correlate the advisory timestamp with your authentication logs to identify the source.
Configuration changes can affect consumers. Check that all consumers are healthy and making progress:
```shell
nats consumer report <stream-name>
```

Look for consumers with growing Unprocessed counts, which may indicate that a retention policy change is deleting messages before consumers reach them.
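If you capture consumer pending counts each epoch, "growing" becomes a mechanical comparison. A hypothetical sketch, assuming you have collected `{consumer_name: num_pending}` maps from two consecutive epochs:

```python
def consumers_falling_behind(prev: dict, curr: dict) -> list:
    """Names of consumers whose pending count grew since the last epoch."""
    return sorted(name for name, pending in curr.items()
                  if pending > prev.get(name, 0))

prev = {"order-processor": 120, "audit-log": 0}
curr = {"order-processor": 4500, "audit-log": 0}
print(consumers_falling_behind(prev, curr))  # → ['order-processor']
```

A single growing sample is not proof of a problem, but a consumer that grows across several epochs right after a configuration change deserves a closer look.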
Review what changed and whether it affects data retention. First, confirm whether the configuration change was planned. If it was intentional, monitor for downstream effects on consumers — especially changes to retention policy, replica count, or message limits. The critical question: did the change reduce any limit below current usage? If max_bytes was lowered and data was purged, that data is gone. Check stream state:
```shell
nats stream info <stream-name> --json | jq '{messages: .state.messages, bytes: .state.bytes, first_seq: .state.first_seq, last_seq: .state.last_seq}'
```

A sudden jump in first_seq compared to the previous epoch indicates messages were purged.
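The first_seq comparison is easy to automate across epochs. A minimal sketch, assuming you keep the jq output above as a dict per epoch; `first_seq_jump` is a hypothetical helper:

```python
def first_seq_jump(prev_state: dict, curr_state: dict) -> int:
    """Messages removed from the head of the stream between two epochs."""
    return curr_state["first_seq"] - prev_state["first_seq"]

prev = {"first_seq": 1_000, "last_seq": 50_000}
curr = {"first_seq": 42_000, "last_seq": 50_200}
# Only ~200 messages arrived, yet 41,000 left the head: far more than
# normal limit enforcement would explain right after a config change.
print(first_seq_jump(prev, curr))  # → 41000
```

Some head movement is normal under limits-based retention; a jump wildly out of proportion to the stream's usual ingest rate, coinciding with a configuration change, points to a purge.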
If the change was accidental, revert it:
```shell
nats stream edit <stream-name> \
  --max-bytes <original-value> \
  --max-msgs <original-value> \
  --max-age <original-value> \
  --replicas <original-count>
```

Note: reverted limits won’t restore deleted messages. Data purged by a limit reduction is permanently lost.
Check that consumers are compatible with the new configuration. A retention policy change from limits to workqueue means messages are deleted after acknowledgment — consumers that re-read messages will find them gone:
```go
// Go — check consumer info and lag
js, err := nc.JetStream()
if err != nil {
	log.Fatal(err)
}
info, err := js.ConsumerInfo("ORDERS", "order-processor")
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Pending: %d, Ack Pending: %d\n",
	info.NumPending, info.NumAckPending)
```

```python
# Python (nats.py) — check consumer health
js = nc.jetstream()
info = await js.consumer_info("ORDERS", "order-processor")
print(f"Pending: {info.num_pending}, Ack Pending: {info.num_ack_pending}")
```

Monitor replica rebalancing. If the replica count changed, data is being copied or removed across the cluster. Watch replication progress:
```shell
nats stream info <stream-name> --json | jq '.cluster'
```

All replicas should show current: true once rebalancing completes.
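That completion check can be scripted against the `.cluster` JSON above. A minimal sketch, assuming the `jq '.cluster'` output shape (a `leader` field plus a `replicas` array that excludes the leader itself); `rebalancing_complete` is a hypothetical helper:

```python
import json

def rebalancing_complete(cluster_json: str) -> bool:
    """True when a leader is elected and every replica reports current."""
    cluster = json.loads(cluster_json)
    replicas = cluster.get("replicas") or []
    return bool(cluster.get("leader")) and all(r.get("current") for r in replicas)

sample = '''{"name": "C1", "leader": "n1",
             "replicas": [{"name": "n2", "current": true, "lag": 0},
                          {"name": "n3", "current": false, "lag": 1820}]}'''
print(rebalancing_complete(sample))  # → False: n3 is still catching up
```

Polling this in a loop (with a timeout) is a reasonable way to gate a deployment pipeline on the replica change having actually settled.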
Define streams declaratively. Use configuration files or IaC tools to define stream configuration. Store these definitions in version control and apply changes through a review process:
```shell
# Export all stream configs for version control
for stream in $(nats stream list --names); do
  nats stream info "$stream" --json | jq '.config' > "streams/${stream}.json"
done
```

Restrict JetStream API access. Use account-level permissions to limit which users can modify stream configuration. Separate read-only monitoring credentials from administrative credentials that can edit streams.
Subscribe to stream advisories in your monitoring stack. Ingest $JS.EVENT.ADVISORY.STREAM.UPDATED.> events into your logging and alerting pipeline. This gives you a permanent audit trail of every configuration change, including the timestamp and source connection.
Most fields can be modified: retention policy, max messages, max bytes, max age, max message size, replica count, discard policy, duplicate window, sources, allow purge, and allow rollup. The stream name and storage type (file vs. memory) cannot be changed after creation; subjects were also immutable in older server versions but can be edited on modern servers. Attempting to change an immutable field returns an API error.
It depends on the new policy and current consumer state. Switching from limits to workqueue means acknowledged messages are eligible for immediate deletion. Switching to interest means messages without active consumers are eligible for deletion. The server applies the new policy to all existing messages retroactively, which can cause unexpected data loss if consumers have not yet processed all messages.
Yes. If the stream currently holds more bytes than the new max_bytes limit, the server immediately discards messages (from the oldest, unless discard: new is set) until the stream fits within the new limit. This happens at the moment the configuration is applied — there is no grace period or warning. Always check current stream size before reducing byte limits.
NATS emits JetStream advisories on $JS.EVENT.ADVISORY.STREAM.UPDATED.<stream-name> whenever a stream’s configuration is modified. Create a durable consumer on this subject to capture all change events. Synadia Insights tracks these changes automatically across every epoch, giving you a full configuration history without setting up advisory consumers manually.
No. The stream remains available during replica count changes. When increasing replicas, the new replicas catch up from the leader via snapshot and then join the Raft group. When decreasing replicas, excess replicas are removed from the group. During rebalancing, write performance may be temporarily affected as the new replicas synchronize, but the stream never becomes unavailable.