A stream without limits is a JetStream stream that has no message count limit (max_msgs = -1), no byte size limit (max_bytes = -1), and no age-based retention (max_age = 0). The stream will grow without bound until it exhausts the server’s available storage. Sealed streams are excluded from this check — they are intentionally frozen and cannot grow.
An unlimited stream is a storage time bomb. Every message published to the stream is retained forever. There is no automatic cleanup mechanism — no old messages expire, no size cap triggers eviction, no message count limit prunes the tail. The stream grows monotonically until the server’s JetStream storage reservation fills up, at which point JETSTREAM_007 (Storage Utilization Critical) fires and new publishes start failing with “insufficient storage” errors.
The failure mode is particularly dangerous because it’s gradual and silent. A stream receiving 1,000 messages per second at 1KB each grows by ~84GB per day. On a server with 1TB of JetStream storage, the stream fills the disk in under two weeks. But the problem doesn’t manifest until the disk is nearly full — there are no warnings at 50% or 75% utilization unless someone is explicitly monitoring. By the time the server rejects publishes, the blast radius extends to every stream on that server, not just the one that consumed all the storage.
In multi-tenant deployments, an unlimited stream in one account can exhaust the server’s shared storage, impacting streams in other accounts. Even with per-account JetStream limits, an unlimited stream within an account can consume the entire account allocation, starving other streams in the same account. The root cause is always the same: nobody set a limit, and the system has no built-in safety net for unbounded growth.
Default stream creation without explicit limits. The JetStream API defaults to unlimited for all retention parameters. Creating a stream with nats stream add or the SDK without specifying --max-bytes, --max-msgs, or --max-age produces an unlimited stream. This is by far the most common cause — operators follow quickstart examples that skip limit configuration.
Copied from documentation or examples. Tutorials and getting-started guides often create streams with minimal configuration to reduce complexity. Operators copy these examples into production without adding limits. The example works great for a demo; it’s a liability in production.
No organizational policy on stream limits. Without a team-wide or org-wide standard requiring limits on every stream, individual developers make ad hoc decisions. Some set limits, some don’t. Over time, the unlimited streams accumulate.
Intentionally unlimited for “keep everything” use cases. Some teams genuinely want to retain all messages forever — audit logs, compliance data, event sourcing streams. The intention is valid, but the implementation is risky without a corresponding storage budget and monitoring. Even “keep everything” streams should have max_bytes set to the allocated storage budget.
Stream created programmatically without limit parameters. Application code that creates streams dynamically (e.g., per-tenant or per-topic streams) often uses a minimal StreamConfig struct. If the code doesn’t explicitly set limits, every programmatically created stream is unlimited.
List all streams and check their limit configuration:
```
nats stream report
```

Look for streams where the Limits columns show -1 or unlimited. For a more targeted query:
```
nats stream ls -j | jq '.[] | select(.config.max_msgs == -1 and .config.max_bytes == -1 and (.config.max_age == 0 or .config.max_age == null)) | {name: .config.name, subjects: .config.subjects, storage: .config.storage}'
```

This lists every stream that has no message count, byte, or age limit.
For each unlimited stream, check its current size and growth rate:
```
nats stream info ORDERS
```

The key fields are the stream's message count, its total bytes, and the timestamps of the first and last messages.
Calculate the growth rate: bytes / (last_timestamp - first_timestamp) gives you bytes per unit time. Extrapolate to see when storage will be exhausted.
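As a rough sketch of that extrapolation, the following Go helper (not part of any NATS API; it assumes the nats.go client plus the standard `fmt` and `time` packages, and a storage budget you supply) reads the stream state and estimates time to exhaustion:

```go
// estimateTimeToFull extrapolates when a stream will exhaust budgetBytes of
// storage, based on its average growth rate since the first retained message.
func estimateTimeToFull(js nats.JetStreamContext, stream string, budgetBytes uint64) (time.Duration, error) {
    info, err := js.StreamInfo(stream)
    if err != nil {
        return 0, err
    }
    st := info.State
    elapsed := st.LastTime.Sub(st.FirstTime)
    if elapsed <= 0 || st.Bytes == 0 {
        return 0, fmt.Errorf("not enough data to estimate growth for %q", stream)
    }
    bytesPerSecond := float64(st.Bytes) / elapsed.Seconds()
    remaining := float64(budgetBytes) - float64(st.Bytes)
    if remaining <= 0 {
        return 0, nil // already at or over budget
    }
    return time.Duration(remaining / bytesPerSecond * float64(time.Second)), nil
}
```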
Next, check each server's overall JetStream storage utilization:

```
nats server report jetstream
```

Compare each server’s used storage against its reserved capacity. Unlimited streams on servers that are already at 70%+ utilization are the highest priority.
For multi-account deployments:
```
nats stream report  # account is selected via NATS context/credentials
```

This gives a deployment-wide view of stream sizes and limits across accounts.
You can edit a stream’s limits without disrupting publishers or consumers:
```
# Add a 10GB size limit and 30-day retention
nats stream edit ORDERS --max-bytes 10737418240 --max-age 30d

# Add a message count limit
nats stream edit ORDERS --max-msgs 10000000
```

The server immediately starts enforcing the new limits. If the stream already exceeds the limit, old messages are purged to bring it into compliance. This purge happens asynchronously and may cause a brief I/O spike on large streams.
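If you manage streams from code, the same change can be applied through the client. A minimal sketch using nats.go, assuming `js` is an existing JetStreamContext and the standard `log` and `time` imports:

```go
// Fetch the current config, add limits, and push the update to the server.
info, err := js.StreamInfo("ORDERS")
if err != nil {
    log.Fatal(err)
}
cfg := info.Config
cfg.MaxBytes = 10 * 1024 * 1024 * 1024 // 10GB
cfg.MaxAge = 30 * 24 * time.Hour       // 30 days
if _, err := js.UpdateStream(&cfg); err != nil {
    log.Fatal(err)
}
```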
Choose appropriate limits for the workload:
| Workload type | Recommended limits |
|---|---|
| High-throughput events | --max-bytes 10G --max-age 7d |
| Request/reply logs | --max-age 24h --max-msgs 1000000 |
| Audit/compliance | --max-bytes 100G --max-age 365d |
| Ephemeral data | --max-age 1h --max-msgs 100000 |
Update your stream creation logic to always include limits:
```go
// Go - nats.go
js, _ := nc.JetStream()

// Bad: no limits
_, err := js.AddStream(&nats.StreamConfig{
    Name:     "ORDERS",
    Subjects: []string{"orders.>"},
})

// Good: explicit limits
_, err = js.AddStream(&nats.StreamConfig{
    Name:     "ORDERS",
    Subjects: []string{"orders.>"},
    MaxBytes: 10 * 1024 * 1024 * 1024, // 10GB
    MaxAge:   30 * 24 * time.Hour,     // 30 days
    Storage:  nats.FileStorage,
})
```

```python
# Python - nats.py
from nats.js.api import StreamConfig

# Bad: no limits
await js.add_stream(StreamConfig(
    name="ORDERS",
    subjects=["orders.>"],
))

# Good: explicit limits
await js.add_stream(StreamConfig(
    name="ORDERS",
    subjects=["orders.>"],
    max_bytes=10 * 1024 * 1024 * 1024,  # 10GB
    max_age=30 * 24 * 60 * 60 * 1_000_000_000,  # 30 days in nanoseconds
))
```

Require limits on every stream. Make it a team standard: no stream is created without at least one of max_bytes, max_msgs, max_age, or max_msgs_per_subject. Review stream configurations in code review just like you review database schemas.
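One way to make that standard enforceable rather than aspirational is a small wrapper around stream creation. This is a hypothetical helper, not a NATS API; the field names come from nats.go's StreamConfig:

```go
package streams

import (
    "fmt"

    "github.com/nats-io/nats.go"
)

// addStreamWithRequiredLimits refuses to create a stream unless at least one
// retention limit is configured, mirroring the team standard above.
func addStreamWithRequiredLimits(js nats.JetStreamContext, cfg *nats.StreamConfig) (*nats.StreamInfo, error) {
    if cfg.MaxBytes <= 0 && cfg.MaxMsgs <= 0 && cfg.MaxAge <= 0 && cfg.MaxMsgsPerSubject <= 0 {
        return nil, fmt.Errorf("stream %q has no retention limits; set MaxBytes, MaxMsgs, MaxAge, or MaxMsgsPerSubject", cfg.Name)
    }
    return js.AddStream(cfg)
}
```

Routing all programmatic stream creation through a guard like this closes the gap described earlier for dynamically created per-tenant or per-topic streams.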
Set per-account JetStream limits. Even if individual streams slip through without limits, per-account JetStream reservations cap the total damage:
```
nsc edit account -n PRODUCTION --js-mem-storage 1G --js-disk-storage 100G
```

This ensures an unlimited stream in one account can’t consume storage allocated to other accounts.
Use stream templates or configuration-as-code. Define standard stream configurations in your infrastructure-as-code tooling (Terraform, Helm, etc.) with limits baked in. Developers customize subjects and names but inherit sensible defaults for retention:
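For illustration, a standard-stream.json used with the command below might look like the following; the filename and values are examples, and the field names follow the JetStream stream configuration schema:

```json
{
  "name": "ORDERS",
  "subjects": ["orders.>"],
  "retention": "limits",
  "storage": "file",
  "discard": "old",
  "num_replicas": 3,
  "max_msgs": -1,
  "max_bytes": 10737418240,
  "max_age": 2592000000000000
}
```

Note that max_age in the raw stream configuration is expressed in nanoseconds; the value above is 30 days.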
```
# Create a stream from a standard config
nats stream add ORDERS --config standard-stream.json
```

Use Synadia Insights for continuous enforcement. Insights automatically flags every stream that has no limits configured, across all accounts and servers. Instead of hoping developers remember to set limits, unlimited streams surface as findings every collection cycle — before they become storage emergencies.
Start with the question “how long does this data need to be available?” For most operational data, max_age is the primary control — 7 days, 30 days, or whatever your consumers need to process the data. Add max_bytes as a safety cap: even if max_age allows 30 days of data, max_bytes prevents a traffic spike from filling the disk before age-based cleanup kicks in. max_msgs is useful for bounded-size streams like configuration or state, less so for event streams with variable rates. max_msgs_per_subject caps cardinality on a per-subject basis — useful for KV-like patterns where you want one message per key.
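As a concrete example of that last point, here is a sketch of a KV-like stream in Go; the stream name and subjects are hypothetical, and it assumes nats.go and the `time` package:

```go
// Keep only the latest message per subject, with age and size safety caps.
_, err := js.AddStream(&nats.StreamConfig{
    Name:              "DEVICE_STATE",
    Subjects:          []string{"state.device.>"},
    MaxMsgsPerSubject: 1,                      // one retained message per key
    MaxAge:            7 * 24 * time.Hour,     // age safety cap
    MaxBytes:          1 * 1024 * 1024 * 1024, // 1GB size safety cap
    Storage:           nats.FileStorage,
})
```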
No. nats stream edit applies limit changes without disrupting active publishers or consumers. If the stream currently exceeds the new limit, the server prunes old messages to bring the stream into compliance. This pruning can cause a temporary I/O spike on very large streams, but it doesn’t interrupt message flow. Publishers continue publishing, and consumers continue consuming — they just can’t access messages that were pruned.
The stream’s retention policy determines behavior. With the default limits retention policy, the oldest messages are automatically deleted to make room for new ones. With interest retention, messages are deleted once all consumers acknowledge them (limits still apply as a cap). With workqueue retention, messages are deleted after any consumer acknowledges them. In all cases, once a limit is hit, the stream self-regulates — new messages flow in, old messages flow out.
No. Stream limits are part of the stream configuration and are identical across all replicas. The leader enforces limits and replicates the resulting state (including deletions) to followers. You cannot have R1 with 10GB and R3 with 1GB — the configuration is singular and applies uniformly.
Yes. Sealed streams are intentionally frozen — they accept no new messages, so unbounded growth is impossible. Sealing a stream is a valid alternative to adding limits for streams that hold historical data you want to preserve permanently. However, sealing doesn’t address storage consumption of existing data. A sealed 500GB stream still consumes 500GB.