
NATS Wasted JetStream Memory Reservation: What It Means and How to Fix It

Severity: Info
Category: Consistency
Applies to: Cost
Check ID: OPT_COST_003
Detection threshold: JetStream memory usage < 20% of reserved capacity (max_mem)

Wasted JetStream Memory Reservation means a server’s JetStream memory usage is below 20% of its reserved capacity (max_mem). The server has reserved a significant block of memory for JetStream that is largely unused — capacity that could be reclaimed for other purposes or right-sized to reflect actual needs.
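As a minimal sketch, the detection condition can be expressed as follows (the field names mirror the `memory` and `reserved_memory` values in the server's /jsz monitoring output; the helper itself is illustrative, not part of the check):

```python
def wasted_memory_reservation(memory_used: int, reserved_memory: int,
                              threshold: float = 0.20) -> bool:
    """Flag a server whose JetStream memory usage is below the
    threshold fraction of its reserved capacity (max_mem)."""
    if reserved_memory == 0:
        return False  # nothing reserved, nothing wasted
    return memory_used / reserved_memory < threshold

# 500 MiB used out of a 4 GiB reservation is ~12% utilization, so flagged
print(wasted_memory_reservation(500 * 1024**2, 4 * 1024**3))  # → True
```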

Why this matters

JetStream memory reservations directly affect cluster capacity planning and resource accounting. When a server reserves 4 GiB of memory for JetStream but only uses 500 MiB, the remaining 3.5 GiB is neither available to JetStream streams on other servers (reservations are per-server, not pooled) nor usable by the operating system for file cache or other processes in any guaranteed way.

In multi-tenant environments, over-provisioned memory reservations create an accounting fiction. Account-level JetStream limits are enforced against the server’s max_mem reservation. If the reservation is artificially high, accounts can create memory-backed streams that appear to fit within the server’s capacity but exceed the actual workload requirements. When traffic grows and those streams actually use the reserved memory, the server may not have enough physical RAM — the reservation was a promise the hardware can’t keep.

The operational cost is subtler but real. Right-sizing JetStream memory reservations is part of capacity planning. Servers with large unused reservations distort cluster-level metrics: aggregate “available JetStream memory” looks healthy because most servers show plenty of headroom, but the reality is wasted allocation rather than genuine available capacity. This makes it harder to determine when the cluster actually needs more resources.

Common causes

  • Conservative initial provisioning. The max_mem was set to a generous value during deployment to avoid hitting limits in production. Traffic never materialized to the expected level, or the workload shifted to file-backed storage, leaving the memory reservation far above actual usage.

  • Workload migration to file storage. Memory-backed streams were converted to file-backed storage for durability or cost reasons, but the max_mem reservation was never reduced to match the new, lower memory footprint.

  • Stream deletion without reservation adjustment. A large memory-backed stream was deleted, freeing the memory it consumed, but the server’s max_mem reservation was not reduced. The reservation persists even though the workload no longer exists.

  • Uneven stream placement. In a cluster, memory-backed streams were placed (or migrated) to a subset of servers, leaving other servers with their original memory reservations but no memory-backed streams to fill them.

  • Copy-paste server configuration. All servers in the cluster share the same configuration template with identical max_mem values, regardless of whether each server actually hosts memory-backed streams. Servers that host only file-backed workloads still reserve memory they’ll never use.

How to diagnose

Check memory utilization across all servers

nats server report jetstream

Compare the MEM (used) and MEM MAX (reserved) columns. Any server where usage is below 20% of the reservation is flagged by this check.

Identify which servers have no memory-backed streams

nats stream report

Cross-reference streams with storage: memory against the servers that host them. Servers with zero memory-backed streams need minimal (or zero) JetStream memory reservation.

Check per-stream memory usage

nats stream info <stream_name>

For each memory-backed stream, note the bytes stored. Sum the memory usage across all streams on a given server and compare with the server’s max_mem.

Review current cluster-wide memory allocation

curl http://localhost:8222/jsz | jq '{memory: .memory, reserved_memory: .reserved_memory, pct: ((.memory / .reserved_memory) * 100 | floor)}'

Run this on each server to build a picture of utilization across the cluster.
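To assemble that picture programmatically rather than per-server, a small sketch like the following can summarize utilization from already-fetched /jsz responses (it reads the same memory and reserved_memory fields as the jq pipeline above; the summarize helper and the server names are illustrative, not an Insights or NATS API):

```python
def summarize(jsz_by_server: dict) -> dict:
    """Per-server JetStream memory utilization (%) from parsed /jsz
    responses, keyed by server name."""
    out = {}
    for name, jsz in jsz_by_server.items():
        used = jsz.get("memory", 0)
        reserved = jsz.get("reserved_memory", 0)
        out[name] = round(used / reserved * 100, 1) if reserved else 0.0
    return out

# Usage (illustrative): fetch http://<server>:8222/jsz for each server,
# JSON-parse the bodies into a dict keyed by server name, then:
print(summarize({"n1": {"memory": 500 * 1024**2,
                        "reserved_memory": 4 * 1024**3}}))  # → {'n1': 12.2}
```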

How to fix it

Immediate: right-size the reservation

Reduce max_mem to match actual usage plus a reasonable buffer. A good heuristic is 2x current peak usage, with a minimum of 256 MiB for overhead:

server.conf

jetstream {
  max_mem: 1GiB        # reduced from 4GiB to match actual usage + headroom
  store_dir: /data/jetstream
}

Reload the configuration:

nats-server --signal reload

Note: the server will reject a reload if the new max_mem is below current usage. Reduce stream memory consumption first if needed.

Short-term: consolidate memory-backed streams

Migrate memory-backed streams to fewer servers. If memory-backed streams are spread thinly across many servers, consolidate them onto fewer servers using placement tags. This lets you reduce max_mem to zero on servers that no longer host memory workloads:

Go:

// Use placement tags to direct memory-backed streams to specific servers.
js, err := nc.JetStream()
if err != nil {
	log.Fatal(err)
}
_, err = js.AddStream(&nats.StreamConfig{
	Name:     "FAST_CACHE",
	Subjects: []string{"cache.>"},
	Storage:  nats.MemoryStorage,
	MaxBytes: 512 * 1024 * 1024,
	Placement: &nats.Placement{
		Tags: []string{"memory-tier"},
	},
})
if err != nil {
	log.Fatal(err)
}
Python:

from nats.js.api import StreamConfig, StorageType, Placement

await js.add_stream(StreamConfig(
    name="FAST_CACHE",
    subjects=["cache.>"],
    storage=StorageType.MEMORY,
    max_bytes=512 * 1024 * 1024,
    placement=Placement(tags=["memory-tier"]),
))

Tag the designated memory-tier servers in their configuration:

server_tags: ["memory-tier"]

Set max_mem to zero on servers with no memory workloads:

jetstream {
  max_mem: 0
  store_dir: /data/jetstream
}

This is explicit: the server cannot host any memory-backed streams or replicas. It prevents accidental placement of memory workloads on servers not equipped for them.

Long-term: differentiate server roles

Define memory-tier and storage-tier servers. Not every server in a cluster needs to host memory-backed streams. Designate specific servers with higher RAM for memory workloads and configure their max_mem accordingly. Other servers focus on file-backed storage with max_mem: 0.

Automate reservation sizing. Include JetStream reservation values in your infrastructure-as-code templates. Derive max_mem from the actual memory-backed streams deployed to each server, with a configurable buffer percentage.
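As a sketch of that derivation for a templating step (the stream-record shape and the buffer default are assumptions for illustration, not an Insights or NATS API):

```python
def derive_max_mem(streams: list, buffer_pct: float = 20.0) -> int:
    """Derive a server's max_mem from the memory-backed streams placed
    on it, plus a configurable buffer percentage. Servers hosting no
    memory-backed streams get 0, which disallows memory workloads."""
    mem_bytes = sum(s["max_bytes"] for s in streams
                    if s.get("storage") == "memory")
    if mem_bytes == 0:
        return 0
    return int(round(mem_bytes * (1 + buffer_pct / 100)))

# A server hosting one 1 GiB memory stream gets a 20% buffer on top
print(derive_max_mem([{"storage": "memory", "max_bytes": 1024**3}]))
```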

Synadia Insights evaluates this ratio every collection epoch and flags servers where JetStream memory utilization is below 20% of reserved, making it easy to identify right-sizing opportunities across your entire deployment.

Frequently asked questions

Is wasted memory reservation actually harmful?

It’s not harmful in the way that memory pressure is: nothing breaks when memory is over-reserved. The cost is opportunity: the reserved memory is not pooled across the cluster, so it can’t be used by other servers. In cloud environments, you may be paying for RAM that JetStream never uses. In capacity planning, over-provisioned reservations distort your view of actual cluster headroom.

What’s the minimum max_mem I should set?

If a server hosts no memory-backed streams, set max_mem: 0. If it hosts memory-backed streams, set max_mem to at least 2x the current peak usage of those streams. The absolute minimum for any server hosting memory workloads is the sum of max_bytes across all memory-backed streams it hosts, plus 20% for overhead.
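Those rules combine into a small helper (a sketch of the heuristic described in this answer, not an official tool; sizes are in bytes):

```python
def recommended_max_mem(peak_usage: int, stream_max_bytes: list) -> int:
    """Heuristic from this guide: 0 when no memory-backed streams are
    hosted; otherwise at least 2x current peak usage, and never below
    the sum of max_bytes across hosted memory-backed streams plus 20%
    for overhead."""
    if not stream_max_bytes:
        return 0
    floor = int(round(sum(stream_max_bytes) * 1.2))
    return max(2 * peak_usage, floor)

# Peak usage of 1 GiB dominates a single small 256 MiB stream
print(recommended_max_mem(1024**3, [256 * 1024**2]))  # → 2 GiB in bytes
```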

Can I reduce max_mem on a running server?

Yes, via a config reload (nats-server --signal reload). The server will accept the new value if current JetStream memory usage is below the new limit. If current usage exceeds the proposed new limit, the reload is rejected — reduce stream memory consumption first by purging, setting retention limits, or migrating streams.

How does this interact with account-level JetStream limits?

Account-level mem_storage limits are enforced against the server’s max_mem reservation. If you reduce max_mem, you may need to reduce account limits proportionally. If the sum of account memory limits exceeds the new max_mem, stream creation may fail for accounts that attempt to use more than the server can provide.
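A quick sanity check before reducing max_mem might look like the following (a hypothetical helper; the per-account limit values would come from wherever your account resolver or operator config stores them):

```python
def accounts_exceeding(account_mem_limits: dict, new_max_mem: int) -> list:
    """Return account names whose combined memory limits no longer fit
    within a reduced max_mem; an empty list means the reduction is safe
    from an account-limit perspective."""
    if sum(account_mem_limits.values()) <= new_max_mem:
        return []
    # Flag the largest limits first as candidates for proportional cuts
    return sorted(account_mem_limits, key=account_mem_limits.get, reverse=True)

# Combined limits of 150 exceed a new max_mem of 100, so both accounts
# are flagged, largest first
print(accounts_exceeding({"acct_a": 100, "acct_b": 50}, 100))
```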

Should I have the same max_mem on every server?

Not necessarily. Uniform configuration is simpler to manage but leads to wasted reservations on servers that don’t host memory workloads. If your cluster has a mix of memory-backed and file-backed streams, differentiate max_mem per server role. Use placement tags to ensure memory-backed streams are directed to servers with adequate reservations.

Proactive monitoring for NATS Wasted JetStream Memory Reservation with Synadia Insights

With 100+ always-on audit Checks from the NATS experts, Insights helps you find and fix problems before they become costly incidents.
No alert rules to write. No dashboards to maintain.

Start a 14-day Insights trial