A large memory-storage stream is a JetStream stream backed by in-memory storage that has grown beyond 100 MiB (configurable via the max_memory_mib parameter), consuming expensive server RAM that could be freed by switching to file-backed storage.
JetStream offers two storage backends: memory and file. Memory storage keeps all stream data in the server process’s RAM, providing the lowest possible read and write latency. File storage persists data to disk and relies on the operating system’s page cache for frequently accessed data. For small, ephemeral streams where microsecond latency matters — rate limiters, session caches, real-time aggregations — memory storage is the right choice.
The economics change dramatically as streams grow. A 500MB memory-backed stream consumes 500MB of server RAM — permanently, as long as the stream exists. That same data stored on disk uses negligible RAM for metadata and relies on the OS page cache for hot data. In a three-node R3 cluster, a 500MB memory stream consumes 1.5GB of total cluster RAM. Server RAM is the most constrained and expensive resource in most NATS deployments; disk is comparatively cheap and abundant.
The risk isn’t just cost. Memory-backed streams compete with everything else the NATS server needs RAM for: client connection buffers, subscription routing tables, message processing queues, and the Go runtime itself. A single large memory stream can push the server into memory pressure, triggering more aggressive garbage collection, increasing tail latency for all clients, and in extreme cases causing out-of-memory kills. The server doesn’t distinguish between “memory used for stream data” and “memory used for operations” — it’s all the same heap.
Defaulting to memory storage without sizing estimates. Memory storage is sometimes chosen during initial development for its simplicity and speed, without projecting how large the stream will grow in production. A stream that’s 10MB in development can be 10GB in production.
Stream growth beyond original expectations. A stream was correctly sized for memory storage at creation, but message rates, retention periods, or message sizes grew over time. Without monitoring, the stream gradually consumes more RAM than intended.
Copy-paste configuration from small streams. A team creates a small, fast memory stream that works well. The same storage type gets copied into configurations for other streams that don’t have the same latency requirements or size profile.
No max_bytes limit set. Without a bytes limit, a memory stream can grow until it consumes all available JetStream memory reservation — or all available server RAM. See OPT_SYS_001 for the related issue of streams without limits.
Retention policy mismatch. A stream with limits retention and a long max_age (or no max_age at all) accumulates messages in memory indefinitely. The combination of memory storage with long retention is almost always a mistake.
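The first of these causes, missing sizing estimates, is cheap to avoid with a back-of-envelope projection before choosing a storage backend. The sketch below is a simplified model (all rates, message sizes, and retention figures are illustrative assumptions, not from this article): steady-state size is roughly message rate × average message size × retention window, multiplied by the replica count for the total cluster footprint.

```python
# Back-of-envelope sizing for a limits-retention JetStream stream.
# All workload figures below are illustrative assumptions.

def projected_stream_bytes(msgs_per_sec: float, avg_msg_bytes: int,
                           retention_secs: float, replicas: int = 1) -> int:
    """Approximate steady-state bytes held by a limits-retention stream,
    multiplied across replicas for the total cluster footprint."""
    return int(msgs_per_sec * avg_msg_bytes * retention_secs * replicas)

# Development workload: 10 msg/s, 1 KiB messages, 15-minute retention
dev = projected_stream_bytes(10, 1024, 15 * 60)
print(f"dev:  {dev / 1024**2:.1f} MiB")   # comfortably small for memory storage

# Production workload: 2,000 msg/s, 4 KiB messages, 1-hour retention, R3
prod = projected_stream_bytes(2_000, 4096, 3600, replicas=3)
print(f"prod: {prod / 1024**3:.1f} GiB")  # clearly a file-storage stream
```

The same stream definition that is a few MiB in development becomes tens of GiB of cluster RAM in production, which is exactly the "10MB in development, 10GB in production" trap described above.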
```shell
# List all streams with storage type and size
nats stream report
```

The Storage column shows Memory or File. Sort by size to find the largest memory streams. Any memory stream above your threshold (default: 100 MiB) is a candidate for migration.
```shell
# Detailed stream info including storage type and current usage
nats stream info <stream_name>
```

Key fields: Storage (Memory vs File), Bytes (current size), Max Bytes (configured limit, if any), and Messages (total message count). If Max Bytes is -1 (unlimited), the stream has no ceiling on memory consumption.
```shell
# JetStream memory usage per server
nats server report jetstream
```

Compare Memory Used against Memory Reserved and total server memory. If memory streams account for a large percentage of JetStream memory usage, migration to file storage will free significant capacity.
```shell
# Check account-level JetStream limits
nats account info
```

Look at the JetStream section for memory storage limits and current usage. If memory usage is approaching the account limit, large memory streams are likely the primary consumer.
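"A large percentage" from the server report can be made concrete with a one-line calculation on the reported numbers. The figures used here are illustrative assumptions, not output from a real deployment:

```python
def memory_stream_share(memory_stream_bytes: int,
                        jetstream_memory_used: int) -> float:
    """Fraction of JetStream's in-memory usage attributable to
    memory-backed streams (from `nats server report jetstream` figures)."""
    if jetstream_memory_used == 0:
        return 0.0
    return memory_stream_bytes / jetstream_memory_used

# Illustrative numbers: 1.2 GiB of memory streams out of 1.6 GiB used
share = memory_stream_share(1_200 * 1024**2, 1_600 * 1024**2)
print(f"memory streams account for {share:.0%} of JetStream memory")  # 75%
```

When that fraction dominates, migrating the largest memory streams to file storage recovers most of the reserved capacity.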
If migration isn’t possible immediately, cap the stream’s memory usage to prevent further growth:
```shell
# Set a size limit on an existing memory stream
nats stream edit <stream_name> --max-bytes 104857600  # 100MB
```

This prevents the stream from growing beyond the limit — oldest messages will be discarded when the limit is reached (for the limits retention policy). This is a stop-gap, not a solution.
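The discard behavior of this stop-gap can be illustrated with a toy model of a limits-retention stream using DiscardOld semantics. This is a simplified sketch for intuition, not the actual server implementation:

```python
from collections import deque

class MemoryStreamModel:
    """Toy model of a limits-retention stream with DiscardOld semantics:
    once max_bytes is exceeded, the oldest messages are dropped."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.msgs = deque()
        self.bytes = 0

    def publish(self, msg: bytes) -> None:
        self.msgs.append(msg)
        self.bytes += len(msg)
        while self.bytes > self.max_bytes:     # evict oldest until under cap
            dropped = self.msgs.popleft()
            self.bytes -= len(dropped)

s = MemoryStreamModel(max_bytes=100)
for _ in range(10):
    s.publish(bytes(30))   # ten 30-byte messages against a 100-byte cap

print(len(s.msgs), s.bytes)   # 3 90 — only the newest 3 messages survive
```

The cap bounds RAM usage, but publishers keep overwriting history, which is why it buys time rather than solving the problem.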
JetStream does not support changing storage type in-place. You must create a new stream with file storage and migrate the data:
```shell
# 1. Back up the existing stream
nats stream backup <stream_name> /tmp/stream-backup
```
```shell
# 2. Note the stream's current configuration
nats stream info <stream_name> --json > /tmp/stream-config.json
```
```shell
# 3. Delete the memory-backed stream
nats stream delete <stream_name>
```
```shell
# 4. Recreate with file storage (copy all other settings)
nats stream add <stream_name> \
  --storage file \
  --subjects "<original_subjects>" \
  --retention <original_retention> \
  --max-bytes <appropriate_limit> \
  --replicas <original_replicas>
```
```shell
# 5. Restore the backup
nats stream restore <stream_name> /tmp/stream-backup
```

For streams where downtime is unacceptable, use a mirror to migrate with minimal disruption:
```go
package main

import (
	"context"
	"log"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

func main() {
	nc, _ := nats.Connect(nats.DefaultURL)
	js, _ := jetstream.New(nc)

	ctx := context.Background()

	// Create a file-backed mirror of the memory stream
	_, err := js.CreateStream(ctx, jetstream.StreamConfig{
		Name:    "ORDERS_FILE",
		Storage: jetstream.FileStorage,
		Mirror: &jetstream.StreamSource{
			Name: "ORDERS", // original memory stream
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Once mirror is caught up, switch consumers to ORDERS_FILE
	// Then delete the original ORDERS stream
}
```

The same migration in Python:

```python
import nats

async def migrate_to_file_storage():
    nc = await nats.connect("nats://localhost:4222")
    js = nc.jetstream()

    # Create file-backed mirror of the memory stream
    await js.add_stream(
        name="ORDERS_FILE",
        mirror={"name": "ORDERS"},
        storage="file",
    )

    # Monitor mirror lag until caught up
    info = await js.stream_info("ORDERS_FILE")
    print(f"Mirror lag: {info.mirror.lag} messages")

    # Once lag is 0, redirect consumers and delete original
```

Default to file storage. Make file storage the organizational default for all new streams. Memory storage should be an explicit, justified choice — reserved for streams that meet specific criteria: small size (under your threshold), short retention (minutes, not hours), and genuine latency sensitivity.
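Those three criteria can be encoded as a simple policy check for stream-provisioning tooling. The threshold values below are illustrative assumptions (the 100 MiB default from this article and a 15-minute retention cutoff), not fixed rules:

```python
def memory_storage_justified(projected_bytes: int, retention_secs: float,
                             latency_sensitive: bool,
                             threshold_bytes: int = 100 * 1024 * 1024,
                             max_retention_secs: float = 15 * 60) -> bool:
    """Return True only if a stream meets all three criteria for memory
    storage: small size, short retention, genuine latency sensitivity."""
    return (projected_bytes <= threshold_bytes
            and retention_secs <= max_retention_secs
            and latency_sensitive)

# A 20 MiB rate-limiter stream with 5-minute retention qualifies
print(memory_storage_justified(20 * 1024**2, 300, True))    # True

# A 500 MiB stream fails regardless of its latency requirements
print(memory_storage_justified(500 * 1024**2, 300, True))   # False
```

Requiring all three conditions keeps memory storage an explicit exception rather than a default that streams drift into.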
Enforce max_bytes on all memory streams. Require a size limit for any memory-backed stream to prevent unchecked growth. A good heuristic: max_bytes for a memory stream should never exceed 10-20% of the server’s total available RAM.
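Applied as a quick calculation (the 15% midpoint and the 16 GiB server size are illustrative assumptions within the 10-20% heuristic above):

```python
def max_memory_stream_bytes(server_ram_bytes: int,
                            fraction: float = 0.15) -> int:
    """Suggested max_bytes cap for a memory stream as a fraction of server
    RAM, per the 10-20% heuristic (15% used here as a midpoint)."""
    if not 0.10 <= fraction <= 0.20:
        raise ValueError("fraction should stay within the 10-20% heuristic")
    return int(server_ram_bytes * fraction)

gib = 1024**3
cap = max_memory_stream_bytes(16 * gib)
print(f"16 GiB server -> cap memory streams at {cap / gib:.1f} GiB")  # 2.4 GiB
```

In a replicated cluster, remember that an R3 memory stream consumes this budget on three servers at once, so the effective per-stream cap may need to be lower still.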
Monitor memory stream sizes over time. Synadia Insights automatically flags memory streams that exceed the configured threshold, catching growth before it impacts server stability. Without automated monitoring, memory streams tend to grow silently until they cause problems.
For most workloads, no. File storage in NATS JetStream relies on the operating system’s page cache. Frequently accessed data — recent messages, active consumer read positions — stays in the page cache and is served from RAM at near-memory speeds. The latency difference is measurable in microbenchmarks (single-digit microseconds) but rarely meaningful in production where network latency and consumer processing time dominate. File storage is the correct default unless you have measured evidence that memory storage latency is required.
No. Storage type is immutable after stream creation. To change from memory to file (or vice versa), you must create a new stream with the desired storage type and migrate data. The mirror-based approach described above provides a near-zero-downtime migration path — the mirror catches up in real time while the original stream continues serving traffic.
The default threshold of 100 MiB is a reasonable starting point. The right value depends on your server’s total RAM and how many memory streams exist. A 100 MiB memory stream on a 64 GB server is negligible; the same stream on a 4 GB server is significant. The key principle: if a stream is large enough that its RAM consumption could affect other server operations, it should be on file storage.
Consumers are bound to a specific stream by name. When you delete the memory stream and recreate it as file-backed (or rename a mirror), consumers need to be recreated or redirected to the new stream. Plan for a brief consumer reconfiguration window. With the mirror approach, you can set up consumers on the new stream before deleting the original, minimizing disruption.