An uncompressed large stream is a file-backed JetStream stream that exceeds 1 GiB of storage with no compression enabled. Enabling S2 compression on these streams reduces disk usage, lowers I/O costs, and can improve replication throughput — with negligible CPU overhead.
Disk is rarely free. In cloud environments, every GiB of provisioned storage costs money — and replicated streams multiply that cost by R3 or R5. A 10 GiB uncompressed stream at R3 consumes 30 GiB of raw storage across the cluster. If S2 compression achieves a typical 50-70% reduction on structured data (JSON, protocol buffers, log lines), that drops to 9-15 GiB. At scale across dozens of streams, the savings are substantial.
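The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is a minimal sketch: the retained-size fractions (0.30-0.50, i.e. 50-70% savings) are assumed midpoints from the text, not measured values.

```go
package main

import "fmt"

// rawStorageGiB returns the total raw storage a stream consumes across
// the cluster: logical stream size multiplied by the replica count.
func rawStorageGiB(logicalGiB float64, replicas int) float64 {
	return logicalGiB * float64(replicas)
}

// compressedStorageGiB applies an assumed compression ratio, expressed
// as the fraction of the original size retained after compression.
func compressedStorageGiB(rawGiB, retained float64) float64 {
	return rawGiB * retained
}

func main() {
	raw := rawStorageGiB(10, 3) // 10 GiB logical stream at R3
	fmt.Printf("uncompressed: %.0f GiB\n", raw)
	// Assume S2 retains 30-50%% of the original size on structured data.
	fmt.Printf("compressed:   %.0f-%.0f GiB\n",
		compressedStorageGiB(raw, 0.30),
		compressedStorageGiB(raw, 0.50))
}
```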
Beyond raw cost, large uncompressed streams increase I/O pressure. Every message written and replicated moves more bytes across disk and network. During Raft snapshotting and replica catch-up, the server reads and transfers the full uncompressed data. Compression reduces all of these data paths proportionally.
The CPU cost of S2 compression is minimal. S2, an extension of Google's Snappy format, is specifically designed for high-throughput, low-latency compression. On modern hardware, S2 compresses at several GB/s per core — far faster than disk or network throughput. The NATS server compresses messages at write time and decompresses at read time transparently. Consumers see the original uncompressed messages with no application changes required.
Default configuration. Stream compression is not enabled by default in NATS. Streams created without explicitly setting the compression option use no compression. This is the most common reason: compression wasn't rejected, it was simply never configured.
Legacy streams created before compression support. S2 compression was added in NATS Server 2.10. Streams created on earlier versions have no compression, and upgrading the server doesn’t retroactively compress existing streams.
Assumption that compression is expensive. Some operators avoid compression based on experience with gzip or zstd at high compression levels. S2 is a different class — it trades compression ratio for speed, making it suitable for inline message compression at high throughput.
Small streams that grew over time. A stream that started at 100 MB didn’t warrant optimization attention. Over months of accumulated data, it crossed into GiB territory without anyone revisiting the configuration.
```shell
nats stream report
```

Look for file-backed streams with high storage usage. To check a specific stream's compression setting:
```shell
nats stream info <stream_name> --json | jq '{name: .config.name, storage_type: .config.storage, bytes: .state.bytes, compression: .config.compression}'
```

A compression value of `"none"` or `""` (empty) means compression is disabled.
```shell
# List all streams with their storage and compression settings
nats stream list --json | jq '.[] | select(.config.storage == "file" and (.config.compression == "none" or .config.compression == "") and .state.bytes > 1073741824) | {name: .config.name, bytes: .state.bytes, compression: .config.compression}'
```

S2 compression ratios depend heavily on message content:
| Content type | Typical S2 ratio | Savings |
|---|---|---|
| JSON / structured text | 3:1 to 5:1 | 65-80% |
| Protocol Buffers | 2:1 to 3:1 | 50-65% |
| Log lines / CSV | 3:1 to 6:1 | 65-85% |
| Already compressed (images, video) | ~1:1 | 0% |
| Random / encrypted data | ~1:1 | 0% |
If your stream stores structured data, expect significant savings. If it stores pre-compressed or encrypted payloads, compression won’t help and this check can be ignored for that stream.
S2 compression can be enabled on an existing stream without data loss or downtime. The server compresses new messages going forward — existing messages remain uncompressed until they age out via retention policy.
```shell
nats stream edit <stream_name> --compression s2
```

Or via the NATS Go client:
```go
js, _ := nc.JetStream()
// Fetch the current config, flip compression on, and push the update.
info, _ := js.StreamInfo("ORDERS")
info.Config.Compression = nats.S2Compression
_, err := js.UpdateStream(&info.Config)
```

Verify compression is active:
```shell
nats stream info <stream_name> --json | jq '.config.compression'
# Should return: "s2"
```

Run a sweep across all streams to identify other candidates:
```shell
nats stream list --json | jq '[.[] | select(.config.storage == "file" and (.config.compression == "none" or .config.compression == "")) | {name: .config.name, size_mb: (.state.bytes / 1048576 | floor)}] | sort_by(-.size_mb)'
```

Enable compression on the largest streams first for maximum impact. There's no reason not to enable S2 on any file-backed stream unless you've verified the content is already compressed or encrypted.
Make S2 compression part of your standard stream configuration template so new streams get it automatically:
```go
_, err := js.AddStream(&nats.StreamConfig{
	Name:        "ORDERS",
	Subjects:    []string{"orders.>"},
	Storage:     nats.FileStorage,
	Replicas:    3,
	Compression: nats.S2Compression, // Always include this
	MaxAge:      24 * time.Hour,
})
```

Document this as a team standard. Any stream creation that doesn't include `compression: s2` should be questioned in code review.
Negligibly. S2 decompression runs at multiple GB/s per core — far faster than network or disk throughput. Consumers receive the original uncompressed messages transparently. No application code changes are needed. In practice, compressed streams can actually be faster to consume because the server reads less data from disk.
Not in-place. When you enable compression on an existing stream, only new messages are compressed. Existing messages remain uncompressed until they age out via retention policy (max_age, max_bytes, max_msgs). To force full compression, you could create a new compressed stream and mirror the data, but this is rarely worth the operational complexity — just wait for natural retention.
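If waiting for natural retention is not acceptable, the mirror approach mentioned above looks roughly like this in the Go client. This is a hypothetical sketch: the stream name `ORDERS_V2` and the replica count are assumptions, and mirror streams must not define their own subjects.

```go
// Sketch: create a compressed mirror of an existing uncompressed stream.
// The mirror replicates existing messages in compressed form; once it has
// caught up, consumers can be moved over and the original stream retired.
_, err := js.AddStream(&nats.StreamConfig{
	Name:        "ORDERS_V2", // hypothetical replacement stream
	Storage:     nats.FileStorage,
	Replicas:    3,
	Compression: nats.S2Compression,
	Mirror:      &nats.StreamSource{Name: "ORDERS"},
})
```

Weigh this against simply waiting: the cutover requires coordinating every consumer of the original stream.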
Currently, S2 is the only compression algorithm supported by NATS JetStream. S2 was chosen because it provides good compression ratios at extremely high throughput with minimal CPU overhead. It’s from the same family as Snappy (used by Google internally) but with better ratios.
No. Compression is only available for file-backed streams. Memory-backed streams store messages in RAM where the compression/decompression overhead, while small, has no disk I/O benefit to offset it. If a memory-backed stream is using too much RAM, consider switching it to file-backed storage with S2 compression — you’ll use disk instead of memory and compress at the same time.
Yes. NATS replicates the compressed form of messages between servers. This means S2 compression reduces both disk usage and inter-server network traffic proportionally. For streams with R3 or R5 replication across regions, the network bandwidth savings can be significant.