Series: NATS Community FAQs

Understanding JetStream Memory Usage Patterns

Aug 8, 2025

Why It Matters

Memory management in distributed messaging systems directly impacts performance, reliability, and operational costs. For NATS JetStream users, understanding memory consumption patterns helps architects properly size infrastructure and avoid unexpected resource constraints that could affect message delivery guarantees or system stability.

The Question

A NATS user recently observed high memory usage (13GB out of 16GB GOMEMLIMIT) on a specific node in their JetStream cluster, particularly on stream/consumer leaders and meta leaders. Even after stopping all clients, memory usage remained elevated while other nodes used only 2-3GB.

Understanding JetStream Memory Usage

JetStream’s memory footprint comes from a mix of operational features and performance optimizations. Memory is used to track message state, coordinate cluster operations, and accelerate read/write performance. The main contributors include:

  • Message Deduplication – In-memory tables store recently seen message IDs to prevent duplicates, sized by the configured deduplication window (default: 2 minutes).
  • File Store Caching – Recently accessed messages and stream data are cached in memory to speed up reads.
  • Metadata & Subject Tracking – State for streams, consumers, and subjects is held in memory for fast lookups.
  • Cluster Meta Leadership – The node acting as meta leader coordinates and manages state for all streams, which adds overhead.

Design Patterns and Solutions

Optimizing Memory Usage

  • Tune Deduplication: If using external deduplication, consider disabling this feature or reducing the window on a per-stream basis
  • Subject Strategy: Use fewer subjects per stream when possible (the user was already doing this correctly)
  • Consumer Patterns: Long-lived consumers are more memory-efficient than short-lived ones
  • Meta Leader Distribution: Consider separating meta leadership from nodes handling high-volume streams

Implementation Tips

```go
// Configure a stream with a reduced deduplication window
// (uses the github.com/nats-io/nats.go/jetstream package)
streamConfig := jetstream.StreamConfig{
	Name:       "MY_STREAM",
	Subjects:   []string{"single.subject"},
	Retention:  jetstream.WorkQueuePolicy,
	Duplicates: 30 * time.Second, // reduced from the 2-minute default
}
// Apply with: js.CreateStream(ctx, streamConfig)
```

Common Misconceptions

While message count and size affect disk usage, they don’t directly correlate with memory usage unless consumers are performing full stream scans or creating many short-lived consumers. The user’s workload of 300M+ messages across just a few streams with single consumers per stream shouldn’t inherently cause high memory usage.

Lessons Learned

Memory usage in JetStream clusters is more influenced by architectural patterns (meta leadership, deduplication settings, consumer lifecycle) than by raw message counts. When troubleshooting memory issues, focus on examining and optimizing these patterns rather than assuming a direct correlation with message volume or size.

For memory issues that persist after applying the pointers above (as was the case for this questioner), profiling with Go's built-in tooling (pprof) provides the most definitive insight into allocation patterns and potential optimization opportunities.
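The NATS server exposes Go's pprof endpoints when profiling is enabled in its configuration; the port below is illustrative:

```
# nats-server.conf — enable the built-in pprof HTTP endpoint
prof_port: 65432
```

With that set, a heap profile can be captured from a running server with `go tool pprof http://<server-host>:65432/debug/pprof/heap`, which breaks memory usage down by allocation site.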

