
Message Tracing in NATS 2.11: Debug Your Entire System with One Command

Peter Humulock
Jan 29, 2026

NATS 2.11 introduced message tracing—a built-in way to see exactly where your messages go. No external tools, no code instrumentation. Just run a command and watch your message flow through clusters, gateways, leaf nodes, and JetStream.

If you’ve ever needed to debug your system or just wanted to peek under the hood, this one’s for you.

Watch the video overview: Message Tracing in NATS

The Problem

In a distributed NATS deployment with multiple clusters, leaf nodes, and gateways, a single message might hop through several servers before reaching its destination. When something goes wrong—a message isn’t delivered, latency spikes, or routing behaves unexpectedly—you need visibility into what’s actually happening.

Traditional approaches require external tracing infrastructure or manual instrumentation throughout your codebase. NATS takes a different approach: tracing is built directly into the server.

How It Works

The nats trace command sends a message through your system and shows you exactly where it goes:

nats trace demo.subject "hello world"

When you run this, NATS:

  1. Creates a temporary inbox to receive trace messages from each server
  2. Publishes your message with special trace headers
  3. Collects responses as the message flows through the system
  4. Builds a routing tree showing the complete path

Each server your message touches sends back a trace event. These events are aggregated and displayed as a visual tree showing every hop, connection type, and destination.
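
If you want to see the moving parts yourself, the same flow can be reproduced with a plain NATS client: subscribe to a throwaway inbox, attach the trace headers to your message, and read back whatever the servers report. Below is a minimal Go sketch using the nats.go client that mirrors the CLI example above; the two-second collection window is an arbitrary choice, and real tooling would parse the JSON trace events rather than printing raw payloads.

// A minimal sketch of what `nats trace` does under the hood.
// Assumes a locally reachable server and github.com/nats-io/nats.go.
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	// 1. Temporary inbox that every server on the path reports back to.
	inbox := nats.NewInbox()
	sub, err := nc.SubscribeSync(inbox)
	if err != nil {
		panic(err)
	}

	// 2. Publish the message with the trace headers described in this post.
	msg := nats.NewMsg("demo.subject")
	msg.Data = []byte("hello world")
	msg.Header.Set("Nats-Trace-Dest", inbox)  // where trace events should be sent
	msg.Header.Set("Nats-Trace-Only", "true") // dry run: trace it, don't deliver it
	if err := nc.PublishMsg(msg); err != nil {
		panic(err)
	}

	// 3. Collect trace events (one JSON payload per server) until things go quiet.
	for {
		ev, err := sub.NextMsg(2 * time.Second)
		if err != nil { // times out once every server has reported
			break
		}
		fmt.Printf("trace event:\n%s\n\n", ev.Data)
	}
}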

The Trace Headers

When you run the trace command, three headers are automatically added to your message:

Header           Purpose
Nats-Trace-Dest  The inbox subject where trace events are sent
Nats-Trace-Only  Enables dry-run mode—message is traced but not delivered
Accept-Encoding  Compression format for trace events (gzip or snappy)

The Nats-Trace-Only header is set by default, so your message is traced but never actually delivered or stored in JetStream. This enables safe tracing of message routing without any side effects.
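
If you build the trace message yourself, as in the sketch above, those defaults are yours to choose. Continuing that sketch, you could drop the dry-run header to trace a real delivery, or ask the servers to compress the trace events they send back (in which case the payloads arriving on the inbox need to be decompressed before parsing):

// Continuing the earlier Go sketch: pick delivery and compression behavior.
msg.Header.Del("Nats-Trace-Only")         // trace AND actually deliver/store the message
msg.Header.Set("Accept-Encoding", "gzip") // trace events arrive gzip-compressed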

This built-in approach sets NATS apart from other messaging systems, which typically require integration with external distributed tracing tools like Jaeger, Zipkin, or dedicated observability platforms. With NATS, the server handles everything—no sidecars, no agents, no additional infrastructure.

Understanding Trace Output

Each trace event contains four main fields (sketched in code just after this list):

  • server: Information about the server (name, cluster, version)
  • request: The original message headers
  • hops: Number of outgoing hops from this server
  • events: Everything that happened on this server
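
For custom tooling it can help to picture that event as a concrete type. The Go struct below is an illustrative sketch based only on the fields listed above, not the server's exact JSON schema, so treat the field names and types as placeholders.

// Illustrative only: a rough shape for one trace event, based on the fields
// described in this post rather than the server's actual type definitions.
type TraceEvent struct {
	Server struct { // the reporting server
		Name    string
		Cluster string
		Version string
	}
	Request struct { // the original message as it arrived
		Header map[string][]string
	}
	Hops   int // number of outgoing hops from this server
	Events []struct {
		Type string // "IN", "EG", "JS", "SE", "SI", "SM" (see the tables below)
		Kind int    // connection kind: 0=client, 1=router, 2=gateway, ... (see below)
	}
}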

Event Types

Event types tell you what happened:

Type  Meaning
IN    Ingress—message arrived at this server
EG    Egress—message left this server
JS    JetStream—message was stored in a stream
SE    Stream Export—message crossed an account boundary via a stream export
SI    Service Import—message crossed an account boundary via a service import
SM    Subject Mapping—subject was transformed via mapping rules

Event Kinds

Event kinds tell you the connection type:

Kind  Connection Type
0     Client
1     Router (same cluster)
2     Gateway (cross-cluster)
3     System (internal)
4     Leaf Node
5     JetStream
6     Account (internal)

The Routing Tree

As your message flows through the system, NATS builds a routing tree using the Nats-Trace-Hop header. Here’s how it works:

  • First server receives the message (hop 1)
  • If that server sends to three destinations, they become hops 1.1, 1.2, and 1.3
  • If hop 1.2 branches to two more servers, those become 1.2.1 and 1.2.2

This tree structure lets you visualize exactly how your message fans out across the entire system.
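
The same nesting is easy to reproduce in your own tooling: a hop label's depth is just the number of dots it contains. Here is a minimal Go sketch; the hop labels and server descriptions are made up purely for illustration, and lexicographic sorting is only adequate for a small example like this one.

// A minimal sketch of turning dotted hop labels into an indented tree.
// The hop values and descriptions below are made up for illustration.
package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	hops := map[string]string{
		"1":     "first server to receive the message",
		"1.1":   "fan-out to JetStream",
		"1.2":   "gateway to another cluster",
		"1.2.1": "router inside that cluster",
		"1.3":   "gateway to a third cluster",
	}

	// Print parents before their children (lexicographic order works here).
	keys := make([]string, 0, len(hops))
	for k := range hops {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, k := range keys {
		depth := strings.Count(k, ".") // depth = number of dots in the hop label
		fmt.Printf("%s%s  %s\n", strings.Repeat("  ", depth), k, hops[k])
	}
}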

CLI Examples

Basic Trace

Trace a message to see where it goes:

nats trace orders.new "test message"

This shows condensed output with the routing tree visualization.

Detailed Trace

For the full trace data including all server responses:

nats trace orders.new "test message" --trace

Example: Multi-Cluster Trace

Consider a system with three clusters (C1, C2, C3), each with three servers, plus leaf nodes connected to C1 and C3. You have:

  • A JetStream stream on C3N3
  • A subscriber on a leaf node off C1
  • A subscriber on a cluster node in C2

When you publish a trace message from a leaf node connected to C3:

Client → C3L3 (leaf) → C3N2 (cluster node)
  ├→ C3N3 → JetStream
  ├→ C1N1 → C1N3 → C1L2 → Subscriber
  └→ C2N2 → C2N3 → Subscriber

The trace output shows this entire tree with connection types (leaf, router, gateway) and exactly which servers handled each hop.

Account Boundaries

By default, tracing stops at account boundaries for security. If your message crosses from one account to another via imports/exports, the trace won’t follow it unless you explicitly opt in.

To enable cross-account tracing, set allow_trace: true on the relevant import or export. For a service, the exporting account opts in on its export:

exports: [
  { service: "events.>", allow_trace: true }
]

The rule: the account that receives the traced message always opts in. For services, that is the exporter, so allow_trace goes on the export (as above). For streams, it is the importer, so allow_trace goes on the import (sketched below).
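
For the stream case, here is a minimal sketch of the importing account's side, with "A" as a placeholder for whatever the exporting account is actually named and events.> as an example stream subject:

imports: [
  { stream: { account: "A", subject: "events.>" }, allow_trace: true }
]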

When to Use It

  • Debugging delivery issues — see exactly where a message goes (or doesn’t go)
  • Understanding routing — visualize how messages flow through clusters and gateways
  • Verifying configuration — confirm subject mappings and account boundaries work as expected
  • Performance analysis — identify which paths messages take and how many hops are involved

Wrapping Up

Message tracing in NATS gives you instant visibility into your distributed system without any external tools or code changes. Run a command, see where your messages go. One architecture, one tool—no need to cobble together separate monitoring solutions for each layer of your stack.

No more guessing why a message didn’t arrive. No more adding debug logging throughout your services. Just trace it.

Coming up next: We’ll cover how to integrate NATS message tracing with OpenTelemetry for production observability. Stay tuned.


We’re building the future of NATS at Synadia. Want more content like this? Subscribe to our newsletter
