NATS Weekly #40
Week of August 15 - 21, 2022
🗞 Announcements, writings, and projects
A short list of announcements, blog posts, project updates, and other news.
️ 💁 News
News or announcements that don't fit into the other categories.
Clarification on NATS headers in ADR-4 to note that application-defined headers are case-preserving as defined by the publisher. As a reminder, headers leveraged by the server, such as Nats-Msg-Id, are also case-sensitive. Each client library provides constants that ensure the proper casing is used.
Official releases from NATS repos and others in the ecosystem.
nats.py - v2.1.7
k8s/surveyor - v0.13.3
k8s/nats - v0.17.5
prometheus-nats-exporter - v0.10.0
nats-surveyor - v0.3.1
nsc - v2.7.2
Github Discussions from various NATS repositories.
nats.rs - Timeouts in async-nats
nats.java - Store object for Java client
New or updated examples on NATS by Example.
Queue Push Consumer - Go
💡 Recently asked questions
Questions sourced from Slack, Twitter, or individuals. Responses and examples are in my own words, unless otherwise noted.
How can I get a baseline benchmark for a workload?
Effective benchmarking is a very nuanced topic, especially when considering factors like the network, hardware differences, available resources, number of publishers, number of subscribers, payload size, and whether streams and consumers are being used (and their configuration).
A topic this nuanced really deserves a full-fledged guide on how to do a proper assessment given all of these factors, but this will serve as a quick introduction to the nats bench command available in the CLI. Check out a set of examples for expanded usage in the docs.
For starters, you need a server running locally on the hardware/VM/container runtime you want to benchmark against. Simply download the latest stable release and run nats-server (we will not enable JetStream for this example).
It is recommended to run it on bare metal (your laptop) for starters to get a baseline. If you have access to another bare metal server, feel free to run benchmarks there. Of course, this is a baseline and not equivalent to the production environment. For example, if you intend to deploy to VMs or Kubernetes, you will likely see higher local latencies due to the virtualization overhead in the networking stack, but your observations may vary.
Now that you have the server running (listening on the default 0.0.0.0:4222), you can open two shells, one for the subscriber...
nats bench test-subject --reply --sub 1
...and one for the publisher.
nats bench test-subject --request --pub 1
You will see the publisher command start publishing with a progress bar. The output will be something like:
Pub stats: 20,860 msgs/sec ~ 2.55 MB/sec
Since the publisher was in request-reply mode, each publish waited for a reply before publishing the next message. To calculate round-trip latency, we can take the inverse of the throughput (1/20,860 seconds per message) and multiply by one million to get microsecond resolution, which results in ~47.9 μs. This also means the one-way latency per publish is roughly half that, or ~24 μs. For reference, the default payload size is 128 bytes and 100,000 messages are published, both of which can be changed.
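The arithmetic above can be sketched as a quick calculation (the 20,860 msgs/sec figure is the example output from the run above):

```python
# Estimate latency from a request-reply benchmark's throughput figure.
msgs_per_sec = 20_860  # Pub stats from the example `nats bench` run

# Each message is a full request-reply cycle, so the round-trip time
# is the inverse of throughput, converted to microseconds.
rtt_us = 1 / msgs_per_sec * 1_000_000

# One-way latency is roughly half the round trip.
one_way_us = rtt_us / 2

print(f"round trip: {rtt_us:.1f} us, one way: {one_way_us:.1f} us")
```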
By bumping the publisher count with --pub 4, we can emulate four concurrent publishers, each sending a proportion of the messages (25,000 each in this case). Running this for me, I get:
Pub stats: 48,209 msgs/sec ~ 5.88 MB/sec
This may seem like a toy example, but by trying various payload sizes, publishers, and/or subscribers (and many of the other options in the bench command), you can simulate quite a few setups. Of course, you can also run the server, publishers, and subscribers on separate machines, which allows you to benchmark the round-trip time for these various workloads over the network.
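To make such a sweep repeatable, you can generate the command lines programmatically. This is a minimal sketch (not from the article): the flags shown (--request, --pub, --size, --msgs) are real nats bench options, but the subject name and the parameter grid are arbitrary choices for illustration, and the commands are only printed, not executed.

```python
from itertools import product

payload_sizes = [128, 1024, 4096]  # bytes per message
publisher_counts = [1, 2, 4]       # concurrent publishers

def bench_cmd(size: int, pubs: int, subject: str = "test-subject") -> list:
    """Build an argv list for one `nats bench` request-reply run."""
    return [
        "nats", "bench", subject,
        "--request",
        "--pub", str(pubs),
        "--size", str(size),
        "--msgs", "100000",
    ]

# One command per (payload size, publisher count) combination.
commands = [bench_cmd(s, p) for s, p in product(payload_sizes, publisher_counts)]
for cmd in commands:
    print(" ".join(cmd))
```

Each printed line can be run against a live server (paired with a matching --reply subscriber), and the resulting msgs/sec figures compared across the grid.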
How can I contribute to NATS?
Thanks to Ludovico Russo for asking this question in Slack!
The primary type of contribution that is invaluable to the NATS team and community is helping respond to questions in Slack and in GitHub issues and discussions. This may sound nebulous, or you might feel like you don't know enough, but if you see a question, bug report, or feature request that seems to be missing information, a simple follow-up question can help with discovery.
Arguably more important for the individuals involved: people like to be heard and acknowledged. With the NATS core team being a dozen or so people, responding to every single question is a tall order while also doing net-new development. Community members who help facilitate Q&A spread out this effort and knowledge.
The second best type of contribution includes documentation and examples. The NATS team has heard the feedback and committed to a revamp of the docs site with a set of releases over the next couple of months. This includes aesthetic, navigation, and search improvements, but more importantly, a revised structure and more content to fill known gaps.
As many folks in Slack have done thus far, if you find areas of NATS (server or clients) where docs can't be found, are difficult to find, or lack sufficient information or clarity, please (constructively) call this out. The team pays attention and keeps a running list of documentation items to work on.
Supplementary to documentation are curated, detailed, and runnable examples that people learning NATS or a specific feature for the first time can reference. I started NATS by Example to act as this resource and optimized it for quick turnaround of new examples by relying on a continuous integration build process.
Each example is intended to be implemented across clients, so if you, as a contributor, are interested in writing or scaffolding an example in your client of choice, this would really help others in the community. I or others on the NATS team will review and work with you to get the example published.
A great side effect of writing these examples is that you get more deeply acquainted with how NATS works, be it the server or the client. Each example typically has more detailed commentary than code and should be free of assumptions.
After writing a handful of examples, this may become a gateway to contributing some small patches to the client libraries or the server repos for open and accepted issues.
Please reach out if you have any questions on how to get started.