Introducing synadia-agents: a protocol for agents, and SDKs for building on top of it

Look at organizations where AI is doing real work at scale. AI isn’t in any single place — and certainly not in a chatbox. An agent in the IDE writing code. An agent in CI reviewing it. An agent in support triaging tickets. An agent on a vehicle or a factory line, pinned to data that can’t leave the edge.
Each one was the right tool for its job. None of them were built to talk to each other.
The model isn’t the bottleneck anymore. Coordinating across the fleet you’ve already deployed — across providers, runtimes, and the physical places agents live — is.
Today we’re publishing the substrate for that coordination. It’s at github.com/synadia-ai/synadia-agents. The rest of this post is a tour of what’s in there and how it lays the foundation for our meta-agent vision on synadia.com/agentic-ai.
An end-to-end demo of the repo is on the launch page. Throughout this post, callouts like this one point to specific moments where the demo shows the primitive in action.
There’s a vocabulary collapse happening in agentic AI. Every layer is trying to own the loop. SDKs are becoming runtimes. Frameworks are bundling harnesses. Harnesses are bundling sandboxes.
The gap is between loops. When you have four harnesses doing four different jobs across two clouds and a factory floor, the question isn’t “which loop wins?” It’s “what do all of them stand on?”
The repo is our answer to the second question.
The center of the repo is a protocol, not a runtime. An agent is anything that registers as a NATS micro service named agents and serves three endpoints: prompt, status, and hb (heartbeat). Subjects follow agents.{verb}.{token}.{owner}.{session}. Requests are plain UTF-8 text or a JSON envelope with optional base64 attachments. Responses stream typed JSON chunks — response, status, query — and end with an empty-body terminator. Errors ride on the Nats-Service-Error-Code header.
That’s it. Two pages of spec. Everything else in the repo is built on top.
The choice to make this a NATS micro service — not a bespoke RPC — is critical. Discovery is standard NATS micro: `nats req '$SRV.INFO.agents'` lists everyone on the fabric. Heartbeats are standard NATS micro. Multi-tenancy, account isolation, connectivity from cloud to factory floor — those properties already exist in NATS and the agents inherit them for free.
Demo, ~5:30. `nats micro list` and `nats request`, run directly from the terminal — no client library — discover and prompt running agents. The protocol is the spec; the SDKs streamline development; NATS domain knowledge isn’t necessary.
The big picture: Connectivity that spans cloud to factory. The protocol is intentionally thin so the substrate underneath can be substrate.
agents/ contains pre-built plugins that put existing harnesses on the fabric without writing code:
| Agent | Token | What it is |
|---|---|---|
| Claude Code | cc | Anthropic’s coding agent, on NATS |
| OpenClaw | oc | Independent open-source coding agent and personal AI assistant |
| PI Agent | pi | The Pi coding agent (pi.dev) |
| Hermes | hermes | Self-learning personal assistant (upstream fork in progress) |
| open-agent | open-agent | Bridge for vercel-labs/open-agents, with a LocalSandbox path |
| DSPy ReAct | dspy | Example agent built on DSPy (in examples/) |
Each plugin is a thin shim — register as an agents service with a token, forward prompt requests into the harness, stream the harness’s output back as protocol chunks. None of them know about each other. The fabric is what knows about all of them.
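The shim pattern described above can be sketched in a few lines. This is illustrative only — the chunk field names and the stand-in `run_harness` generator are assumptions, not the plugins’ actual code — but it shows the whole job: forward, stream typed chunks, terminate.

```python
import json
from typing import Iterable, Iterator

def run_harness(prompt: str) -> Iterable[str]:
    # Stand-in for a real harness's streaming output
    # (Claude Code, OpenClaw, Pi, ...).
    yield f"echo: {prompt}"

def handle_prompt(prompt: str) -> Iterator[bytes]:
    # Forward the request into the harness, stream its output back
    # as typed JSON chunks, then end with the empty-body terminator.
    for piece in run_harness(prompt):
        yield json.dumps({"type": "response", "data": piece}).encode("utf-8")
    yield b""  # empty-body terminator ends the stream

chunks = list(handle_prompt("hello"))
```

Because the shim only translates between one harness and the protocol, adding a new harness never touches the others — the fabric is the only shared surface.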
Demo, ~1:00. A single UI discovers Hermes, OpenClaw, and Claude Code as they come online — three different harnesses, three vendors, three runtimes — each registering itself the moment it starts. No DNS changes. No firewall holes. No API gateway config.
The big picture: Heterogeneous agents, one fabric. Multi-provider isn’t a future requirement; it’s the design assumption.
client-sdk/typescript/ and client-sdk/python/ are the libraries you reach for when you’re the caller: discovering agents, prompting them, streaming responses, managing sessions. npm i @synadia-ai/agents or pip install synadia-ai-agents.
This is the meta agent’s library. Want to fan a prompt out to every coding agent on your fabric and merge the responses? Caller SDK. Want to write a CLI that lets your team prompt any agent without remembering subject names? Caller SDK. Want to build a UI that shows every agent on the network and lets your team operate them? Caller SDK.
Demo, ~4:25. Four agents selected at once, one prompt fanned out, four response streams reassembled into one virtual session. The smallest meta agent it’s possible to ship.
The big picture: Behind every agent orchestrator is a meta agent. Meta agents will be tailored to fit their organizations and use cases. One size does not fit all. Any meta agent can be built on the caller SDK.
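The fan-out-and-merge pattern at the heart of a minimal meta agent reduces to a few lines of asyncio. This sketch uses a stand-in `prompt_agent` coroutine in place of a real caller-SDK request, so names and signatures here are assumptions, not the SDK’s API.

```python
import asyncio

async def prompt_agent(token: str, prompt: str) -> str:
    # Stand-in for a caller-SDK request to one agent on the fabric.
    await asyncio.sleep(0)
    return f"[{token}] answer to: {prompt}"

async def fan_out(tokens: list[str], prompt: str) -> dict[str, str]:
    # One prompt, many agents; responses merged into a single view,
    # keyed by agent token.
    answers = await asyncio.gather(*(prompt_agent(t, prompt) for t in tokens))
    return dict(zip(tokens, answers))

results = asyncio.run(fan_out(["cc", "oc", "pi"], "summarize the repo"))
```

Swapping the stand-in for real caller-SDK requests is the only change needed to turn this into the “smallest meta agent” the demo shows.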
agent-sdk/typescript/ and agent-sdk/python/ are the other half: libraries for hosting an agent so it shows up on the fabric. npm i @synadia-ai/agent-service or pip install synadia-ai-agent-service.
The split between the two SDKs matters. Most agent libraries assume one process and one known endpoint — agent.invoke(...) in the same memory space, or an HTTP call to a service you already know about. There’s nothing to discover. The protocol is built for the opposite shape: many processes, many agents, none of them known to the caller in advance. The SDKs reflect that. Caller-only consumers install just the caller package; agent-host authors install both halves.
The host SDK is also how you go headless. The plugins in agents/ wrap CLI binaries, but you don’t have to. The host SDK lets you build an agent service that exposes the same prompt/status/hb endpoints with no terminal, no executable, and full programmatic control over session lifecycle — including ephemeral sessions with TTLs and fan-out across multiple working directories.
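Programmatic session lifecycle with TTLs can be sketched as a small registry. This is a conceptual illustration, not the host SDK’s actual session API; the class and method names are assumptions.

```python
import time

class SessionRegistry:
    """Tracks ephemeral sessions, each with a time-to-live."""

    def __init__(self) -> None:
        self._sessions: dict[str, float] = {}  # session id -> expiry (monotonic)

    def open(self, session: str, ttl: float) -> None:
        self._sessions[session] = time.monotonic() + ttl

    def sweep(self) -> list[str]:
        # Drop expired sessions; in a real host, each removal would also
        # stop that session's heartbeat on the fabric.
        now = time.monotonic()
        expired = [s for s, t in self._sessions.items() if t <= now]
        for s in expired:
            del self._sessions[s]
        return expired

reg = SessionRegistry()
reg.open("sess-1", ttl=0.01)
time.sleep(0.02)
gone = reg.sweep()
```

The point of the sketch: when sessions are first-class, fan-out across working directories is just opening N sessions instead of one.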
Demo, ~7:30. A headless Pi controller — no terminal, no Pi executable — spawns three concurrent sessions across `/tmp`, `~`, and `/usr`, fans out the same prompt, gets three answers back. Same pattern works for Claude Code, with permission modes (`auto`, `plan`, `bypass`, `default`) riding on the protocol.
The big picture: Identity for every agent type and every running instance. Each session is a distinct, addressable participant on the fabric — not a hidden child process.
tests/test_interop_e2e.py runs the TypeScript reference agent against the Python client. Both halves of the protocol, both languages, validated on every commit.
This sounds like a small thing, but a protocol is only as real as its second implementation. The interop test is what keeps @synadia-ai/agents and synadia-ai-agents honest with each other — and what keeps the spec honest with both of them. As more languages land, the same interop test extends to cover them.
The big picture: We’re shipping the fabric, not the meta agent. The fabric is the spec. The SDKs are conveniences — but only if they adhere to the spec.
examples/ contains the DSPy ReAct example and examples/open-agent-vercel/, a working integration with Vercel’s open-agents runtime. They’re not published as packages — they’re recipes for “how would I put my-favorite-harness on this fabric?”
If your harness isn’t in agents/ yet, this is where to start.
The page on synadia.com/agentic-ai lays out five capabilities Synadia is building toward the meta agent: connectivity, identity, durable state and handoff, audit trail, and zero-trust security.
Three of them are in the repo today, end to end:
| Capability | What’s shipping |
|---|---|
| Connectivity | Protocol + plugins; agents discoverable in one NATS round-trip |
| Identity | Subject-level addressing per {token, owner, session}; per-instance heartbeats |
| Audit trail | Every prompt and response is a NATS message; subject-level inspection comes free |
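Because every prompt and response is a NATS message, auditing reduces to subscribing on subject patterns. Here is a sketch of NATS wildcard semantics in plain Python (`*` matches exactly one token, `>` matches one or more trailing tokens); the helper itself is illustrative, since a real deployment would just subscribe with these patterns.

```python
def subject_matches(pattern: str, subject: str) -> bool:
    # NATS wildcard semantics: '*' matches exactly one token,
    # '>' matches one or more trailing tokens.
    p, s = pattern.split("."), subject.split(".")
    for i, tok in enumerate(p):
        if tok == ">":
            return len(s) > i
        if i >= len(s) or (tok != "*" and tok != s[i]):
            return False
    return len(p) == len(s)

# Audit every prompt sent to any agent owned by "alice":
assert subject_matches("agents.prompt.*.alice.>", "agents.prompt.cc.alice.sess-1")
```

A subscription on `agents.prompt.>` sees every prompt on the fabric; narrowing by token or owner is just a narrower pattern, with no agent-side instrumentation.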
Two are roadmap: durable state and handoff, and zero-trust security.
That’s the gap between what’s in the repo today and the meta agent fabric we’re describing. It’s a much shorter gap than where the rest of the industry is.
A CLI that fans your team’s standup question across every coding agent on the network and posts a digest to Slack. A meta agent that watches a ticket queue, dispatches the right harness per issue, and reports back. A factory-floor monitor that wakes a vision agent when telemetry crosses a threshold and a coding agent when remediation needs a config change.
The repo doesn’t ship any of these. The protocol makes them a weekend project.
Three doors:

- TypeScript: npm i @synadia-ai/agents
- Python: pip install synadia-ai-agents
- Clone the repo — it’s wired so you can run the demo locally.

