This mini-tutorial shows how to run two NATS servers in local Docker containers, interconnected via the Synadia Cloud platform. NGS is a global, managed network of NATS servers, and the local containers will connect to it as leaf nodes.
Start by creating a free account on https://cloud.synadia.com/.
Once you are logged in, go into the default account (you can manage multiple isolated NGS accounts within your Synadia Cloud account).
In Settings > Limits, increase Leaf Nodes to 2 and save the configuration change. (Your free account comes with up to two leaf connections, but it is initially configured to allow at most one.)
Now navigate to the Users section of your default account and create two users, red and blue. (Users are another way to isolate parts of your system, with customized permissions, access to data, limits, and more.)
For each of the two users, select Get Connected and Download Credentials.
You should now have two files on your computer: default-red.creds and default-blue.creds.
Create a minimal NATS server configuration file, leafnode.conf; it will work for both leaf nodes:
Let's start the first leaf node (for user red) with:
- `-p 4222:4222` maps port 4222 of the server inside the container to your local port 4222.
- `-v leafnode.conf:/leafnode.conf` mounts the configuration file created above at /leafnode.conf in the container.
- `-v /etc/ssl/cert.pem:/etc/ssl/cert.pem` installs root certificates in the container, since the nats image does not bundle them and they are required to verify the TLS certificate presented by NGS.
- `-v default-red.creds:/ngs.creds` installs the credentials for user red at /ngs.creds inside the container.
- `-c /leafnode.conf` is passed as an argument to the container entry point (nats-server).
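Note that Docker treats a `-v` source without a leading slash or `./` as a named volume rather than a bind mount, so if the files do not appear inside the container, a variant with absolute paths is safer (a sketch, assuming the files sit in your current directory):

```
docker run -p 4222:4222 \
  -v "$(pwd)/leafnode.conf:/leafnode.conf" \
  -v /etc/ssl/cert.pem:/etc/ssl/cert.pem \
  -v "$(pwd)/default-red.creds:/ngs.creds" \
  nats:latest -c /leafnode.conf
```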
Launching the container, you should see the NATS server starting successfully:
Now start the second leaf node with two minor tweaks to the command:
Notice we bind to local port 4333 (since 4222 is busy), and we mount the blue credentials.
Congratulations, you have two leaf nodes connected to the NGS global network. Despite this being a global shared environment, your account is completely isolated from the rest of the traffic, and vice versa.
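At this point both containers should be up; you can confirm it with (container names are randomly assigned unless you passed --name):

```
docker ps
```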
Now let's make two clients, connected to the two leaf nodes, talk to each other.
Let's start a simple service on the leaf node of user red:
Using the leaf node run by user blue, let's send a request:
Congratulations, you just connected two leaf nodes to the global NGS network and used them to send a request and receive a response.
Your messages were routed transparently with millions of others, but they were not visible to anyone outside of your Synadia Cloud account.
```
# leafnode.conf
leafnodes {
  remotes = [
    {
      url: "tls://connect.ngs.global"
      credentials: "ngs.creds"
    },
  ]
}
```

```
# Start the first leaf node (user red)
docker run -p 4222:4222 -v leafnode.conf:/leafnode.conf -v /etc/ssl/cert.pem:/etc/ssl/cert.pem -v default-red.creds:/ngs.creds nats:latest -c /leafnode.conf
```

```
[1] 2024/06/14 18:03:51.810719 [INF] Server is ready
[1] 2024/06/14 18:03:52.075951 [INF] 34.159.142.0:7422 - lid:5 - Leafnode connection created for account: $G
[1] 2024/06/14 18:03:52.331354 [INF] 34.159.142.0:7422 - lid:5 - JetStream using domains: local "", remote "ngs"
```

```
# Start the second leaf node (user blue) on local port 4333
docker run -p 4333:4222 -v leafnode.conf:/leafnode.conf -v /etc/ssl/cert.pem:/etc/ssl/cert.pem -v default-blue.creds:/ngs.creds nats:latest -c /leafnode.conf
```

```
# Simple service on the red leaf node
nats -s localhost:4222 reply docker-leaf-test "At {{Time}}, I received your request: {{Request}}"
```

```
# Request from the blue leaf node
$ nats -s localhost:4333 request docker-leaf-test "Hello World"
At 8:15PM, I received your request: Hello World
```

The NATS server is provided as a Docker image on Docker Hub that you can run using the Docker daemon. The NATS server Docker image is extremely lightweight, coming in under 10 MB in size.
Synadia actively maintains and supports the NATS server Docker image.
The nightly build container can be found on Docker Hub.
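For example, to pull it (a sketch, assuming the synadia/nats-server repository hosts the nightly tags):

```
docker pull synadia/nats-server:nightly
```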
To use the Docker container image, install Docker and pull the public image:
Run the NATS server image:
By default the NATS server exposes multiple ports:
4222 is for clients.
8222 is an HTTP management port for information reporting.
6222 is a routing port for clustering.
The default ports may be customized by providing either a -p or -P option on the docker run command line.
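For example, to expose the client port on local port 4444 instead (a sketch):

```
# Map local port 4444 to the client port 4222 inside the container
docker run -p 4444:4222 -p 8222:8222 nats --http_port 8222
```

Clients would then connect with -s localhost:4444.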
The following steps illustrate how to run a server with the ports exposed on a docker network.
First, create the docker network 'nats':
Then start the server:
First, run a server with the ports exposed on the 'nats' docker network:
Next, start additional servers pointing them to the seed server to cause them to form a cluster:
NOTE: Since the Docker image protects routes using credentials, we need to provide them above. The default user and password (ruser/T0pS3cr3t) are extracted from the configuration file bundled with the Docker image.
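In a real deployment you would override these defaults with your own configuration; a minimal sketch (assuming you mount the file into the container and start the server with -c):

```
# cluster.conf -- hypothetical replacement for the bundled route credentials
cluster {
  name: NATS
  listen: 0.0.0.0:6222
  authorization {
    user: myuser          # assumption: pick your own credentials
    password: s3cr3t
  }
  routes = [
    nats://myuser:s3cr3t@nats:6222
  ]
}
```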
To verify the routes are connected, you can make a request to the monitoring endpoint on /routez and confirm that there are now 2 routes:
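For a quick check of just the route count (a sketch, assuming jq is installed):

```
curl -s http://127.0.0.1:8222/routez | jq .num_routes
```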
It is also straightforward to create a cluster using Docker Compose. Below is a simple example that uses a network named 'nats' to create a full mesh cluster.
Now we use Docker Compose to create the cluster that will be using the 'nats' network:
Now the following should work: subscribe on one of the nodes and publish a message from another node. You should receive the message without problems.
Inside the container:
Stopping the seed node, which received the subscription, should trigger an automatic failover to the other nodes:
Output extract:
Publishing again will continue to work after the reconnection:
See the NATS Docker Hub page for more instructions on using the NATS server Docker image.
```
# Pull the public image
docker pull nats
```

```
# Run the NATS server image
docker run nats
```

```
# Create the 'nats' docker network
docker network create nats
```

```
# Start the server with ports exposed on the 'nats' network
docker run --name nats --network nats --rm -p 4222:4222 -p 8222:8222 nats --http_port 8222
```

```
# Run the seed server of the cluster
docker run --name nats --network nats --rm -p 4222:4222 -p 8222:8222 nats --http_port 8222 --cluster_name NATS --cluster nats://0.0.0.0:6222
```

```
[1] 2021/09/28 09:21:56.554756 [INF] Starting nats-server
[1] 2021/09/28 09:21:56.554864 [INF]   Version:  2.6.1
[1] 2021/09/28 09:21:56.554878 [INF]   Git:      [c91f0fe]
[1] 2021/09/28 09:21:56.554894 [INF]   Name:     NDIQLLD2UGGPSAEYBKHW3S2JB2DXIAFHMIWWRUBAX7FC4RTQX4ET2JNQ
[1] 2021/09/28 09:21:56.555001 [INF]   ID:       NDIQLLD2UGGPSAEYBKHW3S2JB2DXIAFHMIWWRUBAX7FC4RTQX4ET2JNQ
[1] 2021/09/28 09:21:56.557658 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2021/09/28 09:21:56.557967 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2021/09/28 09:21:56.559224 [INF] Server is ready
[1] 2021/09/28 09:21:56.559375 [INF] Cluster name is NATS
[1] 2021/09/28 09:21:56.559433 [INF] Listening for route connections on 0.0.0.0:6222
```

```
# Join two more servers to the seed server
docker run --name nats-1 --network nats --rm nats --cluster_name NATS --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222
docker run --name nats-2 --network nats --rm nats --cluster_name NATS --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222
```

```
# Confirm the routes via the monitoring endpoint
curl http://127.0.0.1:8222/routez
```

```
{
  "server_id": "NDIQLLD2UGGPSAEYBKHW3S2JB2DXIAFHMIWWRUBAX7FC4RTQX4ET2JNQ",
  "now": "2021-09-28T09:22:15.8019785Z",
  "num_routes": 2,
  "routes": [
    {
      "rid": 5,
      "remote_id": "NBRAUY3YSVFYU7BFWI2YF5VPQFGO2XCKKAHYZ7ETCMGB3SQY3FDFTYOQ",
      "did_solicit": false,
      "is_configured": false,
      "ip": "172.18.0.3",
      "port": 59092,
      "pending_size": 0,
      "rtt": "1.2505ms",
      "in_msgs": 4,
      "out_msgs": 3,
      "in_bytes": 2714,
      "out_bytes": 1943,
      "subscriptions": 35
    },
    {
      "rid": 6,
      "remote_id": "NA5STTST5GYFCD22M2I3VDJ57LQKOU35ZVWKQY3O5QRFGOPC3RFDIDVJ",
      "did_solicit": false,
      "is_configured": false,
      "ip": "172.18.0.4",
      "port": 47424,
      "pending_size": 0,
      "rtt": "1.2008ms",
      "in_msgs": 4,
      "out_msgs": 1,
      "in_bytes": 2930,
      "out_bytes": 833,
      "subscriptions": 35
    }
  ]
}
```

```
# nats-cluster.yaml
version: "3.5"
services:
  nats:
    image: nats
    ports:
      - "8222:8222"
    command: "--cluster_name NATS --cluster nats://0.0.0.0:6222 --http_port 8222"
    networks: ["nats"]
  nats-1:
    image: nats
    command: "--cluster_name NATS --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222"
    networks: ["nats"]
    depends_on: ["nats"]
  nats-2:
    image: nats
    command: "--cluster_name NATS --cluster nats://0.0.0.0:6222 --routes=nats://ruser:T0pS3cr3t@nats:6222"
    networks: ["nats"]
    depends_on: ["nats"]
networks:
  nats:
    name: nats
```

```
# Bring up the cluster
docker-compose -f nats-cluster.yaml up
```

```
[+] Running 3/3
⠿ Container xxx_nats_1    Created
⠿ Container xxx_nats-1_1  Created
⠿ Container xxx_nats-2_1  Created
Attaching to nats-1_1, nats-2_1, nats_1
nats_1 | [1] 2021/09/28 10:42:36.742844 [INF] Starting nats-server
nats_1 | [1] 2021/09/28 10:42:36.742898 [INF] Version: 2.6.1
nats_1 | [1] 2021/09/28 10:42:36.742913 [INF] Git: [c91f0fe]
nats_1 | [1] 2021/09/28 10:42:36.742929 [INF] Name: NCZIIQ6QT4KT5K5WBP7H2RRBM4MSYD4C2TVSRZOZN57EHX6VTF4EWXAU
nats_1 | [1] 2021/09/28 10:42:36.742954 [INF] ID: NCZIIQ6QT4KT5K5WBP7H2RRBM4MSYD4C2TVSRZOZN57EHX6VTF4EWXAU
nats_1 | [1] 2021/09/28 10:42:36.745289 [INF] Starting http monitor on 0.0.0.0:8222
nats_1 | [1] 2021/09/28 10:42:36.745737 [INF] Listening for client connections on 0.0.0.0:4222
nats_1 | [1] 2021/09/28 10:42:36.750381 [INF] Server is ready
nats_1 | [1] 2021/09/28 10:42:36.750669 [INF] Cluster name is NATS
nats_1 | [1] 2021/09/28 10:42:36.751444 [INF] Listening for route connections on 0.0.0.0:6222
nats-1_1 | [1] 2021/09/28 10:42:37.709888 [INF] Starting nats-server
nats-1_1 | [1] 2021/09/28 10:42:37.709977 [INF] Version: 2.6.1
nats-1_1 | [1] 2021/09/28 10:42:37.709999 [INF] Git: [c91f0fe]
nats-1_1 | [1] 2021/09/28 10:42:37.710023 [INF] Name: NBHTXXY3HYZVPXITYQ73BSDA5CQZINTKYRM23XFI46RWWTTUP5TAXQMB
nats-1_1 | [1] 2021/09/28 10:42:37.710042 [INF] ID: NBHTXXY3HYZVPXITYQ73BSDA5CQZINTKYRM23XFI46RWWTTUP5TAXQMB
nats-1_1 | [1] 2021/09/28 10:42:37.711646 [INF] Listening for client connections on 0.0.0.0:4222
nats-1_1 | [1] 2021/09/28 10:42:37.712197 [INF] Server is ready
nats-1_1 | [1] 2021/09/28 10:42:37.712376 [INF] Cluster name is NATS
nats-1_1 | [1] 2021/09/28 10:42:37.712469 [INF] Listening for route connections on 0.0.0.0:6222
nats_1 | [1] 2021/09/28 10:42:37.718918 [INF] 172.18.0.4:52950 - rid:4 - Route connection created
nats-1_1 | [1] 2021/09/28 10:42:37.719906 [INF] 172.18.0.3:6222 - rid:4 - Route connection created
nats-2_1 | [1] 2021/09/28 10:42:37.731357 [INF] Starting nats-server
nats-2_1 | [1] 2021/09/28 10:42:37.731518 [INF] Version: 2.6.1
nats-2_1 | [1] 2021/09/28 10:42:37.731531 [INF] Git: [c91f0fe]
nats-2_1 | [1] 2021/09/28 10:42:37.731543 [INF] Name: NCG6UQ2N3IHE6OS76TL46RNZBAPHNUCQSA64FDFHG5US2LLJOQLD5ZK2
nats-2_1 | [1] 2021/09/28 10:42:37.731554 [INF] ID: NCG6UQ2N3IHE6OS76TL46RNZBAPHNUCQSA64FDFHG5US2LLJOQLD5ZK2
nats-2_1 | [1] 2021/09/28 10:42:37.732893 [INF] Listening for client connections on 0.0.0.0:4222
nats-2_1 | [1] 2021/09/28 10:42:37.733431 [INF] Server is ready
nats-2_1 | [1] 2021/09/28 10:42:37.733491 [INF] Cluster name is NATS
nats-2_1 | [1] 2021/09/28 10:42:37.733835 [INF] Listening for route connections on 0.0.0.0:6222
nats_1 | [1] 2021/09/28 10:42:37.740860 [INF] 172.18.0.5:54616 - rid:5 - Route connection created
nats-2_1 | [1] 2021/09/28 10:42:37.741557 [INF] 172.18.0.3:6222 - rid:4 - Route connection created
nats-1_1 | [1] 2021/09/28 10:42:37.743981 [INF] 172.18.0.5:6222 - rid:5 - Route connection created
nats-2_1  | [1] 2021/09/28 10:42:37.744332 [INF] 172.18.0.4:40250 - rid:5 - Route connection created
```

```
# Start a nats-box container on the same network for testing
docker run --network nats --rm -it natsio/nats-box
```

```
# Inside the container: subscribe on the seed node, publish via the others
nats sub -s nats://nats:4222 hello &
nats pub -s "nats://nats-1:4222" hello first
nats pub -s "nats://nats-2:4222" hello second
```

```
# Stop the seed node, which received the subscription
docker-compose -f nats-cluster.yaml stop nats
```

```
...
16e55f1c4f3c:~# 10:47:28 Disconnected due to: EOF, will attempt reconnect
10:47:28 Disconnected due to: EOF, will attempt reconnect
10:47:28 Reconnected [nats://172.18.0.4:4222]
```

```
# Publishing continues to work after the reconnection
nats pub -s "nats://nats-1:4222" hello again
nats pub -s "nats://nats-2:4222" hello again
```

Create an overlay network for the cluster (in this example, nats-cluster-example), and instantiate an initial NATS server.
First create an overlay network:
docker network create --driver overlay nats-cluster-exampleNext instantiate an initial "seed" server for a NATS cluster listening for other servers to join route to it on port 6222:
The 2nd step is to create another service which connects to the NATS server within the overlay network. Note that we connect to to the server at nats-cluster-node-1:
Now you can add more nodes to the Swarm cluster via additional docker services, referencing the seed server in the -routes parameter:
In this case, nats-cluster-node-1 is seeding the rest of the cluster through the autodiscovery feature. Now NATS servers nats-cluster-node-1 and nats-cluster-node-2 are clustered together.
Add in more replicas of the subscriber:
Then confirm the distribution on the Docker Swarm cluster:
The sample output after adding more NATS server nodes to the cluster is below. Notice that the client is dynamically made aware of the new nodes joining the cluster via auto discovery!
Sample output after adding more workers, which can reply back (since they ignore their own responses):
From here you can experiment with growing the NATS cluster by simply adding servers with new service names that route to the seed server nats-cluster-node-1. As you've seen above, clients will automatically be updated to know that new servers are available in the cluster.
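For example, a fourth node would follow the same pattern (the service name nats-cluster-node-4 is just an illustration):

```
docker service create --network nats-cluster-example --name nats-cluster-node-4 nats:1.0.0 -cluster nats://0.0.0.0:6222 -routes nats://nats-cluster-node-1:6222 -DV
```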
This mini-tutorial shows how to run a NATS server with JetStream enabled in a local Docker container. This enables quick and consequence-free experimentation with the many features of JetStream.
Using the official nats image, start a server. The -js option is passed to the server to enable JetStream. The -p option forwards your local port 4222 to the server inside the container; 4222 is the default client connection port.

```
docker run -p 4222:4222 nats -js
```

To persist JetStream data to a volume, you can use the -v option in combination with -sd:

```
docker run -p 4222:4222 -v nats:/data nats -js -sd /data
```

With the server running, use nats bench to create a stream and publish some messages to it (the nats bench publisher command appears with the examples below).
JetStream persists the messages (on disk by default). You can then consume them with the nats bench subscriber command shown below.
You can use nats to inspect various aspects of the stream, for example with the nats stream list command shown below.
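You can also drill into a single stream (a sketch; benchstream is the stream created by nats bench above):

```
nats -s localhost:4222 stream info benchstream
```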
In this tutorial you run the NATS server Docker image. The Docker image provides an instance of the NATS Server. Synadia actively maintains and supports the nats-server Docker image. The NATS image is only 6 MB in size.
1. Set up Docker.
See the Docker documentation for guidance.
The easiest way to run Docker is to use Docker Desktop.
2. Run the nats-server Docker image.
3. Verify that the NATS server is running.
You should see the following:
Followed by this, indicating that the NATS server is running:
Notice how quickly the NATS server Docker image downloads. It is a mere 6 MB in size.
Start a lightweight Docker container:
Or you can also mount local creds via a volume:
Install nats.py along with its dependencies, including nkeys support:
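The asyncio-nats-client package used in the example below is the legacy distribution of nats.py; newer releases are published as nats-py, so a present-day install would look like this (a sketch):

```
pip install "nats-py[nkeys]"
```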
Get the Python examples using curl:
Create a subscription that lingers:
Publish a message:
```
# Swarm: instantiate the initial "seed" server
docker service create --network nats-cluster-example --name nats-cluster-node-1 nats:1.0.0 -cluster nats://0.0.0.0:6222 -DV
```

```
# JetStream: create a stream and publish 100,000 messages
nats bench -s localhost:4222 benchsubject --js --pub 1 --msgs=100000
```

```
# Swarm: add a second server routed to the seed
docker service create --network nats-cluster-example --name nats-cluster-node-2 nats:1.0.0 -cluster nats://0.0.0.0:6222 -routes nats://nats-cluster-node-1:6222 -DV
```

```
# Python: start a lightweight container
docker run --entrypoint /bin/bash -it python:3.8-slim-buster
```

```
# Python: or mount local creds via a volume
docker run --entrypoint /bin/bash -v $HOME/.nkeys/creds/synadia/NGS/:/creds -it python:3.8-slim-buster
```

```
# Python: install nats.py and its dependencies
apt-get update && apt-get install -y build-essential curl
pip install asyncio-nats-client[nkeys]
```

```
# Python: fetch the example scripts
curl -o nats-pub.py -O -L https://raw.githubusercontent.com/nats-io/nats.py/master/examples/nats-pub/__main__.py
curl -o nats-sub.py -O -L https://raw.githubusercontent.com/nats-io/nats.py/master/examples/nats-sub/__main__.py
```

```
# Python: create a subscription that lingers
python nats-sub.py --creds /creds/NGS.creds -s tls://connect.ngs.global:4222 hello &
```

4. Test the NATS server to verify it is running.
An easy way to test the client connection port is to use telnet.
Expected result:
You can also test the monitoring endpoint, viewing http://localhost:8222 with a browser.
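From the command line, you can query the monitoring endpoints directly; /varz reports general server information:

```
curl http://localhost:8222/varz
```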
```
# Run the nats-server Docker image (step 2)
docker run -p 4222:4222 -p 8222:8222 -p 6222:6222 --name nats-server -ti nats:latest
```

```
Unable to find image 'nats:latest' locally
latest: Pulling from library/nats
2d3d00b0941f: Pull complete
24bc6bd33ea7: Pull complete
Digest: sha256:47b825feb34e545317c4ad122bd1a752a3172bbbc72104fc7fb5e57cf90f79e4
Status: Downloaded newer image for nats:latest
```

```
# Test the client connection port
telnet localhost 4222
```

```
# Swarm: create the ruby-nats client service on the overlay network
docker service create --name ruby-nats --network nats-cluster-example wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0 -e '
NATS.on_error do |e|
  puts "ERROR: #{e}"
end

NATS.start(:servers => ["nats://nats-cluster-node-1:4222"]) do |nc|
  inbox = NATS.create_inbox
  puts "[#{Time.now}] Connected to NATS at #{nc.connected_server}, inbox: #{inbox}"

  nc.subscribe(inbox) do |msg, reply|
    puts "[#{Time.now}] Received reply - #{msg}"
  end

  nc.subscribe("hello") do |msg, reply|
    next if reply == inbox
    puts "[#{Time.now}] Received greeting - #{msg} - #{reply}"
    nc.publish(reply, "world")
  end

  EM.add_periodic_timer(1) do
    puts "[#{Time.now}] Saying hi (servers in pool: #{nc.server_pool}"
    nc.publish("hello", "hi", inbox)
  end
end'
```

```
# Scale up the subscriber replicas
docker service scale ruby-nats=3
```

```
# Confirm the distribution on the Docker Swarm cluster
docker service ps ruby-nats
```

```
ID                         NAME         IMAGE                                     NODE    DESIRED STATE  CURRENT STATE          ERROR
25skxso8honyhuznu15e4989m  ruby-nats.1  wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0  node-1  Running        Running 2 minutes ago
0017lut0u3wj153yvp0uxr8yo  ruby-nats.2  wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0  node-1  Running        Running 2 minutes ago
2sxl8rw6vm99x622efbdmkb96  ruby-nats.3  wallyqs/ruby-nats:ruby-2.3.1-nats-v0.8.0  node-2  Running        Running 2 minutes ago
```

```
[2016-08-15 12:51:52 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}]
[2016-08-15 12:51:53 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}]
[2016-08-15 12:51:54 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}]
[2016-08-15 12:51:55 +0000] Saying hi (servers in pool: [{:uri=>#<URI::Generic nats://10.0.1.3:4222>, :was_connected=>true, :reconnect_attempts=>0}, {:uri=>#<URI::Generic nats://10.0.1.7:4222>, :reconnect_attempts=>0}, {:uri=>#<URI::Generic nats://10.0.1.6:4222>, :reconnect_attempts=>0}]
```

```
[2016-08-15 16:06:26 +0000] Received reply - world
[2016-08-15 16:06:26 +0000] Received reply - world
[2016-08-15 16:06:27 +0000] Received greeting - hi - _INBOX.b8d8c01753d78e562e4dc561f1
[2016-08-15 16:06:27 +0000] Received greeting - hi - _INBOX.4c35d18701979f8c8ed7e5f6ea
```

```
# Swarm: add a third server routed to the seed
docker service create --network nats-cluster-example --name nats-cluster-node-3 nats:1.0.0 -cluster nats://0.0.0.0:6222 -routes nats://nats-cluster-node-1:6222 -DV
```

```
# JetStream: consume the messages with three subscribers
nats bench -s localhost:4222 benchsubject --js --sub 3 --msgs=100000
```

```
# JetStream: inspect the stream
nats -s localhost:4222 stream list
```
```
╭────────────────────────────────────────────────────────────────────────────────────╮
│                                       Streams                                      │
├─────────────┬─────────────┬─────────────────────┬──────────┬────────┬──────────────┤
│ Name        │ Description │ Created             │ Messages │ Size   │ Last Message │
├─────────────┼─────────────┼─────────────────────┼──────────┼────────┼──────────────┤
│ benchstream │             │ 2024-06-07 20:26:38 │ 100,000  │ 16 MiB │ 35s          │
╰─────────────┴─────────────┴─────────────────────┴──────────┴────────┴──────────────╯
```

```
# Python: publish a message
python nats-pub.py --creds /creds/NGS.creds -s tls://connect.ngs.global:4222 hello -d world
```

```
[1] 2019/06/01 18:34:19.605144 [INF] Starting nats-server version 2.0.0
[1] 2019/06/01 18:34:19.605191 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/06/01 18:34:19.605286 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/06/01 18:34:19.605312 [INF] Server is ready
[1] 2019/06/01 18:34:19.608756 [INF] Listening for route connections on 0.0.0.0:6222
```

```
# Expected result of the telnet test
Trying ::1...
Connected to localhost.
Escape character is '^]'.
INFO {"server_id":"NDP7NP2P2KADDDUUBUDG6VSSWKCW4IC5BQHAYVMLVAJEGZITE5XP7O5J","version":"2.0.0","proto":1,"go":"go1.11.10","host":"0.0.0.0","port":4222,"max_payload":1048576,"client_id":13249}
```