Deploying NATS with Helm
The NATS Helm charts can be used to deploy a StatefulSet of NATS servers using Helm templates that are easy to extend. Using Helm 3, you can add the NATS Helm repository and install the chart as follows:
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
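To pick up newer chart versions later, refresh the repository and upgrade the release (my-nats being the release name used above):

helm repo update
helm upgrade my-nats nats/nats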
The ArtifactHub NATS Helm package contains a complete list of configuration options. Some common scenarios are outlined below.

Configuration

Server Image

nats:
  image: nats:2.7.4-alpine
  pullPolicy: IfNotPresent

Limits

nats:
  # The number of connect attempts against discovered routes.
  connectRetries: 30

  # How many seconds should pass before sending a PING
  # to a client that has no activity.
  pingInterval:

  # Server settings.
  limits:
    maxConnections:
    maxSubscriptions:
    maxControlLine:
    maxPayload:

    writeDeadline:
    maxPending:
    maxPings:
    lameDuckDuration:

  # Number of seconds to wait for client connections to end after the pod termination is requested
  terminationGracePeriodSeconds: 60

Logging

Note: It is not recommended to enable trace or debug in production, since doing so will significantly degrade performance.
nats:
  logging:
    debug:
    trace:
    logtime:
    connectErrorReports:
    reconnectErrorReports:

TLS setup for client connections

You can find more on how to set up and troubleshoot TLS connections at: running-a-nats-server/configuration/securing_nats/tls
nats:
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
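The chart only references the secret; it has to exist in the release namespace beforehand. A minimal sketch of creating it with kubectl, assuming ca.crt, tls.crt, and tls.key are in the current directory:

kubectl create secret generic nats-client-tls \
  --from-file=ca.crt --from-file=tls.crt --from-file=tls.key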

Clustering

If clustering is enabled, then a 3-node cluster will be set up. More info at: running-a-nats-server/configuration/clustering#nats-server-clustering
cluster:
  enabled: true
  replicas: 3

  tls:
    secret:
      name: nats-server-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"
Example:
helm install nats nats/nats --set cluster.enabled=true
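Once the release is up, you can confirm that all three pods came up. The label selector below is the one used by the LoadBalancer Service example later on this page; it may need adjusting if your chart version labels pods differently:

kubectl get pods -l app.kubernetes.io/name=nats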

Leafnodes

Leafnode connections can be used to extend a cluster. More info at: running-a-nats-server/configuration/leafnodes
leafnodes:
  enabled: true
  remotes:
    - url: "tls://connect.ngs.global:7422"
      # credentials:
      #   secret:
      #     name: leafnode-creds
      #     key: TA.creds
      # tls:
      #   secret:
      #     name: nats-leafnode-tls
      #   ca: "ca.crt"
      #   cert: "tls.crt"
      #   key: "tls.key"

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/running-a-nats-server/configuration/securing_nats/tls
  #
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"

Websocket Configuration

websocket:
  enabled: true
  port: 443

  tls:
    secret:
      name: nats-tls
    cert: "fullchain.pem"
    key: "privkey.pem"
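Recent versions of the nats CLI can connect over websocket directly. A quick smoke test, assuming nats.example.com is a hypothetical DNS name pointing at the websocket port exposed above:

nats sub hello -s wss://nats.example.com:443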

Setting up External Access

Using HostPorts

If both external access and advertisements are enabled, an initializer container will be used to gather the public IPs. This container requires an RBAC policy with enough permissions to look up the public IP of the node where it is running.
For example, to set up external access for a cluster and advertise the public IP to clients:
nats:
  # Toggle whether to enable external access.
  # This binds a host port for clients, gateways and leafnodes.
  externalAccess: true

  # Toggle to disable client advertisements (connect_urls),
  # in case of running behind a load balancer (which is not recommended)
  # it might be required to disable advertisements.
  advertise: true

  # In case both external access and advertise are enabled
  # then a service account would be required to be able to
  # gather the public IP from a node.
  serviceAccount: "nats-server"
Where the service account named nats-server has, for example, the following RBAC policy:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-server
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nats-server
rules:
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nats-server-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nats-server
subjects:
- kind: ServiceAccount
  name: nats-server
  namespace: default
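Assuming the manifest above is saved as rbac.yaml, you can apply it and verify that the permission is in place:

kubectl apply -f rbac.yaml
kubectl auth can-i get nodes --as=system:serviceaccount:default:nats-server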
The container image of the initializer can be customized via:
bootconfig:
  image: natsio/nats-boot-config:latest
  pullPolicy: IfNotPresent

Using LoadBalancers

When using a load balancer for external access, it is recommended to set noAdvertise to true so that the internal IPs of the NATS servers are not advertised to clients connecting through the load balancer.
nats:
  image: nats:alpine

cluster:
  enabled: true
  noAdvertise: true

leafnodes:
  enabled: true
  noAdvertise: true

natsbox:
  enabled: true
You could then use an L4-enabled load balancer to connect to NATS, for example:
apiVersion: v1
kind: Service
metadata:
  name: nats-lb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: nats
  ports:
    - protocol: TCP
      port: 4222
      targetPort: 4222
      name: nats
    - protocol: TCP
      port: 7422
      targetPort: 7422
      name: leafnodes
    - protocol: TCP
      port: 7522
      targetPort: 7522
      name: gateways
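Assuming the manifest is saved as nats-lb.yaml, apply it and wait for the cloud provider to assign the external IP:

kubectl apply -f nats-lb.yaml
kubectl get svc nats-lb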

Gateways

A supercluster can be formed by pointing to remote gateways. You can find more about gateways in the NATS documentation: running-a-nats-server/configuration/gateways.
gateway:
  enabled: false
  name: 'default'

  #############################
  #                           #
  #  List of remote gateways  #
  #                           #
  #############################
  # gateways:
  #   - name: other
  #     url: nats://my-gateway-url:7522

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  #
  #  You can find more on how to set up and troubleshoot TLS connections at:
  #
  #  https://docs.nats.io/running-a-nats-server/configuration/securing_nats/tls
  #
  # tls:
  #   secret:
  #     name: nats-client-tls
  #   ca: "ca.crt"
  #   cert: "tls.crt"
  #   key: "tls.key"

Auth setup

Auth with a Memory Resolver

auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ##############################
    #                            #
    #  Memory resolver settings  #
    #                            #
    ##############################
    type: memory

    #
    # Use a configmap reference which will be mounted
    # into the container.
    #
    configMap:
      name: nats-accounts
      key: resolver.conf
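The resolver.conf mounted above has to be generated beforehand. A sketch of one way to do it, assuming the operator and accounts are managed with the nsc tool and that the ConfigMap name matches the values above:

nsc generate config --mem-resolver > resolver.conf
kubectl create configmap nats-accounts --from-file=resolver.conf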

Auth using an Account Server Resolver

auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ##########################
    #                        #
    #  URL resolver settings #
    #                        #
    ##########################
    type: URL
    url: "http://nats-account-server:9090/jwt/v1/accounts/"
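The URL above assumes a nats-account-server is already reachable inside the cluster under that service name. As a quick sanity check, recent account server versions expose a health endpoint that can be probed from any pod that has curl:

curl http://nats-account-server:9090/healthz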

JetStream

Setting up Memory and File Storage

File Storage is always recommended, since JetStream's RAFT Meta Group will be persisted to file storage. The Storage Class used should be block storage. NFS is not recommended.
nats:
  image: nats:alpine

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: 2Gi

    fileStorage:
      enabled: true
      size: 10Gi
      # storageClassName: gp2 # NOTE: AWS setup but customize as needed for your infra.

Using with an existing PersistentVolumeClaim

For example, given the following PersistentVolumeClaim:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nats-js-disk
  annotations:
    volume.beta.kubernetes.io/storage-class: "default"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
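Assuming the claim is saved as nats-js-disk.yaml, create it and check its status (depending on the storage class, it may only bind once a pod uses it):

kubectl apply -f nats-js-disk.yaml
kubectl get pvc nats-js-disk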
You can start JetStream so that one pod is bound to it:
nats:
  image: nats:alpine

  jetstream:
    enabled: true

    fileStorage:
      enabled: true
      storageDirectory: /data/
      existingClaim: nats-js-disk
      claimStorageSize: 3Gi

Clustering example

nats:
  image: nats:alpine

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: "2Gi"

    fileStorage:
      enabled: true
      size: "1Gi"
      storageDirectory: /data/
      storageClassName: default

cluster:
  enabled: true
  # Cluster name is required, by default will be release name.
  # name: "nats"
  replicas: 3
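To exercise clustering and file storage together, you can create a replicated stream with the nats CLI. A minimal sketch, assuming a recent nats CLI (which accepts --defaults to fill in the remaining settings); the stream name and subjects are arbitrary examples:

nats stream add ORDERS --subjects "orders.*" --storage file --replicas 3 --defaults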

Misc

NATS Box

A lightweight container with NATS and NATS Streaming utilities, deployed alongside the cluster to confirm that the setup works. You can find the image at: https://github.com/nats-io/nats-box
natsbox:
  enabled: true
  image: nats:alpine
  pullPolicy: IfNotPresent

  # credentials:
  #   secret:
  #     name: nats-sys-creds
  #     key: sys.creds
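Once deployed, you can open a shell in the nats-box pod and use the bundled tools from there. The deployment name below assumes the my-nats release from the install example; adjust it to your release name:

kubectl exec -it deployment/my-nats-box -- /bin/sh -l

and then, inside the container, for example:

nats pub test 'Hello World'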

Configuration Reload sidecar

The NATS config reloader image to use:
reloader:
  enabled: true
  image: natsio/nats-server-config-reloader:latest
  pullPolicy: IfNotPresent

Prometheus Exporter sidecar

You can toggle whether to start the sidecar that feeds metrics to Prometheus:
exporter:
  enabled: true
  image: natsio/prometheus-nats-exporter:latest
  pullPolicy: IfNotPresent

Prometheus operator ServiceMonitor support

You can enable a Prometheus Operator ServiceMonitor:
exporter:
  # You have to enable the exporter first
  enabled: true
  serviceMonitor:
    enabled: true
    ## Specify the namespace where Prometheus Operator is running
    # namespace: monitoring
    # ...

Pod Customizations

Security Context

# Toggle whether to set up a Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  fsGroup: 1000
  runAsUser: 1000
  runAsNonRoot: true

Affinity

The matchExpressions must be configured according to your setup.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node.kubernetes.io/purpose
              operator: In
              values:
                - nats
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - nats
                - stan
        topologyKey: "kubernetes.io/hostname"

Service topology

Service topology is disabled by default but can be enabled by setting topologyKeys. For example:
topologyKeys:
  - "kubernetes.io/hostname"
  - "topology.kubernetes.io/zone"
  - "topology.kubernetes.io/region"

CPU/Memory Resource Requests/Limits

Sets the pod's CPU/memory requests and limits:
nats:
  resources:
    requests:
      cpu: 2
      memory: 4Gi
    limits:
      cpu: 4
      memory: 6Gi
No resources are set by default.

Annotations

podAnnotations:
  key1: "value1"
  key2: "value2"

Name Overrides

You can change the name of the resources as needed with:
nameOverride: "my-nats"

Image Pull Secrets

imagePullSecrets:
- name: myRegistry
Adds this to the StatefulSet:
spec:
  imagePullSecrets:
  - name: myRegistry