First, we need a Kubernetes cluster running on a provider that offers a filesystem service with ReadWriteMany access. In this short guide, we will create the cluster on AWS and use EFS for the filesystem:
```sh
# Create a 3-node Kubernetes cluster
eksctl create cluster --name stan-k8s \
  --nodes 3 \
  --node-type=t3.large \  # a smaller node type such as t3.small also works
  --region=us-east-2

# Get the credentials for your cluster
eksctl utils write-kubeconfig --name stan-k8s --region us-east-2
```
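Once the cluster is up, it is worth confirming that the three nodes are ready before continuing. A quick check, assuming the kubeconfig written above is now the active context:

```sh
# Verify that the three worker nodes joined the cluster and are Ready
kubectl get nodes -o wide
```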
For the FT mode to work, we will need to create an EFS volume that can be shared by more than one pod. Go into the AWS console, create one, and make sure that it is in a security group to which the Kubernetes nodes have access. For clusters created via eksctl, this will be the security group named ClusterSharedNodeSecurityGroup.
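The same can also be done from the command line. The sketch below is only an outline, assuming the AWS CLI is configured for us-east-2; the subnet and security group IDs are hypothetical placeholders, and the filesystem ID is the one used in the manifest further down:

```sh
# Create the EFS filesystem (the creation token is an arbitrary idempotency key)
aws efs create-file-system --creation-token stan-k8s-efs --region us-east-2

# Expose it to the cluster nodes: one mount target per node subnet,
# using the ClusterSharedNodeSecurityGroup of the eksctl cluster.
aws efs create-mount-target \
  --file-system-id fs-c22a24bb \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0 \
  --region us-east-2

# Retrieve the FileSystemId; the DNS name follows the pattern
# <FileSystemId>.efs.<region>.amazonaws.com
aws efs describe-file-systems --region us-east-2
```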
Confirm the FileSystemId and the DNS name of the volume; we will use those values to create an EFS provisioner controller within the K8S cluster:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: efs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-efs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-c22a24bb
  aws.region: us-east-2
  provisioner.name: synadia.com/aws-efs
  dns.name: ""
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      serviceAccountName: efs-provisioner
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: DNS_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: dns.name
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /efs
      volumes:
        - name: pv-volume
          nfs:
            server: fs-c22a24bb.efs.us-east-2.amazonaws.com
            path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: synadia.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```
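Save the manifest above to a file and apply it; the filename below is just an example:

```sh
# Deploy the EFS provisioner, RBAC rules, StorageClass and shared PVC
kubectl apply -f efs-provisioner.yaml
```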
Result of deploying the manifest:
```text
serviceaccount/efs-provisioner created
clusterrole.rbac.authorization.k8s.io/efs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-efs-provisioner created
role.rbac.authorization.k8s.io/leader-locking-efs-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-efs-provisioner created
configmap/efs-provisioner created
deployment.extensions/efs-provisioner created
storageclass.storage.k8s.io/aws-efs created
persistentvolumeclaim/efs created
```
Now create a NATS Streaming cluster with FT mode enabled, using the embedded NATS Server and mounting the EFS volume:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: stan
  labels:
    app: stan
spec:
  selector:
    app: stan
  clusterIP: None
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
  - name: metrics
    port: 7777
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: stan-config
data:
  stan.conf: |
    http: 8222

    cluster {
      port: 6222
      routes [
        nats://stan-0.stan:6222
        nats://stan-1.stan:6222
        nats://stan-2.stan:6222
      ]
      cluster_advertise: $CLUSTER_ADVERTISE
      connect_retries: 10
    }

    streaming {
      id: test-cluster
      store: file
      dir: /data/stan/store
      ft_group_name: "test-cluster"
      file_options {
        buffer_size: 32mb
        sync_on_flush: false
        slice_max_bytes: 512mb
        parallel_recovery: 64
      }
      store_limits {
        max_channels: 10
        max_msgs: 0
        max_bytes: 256gb
        max_age: 1h
        max_subs: 128
      }
    }
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stan
  labels:
    app: stan
spec:
  selector:
    matchLabels:
      app: stan
  serviceName: stan
  replicas: 3
  template:
    metadata:
      labels:
        app: stan
    spec:
      # STAN Server
      terminationGracePeriodSeconds: 30

      containers:
      - name: stan
        image: nats-streaming:alpine

        ports:
        # In case of NATS embedded mode expose these ports
        - containerPort: 4222
          name: client
        - containerPort: 6222
          name: cluster
        - containerPort: 8222
          name: monitor
        args:
        - "-sc"
        - "/etc/stan-config/stan.conf"

        # Required to be able to define an environment variable
        # that refers to other environment variables. This env var
        # is later used as part of the configuration file.
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_ADVERTISE
          value: $(POD_NAME).stan.$(POD_NAMESPACE).svc
        volumeMounts:
        - name: config-volume
          mountPath: /etc/stan-config
        - name: efs
          mountPath: /data/stan
        resources:
          requests:
            cpu: 0
        livenessProbe:
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          timeoutSeconds: 5
      - name: metrics
        image: synadia/prometheus-nats-exporter:0.5.0
        args:
        - -connz
        - -routez
        - -subz
        - -varz
        - -channelz
        - -serverz
        - http://localhost:8222
        ports:
        - containerPort: 7777
          name: metrics
      volumes:
      - name: config-volume
        configMap:
          name: stan-config
      - name: efs
        persistentVolumeClaim:
          claimName: efs
```
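As before, save the Service, ConfigMap and StatefulSet above into a file and apply it (the filename is again just an example):

```sh
# Deploy the NATS Streaming FT cluster
kubectl apply -f stan-ft.yaml
```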
Your cluster will now look something like this:
```text
kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
efs-provisioner-6b7866dd4-4k5wx   1/1     Running   0          21m
stan-0                            2/2     Running   0          6m35s
stan-1                            2/2     Running   0          4m56s
stan-2                            2/2     Running   0          4m42s
```
If everything was set up properly, one of the servers will be the active node.
```text
$ kubectl logs stan-0 -c stan
[1] 2019/12/04 20:40:40.429359 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.16.2
[1] 2019/12/04 20:40:40.429385 [INF] STREAM: ServerID: 7j3t3Ii7e2tifWqanYKwFX
[1] 2019/12/04 20:40:40.429389 [INF] STREAM: Go version: go1.11.13
[1] 2019/12/04 20:40:40.429392 [INF] STREAM: Git commit: [910d6e1]
[1] 2019/12/04 20:40:40.454212 [INF] Starting nats-server version 2.0.4
[1] 2019/12/04 20:40:40.454360 [INF] Git commit [c8ca58e]
[1] 2019/12/04 20:40:40.454522 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/12/04 20:40:40.454830 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/12/04 20:40:40.454841 [INF] Server id is NB3A5RSGABLJP3WUYG6VYA36ZGE7MP5GVQIQVRG6WUYSRJA7B2NNMW57
[1] 2019/12/04 20:40:40.454844 [INF] Server is ready
[1] 2019/12/04 20:40:40.456360 [INF] Listening for route connections on 0.0.0.0:6222
[1] 2019/12/04 20:40:40.481927 [INF] STREAM: Starting in standby mode
[1] 2019/12/04 20:40:40.488193 [ERR] Error trying to connect to route (attempt 1): dial tcp: lookup stan on 10.100.0.10:53: no such host
[1] 2019/12/04 20:40:41.489688 [INF] 192.168.52.76:40992 - rid:6 - Route connection created
[1] 2019/12/04 20:40:41.489788 [INF] 192.168.52.76:40992 - rid:6 - Router connection closed
[1] 2019/12/04 20:40:41.489695 [INF] 192.168.52.76:6222 - rid:5 - Route connection created
[1] 2019/12/04 20:40:41.489955 [INF] 192.168.52.76:6222 - rid:5 - Router connection closed
[1] 2019/12/04 20:40:41.634944 [INF] STREAM: Server is active
[1] 2019/12/04 20:40:41.634976 [INF] STREAM: Recovering the state...
[1] 2019/12/04 20:40:41.655526 [INF] STREAM: No recovered state
[1] 2019/12/04 20:40:41.671435 [INF] STREAM: Message store is FILE
[1] 2019/12/04 20:40:41.671448 [INF] STREAM: Store location: /data/stan/store
[1] 2019/12/04 20:40:41.671524 [INF] STREAM: ---------- Store Limits ----------
[1] 2019/12/04 20:40:41.671527 [INF] STREAM: Channels: 10
[1] 2019/12/04 20:40:41.671529 [INF] STREAM: --------- Channels Limits --------
[1] 2019/12/04 20:40:41.671531 [INF] STREAM: Subscriptions: 128
[1] 2019/12/04 20:40:41.671533 [INF] STREAM: Messages : unlimited
[1] 2019/12/04 20:40:41.671535 [INF] STREAM: Bytes : 256.00 GB
[1] 2019/12/04 20:40:41.671537 [INF] STREAM: Age : 1h0m0s
[1] 2019/12/04 20:40:41.671539 [INF] STREAM: Inactivity : unlimited *
[1] 2019/12/04 20:40:41.671541 [INF] STREAM: ----------------------------------
[1] 2019/12/04 20:40:41.671546 [INF] STREAM: Streaming Server is ready
```
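The other replicas stay in standby until the active node releases the lock on the shared store. A quick way to confirm this, and to exercise a failover, is sketched below (assuming stan-0 is currently the active node, as in the logs above):

```sh
# The standby replicas only log "Starting in standby mode"
kubectl logs stan-1 -c stan | grep standby

# Deleting the active pod forces one of the standby servers
# (stan-1 or stan-2) to acquire the FT lock and become active
kubectl delete pod stan-0
kubectl logs stan-1 -c stan | grep "Server is active"
```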
For Azure, we first need to create a PVC (PersistentVolumeClaim); the azurefile storage class provides a volume with ReadWriteMany access:
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: stan-efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "azurefile"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```
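Apply the claim before installing the charts; the filename is only an example:

```sh
kubectl apply -f stan-pvc.yaml

# The claim should eventually report Bound once azurefile provisions the share
# (depending on the storage class binding mode, this may only happen when a pod uses it)
kubectl get pvc stan-efs
```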
Next, create a NATS cluster using the NATS Helm charts:
```sh
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install nats nats/nats
```
To create an FT setup using AzureFile, you can use the following Helm chart values file:
```yaml
stan:
  image: nats-streaming:alpine
  replicas: 2
  nats:
    url: nats://nats:4222
store:
  type: file
  ft:
    group: my-group
  file:
    path: /data/stan/store
  volume:
    enabled: true

    # Mount path for the volume.
    mount: /data/stan

    # FT mode requires a single shared ReadWriteMany PVC volume.
    persistentVolumeClaim:
      claimName: stan-efs
```
Now deploy with Helm:
```sh
helm install stan nats/stan -f ./examples/deploy-stan-ft-file.yaml
```
Send a few messages to the NATS Server to which STAN/NATS Streaming is connected:
```sh
kubectl port-forward nats-0 4222:4222 &

stan-pub -c stan foo bar.1
stan-pub -c stan foo bar.2
stan-pub -c stan foo bar.3
```
Subscribe to get all the messages:
```sh
stan-sub -c stan -all foo
```
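If the stan-pub and stan-sub binaries are not installed locally, the same test can be run from inside the cluster. A possible approach, assuming the synadia/nats-box image bundles these tools, is:

```sh
# Start a throwaway pod with the NATS utilities and open a shell in it
kubectl run -i --rm --tty nats-box --image=synadia/nats-box --restart=Never -- /bin/sh

# Inside the pod, point the tools at the nats service instead of localhost
stan-pub -s nats://nats:4222 -c stan foo bar.1
stan-sub -s nats://nats:4222 -c stan -all foo
```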