NATS Cluster and Cert Manager
First, we need to install the cert-manager component from Jetstack:
kubectl create namespace cert-manager
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.yaml
If you are running Kubernetes < 1.15, use cert-manager-legacy.yaml instead.
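Before continuing, you can optionally confirm that the cert-manager pods started correctly in the namespace created above:

kubectl get pods -n cert-manager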
Then create a self-signed ClusterIssuer, which will be used to bootstrap the CA for the NATS certificates:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: selfsigning
spec:
  selfSigned: {}
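The manifest above can be saved to a file and applied with kubectl; the filename selfsigning.yaml below is just an example:

kubectl apply -f selfsigning.yaml

Applying it (or re-applying it) should report something like: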
clusterissuer.certmanager.k8s.io/selfsigning unchanged
Next, let's create the CA for the certs:
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-ca
spec:
  secretName: nats-ca
  duration: 8736h # 1 year
  renewBefore: 240h # 10 days
  issuerRef:
    name: selfsigning
    kind: ClusterIssuer
  commonName: nats-ca
  usages:
    - cert sign
  organization:
    - Your organization
  isCA: true
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: nats-ca
spec:
  ca:
    secretName: nats-ca
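As with the issuer, this manifest can be applied with kubectl and then checked; the filename nats-ca.yaml is only an example, while the Certificate and Secret names match the manifest above:

kubectl apply -f nats-ca.yaml
kubectl get certificate nats-ca
kubectl get secret nats-ca

Once the nats-ca certificate reports Ready, cert-manager has stored the generated CA key pair in the nats-ca secret.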
Now create the certs that will match the DNS name used by the clients to connect. In this case traffic stays within Kubernetes, so we use the name nats, which is backed by a headless service (an example deployment using the NATS operator is shown further below).
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-server-tls
spec:
  secretName: nats-server-tls
  duration: 2160h # 90 days
  renewBefore: 240h # 10 days
  issuerRef:
    name: nats-ca
    kind: Issuer
  usages:
    - signing
    - key encipherment
    - server auth
  organization:
    - Your organization
  commonName: nats.default.svc.cluster.local
  dnsNames:
    - nats.default.svc
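To double-check the names in the issued certificate, one option (assuming the manifest has already been applied and the certificate issued into the nats-server-tls secret) is to decode the secret and inspect it with openssl; look for the Subject and the X509v3 Subject Alternative Name entries in the output:

kubectl get secret nats-server-tls -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -text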
When using the NATS operator, the routes use a service named $YOUR_CLUSTER-mgmt (this may change in the future), so the route certificates need to cover that name as well:
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-routes-tls
spec:
  secretName: nats-routes-tls
  duration: 2160h # 90 days
  renewBefore: 240h # 10 days
  issuerRef:
    name: nats-ca
    kind: Issuer
  usages:
    - signing
    - key encipherment
    - server auth
    - client auth
  organization:
    - Your organization
  commonName: "*.nats-mgmt.default.svc.cluster.local"
  dnsNames:
    - "*.nats-mgmt.default.svc"
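After applying the Certificate manifests above, it is worth confirming that all of them reached the Ready state before creating the NATS cluster:

kubectl get certificates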
Now let's create an example NATS cluster with the operator:
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  # Number of nodes in the cluster
  size: 3
  version: "2.1.4"

  tls:
    # Certificates to secure the NATS client connections:
    serverSecret: "nats-server-tls"

    # Name of the CA in serverSecret
    serverSecretCAFileName: "ca.crt"

    # Name of the key in serverSecret
    serverSecretKeyFileName: "tls.key"

    # Name of the certificate in serverSecret
    serverSecretCertFileName: "tls.crt"

    # Certificates to secure the routes.
    routesSecret: "nats-routes-tls"

    # Name of the CA in routesSecret
    routesSecretCAFileName: "ca.crt"

    # Name of the key in routesSecret
    routesSecretKeyFileName: "tls.key"

    # Name of the certificate in routesSecret
    routesSecretCertFileName: "tls.crt"
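This manifest assumes the NATS operator is already running in the namespace, since it provides the NatsCluster custom resource. Saving it as, for example, nats-cluster.yaml, it can be applied in the usual way:

kubectl apply -f nats-cluster.yaml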
Confirm that the pods were deployed:
kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
nats-1   1/1     Running   0          4s    172.17.0.8    minikube   <none>
nats-2   1/1     Running   0          3s    172.17.0.9    minikube   <none>
nats-3   1/1     Running   0          2s    172.17.0.10   minikube   <none>
Follow the logs:
kubectl logs nats-1
[1] 2019/12/18 12:27:23.920417 [INF] Starting nats-server version 2.1.4
[1] 2019/12/18 12:27:23.920590 [INF] Git commit [not set]
[1] 2019/12/18 12:27:23.921024 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/12/18 12:27:23.921047 [INF] Server id is NDA6JC3TGEADLLBEPFAQ4BN4PM3WBN237KIXVTFCY3JSTDOSRRVOJCXN
[1] 2019/12/18 12:27:23.921055 [INF] Server is ready