Configure readinessProbe and livenessProbe for Jira Container on Kubernetes


The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.

The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
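To make the Service side concrete, here is a minimal sketch of a Service that could sit in front of these pods. The name and port are assumptions, but the app=jira selector matches the pod template labels shown in the kubectl describe output below:

    apiVersion: v1
    kind: Service
    metadata:
      name: jira            # hypothetical name
    spec:
      selector:
        app: jira           # matches the pod template label of the StatefulSet
      ports:
      - port: 80
        targetPort: 8080    # Jira's Tomcat port
    # While a pod's readinessProbe is failing, Kubernetes removes that pod
    # from this Service's endpoints, so no traffic is routed to it.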

The kubelet uses startup probes to know when a Container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don’t interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.
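Jira is exactly such a slow-starting application. My configuration below compensates with a large initialDelaySeconds instead, but if your cluster supports startup probes, a sketch like the following could replace that delay. The failureThreshold here is an assumption sized for Jira's boot time, not a tested value:

    startupProbe:
      httpGet:
        path: /status
        port: 8080
      periodSeconds: 10
      failureThreshold: 60   # assumption: allow up to 60 * 10s = 10 minutes to start
    # Liveness and readiness checks are disabled until this probe succeeds
    # once, so initialDelaySeconds could then be dropped or reduced.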

I configured both a readinessProbe and a livenessProbe for Jira; they monitor different URL paths. What I want to achieve is:

  • Don’t route traffic to the Jira pod when it is not ready, e.g. when the pod is still booting up or is running a re-index.
  • Restart the container if the Jira process is not working properly, e.g. after a JVM crash or a deadlock.

Here is my configuration:

    spec:
      containers:
      - name: jira
        image: atlassian/jira-software:8.5
        readinessProbe:
          httpGet:
            path: /status        # returns 503 while Jira cannot serve traffic (booting, re-indexing)
            port: 8080
          initialDelaySeconds: 120
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /              # stays 2xx/3xx as long as the JVM/Tomcat is alive
            port: 8080
          initialDelaySeconds: 600   # give Jira plenty of time to boot before liveness checks begin
          periodSeconds: 10

$ kubectl describe sts/jira

...
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=jira
  Annotations:  kubectl.kubernetes.io/restartedAt: 2019-11-05T14:43:42+11:00
  Containers:
   jira:
    Image:      atlassian/jira-software:8.5
    Port:       8080/TCP
    Host Port:  0/TCP
    Liveness:   http-get http://:8080/ delay=600s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/status delay=120s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      jira-config  ConfigMap  Optional: false
    Environment:   <none>
    Mounts:
      /var/atlassian/application-data/jira from local-home (rw)
      /var/atlassian/application-data/jira/shared-home from jira-share-pv (rw)
  Volumes:
   jira-share-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  jira-share-pvc
    ReadOnly:   false
Volume Claims:
  Name:          local-home
  StorageClass:  
  Labels:        <none>
  Annotations:   <none>
  Capacity:      5Gi
  Access Modes:  [ReadWriteOnce]
Events:          <none>
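
Note the timeout=1s and #failure=3 in the probe lines above: those are the Kubernetes defaults, since my manifest doesn't set them. Jira can respond slowly under load, so it may be worth making these explicit and more generous. A sketch of tuned values (the numbers are assumptions, not something I have load-tested):

    readinessProbe:
      httpGet:
        path: /status
        port: 8080
      initialDelaySeconds: 120
      periodSeconds: 10
      timeoutSeconds: 5      # default is 1s; a slow response would otherwise count as a failure
      failureThreshold: 3    # default; pod is marked unready after 3 consecutive failures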

Why did I configure it this way? Let's do a quick test: I will monitor the status codes of both / and /status while the Jira pod is running a re-index.

$  kubectl port-forward jira-2 9999:8080 &

# Kick off the foreground re-index on pod jira-2

# Monitor the status code of both '/' and '/status'.

$ for i in $(seq 20); do
    echo $i 'check --------------'
    echo -n '/: ';       curl -I -s http://localhost:9999/       | grep HTTP/1.1
    echo -n '/status: '; curl -I -s http://localhost:9999/status | grep HTTP/1.1
  done

1 check --------------
/: HTTP/1.1 200 
/status: HTTP/1.1 200 
2 check --------------
/: HTTP/1.1 200 
/status: HTTP/1.1 200 
3 check --------------
/: HTTP/1.1 200 
/status: HTTP/1.1 200 
4 check --------------
/: HTTP/1.1 200 
/status: HTTP/1.1 200 
5 check --------------
/: HTTP/1.1 200 
/status: HTTP/1.1 503 
6 check --------------
/: HTTP/1.1 302 
/status: HTTP/1.1 503 
7 check --------------
/: HTTP/1.1 302 
/status: HTTP/1.1 503 
8 check --------------
/: HTTP/1.1 302 
/status: HTTP/1.1 503 
9 check --------------
/: HTTP/1.1 302 
/status: HTTP/1.1 503 
10 check --------------
/: HTTP/1.1 302 
/status: HTTP/1.1 503 
...

From the output above we can see that / returns 302 while /status returns 503 during the re-index. According to the Kubernetes documentation, 302 indicates success and 503 indicates failure. So the Jira pod won't be killed while running the re-index, but it will be removed from the Service, and no traffic will be routed to it.

Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.
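
This is also why the two paths must not be swapped. If the livenessProbe pointed at /status, each 503 during re-indexing would count as a liveness failure, and after the default three failures the kubelet would kill and restart the container mid-reindex. For contrast, that misconfiguration would look like this:

    # Anti-pattern: do NOT point the liveness probe at /status.
    livenessProbe:
      httpGet:
        path: /status    # returns 503 for the whole duration of a re-index...
        port: 8080
      periodSeconds: 10  # ...so about 3 failed checks later (default failureThreshold),
                         # the kubelet restarts the container and aborts the re-index.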

Reference:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
