Continuing from part one, let's do some configuration.
As the folder /var/atlassian/application-data/jira/shared-home does not exist in the official Jira image, we need to either create it manually in the container or fork the repository and modify the Dockerfile to create it.
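If you prefer to bake the folder into the image instead, a minimal Dockerfile sketch could look like the one below. The base image tag is an assumption (use whatever version you deployed in part one); the jira user and group do exist in the official image, as the chown below relies on.

FROM atlassian/jira-software:8.5.0
# Switch to root in case the base image runs as the jira user
USER root
# Create the shared home that Jira Data Center expects
RUN mkdir -p /var/atlassian/application-data/jira/shared-home \
    && chown -R jira:jira /var/atlassian/application-data/jira/shared-home
# Restore the unprivileged user (adjust if your base image runs as root)
USER jira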
You should have one pod called jira-0 running now if you did everything correctly in part one.
$ kubectl exec -it jira-0 -- bash
root@jira-0:/var/atlassian/application-data/jira# mkdir shared-home
root@jira-0:/var/atlassian/application-data/jira# chown -R jira:jira shared-home
root@jira-0:/var/atlassian/application-data/jira# exit
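If you'd rather not open an interactive shell, the same can be done in one shot:

$ kubectl exec jira-0 -- bash -c 'mkdir -p /var/atlassian/application-data/jira/shared-home && chown -R jira:jira /var/atlassian/application-data/jira/shared-home'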
Then let's kill the pod; it will be re-created automatically with the correct mount.
$ kubectl delete pod jira-0
Wait for the pod to become ready, then access the Jira service either via NodePort or the Ingress, depending on how you set it up. In my case, it is http://jira-sand.mydomain.com:32631. Then follow the Jira setup wizard; the database host should be jira-postgres.default.svc.cluster.local, port 5432, user admin, password admin.
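For reference, the wizard persists those database settings into dbconfig.xml under the Jira local home, so you should end up with a snippet roughly like the one below. The database name jiradb is an assumption; use whatever database you created in part one.

<jira-database-config>
  <name>defaultDS</name>
  <delegator-name>default</delegator-name>
  <database-type>postgres72</database-type>
  <jdbc-datasource>
    <!-- host/port as entered in the setup wizard -->
    <url>jdbc:postgresql://jira-postgres.default.svc.cluster.local:5432/jiradb</url>
    <driver-class>org.postgresql.Driver</driver-class>
    <username>admin</username>
    <password>admin</password>
  </jdbc-datasource>
</jira-database-config>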
Once the first Jira pod is up and running properly, we need to scale up the StatefulSet and copy the local home folder to the other pods.
E.g. if I want to run the Jira cluster on 3 nodes, I need to copy the local home folder from jira-0 to jira-1 and jira-2.
To create the local home PVs for jira-1 and jira-2, we need to scale up to 3 first, then scale down to 0 to kill all pods, as it is safe to copy the files without a running pod. Don't worry that jira-1 and jira-2 never report ready at this point; we only need their PVs to be provisioned.
$ kubectl scale --replicas=3 sts/jira
$ kubectl get pods -l app=jira
NAME READY STATUS RESTARTS AGE
jira-0 1/1 Running 0 101m
jira-1 0/1 Running 0 99m
jira-2 0/1 Running 0 97m
$ kubectl scale --replicas=0 sts/jira
Now we need to find the physical locations of the local home PVs for jira-0, jira-1 and jira-2.
Here is an example of finding the jira-0 local home PV:
$ kubectl get pv | grep local-home-jira-0
pvc-f228561a-753d-47b3-8f40-d1fb6b77c97a 5Gi RWO Delete Bound default/local-home-jira-0 glusterfs 6d22h
Then let's find the Gluster volume:
$ kubectl describe pv pvc-f228561a-753d-47b3-8f40-d1fb6b77c97a
Name: pvc-f228561a-753d-47b3-8f40-d1fb6b77c97a
Labels: <none>
Annotations: Description: Gluster-Internal: Dynamically provisioned PV
gluster.kubernetes.io/heketi-volume-id: a9a710b95851c277452585346cdf60e9
gluster.org/type: file
kubernetes.io/createdby: heketi-dynamic-provisioner
pv.beta.kubernetes.io/gid: 2004
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: glusterfs
Status: Bound
Claim: default/local-home-jira-0
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: glusterfs-dynamic-f228561a-753d-47b3-8f40-d1fb6b77c97a
EndpointsNamespace: default
Path: vol_a9a710b95851c277452585346cdf60e9
ReadOnly: false
Events: <none>
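If you don't want to eyeball the describe output, a jsonpath query pulls the Gluster volume name straight out of the PV spec:

$ kubectl get pv pvc-f228561a-753d-47b3-8f40-d1fb6b77c97a -o jsonpath='{.spec.glusterfs.path}'
vol_a9a710b95851c277452585346cdf60e9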
In my case, I need to find the physical location of the volume via Heketi:
$ heketi-cli topology info
Name: vol_a9a710b95851c277452585346cdf60e9
Size: 5
Id: a9a710b95851c277452585346cdf60e9
Cluster Id: 15f86aa24bc4c182cdb9fc78fe7a91f6
Mount: xxx.xxx.xxx.xxx:vol_a9a710b95851c277452585346cdf60e9
Mount Options: backup-volfile-servers=
Durability Type: none
Snapshot: Enabled
Snapshot Factor: 1.00
Bricks:
Id: d2af70591e6bd0e2614cbd327b144322
Path: /var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_d2af70591e6bd0e2614cbd327b144322/brick
Size (GiB): 5
Node: bf4f0b5b28ab58579390056b385ee2e1
Device: 68f2c41690c5977dd16a52d5c5514852
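Instead of dumping the whole topology, you can also query just that volume using the heketi-volume-id annotation we saw on the PV:

$ heketi-cli volume info a9a710b95851c277452585346cdf60e9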
Use the same method to find the physical locations for jira-1 and jira-2, then copy everything from the jira-0 local home to both the jira-1 and jira-2 local homes.
In the following example, /var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_ad00bfb5b65962574d9e9b7d59ccc81c/brick is the physical location of the jira-1 local home folder, and /var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_d2af70591e6bd0e2614cbd327b144322/brick is that of the jira-0 local home folder. Note that -a preserves ownership and permissions (Jira runs as the jira user), and the * glob deliberately skips hidden files, so the brick's internal .glusterfs metadata directory is not copied across.
cd /var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_ad00bfb5b65962574d9e9b7d59ccc81c/brick
cp -av /var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_d2af70591e6bd0e2614cbd327b144322/brick/* .
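A quick sanity check that the copy is complete (assuming GNU diff on the Gluster node); no output means the two bricks match:

$ diff -r --exclude=.glusterfs /var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_d2af70591e6bd0e2614cbd327b144322/brick /var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_ad00bfb5b65962574d9e9b7d59ccc81c/brick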
Once the above steps are done, create a file named cluster.properties in each local home folder, changing the node ID accordingly.
$ cat cluster.properties
# This ID must be unique across the cluster
jira.node.id = jira-0
# The location of the shared home directory for all Jira nodes
jira.shared.home = /var/atlassian/application-data/jira/shared-home
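If you prefer to script that step, a small loop on the Gluster node (bash 4+ for the associative array) keeps the node IDs consistent. The brick paths below are the two shown above; the jira-2 path was not shown in this post, so look it up the same way and add it:

# Map each node ID to its brick path (fill in the real paths from heketi)
declare -A BRICKS=(
  [jira-0]="/var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_d2af70591e6bd0e2614cbd327b144322/brick"
  [jira-1]="/var/lib/heketi/mounts/vg_68f2c41690c5977dd16a52d5c5514852/brick_ad00bfb5b65962574d9e9b7d59ccc81c/brick"
)
# Write a cluster.properties with the matching node ID into each local home
for node in "${!BRICKS[@]}"; do
  cat > "${BRICKS[$node]}/cluster.properties" <<EOF
# This ID must be unique across the cluster
jira.node.id = $node
# The location of the shared home directory for all Jira nodes
jira.shared.home = /var/atlassian/application-data/jira/shared-home
EOF
done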
Now we are ready to go! Scale the cluster back to 3, fingers crossed 😉
Here we go, all three pods are ready to serve traffic.
$ kubectl scale --replicas=3 sts/jira
$ kubectl get pods -l app=jira
NAME READY STATUS RESTARTS AGE
jira-0 1/1 Running 0 114m
jira-1 1/1 Running 0 112m
jira-2 1/1 Running 0 110m

Scroll down to the bottom of the Jira page and you can see which pod is serving you. Refresh the page and it may change to another pod.
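You can also check each node from the command line instead of refreshing the browser. Jira exposes a /status endpoint (the one commonly used for load balancer health checks); assuming curl is available inside the image, a healthy node returns {"state":"RUNNING"}:

$ for p in jira-0 jira-1 jira-2; do echo -n "$p: "; kubectl exec $p -- curl -s http://localhost:8080/status; echo; done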

Hi Jackie, thank you for providing the detailed steps. I have followed them and the Jira Data Center cluster is working fine with a 1-replica/1-pod StatefulSet, but when I start the second pod or scale to 2, the URL hangs for some time. Once Jira is up on the 2nd pod the URL somehow works, but if I click on any of the tabs it takes me back to the dashboard. If I scale down to a 1-pod StatefulSet, it works fine. Could you please help in this regard? Thank you.
Looks like your Jira cluster is not working properly. Where do you run your Kubernetes, and which Kubernetes CNI do you use? There are 3 node discovery methods in a Jira cluster; one of them is multicast, and not all Kubernetes CNIs support that. I have another post talking about that: https://jackiechen.blog/2019/11/01/replace-flannel-with-weave-net-in-kubernetes/
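If multicast turns out to be the problem, you can steer Jira away from it in cluster.properties. The property names below follow Atlassian's cluster.properties documentation; the DNS name is an assumption based on a StatefulSet called jira fronted by a headless service of the same name:

# Use the default (non-multicast) ehcache peer discovery
ehcache.peer.discovery = default
# Each pod advertises its stable StatefulSet DNS name to its peers
ehcache.listener.hostName = jira-0.jira.default.svc.cluster.local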