I recently set up a single-node Kubernetes cluster on-premise for testing Jira and Confluence Data Center (which I will cover in a separate blog post). In this post, I want to share how I use Glusterfs as the shared storage that supports dynamic volume provisioning.
Traditionally, to use a persistent volume (PV) in a pod, you have to create the PV, create a PVC that binds to it, and then mount the PVC inside the pod. With dynamic volume provisioning, you only need to create the PVC, and the PV is created on the fly.
How does it work? The secret recipe is in the Storage Class. The Storage Class talks to the storage management API to create the PV dynamically. Not all storage plugins support dynamic volume provisioning; for example, at the time of writing (Kubernetes v1.16) the local storage class does not support it yet.
In this example, Glusterfs is the backend storage and Heketi is the REST API for managing Glusterfs. Let's dig into the details (my test node is CentOS 7).
# Install Glusterfs
yum install centos-release-gluster -y
yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-server glusterfs-fuse -y
systemctl enable glusterd.service
systemctl start glusterd.service
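# Optionally run a quick sanity check that glusterd is up and confirm the installed version:
$ systemctl status glusterd.service
$ gluster --version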
# Install Heketi (can be on the same node as Glusterfs or a different one; one Heketi can manage multiple Glusterfs clusters)
yum install heketi heketi-client -y
systemctl enable heketi.service
systemctl start heketi.service
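# Optionally confirm Heketi is answering; it listens on the port set in /etc/heketi/heketi.json (8080 by default):
$ curl http://localhost:8080/hello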
# Create an SSH key pair for Heketi to access the Glusterfs nodes
# <user> can be a sudo user or root
# <server> is the Glusterfs node. Run ssh-copy-id to copy the SSH pub key to all Glusterfs nodes.
$ ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
$ chown heketi:heketi /etc/heketi/heketi_key*
$ ssh-copy-id -i /etc/heketi/heketi_key.pub <user>@<server>
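# Optionally test that key-based login works before wiring it into Heketi
# (<user> and <server> are the same placeholders as above):
$ ssh -i /etc/heketi/heketi_key <user>@<server> 'gluster --version'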
# Configure /etc/heketi/heketi.json
# Enable use_auth for production. In my test I just leave it off.
# Set the executor to ssh, as my Glusterfs runs on a node outside Kubernetes.
# Set sudo to true if you use a sudo user instead of root.
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "<user>",
      "port": "22",
      "fstab": "/etc/fstab",
      "sudo": true
    },
    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug"
  }
}
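# Restart Heketi after editing the config so the changes take effect, then check it still answers:
$ systemctl restart heketi.service
$ curl http://localhost:8080/hello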
# Create the Heketi topology file /etc/heketi/topology.json
# It supports multiple clusters and nodes.
# zone normally represents the failure domain, e.g. set the same zone for nodes that share the same switch, power, etc.
# Add one or more devices for each node; each has to be a raw device. You may need to wipe any existing filesystem from the device first (e.g. wipefs -a /dev/sdb; see the quick device check after the sample file).
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "<glusterfs server ip address>"
              ],
              "storage": [
                "<glusterfs server ip address>"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
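# Quick device check before loading the topology, assuming /dev/sdb is the raw device listed in the sample above
# (wipefs destroys any existing filesystem signatures on the device, so only run it on a disk you intend to give to Glusterfs):
$ lsblk /dev/sdb
$ wipefs -a /dev/sdb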
# Load the topology to add the Glusterfs cluster
$ heketi-cli topology load --json=/etc/heketi/topology.json
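# heketi-cli talks to the server given by -s/--server or the HEKETI_CLI_SERVER environment variable;
# if the load command above complains about the server address, set it first, then verify the cluster and node were registered:
$ export HEKETI_CLI_SERVER=http://<heketi server ip address>:8080
$ heketi-cli cluster list
$ heketi-cli topology info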
# Create Glusterfs storage class in Kubernetes.
# Sample glusterfsStorageClass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://<heketi server ip address>:8080"
  volumetype: none
allowVolumeExpansion: true
$ kubectl apply -f glusterfsStorageClass.yaml
# Optionally you can make glusterfs the default storage class.
$ kubectl patch storageclass glusterfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
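# Verify the storage class exists; it should show "(default)" next to glusterfs if you applied the patch above:
$ kubectl get storageclass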
# Create a PVC to test
# Sample glusterfs-test-pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-test-pvc
  labels:
    app: glusterfs-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: glusterfs
$ kubectl apply -f glusterfs-test-pvc.yaml
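# If dynamic provisioning works, the PVC should reach the Bound state within a few seconds and a matching PV should appear:
$ kubectl get pvc glusterfs-test-pvc
$ kubectl get pv
# As a final check, here is a minimal sketch of a pod that mounts the claim
# (the pod name, image and mount path are arbitrary choices for this test).
# Sample glusterfs-test-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: glusterfs-test-pvc
$ kubectl apply -f glusterfs-test-pod.yaml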
