In my previous post, I showed you how to create a Kubernetes cluster on AWS with kops. In this article I will dig deeper into the Kubernetes HA (High Availability) setup that kops builds. Here is a diagram that shows the high-level HA design of the cluster I built.
Here are some key points that I think you need to understand:
- The cluster has 3 masters and 3 nodes.
- Each master is in a dedicated auto scaling group (min: 1, max: 1) in a separate availability zone, for example master-a in ap-southeast-2a, master-b in ap-southeast-2b and master-c in ap-southeast-2c. Please bear in mind that etcd needs a majority (quorum) of the masters to be online for the cluster to stay functional, so this 3-master cluster can tolerate the loss of at most one master. (The first sketch after this list shows the kops flags that produce this layout.)
- The etcd main and events data are stored on two separate EBS volumes in each availability zone, so there are 3 copies of each. kops uses protokube to discover and mount the volumes on the master instances, and I think this is why the masters are split into 3 auto scaling groups.
- An ELB sits in front of the 3 master auto scaling groups, so API traffic can be routed to any master that is alive.
- All nodes are in one auto scaling group; its min/max sizes and instance type can be adjusted to your needs. To adjust the number of nodes, run kops edit ig nodes and then kops update cluster --yes to apply the change (see the second sketch after this list). Additionally, you can set up CloudWatch and Lambda to adjust it dynamically to match the workload.
- An Ingress and an ingress controller can be set up to route traffic to the services running on the nodes auto scaling group. I will write a later post to explain how that works.
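As a quick recap of the previous post, this layout (one single-instance master auto scaling group per availability zone, plus one nodes group) comes from the --master-zones and --zones flags at cluster creation time. Here is a minimal sketch; the cluster name, state store bucket and instance sizes are placeholders, not the exact values from my setup:

```bash
# Minimal sketch: one master per AZ (3 single-instance master ASGs) and a 3-node ASG.
# The cluster name, S3 state store and instance sizes below are placeholders.
kops create cluster \
  --name=k8s.example.com \
  --state=s3://example-kops-state-store \
  --master-zones=ap-southeast-2a,ap-southeast-2b,ap-southeast-2c \
  --zones=ap-southeast-2a,ap-southeast-2b,ap-southeast-2c \
  --node-count=3 \
  --master-size=m3.medium \
  --node-size=t2.medium \
  --yes
```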
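And here is the node-scaling workflow mentioned in the list above, again with a placeholder cluster name and state store:

```bash
# Open the nodes InstanceGroup spec in your editor and change minSize/maxSize.
# The cluster name and S3 state store are placeholders.
kops edit ig nodes --name=k8s.example.com --state=s3://example-kops-state-store

# Preview the changes, then apply them to the underlying auto scaling group.
kops update cluster --name=k8s.example.com --state=s3://example-kops-state-store
kops update cluster --name=k8s.example.com --state=s3://example-kops-state-store --yes
```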
References:
https://kubernetes.io/docs/admin/high-availability/building/
https://github.com/kubernetes/kops/blob/master/docs/high_availability.md