Install Kubernetes on AWS with kops


I first installed Kubernetes back in 2015. At that time, installing Kubernetes was nowhere near as simple as it is today. Nowadays there are a few handy tools to choose from, e.g. kops or Heptio's tooling, and you can also go with managed Kubernetes such as GKE, AKS or EKS.

I recently used kops to create a Kubernetes cluster on AWS. Here are a few things I learned and would like to share.

  • kops requires kubectl, so install kubectl first.
  • kops requires a proper DNS name for your cluster, unless you choose a gossip-based cluster (a cluster name ending in .k8s.local).
  • kops requires an S3 bucket to store your cluster's metadata (cluster spec, instance group specs, PKI and so on). By default, kops assumes the bucket is in the us-east-1 (Virginia) region, so tell it otherwise if yours is not. See the setup sketch right after this list.
  • By default, kops creates a VPC and all associated resources (subnets, route tables, autoscaling groups). Something like:
    kops create cluster --zones ap-southeast-2a ${NAME}
  • kops also supports using an existing VPC and subnets. The process is: use kops to create a base configuration file, edit that file to meet your needs, and once you are happy with the configuration, build the cluster. I will explain this step by step below.
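
First, though, here is roughly how I set up the prerequisites from the list above. This is only a sketch: the bucket name, region and cluster name match the example below, so adjust them for your environment.

    # install kubectl first (use whatever package manager you prefer)
    brew install kubectl

    # create the S3 bucket kops will use as its state store; outside
    # us-east-1 you have to pass the LocationConstraint explicitly
    aws s3api create-bucket \
      --bucket k8s-01-nprod-state-store \
      --region ap-southeast-2 \
      --create-bucket-configuration LocationConstraint=ap-southeast-2

    # versioning lets you roll back the cluster state if an edit goes wrong
    aws s3api put-bucket-versioning \
      --bucket k8s-01-nprod-state-store \
      --versioning-configuration Status=Enabled

    # variables used by the commands in the steps below
    export NAME=k8s-01.nprod.my.com
    export KOPS_STATE_STORE=s3://k8s-01-nprod-state-store

With that in place, the steps are: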
  1. Create your base configuration file. This only generates the config and saves it to the S3 bucket. Here is a sample:
    kops create cluster \
    --name k8s-01.nprod.my.com \
    --state s3://k8s-01-nprod-state-store \
    --cloud aws \
    --vpc vpc-292dc45e \
    --subnets subnet-5483ec1d,subnet-646b1603,subnet-8c7f82d4 \
    --master-zones ap-southeast-2a,ap-southeast-2b,ap-southeast-2c \
    --zones ap-southeast-2a,ap-southeast-2b,ap-southeast-2c \
    --ssh-access 10.0.0.0/8,111.222.333.0/24 \
    --networking calico \
    --master-size t2.medium \
    --node-size t2.medium \
    --node-count 3 \
    --dns public \
    --dns-zone Z1ROMYT5188FK1
    
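    Nothing has been created in AWS at this point; the command above only wrote the cluster and instance group specs to the state store. A quick, read-only way to double-check what it generated:

    kops get cluster --state s3://k8s-01-nprod-state-store
    kops get instancegroups --name k8s-01.nprod.my.com --state s3://k8s-01-nprod-state-store
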
  2. Edit the kops config file with the following command.
    kops edit cluster ${NAME}

    Here I want to place the API load balancer in my public subnets (kops calls these Utility subnets). All masters and nodes live in the private subnets, so I need to add an egress entry (I use a NAT gateway) to each private subnet in the config. Also, as my nodes are spread across AZs (and therefore subnets), I need to set crossSubnet: true for Calico.

    apiVersion: kops/v1alpha2
    kind: Cluster
    metadata:
      creationTimestamp: 2018-04-22T23:29:48Z
      name: k8s-01.nprod.my.com
    spec:
      api:
        loadBalancer:
          type: Public
      authorization:
        rbac: {}
      channel: stable
      cloudProvider: aws
      configBase: s3://k8s-01-nprod-state-store/k8s-01.nprod.my.com
      dnsZone: Z1ROMYT5188FK1
      etcdClusters:
      - etcdMembers:
        - instanceGroup: master-ap-southeast-2a
          name: a
        - instanceGroup: master-ap-southeast-2b
          name: b
        - instanceGroup: master-ap-southeast-2c
          name: c
        name: main
      - etcdMembers:
        - instanceGroup: master-ap-southeast-2a
          name: a
        - instanceGroup: master-ap-southeast-2b
          name: b
        - instanceGroup: master-ap-southeast-2c
          name: c
        name: events
      iam:
        allowContainerRegistry: true
        legacy: false
      kubernetesApiAccess:
      - 10.0.0.0/8
      - 111.222.333.0/24
      kubernetesVersion: 1.9.3
      masterInternalName: api.internal.k8s-01.nprod.my.com
      masterPublicName: api.k8s-01.nprod.my.com
      networkCIDR: 10.101.39.0/24
      networkID: vpc-292dc45e
      networking:
        calico:
          crossSubnet: true
      nonMasqueradeCIDR: 100.64.0.0/10
      sshAccess:
      - 10.0.0.0/8
      - 111.222.333.0/24
      sshKeyName: mykey
      subnets:
      - cidr: 10.101.39.0/26
        egress: nat-0085410d12e6c1342
        id: subnet-5483ec1d
        name: ap-southeast-2a
        type: Private
        zone: ap-southeast-2a
      - cidr: 10.101.39.192/28
        id: subnet-b888e7f1
        name: ap-southeast-2a-utility
        type: Utility
        zone: ap-southeast-2a
      - cidr: 10.101.39.64/26
        egress: nat-0a9f34c6c6babd533
        id: subnet-646b1603
        name: ap-southeast-2b
        type: Private
        zone: ap-southeast-2b
      - cidr: 10.101.39.208/28
        id: subnet-886a17ef
        name: ap-southeast-2b-utility
        type: Utility
        zone: ap-southeast-2b
      - cidr: 10.101.39.128/26
        egress: nat-000ada32a30155785
        id: subnet-8c7f82d4
        name: ap-southeast-2c
        type: Private
        zone: ap-southeast-2c
      - cidr: 10.101.39.224/28
        id: subnet-ee7c81b6
        name: ap-southeast-2c-utility
        type: Utility
        zone: ap-southeast-2c
      topology:
        dns:
          type: Public
        masters: private
        nodes: private
    
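    The subnet and NAT gateway IDs above are of course specific to my existing VPC. If you need to look up yours, something along these lines will list them (using the VPC ID from my example):

    # list the subnets in the existing VPC with their CIDRs and AZs
    aws ec2 describe-subnets \
      --filters Name=vpc-id,Values=vpc-292dc45e \
      --query 'Subnets[].[SubnetId,CidrBlock,AvailabilityZone]' \
      --output table

    # list the NAT gateways to reference as "egress" for the private subnets
    aws ec2 describe-nat-gateways \
      --filter Name=vpc-id,Values=vpc-292dc45e \
      --query 'NatGateways[].[NatGatewayId,SubnetId,State]' \
      --output table
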
  3. Review the changes (running kops update cluster ${NAME} without --yes first does a dry run and shows what would be created or modified), and if you are happy with them, build your cluster with the command:
    kops update cluster ${NAME} --yes
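
    The cluster takes a few minutes to come up. kops points your kubectl context at the new cluster, so once the instances are running you can check on it with:

    kops validate cluster --name ${NAME} --state s3://k8s-01-nprod-state-store
    kubectl get nodes -o wide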
    
  4. By now, your k8s cluster should be up and running in AWS. To delete it, just type:
    kops delete cluster ${NAME} --yes