Kubernetes* migration

This guide describes how to migrate the Kubernetes* container orchestration system on Clear Linux* OS from version 1.17.x to 1.19.x.

Background

The version of Kubernetes* was bumped from 1.17.7 to 1.19.4 in Clear Linux* OS release 34090. This guide and the Clear Linux OS bundle k8s-migration were created to facilitate migration of a cluster from 1.17.x to the latest 1.19.x.

The new Clear Linux OS bundle k8s-migration was added in Clear Linux* OS release 34270.

Prerequisites

  • Check the Kubernetes upgrade documentation for any caveats related to the version that is running in the cluster.

  • Make sure ALL the nodes are in the Ready state; the cluster cannot be upgraded otherwise. Either fix the broken nodes or remove them from the cluster.
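    For example, from the Admin node you can check node status and confirm every node shows STATUS Ready before proceeding:

    kubectl get nodes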

Upgrade 1.17.x to 1.18.15

  1. Upgrade Control Node to 1.18.15 first

    The first step is to upgrade one of the control plane nodes and update the Kubernetes components on it. You will need a newer version of kubeadm for the upgrade to work. Consult the kubeadm upgrade guide for any caveats when moving from your current version to the new one.

    Update Clear Linux OS to the latest release to update the kubernetes version.

    sudo -E swupd update
    

    Note

    Do not reboot your system at this time. Clear Linux OS will continue to run correctly without a reboot.
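    If you want to confirm the installed release (the k8s-migration bundle requires Clear Linux OS release 34270 or later), swupd can report it; this check is optional:

    swupd info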

  2. Add the new Kubernetes migration bundle which contains the 1.18.15 binaries.

    sudo -E swupd bundle-add k8s-migration
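    Optionally, confirm the migration bundle installed the intermediate binaries; this should report v1.18.15:

    /usr/k8s-migration/bin/kubeadm version -o short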
    
  3. Find the version that kubeadm can upgrade the cluster to. This should be 1.18.15.

    This command shows the possible upgrade targets from the current Kubernetes version, along with the command used to apply the upgrade.

    sudo -E /usr/k8s-migration/bin/kubeadm upgrade plan
    

    Sample output:

    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Running cluster health checks
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.17.17
    [upgrade/versions] kubeadm version: v1.18.15
    I0209 21:12:49.868786  832739 version.go:252] remote version is much newer: v1.20.2; falling back to: stable-1.18
    [upgrade/versions] Latest stable version: v1.18.15
    [upgrade/versions] Latest stable version: v1.18.15
    [upgrade/versions] Latest version in the v1.17 series: v1.17.17
    [upgrade/versions] Latest version in the v1.17 series: v1.17.17
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     3 x v1.17.7   v1.18.15
    
    Upgrade to the latest stable version:
    
    COMPONENT            CURRENT    AVAILABLE
    API Server           v1.17.17   v1.18.15
    Controller Manager   v1.17.17   v1.18.15
    Scheduler            v1.17.17   v1.18.15
    Kube Proxy           v1.17.17   v1.18.15
    CoreDNS              1.6.5      1.6.7
    Etcd                 3.4.3      3.4.3-0
    
    You can now apply the upgrade by executing the following command:
    
      kubeadm upgrade apply v1.18.15
    
  4. Upgrade the node to the intermediate 1.18.15 version of Kubernetes.

    sudo -E /usr/k8s-migration/bin/kubeadm upgrade apply v1.18.15
    

    Note

    Do not reboot the system yet.
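    One way to confirm the control plane is now serving the intermediate version is to check the reported server version, which should show v1.18.15:

    /usr/k8s-migration/bin/kubectl version --short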

  5. Upgrade Additional Control Nodes to 1.18.15

    In a multi-node control plane, verify that all control plane nodes are updated before upgrading the worker nodes/SUTs.
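    A typical sequence on each additional control plane node mirrors the steps already shown for the first control node, for example:

    sudo -E swupd update
    sudo -E swupd bundle-add k8s-migration
    sudo -E /usr/k8s-migration/bin/kubeadm upgrade node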

  6. Upgrade Other Nodes to 1.18.15

    For each of the other nodes:

    1. Update Clear Linux OS to the latest release to update the kubernetes version.

      sudo -E swupd update
      
    2. Add the new Kubernetes migration bundle which contains the 1.18.15 binaries.

      sudo -E swupd bundle-add k8s-migration
      
    3. On the Admin node, drain the Client node FIRST

      /usr/k8s-migration/bin/kubectl drain <CLIENT_NODE_NAME> --ignore-daemonsets --delete-local-data
      
    4. Back on the Client node, upgrade Kubernetes on the Client

      sudo -E /usr/k8s-migration/bin/kubeadm upgrade node
      
    5. On the Admin node, re-enable the Client

      /usr/k8s-migration/bin/kubectl uncordon <CLIENT_NODE_NAME>
      
    6. Back on the Client node, restart Kubernetes on the Client

      sudo -E systemctl restart kubelet
      
  7. Restart Kubernetes on the Admin node(s) to finish the 1.18.x upgrade

    sudo -E systemctl restart kubelet
    

    Note

    Wait for all nodes to be Ready and reporting the 1.19.x version. The kubelets already report 1.19.x because the restarted service uses the released binaries installed by swupd update, but the cluster itself has only been upgraded to 1.18.15 at this point.
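    For example, you can watch node status until every node reports Ready and the expected kubelet version:

    kubectl get nodes -w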

Upgrade 1.18.15 to 1.19.x

  1. Upgrade Control Node to 1.19.x

    Now that the systems are upgraded to the intermediate 1.18.15 release, each of the nodes can be upgraded to the latest 1.19.x release.

  2. Find the version that kubeadm can upgrade the cluster to. This should be 1.19.x.

    This command shows the possible upgrade targets from the current Kubernetes version, along with the command used to apply the upgrade.

    sudo -E kubeadm upgrade plan
    

    Sample output:

    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Running cluster health checks
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.18.15
    [upgrade/versions] kubeadm version: v1.19.7
    I0209 23:08:23.810900  925910 version.go:252] remote version is much newer: v1.20.2; falling back to: stable-1.19
    [upgrade/versions] Latest stable version: v1.19.7
    [upgrade/versions] Latest stable version: v1.19.7
    [upgrade/versions] Latest version in the v1.18 series: v1.18.15
    [upgrade/versions] Latest version in the v1.18 series: v1.18.15
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    kubelet     3 x v1.17.7   v1.19.7
    
    Upgrade to the latest stable version:
    
    COMPONENT                 CURRENT    AVAILABLE
    kube-apiserver            v1.18.15   v1.19.7
    kube-controller-manager   v1.18.15   v1.19.7
    kube-scheduler            v1.18.15   v1.19.7
    kube-proxy                v1.18.15   v1.19.7
    CoreDNS                   1.6.7      1.7.0
    etcd                      3.4.3-0    3.4.13-0
    
    You can now apply the upgrade by executing the following command:
    
      kubeadm upgrade apply v1.19.7
    
    The table below shows the current state of component configs as understood by this version of kubeadm.
    Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
    resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
    upgrade to is denoted in the "PREFERRED VERSION" column.
    
    API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
    kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
    kubelet.config.k8s.io     v1beta1           v1beta1             no
    
  3. Upgrade the node to the latest 1.19.x version of Kubernetes.

    sudo -E /usr/bin/kubeadm upgrade apply v1.19.7
    

    Note

    Do not reboot the system yet.

  4. Upgrade Additional Control Nodes to 1.19.x

    In a multi-node control plane, verify that all control plane nodes are updated before upgrading the worker nodes/SUTs.
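    As in the 1.18.15 phase, a typical sequence on each additional control plane node is to run the node upgrade, this time with the released kubeadm, for example:

    sudo -E kubeadm upgrade node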

  5. Upgrade Other Nodes to 1.19.x

    For each of the other nodes:

    1. On the Admin node, drain the Client FIRST

      kubectl drain <CLIENT_NODE_NAME> --ignore-daemonsets
      
    2. Back on the Client node, upgrade Kubernetes on the Client

      sudo -E kubeadm upgrade node
      
    3. On the Admin node, re-enable the Client

      kubectl uncordon <CLIENT_NODE_NAME>
      
    4. Back on the Client node, if you wish to reboot the Client, it is now safe to do so.

      sudo reboot
      
  6. Reboot the Control Node (optional)

    If you wish to reboot the nodes, it is now safe to do so.

    sudo reboot

Congratulations!

You have successfully migrated your Kubernetes cluster on Clear Linux OS from 1.17.x to 1.19.x.

Clean up: Remove the migration bundle from each node

sudo -E swupd bundle-remove k8s-migration
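
To confirm removal, you can list the installed bundles and check that k8s-migration is no longer present:

swupd bundle-list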