Kubernetes*

This tutorial describes how to install, configure, and start the Kubernetes container orchestration system on Clear Linux* OS.

A Kubernetes cluster can be set up on Clear Linux OS using the Clear Linux OS cloud-native-setup scripts to automate the process, or it can be set up manually step by step. This tutorial covers both scenarios.

Prerequisites

This tutorial assumes you have already installed Clear Linux OS. For detailed instructions on installing Clear Linux OS on a bare metal system, follow the bare metal installation tutorial.

  1. Review the requirements for kubeadm and make sure the host system satisfies them.

  2. Before you continue, update your Clear Linux OS installation with the following command:

    sudo swupd update
    

    Learn about the benefits of having an up-to-date system for cloud orchestration on the swupd page.

  3. Kubernetes, a set of supported CRI runtimes, CNI and cloud-native-setup scripts are included in the cloud-native-basic bundle. Install the cloud-native-basic bundle to get these components:

    sudo swupd bundle-add cloud-native-basic
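
    If you want to confirm that the bundle and its key components are present before continuing, a quick check is shown below; exact version output varies with your installation:

    swupd bundle-list | grep cloud-native-basic
    kubeadm version -o short
    crio --version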
    

Set up Kubernetes automatically

Clear Linux OS provides cloud-native-setup scripts to automate system setup and Kubernetes cluster initialization, which allows you to get a cluster up and running quickly.

Note

By default, the scripts will update Clear Linux OS to the latest version, set up the system as a Kubernetes master node with Canal for container networking and CRI-O as the container runtime, and remove the taint from the master node so that workloads can run on it. Kata Containers is installed as an optional alternative runtime. The script can be configured to use other CNIs and CRIs by following the directions in the README.

See What is a Container Network Interface (CNI)? and What is a Container Runtime Interface (CRI)? for more information.

Important

If network proxy settings are required for Internet connectivity, configure them now because the scripts will propagate proxy configuration based on the running configuration. It is especially important to set the no_proxy variable appropriately for Kubernetes.

The script also writes the proxy environment variables from the running shell into the /etc/environment and /etc/profile.d/proxy.sh files, if they exist, when it is executed.

See the Setting proxy servers for Kubernetes section for details.

  1. Run the setup_system.sh script to configure the Clear Linux OS system settings.

    sudo /usr/share/clr-k8s-examples/setup_system.sh
    
  2. Stop the docker and containerd services to avoid conflicting CRIs being detected. The scripts use CRI-O as the CRI.

    sudo systemctl stop docker
    sudo systemctl stop containerd
    
  3. Install git, as it is a dependency of the create_stack.sh script.

    sudo swupd bundle-add git
    
  4. Run the create_stack.sh script to initialize the Kubernetes node and set up a container network plugin.

    sudo /usr/share/clr-k8s-examples/create_stack.sh minimal
    
  5. Follow the output on the screen and continue on to the section on using your cluster.
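
    Once the script finishes, you can verify that the node and the control-plane pods came up. Depending on how the script configured kubectl access, you may need to run these commands as root or point the KUBECONFIG environment variable at /etc/kubernetes/admin.conf, and it can take a few minutes for all pods to reach the Running state:

    kubectl get nodes -o wide
    kubectl get pods --all-namespaces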

Uninstalling

  1. If you need to delete the Kubernetes cluster or want to start from scratch, run the reset_stack.sh script.

    Warning

    This stops all components in the stack, including Kubernetes and all CNIs and CRIs, and deletes all containers and networks.

    sudo /usr/share/clr-k8s-examples/reset_stack.sh
    

Set up Kubernetes manually

Configure host system

This tutorial uses the basic default Kubernetes configuration to get started. You can customize your Kubernetes configuration according to your specific deployment and security needs.

The Kubernetes administration tool, kubeadm, performs some “preflight checks” when initializing and starting a cluster. The steps below are necessary to ensure those preflight checks pass successfully.

  1. Enable IP forwarding:

    • Create the file /etc/sysctl.d/99-kubernetes-cri.conf to set the kernel parameters required by Kubernetes, including net.ipv4.ip_forward:

      sudo mkdir -p /etc/sysctl.d/
      
      sudo tee /etc/sysctl.d/99-kubernetes-cri.conf > /dev/null <<EOF
      net.bridge.bridge-nf-call-iptables  = 1
      net.ipv4.ip_forward                 = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      
    • Apply the change:

      sudo sysctl --system
      
  2. Disable swap:

    sudo systemctl mask $(sed -n -e 's#^/var/\([0-9a-z]*\).*#var-\1.swap#p' /proc/swaps) 2>/dev/null
    sudo swapoff -a
    

    Note

    Kubernetes is designed to work without swap. Performance degradation of other workloads can occur with swap disabled on systems with constrained memory resources.

  3. Add the system's hostname to the /etc/hosts file. Kubernetes reads this file to locate the master host.

    echo "127.0.0.1 localhost `hostname`" | sudo tee --append /etc/hosts
    
  4. Enable the kubelet agent service to start at boot automatically:

    sudo systemctl enable kubelet.service
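
Before continuing, you can optionally confirm that the settings above took effect. The commands below are a quick sanity check; the expected results are noted in the comments:

    # Expected to report net.ipv4.ip_forward = 1
    sysctl net.ipv4.ip_forward

    # Expected to print nothing once swap is disabled
    swapon --show

    # Expected to report "enabled"
    systemctl is-enabled kubelet.service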
    

Important

If network proxy settings are required for Internet connectivity, configure them now because the scripts will propagate proxy configuration based on the running configuration. It is especially important to set the no_proxy variable for Kubernetes. See the Setting proxy servers for Kubernetes section for details.

Initialize the master node

In Kubernetes, a master node is part of the Kubernetes Control Plane.

Initializing a new Kubernetes cluster involves crafting a kubeadm init command. Adding parameters to this command can control the fundamental operating components of the cluster. This means it is important to understand and choose network and runtime options before running a kubeadm init command.

Choose a pod network add-on

See What is a Container Network Interface (CNI)? for information on what pod network add-ons and CNIs are.

It is important to decide which CNI will be used early because some pod network add-ons require configuration during cluster initialization. Check whether or not your add-on requires special flags when you initialize the master control plane.

If your chosen network add-on requires appending to the kubeadm init command, make note of it before continuing. For example, if you choose the flannel pod network add-on, then in later steps you must add the following to the kubeadm init command:

--pod-network-cidr 10.244.0.0/16

Important

The version of the CNI plugins installed must be compatible with the version of Kubernetes that is installed; otherwise, the cluster may fail. Check the Kubernetes version with kubeadm version -o short and refer to the documentation of the CNI plugins to obtain a compatible version.
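
For example, to see the Kubernetes version in use and the CNI plugin binaries that Clear Linux OS installs (the plugin directory shown here is the same one referenced later in this tutorial), you can run:

    kubeadm version -o short
    ls /usr/libexec/cni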

Choose a container runtime

See What is a Container Runtime Interface (CRI)? for more information on what a CRI is.

Clear Linux OS supports Kubernetes with the runtimes described below (CRI-O, containerd, and Docker), with or without Kata Containers.

The container runtime that you choose dictates the steps necessary to initialize the master node with kubeadm init.

CRI-O

For information on CRI-O as a Kubernetes CRI, see What is CRI-O?. To use CRI-O as the Kubernetes CRI:

  1. Start the CRI-O service and enable it to run at boot automatically:

    sudo systemctl enable --now crio.service
    

    When the crio service starts for the first time, it will create a configuration file for crio at /etc/crio/crio.conf.

  2. Run the kubeadm command to initialize the master node with the --cri-socket parameter:

    Important

    You may need to add additional parameters to the command below, depending on the pod network add-on in use.

    In this example, the --pod-network-cidr 10.244.0.0/16 parameter is required to use flannel for pod networking. See Choose a pod network add-on for more information.

    sudo kubeadm init \
    --cri-socket=unix:///run/crio/crio.sock \
    --pod-network-cidr 10.244.0.0/16
    
  3. (Optional) By default, CRI-O uses runc as the runtime. CRI-O can optionally provide Kata Containers as a runtime. See the Add the Kata runtime to Kubernetes section for details.

    With CRI-O, Kata Containers can be selected as the runtime on a per-pod basis through a RuntimeClass; a sketch is shown after this list.

    Note

    If you are using CRI-O with Kata Containers as the runtime and choose flannel for pod networking (see Choose a pod network add-on), the /etc/crio/crio.conf file needs to include the setting below. On Clear Linux OS this is done automatically.

    [crio.runtime]
    manage_network_ns_lifecycle = true
    
  4. Once the cluster initialization is complete, continue reading about how to Use your cluster.
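
As noted in step 3 above, once the Kata runtime has been installed (see Add the Kata runtime to Kubernetes) it can be selected per pod through a RuntimeClass. The following is a minimal sketch run with a configured kubeconfig (see Use your cluster); it assumes a RuntimeClass named kata already exists, which depends on how Kata was installed (kata-deploy may register RuntimeClass objects for you, or you can create one whose handler matches a runtime defined in /etc/crio/crio.conf):

    cat << EOF | kubectl apply -f -
    # A RuntimeClass named "kata" must already exist, created either by
    # kata-deploy or manually with a handler matching the CRI-O configuration.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kata-test
    spec:
      runtimeClassName: kata
      containers:
      - name: test
        image: busybox
        command: ["sleep", "3600"]
    EOF

If the pod stays pending or fails to start, check that the RuntimeClass handler matches a runtime section in /etc/crio/crio.conf.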

containerd

For information on containerd as a Kubernetes CRI, see What is containerd?. To use containerd as the Kubernetes CRI:

  1. Start the containerd service and enable it to run at boot automatically:

    sudo systemctl enable --now containerd.service
    
  2. Configure kubelet to use containerd:

    sudo mkdir -p  /etc/systemd/system/kubelet.service.d/
    
    cat << EOF | sudo tee  /etc/systemd/system/kubelet.service.d/0-containerd.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
    EOF
    
  3. Configure kubelet to use systemd as the cgroup driver. Because systemd applies only the last definition it reads for KUBELET_EXTRA_ARGS, this drop-in must repeat the containerd flags from the previous step in addition to the cgroup driver flag. A quick way to verify the resulting kubelet flags is shown after this list.

    sudo mkdir -p /etc/systemd/system/kubelet.service.d/

    cat << EOF | sudo tee /etc/systemd/system/kubelet.service.d/10-cgroup-driver.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
    EOF
    
  4. Reload the systemd manager configuration.

    sudo systemctl daemon-reload
    
  5. Run the kubeadm command to initialize the master node with the --cri-socket parameter:

    Important

    You may need to add additional parameters to the command below, depending on the pod network add-on in use.

    In this example, the --pod-network-cidr 10.244.0.0/16 parameter is required to use flannel for pod networking. See Choose a pod network add-on for more information.

    sudo kubeadm init \
    --cri-socket=/run/containerd/containerd.sock \
    --pod-network-cidr 10.244.0.0/16
    
  6. (Optional) By default, containerd uses runc as the runtime. containerd can optionally provide Kata Containers as a runtime. See the Add the Kata runtime to Kubernetes section for details.

    With containerd, Kata Containers can likewise be selected on a per-pod basis through a RuntimeClass, as shown in the CRI-O example above.

  7. Once the cluster initialization is complete, continue reading about how to Use your cluster.
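
As a quick check of the drop-in files created in steps 2 and 3, you can inspect the unit configuration that kubelet will use after the reload; the output should show the containerd socket and the systemd cgroup driver:

    systemctl cat kubelet.service
    systemctl show kubelet --property=Environment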

Docker

For information on Docker, see What is Docker?. To use Docker as the Kubernetes container runtime:

  1. Make sure Docker is installed:

    sudo swupd bundle-add containers-basic
    
  2. Start the Docker service and enable it to start automatically at boot:

    sudo systemctl enable --now docker.service
    
  3. Configure kubelet to use the Clear Linux OS directory for cni-plugins and reload the service.

    sudo mkdir -p  /etc/systemd/system/kubelet.service.d/
    
    cat << EOF | sudo tee  /etc/systemd/system/kubelet.service.d/0-cni.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--cni-bin-dir=/usr/libexec/cni"
    EOF
    
    sudo systemctl daemon-reload
    
  4. Run the kubeadm command to initialize the master node:

    Important

    You may need to add additional parameters to the command below, depending on the pod network add-on in use.

    In this example, the --pod-network-cidr 10.244.0.0/16 parameter is required to use flannel for pod networking. See Choose a pod network add-on for more information.

    sudo kubeadm init \
    --pod-network-cidr 10.244.0.0/16
    
  5. Once the cluster initialization is complete, continue reading about how to Use your cluster.

Add the Kata runtime to Kubernetes

For information on Kata as a container runtime, see What is Kata Containers*?. Using Kata Containers is optional.

You can use kata-deploy to install all the necessary parts of Kata Containers after you have a Kubernetes cluster running with one of the CRIs using the default runc runtime. Follow the steps in the Kubernetes quick start section of the kata-containers GitHub README to install Kata.
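
Depending on the kata-deploy version, the installation may also register RuntimeClass objects for Kata. After it completes, you can list what is available and then reference one of the entries from a pod's runtimeClassName field:

    kubectl get runtimeclass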

Use your cluster

Once your master control plane is successfully initialized, instructions on how to use your cluster, along with its IP, token, and hash values, are displayed on the screen. It is important to record this information because it is required to join additional nodes to the cluster.

A successful initialization looks like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

...

You can now join any number of machines by running the following on each node
as root:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

With the first node of the cluster set up, you can continue expanding the cluster with additional nodes and start deploying containerized applications. For further information on using Kubernetes, see Related topics.
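
If you did not record the join command printed during initialization, it can be regenerated at any time on the master node, for example:

    sudo kubeadm token create --print-join-command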

Note

By default, the master node does not run any pods for security reasons. To set up a single-node cluster and allow the master node to also run pods, the master node must be untainted. See the Kubernetes documentation on control plane node isolation.
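
For reference, the taint is commonly removed with the command below; the exact taint key depends on your Kubernetes version (older releases use node-role.kubernetes.io/master, newer ones use node-role.kubernetes.io/control-plane):

    kubectl taint nodes --all node-role.kubernetes.io/master-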

Troubleshooting

Package configuration customization

Clear Linux OS is a stateless system that looks for user-defined package configuration files in the /etc/<package-name> directory to be used as default. If user-defined files are not found, Clear Linux OS uses the distribution-provided configuration files for each package.

If you customize any of the default package configuration files, you must store the customized files in the /etc/ directory. If you edit any of the distribution-provided default files, your changes will be lost in the next system update as the default files will be overwritten with the updated files.

Learn more about Stateless in Clear Linux OS.

Logs

  • Check the kubelet service logs with sudo journalctl -u kubelet
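
  • To follow the kubelet and container runtime logs together while debugging, one option is:

    sudo journalctl -f -u kubelet -u crio -u containerd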

Setting proxy servers for Kubernetes

If you receive any of the messages below, check outbound Internet access. You may be behind a proxy server.

  • Images cannot be pulled.

  • Connection refused error.

  • Connection timed-out or Access Refused errors.

  • Warnings like the following appear when kubeadm init is run:

    [WARNING HTTPProxy]: Connection to "https://<HOST-IP>" uses proxy "<PROXY-SERVER>". If that is not intended, adjust your proxy settings
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "<PROXY-SERVER>". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
    [WARNING HTTPProxyCIDR]: connection to "10.244.0.0/16" uses proxy "<PROXY-SERVER>". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
    

If you use an outbound proxy server, you must configure proxy settings appropriately for all components in the stack including kubectl and container runtime services.

Configure the proxy settings, using the standard HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables. The NO_PROXY values are especially important for Kubernetes to ensure private IP traffic does not try to go out the proxy.

  1. Set your environment proxy variables. Ensure that your local IP address is explicitly included in the environment variable NO_PROXY. Setting localhost is not sufficient!

    export http_proxy=http://proxy.example.com:80
    export https_proxy=http://proxy.example.com:443
    export no_proxy=.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,`hostname`,localhost
    

    Important

    kubeadm commands specifically use these shell variables for proxy configuration. Ensure they are set in your running terminal before running kubeadm commands.

  2. Run the following command to add systemd drop-in configurations for each service to include proxy settings:

    services=(kubelet docker crio containerd)
    for s in "${services[@]}"; do
    sudo mkdir -p "/etc/systemd/system/${s}.service.d/"
    cat << EOF | sudo tee "/etc/systemd/system/${s}.service.d/proxy.conf"
    [Service]
    Environment="HTTP_PROXY=${http_proxy}"
    Environment="HTTPS_PROXY=${https_proxy}"
    Environment="SOCKS_PROXY=${socks_proxy}"
    Environment="NO_PROXY=${no_proxy}"
    EOF
    done
    
  3. Reload the systemd manager configuration.

    sudo systemctl daemon-reload
    

If you had a previously failed initialization due to a proxy issue, restart the process with the kubeadm reset command.

DNS issues

  • <HOSTNAME> not found in <IP> message.

    Your DNS server may not be appropriately configured. Try adding an entry to the /etc/hosts file with your host’s IP and Name.

    Use the commands hostname and hostname -I to retrieve them.

    For example:

    10.200.50.20 myhost
    
  • coredns pods are stuck in container creating state and logs show entries similar to one of the following:

      Warning  FailedCreatePodSandBox  5m7s                 kubelet, kata3     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get network JSON for pod sandbox k8s_coredns-<ID>-5gpj2_kube-system_<UUID>: cannot convert version ["" "0.1.0" "0.2.0"] to 0.4.0
    
    In this case, the /etc/cni/net.d/10-flannel.conf file or another CNI configuration file is using an incompatible version. Delete the file and restart the stack.
    
    Warning  FailedCreatePodSandBox  117s (x197 over 45m)  kubelet, kata3     (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-<ID>-npsm5_kube-system_<UUID>: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
    

    In this case, there may be multiple CNI configuration files in the /etc/cni/net.d folder. Delete all the files in this directory and restart the stack.

    Warning  FailedScheduling  55s (x3 over 2m12s)  default-scheduler  0/1
    nodes are available: 1 node(s) had taints that the pod didn't tolerate.
    

    In this case, the node may still carry a not-ready taint because no CNI plugin is active, for example when there are multiple or stale CNI configuration files in the /etc/cni/net.d folder. Delete all the files in this directory, apply a CNI plugin, and restart the stack; a minimal cleanup sketch follows.
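
    A minimal sketch of that cleanup, assuming you then re-apply your chosen CNI plugin and restart the container runtime in use if needed:

    # Remove the stale or conflicting CNI configuration files
    sudo rm -f /etc/cni/net.d/*
    # Restart kubelet after re-applying the CNI plugin
    sudo systemctl restart kubelet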

Reference

What is Kubernetes?

Kubernetes (K8s) is an open source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

Kubernetes supports using a variety of container runtimes.

What is a Container Network Interface (CNI)?

In Kubernetes, a pod is a group of one or more containers and is the smallest deployable unit of computing in a Kubernetes cluster. Pods have shared storage/network internally but communication between pods requires additional configuration. If you want your pods to be able to communicate with each other you must choose and install a pod network add-on.

Some pod network add-ons enable advanced functionality with physical networks or cloud provider networks.

What is a Container Runtime Interface (CRI)?

Container runtimes are the underlying fabric that pod workloads execute inside of. Different container runtimes offer different balances between features, performance, and security.

Kubernetes allows integrating various container runtimes via a container runtime interface (CRI).

What is CRI-O?

CRI-O is a lightweight alternative to using Docker as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime, such as runc or Kata Containers, as the container runtime for running pods.

CRI-O allows setting a different runtime per pod.

What is containerd?

containerd is the runtime that the Docker engine is built on top of.

Kubernetes can use containerd directly instead of going through the Docker engine for increased robustness and performance. See the blog post on kubernetes containerd integration for more details.

containerd allows setting a different runtime per pod.

What is Docker?

Docker is an engine for running software packaged as functionally complete units, called containers, using the same operating system kernel.

The default built-in runtime support in Kubernetes uses the system Docker installation via Dockershim and, as a result, is one of the simplest to use. One limitation of using Dockershim is that all pods on the Kubernetes node inherit and use the default runtime that Docker is set to use. To specify a container runtime per pod, use CRI-O or containerd.

What is Kata Containers*?

Kata Containers is an alternative OCI-compatible runtime that secures container workloads in a lightweight virtual machine. It provides stronger workload isolation using hardware virtualization technology as a second layer of defense for untrusted workloads or multi-tenant scenarios.

The Kata Containers runtime (kata-runtime) adheres to OCI guidelines and works seamlessly with Kubernetes through Docker, containerd, or CRI-O.