Migrating from Docker to containerd

The last decade saw Docker and Kubernetes emerge as the preferred choices for running and orchestrating containers. As the flag-bearer of containerization, Docker played a tremendous role in a modern tech landscape that relies on rapid delivery cycles, enhanced scalability, and agility.

But time, tide, and technology move faster than one can imagine. As the industry leaned toward standardizing the containerized ecosystem for greater flexibility and interchangeability, organizations started embracing alternatives to the Docker runtime that comply with Open Container Initiative (OCI) standards.

In this article, we cover the basic components of a container-based ecosystem, why organizations are transitioning from Docker to containerd (a leading container runtime and CNCF graduate), and the steps involved in a typical migration.

Basics of a Container-Based Ecosystem

Some of the components of a containerized ecosystem include: 

- Containers are lightweight, logical packages of application code, along with its dependencies, that can operate seamlessly across any environment, including public/private cloud or on-premises. By virtualizing at the operating-system level, containers naturally facilitate automation while helping to reduce overhead in the provisioning and management of infrastructure resources.

- A container runtime is the low-level software component responsible for running containers from container images on a host operating system (OS). Also known as a container engine, it manages the complete lifecycle of a container. Some popular container runtimes include Docker, runc, containerd, CRI-O, rkt, and crun.

- Built to work with container runtimes, a container orchestrator relies on declarative configuration files to automate the operations needed to run containerized applications (a minimal sketch follows this list). While container runtimes manage individual containers, a container orchestrator is responsible for managing a cluster of containers. Some popular container orchestration frameworks include Kubernetes, Apache Mesos, Docker Swarm, and HashiCorp Nomad.

- The Container Runtime Interface (CRI) defines a standard API that enables an orchestrator to use more than one container runtime without having to recompile the components of the cluster. For the orchestrator to launch containerized applications, it requires a working runtime on each machine. The interface was developed by the Kubernetes project as a plugin interface that lets its kubelet service communicate with any compliant runtime without requiring direct integration with the Kubernetes source code. A short crictl illustration follows the benefits list below.
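
Returning to the orchestrator component above, here is a minimal sketch of the declarative model: a hypothetical nginx Deployment applied with kubectl. The name and image tag are placeholders; the point is that the orchestrator reconciles the cluster to the declared state regardless of which runtime each node uses:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nginx        # hypothetical name for illustration
spec:
  replicas: 2             # desired state: the orchestrator keeps two pods running
  selector:
    matchLabels:
      app: demo-nginx
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
EOF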

Benefits of leveraging the CRI include:

- A standard interface for accessing container runtimes

- Fostering collaboration between multiple development and infrastructure management teams

- Enhanced productivity by eliminating the need to recompile cluster components when switching runtimes
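
As the promised illustration, crictl (the CLI from the Kubernetes cri-tools project) can talk to any CRI-compliant runtime over its socket. The endpoint below assumes containerd; it would differ for other runtimes such as CRI-O:

# List containers and pod sandboxes through the CRI, independent of the runtime behind the socket
$ crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
$ crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods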

Why Migrate from Docker to containerd?

Docker was developed to enable the packaging of the executable components of an application into containers that can run in any environment. Contrary to popular belief, Docker was never developed as a container runtime; instead, the platform offered a feature-rich interface while leveraging the containerd runtime under the hood. This layered design is what allows organizations to migrate existing workloads and operate them seamlessly on containerd node images.
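
You can observe this layering on any Docker host: docker info reports both the containerd instance Docker delegates to and the low-level OCI runtime it ultimately uses (the exact fields vary by Docker version):

$ docker info | grep -iE 'containerd|runtime'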

Docker Deprecation

Since Docker is not CRI-compliant, Kubernetes had to implement an additional translation layer called Dockershim to allow the kubelet to interact with Docker; Docker, in turn, interfaced with the containerd runtime to actually execute containers. Once containerd exposed its own CRI plugin, Kubernetes planned to remove built-in support for Dockershim from upcoming versions; it also wanted to expand its horizon to incorporate a broader set of container runtimes, instead of maintaining dedicated support for Docker.

Eventually, in December 2020, Kubernetes announced that it would deprecate support for Docker as a runtime and focus on working with container runtimes that implement the CRI standard, like containerd. Although this came as an initial setback for organizations that were leveraging Docker as their preferred runtime, they quickly realized the enhanced security and resource efficiency containerd offers over Docker.

Benefits of containerd

Containerd was spun out of Docker as a self-contained, CRI-compliant container runtime and donated to the Cloud Native Computing Foundation (CNCF). Though the platform began as an extraction of Docker's low-level runtime features, containerd was gradually built up to perform full lifecycle management of containers (illustrated with the ctr commands after this list), including:

- Image transfer

- Low-level storage orchestration

- Network attachments

- Container execution
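
As a sketch of these lifecycle operations, containerd ships with the ctr debugging client. The image reference and container name below are only examples:

# Image transfer: pull an image into containerd's content store
$ ctr images pull docker.io/library/nginx:latest

# Container execution: start a container from the image
$ ctr run -d docker.io/library/nginx:latest demo-nginx

# Inspect, stop, and clean up the running task and its container
$ ctr tasks ls
$ ctr tasks kill demo-nginx
$ ctr tasks rm demo-nginx
$ ctr containers rm demo-nginx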

While the benefits of leveraging containerd as a runtime differ per use case, the following are some of its primary features:

Implements Open Container Initiative (OCI) Specifications

The OCI standardizes how container images and runtimes are built so that containers are accepted by any runtime, orchestrator, or deployment environment. Containerd leverages runc for creating, spawning, and running OCI-compliant containers, for both Docker and non-Docker container solutions.
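
To see the OCI layer directly, runc can generate the runtime specification that it (and containerd above it) consumes. This is a minimal sketch; the bundle directory and rootfs are placeholders:

# Create an OCI bundle directory with a root filesystem placeholder
$ mkdir -p mycontainer/rootfs
$ cd mycontainer

# Generate config.json, the OCI runtime specification runc executes
$ runc spec
$ head config.json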

CRI Compliance

Containerd also integrates seamlessly with the CRI API, which enables the kubelet service to communicate directly with the runtime. The API additionally allows organizations to orchestrate containers built by containerd and seamlessly integrate other runtimes into the Kubernetes ecosystem.

Resource Efficiency

Containerd is considered more resource-efficient and secure than Docker as a runtime. The platform offers superior performance thanks to lower computational overhead, a smaller memory footprint, and configurable resource limits.

How to Migrate from Docker to containerd

Quick Note: The following section demonstrates the steps to migrate from Docker to containerd in a self-managed Kubernetes cluster. Managed services such as Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS) offer different approaches to expedite the migration. To learn more, refer to your cloud provider's documentation for migrating Docker workloads to containerd.

Prerequisites

This demo requires the following setup:

- A working Kubernetes cluster

- Nodes with the Docker runtime installed

- A containerd instance

- Access to the CLI with kubectl (for running commands)

- An installed containerd CLI (for running commands)

Stage - 1: Getting the Nodes Ready for Migration

As the first step, check the details of the already-operational nodes by running the command:

$ kubectl get nodes -o wide

This returns the information of all nodes in the cluster, as shown below:

NAME    STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION      CONTAINER-RUNTIME
node1   Ready    control-plane,master   5m41s   v1.20.1   192.168.0.8   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1
node2   Ready    <none>                 2m34s   v1.20.1   192.168.0.7   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1
node3   Ready    <none>                 2m17s   v1.20.1   192.168.0.6   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1

Cordoning the Node

Run the cordon command to ensure that the control plane does not schedule any further pods on the target node. Start with the first worker node:

$ kubectl cordon node2

This returns the following output:

node/node2 cordoned

Draining the Node

Safely evict all pods from the node, as shown below, to ensure the migration does not affect existing workloads. The --ignore-daemonsets flag is required because DaemonSet-managed pods cannot be evicted (they would be recreated immediately); they are skipped instead, and the node will resume running them once it rejoins the cluster:

$ kubectl drain node2 --ignore-daemonsets --delete-emptydir-data

If successful, the terminal will display the following message:

node/node2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-m6l4j, kube-system/kube-router-nq8rt
node/node2 drained

Stopping Docker and Kubelet Services

Stop the kubelet and Docker services to keep the node from communicating with the cluster during the switch. Use the following commands:

$ systemctl stop kubelet
$ systemctl stop docker
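
Before proceeding, you can confirm that both services are actually stopped; systemctl accepts multiple units and prints one state per line:

$ systemctl is-active kubelet docker
inactive
inactive
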
Purging Docker

This final step of the stage is optional, but removing Docker and its dependencies from the machine frees up resources for other tools. Use the following command to purge the Docker-associated packages:

$ yum remove docker-ce docker-ce-cli
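
The command above assumes a yum-based distribution such as the CentOS nodes in this demo; on Debian- or Ubuntu-based nodes, the equivalent would be:

$ apt-get purge docker-ce docker-ce-cli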

Stage - 2: Migrating Workloads from Docker to containerd

Configuring containerd

To configure containerd as the node’s runtime, enable the CRI interface by commenting out the disabled_plugins line in /etc/containerd/config.toml:

$ vi /etc/containerd/config.toml
#disabled_plugins = ["cri"]

Generate the new default configuration file for containerd if it doesn’t exist:

$ containerd config default > /etc/containerd/config.toml

Quick note: Skip the above step if a containerd configuration file already exists on the node.

Restart the containerd service by running the command:

$ systemctl restart containerd
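
To confirm that the CRI plugin is now loaded, list containerd's plugins and check that the cri entry reports ok (the output format varies slightly by containerd version):

$ ctr plugins ls | grep cri
io.containerd.grpc.v1    cri    linux/amd64    ok
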
Changing the Container Runtime

To switch the runtime to containerd, open the /var/lib/kubelet/kubeadm-flags.env file and add the containerd runtime flags to the KUBELET_KUBEADM_ARGS variable, as shown below:

--container-runtime=remote
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
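
For reference, a complete kubeadm-flags.env line might look like the following; the pre-existing flags vary per cluster, so append the two runtime flags rather than replacing the line:

KUBELET_KUBEADM_ARGS="--network-plugin=cni --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
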
Restarting the Kubelet Service

Restart the kubelet service so the node can begin communicating with the control plane:

$ systemctl start kubelet
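
If the node does not come back as Ready, the kubelet logs are the first place to check:

$ systemctl status kubelet
$ journalctl -u kubelet --since "5 minutes ago"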

Stage - 3: Testing the Migration

Test whether the worker node uses containerd as its runtime by listing the nodes in the cluster:

$ kubectl get nodes -o wide

This will return a response similar to:

NAME    STATUS                     ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION      CONTAINER-RUNTIME
node1   Ready                      control-plane,master   87m   v1.20.1   192.168.0.8   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1
node2   Ready,SchedulingDisabled   <none>                 84m   v1.20.1   192.168.0.7   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   containerd://1.4.3
node3   Ready                      <none>                 84m   v1.20.1   192.168.0.6   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   docker://20.10.1

Notice that node2 now reports containerd as its runtime but has a status of Ready,SchedulingDisabled. This means the node is up and running but still can't accept scheduled pods because it was cordoned off in one of the earlier steps. Therefore, you'll need to re-enable scheduling by running the uncordon command:

$ kubectl uncordon node2

This returns the following output:

node/node2 uncordoned

Repeat the above steps for each remaining node to complete the cluster-wide migration.
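
For larger clusters, the per-node steps can be scripted. The sketch below is hypothetical: it assumes SSH access to each worker, that containerd is already installed and configured on every node, and that the kubelet flag changes from Stage 2 have been applied; the node names are placeholders:

# Drain each worker, switch its runtime over SSH, and return it to service
for node in node2 node3; do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  ssh "$node" 'systemctl stop kubelet docker && systemctl restart containerd && systemctl start kubelet'
  kubectl uncordon "$node"
done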

Conclusion

The term container is still synonymous with Docker. Initially, Kubernetes’ announcement of Docker’s deprecation as a container runtime seemed like a sudden disruption, but developers were quick to realize the benefits of containerd and noted that very little had changed with respect to operating a cluster in the cloud. 

Docker’s deprecation is only related to its usage as a container runtime for Kubernetes; organizations can still use Docker for building and maintaining container images. It is, however, crucial for organizations to assess the fallout and possible operational impacts while exploring and migrating to other low-level CRI-compliant runtimes. 

Use Kubernetes? Now you can increase flexibility and elasticity in Kubernetes environments. Meet Zesty Disk for Kubernetes.

Contact one of our cloud experts for more information. 