Kubernetes Security 101

Kubernetes is the de facto container orchestrator in the market today, with no close competitor. It has radically changed software development and enabled microservice architectures for an agile and scalable world. However, security remains one of its weak points.

 

There’s no single product, tool, or platform that makes Kubernetes secure, because Kubernetes is a melting pot of many things: the complete software development lifecycle, cloud providers, storage and network interfaces, container runtimes, and, finally, different teams including but not limited to developers, DevOps engineers, and system admins.

 

This blog will discuss the security challenges of Kubernetes and dive into every step of the security lifecycle with best practices you should follow.

Kubernetes Security Challenges

Kubernetes offers flexibility and scalability for applications, but it also comes with several challenges:

  • Node and cluster networking: Kubernetes distributes a workload to worker nodes, which can be a part of cloud providers’ or on-premises data centers with private and public networks. If the rules are not set well between networks, it is possible to reach data-sensitive services running in the clusters.
  • Large scale: Kubernetes is designed to be highly scalable, with up to 5,000 nodes and 300,000 containers. When the number of containers and applications increases, it becomes challenging to maintain a holistic view of the cluster. In addition, security incidents and breaches may pose a greater risk when you scale up.
  • Container images: Kubernetes distributes and runs container images containing the application binary, external dependencies, and a base operating system, but it does not check whether a container image has any vulnerabilities before running it.
  • Default (and not secure) options: Kubernetes simplifies how to create and deploy applications. Kubernetes API comes with a bunch of default values to easily create and manage resources in the cluster, but most default configuration parameters create an application stack with various potential exposures.
  • Runtime security: Containers run on worker nodes and create a new layer for security teams to watch. For instance, malicious code running in a container could scan for and try to connect to other applications running on the same node. Therefore, you should harden containers and enforce their isolation at runtime.
  • Compliance: Policy analysis and compliance checks become a daily task when it comes to enterprise-level applications. With Kubernetes and its distributed nature, it is not straightforward to achieve easy auditing via conventional approaches. 
  • Easy lateral movement: Kubernetes has multiple layers of tools to run the complete cluster. Every component communicates with the others and takes actions on nodes, containers, and various cloud resources. If RBAC rules are not applied well—for example, if least-privilege access is not enforced—lateral movement for a malicious actor is fairly easy and dangerous.

 

Kubernetes comes with these—and most likely more soon-to-be-discovered—security challenges. In order to address them, you should integrate a security mindset into every step of your application lifecycle. 

Kubernetes Security Lifecycle and Best Practices

To address the challenges discussed above, there are several recommended practices you can implement for different parts of your Kubernetes lifecycle. 

Infrastructure

Kubernetes clusters live on infrastructure with their own control plane and worker nodes; unless you use a managed Kubernetes service like EKS, AKS, or GKE, you operate all of it yourself. Either way, it’s essential to keep the foundation solid.

 

Kubernetes is a set of open-source applications running together to form the Kubernetes control plane and API. Like every other open-source project, only the most recent releases receive security patches. This means you should make sure to keep your Kubernetes API machinery up to date.

 

The Kubernetes API server is the central place where internal and external systems connect, so you should disable anonymous access and require authenticated, TLS-encrypted connections.
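
As an illustrative sketch (the image tag and certificate paths are hypothetical and depend on your setup), an excerpt of a hardened kube-apiserver static pod spec might look like:

```yaml
# Excerpt of a kube-apiserver static pod spec; paths and version are examples.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.28.0
      command:
        - kube-apiserver
        - --anonymous-auth=false           # reject unauthenticated requests
        - --authorization-mode=Node,RBAC   # enforce RBAC authorization
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
```

On managed services, these flags are set by the cloud provider; on self-managed clusters, kubeadm generates a similar manifest that you can audit.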

 

etcd is the key/value store where Kubernetes keeps the specification and state of all the resources in a cluster. Without encryption of the data at rest and secure communication, etcd instances are open to exposure. To mitigate the risk, configure and use the encryption capabilities of Kubernetes, and integrate external providers (such as a cloud KMS) if needed. Kubernetes has an official guide in the docs for encrypting secret data at rest.
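
Following that guide, encryption at rest is enabled with an `EncryptionConfiguration` file passed to the API server via `--encryption-provider-config`. A minimal sketch (the key name is arbitrary, and the key material is elided):

```yaml
# EncryptionConfiguration sketch: encrypt Secrets at rest with AES-CBC.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                 # encrypt Secret objects stored in etcd
    providers:
      - aescbc:
          keys:
            - name: key1        # arbitrary key name
              secret: <base64-encoded 32-byte key>   # supply your own key
      - identity: {}            # fallback: read pre-existing unencrypted data
```

Provider order matters: the first provider encrypts new writes, while later providers are only used to decrypt existing data.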

 

kubelet is the Kubernetes agent running on the worker nodes. It connects to the Kubernetes API to fetch cluster resource definitions, secrets, and config maps. When it is misconfigured—for example, when the --anonymous-auth flag is left enabled—it is possible to abuse the kubelet’s access, leading to breaches. Therefore, you should check all kubelet options for default values and choose secure parameters. If you’re using a managed Kubernetes service, you can rely on the cloud provider, since the kubelet is preconfigured following the latest security standards.
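
On self-managed nodes, these settings can be expressed in a `KubeletConfiguration` file. A hedged sketch of the security-relevant fields:

```yaml
# KubeletConfiguration sketch: disable anonymous access, delegate auth to the API server.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # do not serve unauthenticated kubelet API requests
  webhook:
    enabled: true       # authenticate callers via the API server
authorization:
  mode: Webhook         # delegate authorization decisions as well
readOnlyPort: 0         # disable the legacy unauthenticated read-only port
```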

 

Kubernetes nodes are the servers that run the container workload. Nodes can be virtual machines or bare metal servers with a container-optimized operating system. In addition, volumes are attached to these nodes to be accessible inside the containers. Data leaks and escalation attacks are inevitable when nodes are not secure, so it’s essential to keep nodes lean, with a minimal set of libraries, dependencies, users, and access levels.

Container Build

Kubernetes is a container orchestration platform that distributes, runs, and manages containers across the cluster. But Kubernetes does not check the contents of container images before running them; it simply downloads them from the specified container registry and runs them. This means that even a single container with malicious code could compromise your entire cluster and leak sensitive data to third-party intruders.

In the container build stage, you should implement the following best practices to ensure that container images are secure.

 

Container images start with a base image similar to an operating system. It is suggested to use minimal base images—like Scratch, Alpine Linux, or other slim images—removing unnecessary components and always using the latest versions.

 

Installing other applications like databases, message queues, or monitoring systems is prevalent in Kubernetes clusters. For all these applications, you will use the prebuilt container images, but you need to check what is installed by these containers since not all the container images in your clusters will be built and managed by you. The well-known approach is to use an image scanner to identify vulnerabilities in the container images and detect faulty base images, libraries, or third-party dependencies. For managed container registries like AWS ECR, GCP Container Registry, JFrog, or Docker Hub, there are already in-house scanners ready to use during pipeline stages and statically for images in the registry.

 

Container build and scanning should be automated and integrated into your CI/CD pipeline to create automatic failure when a high-severity vulnerability is found.
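As one way to wire this up (the job below is a hypothetical GitHub Actions sketch using the open-source Trivy scanner; image name, tag, and action version are illustrative), a pipeline stage can fail the build when high-severity vulnerabilities are found:

```yaml
# Hypothetical CI job: build an image, scan it, fail on HIGH/CRITICAL findings.
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: myapp:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: "1"        # non-zero exit fails the pipeline on findings
```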

Deployment

All applications are deployed to Kubernetes as a set of resource definitions. When the number of applications and users increases, tracking what is deployed, how, and where becomes challenging. In addition, a holistic view of access is required for enterprise-level environments. This makes the deployment phase critical to keeping a Kubernetes cluster secure.

 

Kubernetes is a multi-tenant platform with multiple users and applications. The Kubernetes-native isolation method is namespaces, which separate users and applications from each other. Using separate namespaces for different teams, applications, and environments is recommended so they don’t interfere with one another. Complete segregation of environments is even more secure when achieved by deploying to different clusters on different networks—and in different cloud accounts.
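
A namespace is a small resource to define; a sketch with a hypothetical team/environment naming scheme, including the standard Pod Security Standards enforcement label:

```yaml
# Hypothetical per-team, per-environment namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-staging
  labels:
    team: payments                              # illustrative ownership label
    environment: staging
    pod-security.kubernetes.io/enforce: restricted   # enforce the restricted Pod Security Standard
```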

 

By default, every pod in a Kubernetes cluster can connect to any other pod in the cluster. Network policies control the traffic inside and outside the cluster. These are critical if there are sensitive applications that should not be connected to by other—potentially malicious—pods in the cluster. 
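
A common starting point is a default-deny policy for a namespace, then explicit allow rules on top. A sketch (the namespace name is hypothetical):

```yaml
# Deny all ingress traffic to every pod in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments-staging   # hypothetical namespace
spec:
  podSelector: {}               # empty selector matches every pod
  policyTypes:
    - Ingress                   # no ingress rules listed, so all ingress is denied
```

With this in place, each application that legitimately needs to be reached gets its own narrowly scoped allow policy.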

 

Secrets are the Kubernetes resources for storing sensitive data like passwords, tokens, or certificates. This data is mounted into containers as files or environment variables for use at runtime. Therefore, deploy and mount only the secrets a workload actually needs, and configure RBAC rules to control who can access them. An even better solution is to integrate an external secrets manager, like Vault, into the cluster.
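
Mounting only the needed keys of a Secret, read-only, can be sketched like this (pod name, image, and Secret name are hypothetical):

```yaml
# Mount a single key from a Secret as a read-only file, not an env variable.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0                 # hypothetical image
      volumeMounts:
        - name: db-credentials
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials     # hypothetical Secret
        items:
          - key: password              # expose only the key the app needs
            path: password
```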

 

Containers run on the worker nodes as Linux processes. In Kubernetes, up to 110 pods—each with one or more containers—can run on a node, making the node a shared place that should be well-isolated between containers. Avoid running as the root user, granting privileged permissions, and using host networking. A read-only file system and dropping unnecessary Linux capabilities also help.
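
These recommendations map directly onto a pod’s `securityContext`. A sketch (pod name and image are hypothetical):

```yaml
# Hardened pod sketch: non-root, no privilege escalation, read-only root FS.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  hostNetwork: false                    # do not share the node's network namespace
  containers:
    - name: app
      image: myapp:1.0                  # hypothetical image
      securityContext:
        runAsNonRoot: true              # refuse to start if the image runs as root
        runAsUser: 10001                # arbitrary non-root UID
        privileged: false
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                 # drop every Linux capability
```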

 

Image scanning results should also be integrated into the deployment phase—for example, via admission controllers—to enforce policies based on vulnerability analysis.

 

Kubernetes labels and annotations store metadata and are part of every Kubernetes resource. Adding support-related metadata—owner, responsible party, operator—to the resources could easily decrease issue resolution time.
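
A small metadata fragment illustrates the idea (the label values and annotation keys are hypothetical; the `app.kubernetes.io/*` keys are the community’s recommended labels):

```yaml
# Metadata fragment attachable to any Kubernetes resource.
metadata:
  labels:
    app.kubernetes.io/name: checkout       # recommended common label
    app.kubernetes.io/managed-by: helm
    team: payments                         # hypothetical ownership label
  annotations:
    oncall: "#payments-oncall"             # hypothetical support channel
```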

 

Role-based access control (RBAC) is the Kubernetes-native method of regulating access to resources living in the cluster. RBAC policies are highly granular, down to defining access levels for a single resource. A robust set of RBAC policies will limit the damage even if an intruder gets into your system.
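
A minimal least-privilege sketch: a namespaced `Role` granting read-only access to pods, bound to a single user (namespace and user name are hypothetical):

```yaml
# Read-only access to pods in one namespace, granted to one user.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments-staging        # hypothetical namespace
rules:
  - apiGroups: [""]                  # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: payments-staging
subjects:
  - kind: User
    name: jane                       # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```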

Runtime

Runtime security in Kubernetes focuses on the protection of containers while they’re running in the cluster. It is an overlooked part of security and thus a favorite with intruders. It is highly challenging in Kubernetes to find active threats in runtime due to its high scalability and distributed nature. 

 

With the following best practices, you can ensure your workloads run securely. New vulnerabilities are continuously discovered in applications you have already deployed, so regularly re-checking vulnerability scans against deployed applications is critical to minimizing risk.

 

The Kubernetes network is governed by networking plugins and policies. Comparing the actual traffic in the cluster to the desired—limited—traffic level is helpful for identifying anomalies and detecting malicious containers. You can also establish centralized monitoring of application health and log collection to support anomaly detection.

 

Multiple environments—development, testing, staging, and production—are part of standard cloud-native software delivery. It is expected for applications to behave similarly in staging and production environments. So, consider comparing and analyzing runtime activities of identical deployments in multiple environments to detect security incidents. Furthermore, design your landscape for separating the environments as much as possible. One way to do this is by using different networks and cloud accounts.

 

You can check our article on hardening Amazon EKS security for examples and hands-on steps to see these best practices in action. 

Conclusion

Kubernetes is the indisputable leader in container orchestration, but holistic security is not within its scope. In other words, it’s not fair to say Kubernetes is secure by default. Integrating security measures into every stage of its lifecycle takes meticulous care, and it requires Kubernetes-native tools that work seamlessly at scale.

 

Zesty offers automated cloud optimization to reduce costs, free resources, and maximize efficiency. With its rich cloud automation features, you can benefit from a robust platform at every stage of the Kubernetes security lifecycle. 

Book a demo now and start running your cloud on autopilot.