
How to secure Kubernetes clusters in 7 steps

Kubernetes security is critical for an enterprise's welfare. To protect Kubernetes clusters, create a plan to implement security protocols and control access.

As container and container orchestrator adoption grows across enterprises and small and medium-sized businesses, so does the need to protect any critical or sensitive infrastructure that runs container workloads.

As Kubernetes is the most popular container orchestration tool, let's discuss security best practices organizations should adopt to secure their Kubernetes clusters.

1. Upgrade Kubernetes to latest version

The most basic, and most often neglected, security best practice is to keep Kubernetes environments up to date. Take advantage of new updates and version releases for their security features and bug fixes. In addition, run the latest stable version in a test environment before deploying it to the production cluster.
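As a sketch, on clusters built with kubeadm, checking and applying an upgrade typically looks like the following; the target version shown is a placeholder:

```shell
# Compare the running control plane and client versions against the latest release
kubectl version

# On a kubeadm-managed control plane node:
sudo kubeadm upgrade plan            # lists the versions available to upgrade to
sudo kubeadm upgrade apply v1.29.0   # placeholder target version

# Afterward, upgrade the kubelet and kubectl packages on each node
# and restart the kubelet service.
```

Managed Kubernetes services (EKS, GKE, AKS) expose equivalent upgrade workflows through their own tooling, so the exact commands vary by platform.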

2. Secure Kubernetes API server authentication

Kubernetes APIs serve as the primary access point for a Kubernetes cluster. Admins or service accounts can access APIs through the command-line utility kubectl, REST API calls or other client SDKs.

The Kubernetes API server, also known as kube-apiserver, hosts the APIs and forms the core of a Kubernetes cluster. The server grants access and ensures a cluster is up and running.

As a best practice, all API calls in the cluster must use Transport Layer Security (TLS) encryption. Adopt an API authentication mechanism for the API servers that matches the access requirements.

Common authentication methods include simple certificates or a bearer token. Large-scale, enterprise-level clusters should integrate with third-party OpenID Connect providers or Lightweight Directory Access Protocol servers to segregate users into specific groups and control access. Refer to the official Kubernetes documentation for an overview on how to authenticate users and for authentication strategies.
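As an illustration, OIDC integration is configured through flags on the kube-apiserver; the issuer URL and client ID below are placeholders for an organization's own identity provider:

```shell
# Placeholder values: substitute your identity provider's issuer URL and client ID
kube-apiserver \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```

The `--oidc-groups-claim` flag maps provider groups onto Kubernetes groups, which is what makes the group-based access control described above possible.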

3. Enable role-based access control authorization

Role-based access control (RBAC) is an access control mechanism that grants users and applications only the specific permissions they need, following the least-privilege model. This might seem time-consuming -- it does require additional work to set up -- but it's impossible to secure large-scale Kubernetes clusters that run production workloads without implementing RBAC policies.

The following are some Kubernetes RBAC best practices administrators should follow:

  • To enforce RBAC as a standard configuration for cluster security, enable RBAC in the API server by passing the --authorization-mode=RBAC parameter.
  • Use dedicated service accounts per application, and avoid using the default service accounts Kubernetes creates. Dedicated service accounts enable admins to enforce RBAC on a per-application basis and provide granular control over the access granted to each application's resources.
  • Reduce optional API server flags to shrink the API server's attack surface. Each flag enables a certain aspect of cluster management, which can expose the API server. Minimize using these optional flags:
    • --anonymous-auth;
    • --insecure-bind-address; and
    • --insecure-port.
  • For an RBAC system to be effective, enforce least privilege. When cluster administrators assign only the permissions a user or application requires, everyone can still perform their job without excess access. Do not grant any additional privileges, and avoid wildcard verbs ["*"] or blanket access.
  • Update and continuously adjust RBAC policies so they don't become outdated. Remove any permissions no longer required. This can be tedious, but it's worth the work to secure production workloads.
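As an illustrative sketch of these practices, the following Role and RoleBinding grant a dedicated service account read-only access to pods in one namespace; the namespace and account names are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging             # placeholder namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # explicit verbs, no wildcards
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
- kind: ServiceAccount
  name: app-sa                   # placeholder dedicated service account
  namespace: staging
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Using a Role rather than a ClusterRole keeps the grant scoped to a single namespace, which reinforces the least-privilege model.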

4. Control access to the kubelet

The kubelet is an agent that runs on each node of the cluster. It interacts with users through a set of APIs that control the pods running on the nodes and performs specific operations. Unauthorized access to the kubelet gives attackers access to the APIs and can compromise node or cluster security.

Take the following steps to reduce this attack surface and prevent unauthorized access to the APIs through the kubelet:

  1. Disable anonymous access by setting the --anonymous-auth flag to false when starting the kubelet: --anonymous-auth=false.
  2. Start the kube-apiserver with the --kubelet-client-certificate and --kubelet-client-key flags. This ensures the API server authenticates to the kubelet and stops anonymous calls.
  3. The kubelet provides a read-only API, which admins can access without authentication. This could expose potentially sensitive information about the cluster, so admins should close the read-only port using the following flag: --read-only-port=0.
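On clusters that configure the kubelet through a configuration file rather than command-line flags, the same hardening can be expressed as follows; the CA file path is an assumption and depends on how the cluster was provisioned:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                            # step 1: disable anonymous access
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt  # assumed path; verifies API server client certs (step 2)
readOnlyPort: 0                               # step 3: close the unauthenticated read-only port
```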

5. Harden node security

To harden the node security that the pods run on, start with the following:

Configuration standards and benchmarks. Configure the host correctly as per the security recommendations. Validate clusters against Center for Internet Security benchmarks tied to specific Kubernetes releases.

Admin access minimization. Reduce the attack surface area by reducing the administrative access on Kubernetes nodes.

Node isolation and restrictions. Run specific pods on certain nodes or group of nodes. This ensures the pods run on nodes with specific isolation and security configurations.

To control which nodes a pod can access, add labels to node objects to enable pods to target specific nodes:

kubectl label nodes <node name> <label key>=<label value>

Once the node label is applied, add a nodeSelector to the pod deployments so the pod deploys to the selected node, like in the following YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    env: staging
  containers:
  - name: nginx-staging
    image: nginx

6. Set up namespaces and network policies

Namespaces isolate sensitive workloads from nonsensitive ones. Even though managing multiple namespaces can be complex, it makes it easier to implement security controls like network policies on sensitive workloads to control the traffic flow to and from pods.
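For instance, a default-deny ingress policy in a sensitive namespace blocks all inbound pod traffic until explicit allow rules are added; the namespace name here is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: sensitive-workloads   # placeholder namespace
spec:
  podSelector: {}                  # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                        # no ingress rules listed, so all inbound traffic is denied
```

Note that network policies only take effect if the cluster's network plugin supports enforcing them.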

7. Enable audit logging

Enable audit logs for Kubernetes clusters, and monitor them for malicious activity and suspicious API calls. Kubernetes can keep granular records of actions performed in the cluster, so audit logs can surface potential security issues in near real time. For example, an attacker trying to brute force a password might generate repeated authentication and authorization failures in the logs; a repetitive pattern like that can indicate a security issue.

Audit logs are disabled by default; to enable them, use the Kubernetes audit policy, which enables admins to define rules at one of four audit levels:

  1. None. Don't log events that match this rule.
  2. Metadata. Log request metadata, such as requesting user, timestamp, resource and verb.
  3. Request. Log event metadata and request body but not response body. This does not apply for nonresource requests.
  4. RequestResponse. Log event metadata, requests and response bodies. This does not apply for nonresource requests.

Follow the steps below to enable audit logs on Kubernetes clusters:

  1. Start by using SSH to connect to the master node.
  2. Create an audit log policy file using the following YAML, and save it as a YAML file:
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata
        resources:
          - group: ""
            resources: ["pods/log", "pods/status"]
      - level: RequestResponse
        resources:
          - group: ""
            resources: ["pods"]
  3. Create a new directory on the master node to store the audit logs, for example, at /kube/auditlogs/.
  4. To configure the kube-apiserver to load the audit policy, edit the manifest file at /etc/kubernetes/manifests/kube-apiserver.yaml, and add the --audit-policy-file flag pointing to the policy YAML created in step 2. Also define --audit-log-path to direct audit logs to a specific file.
  5. Save and exit the file.
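Inside the kube-apiserver manifest, the added flags might look like the following sketch; the policy file path is an example, and the log path reuses the directory created in step 3:

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags ...
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml   # example path to the policy file
    - --audit-log-path=/kube/auditlogs/audit.log              # log file in the directory from step 3
```

Because kube-apiserver runs as a static pod, the kubelet restarts it automatically once the manifest file is saved.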

Next Steps

Step-by-step guide to working with Crossplane and Kubernetes

Boost cluster security with Kubernetes vulnerability scanning
