The API Server
The API server is the entry point into your cluster, and securing this component properly is critical.
Managed Kubernetes Considerations
If you run a managed Kubernetes service in the cloud, your cloud provider is responsible for isolating and securing the control plane components. This means you won’t have much influence over how the API server is configured. However, there are still a few critical steps you can take:
Don’t Leave the API Server Publicly Exposed
Even if authentication is enabled, exposing the API server to the internet leaves it susceptible to zero-day exploits, port scans, and other attacks. If it’s not absolutely necessary, keep it private.
Use Pod Admission Control
While you might not manage the control plane directly, you do need to secure other parts of the cluster (like the kubelet). Consider third-party solutions such as Kyverno or OPA for robust policy enforcement, or at the very least enable the built-in Pod Security Standards (baseline, restricted or privileged).
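If you go the built-in route, the Pod Security Standards are enforced per namespace through labels on the Namespace object. A minimal sketch, assuming a hypothetical namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments   # illustrative namespace name
  labels:
    # Reject pods that violate the "restricted" profile
    pod-security.kubernetes.io/enforce: restricted
    # Surface violations of the same profile as warnings and audit annotations
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Because the labels are applied per namespace, you can roll this out gradually: start with warn and audit, review the noise, and only then switch on enforce.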
Block Access to the Metadata Service
Prevent pods from accessing the cloud provider's metadata service to mitigate the risk of sensitive information (like credentials or API keys) being leaked. Use network policies to block access to the metadata service endpoint unless explicitly needed.
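As a sketch of what that can look like with a standard NetworkPolicy (the policy name and namespace are illustrative; most major clouds expose the metadata service on the link-local address 169.254.169.254):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-cloud-metadata   # illustrative name
  namespace: default           # apply per namespace as needed
spec:
  podSelector: {}              # selects all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0          # still allows all other egress; tighten further if you can
            except:
              - 169.254.169.254/32   # cloud metadata endpoint
```

Keep in mind this only has an effect if your CNI plugin actually enforces NetworkPolicy.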
Leverage Cloud Identity Management
Configure conditional access and MFA to reduce your attack surface. This ensures only authenticated, authorized users can access the API server.
Set Up IP Allowlisting
If you do need a public endpoint, restrict its reach to known IP ranges or certain geographic areas. This measure helps protect against unsolicited port scans and brute-force attempts.
Use Short-Lived Tokens
Prefer short-lived, automatically rotated credentials over long-lived static tokens. If a token is leaked, it remains valid only for a short period, limiting the potential damage.
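For workloads, bound service account tokens give you this behavior out of the box: the kubelet projects a token that expires and is rotated automatically. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: short-token-demo   # illustrative name
spec:
  serviceAccountName: default
  containers:
    - name: app
      image: nginx   # illustrative image
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: api-token
              # Rotated by the kubelet; expires after an hour instead of living forever
              expirationSeconds: 3600
```

For human access, pair this with an identity provider that issues short-lived tokens instead of handing out long-lived static credentials.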
Choose a CNI That Can Handle DDoS Traffic
Plugins like Cilium or Calico can help mitigate DDoS scenarios. Even if you use a cloud-based CDN or load balancer, having a CNI capable of dealing with attacks adds an extra layer of resilience. Several cloud providers now build their recommended dataplanes on Cilium, for example GKE's Dataplane V2 and Azure CNI powered by Cilium.
On-Premises Recommendations
If you’re running Kubernetes on-premises, you should rarely (if ever) need a publicly accessible API server. Treat it like any other internal service.
- mTLS Between API Server and Internal Components: By default, this is enabled, but ensure it remains so for full encryption in transit.
- OIDC for User Access: Simplify user management by integrating with an OIDC provider (e.g., Authentik or Keycloak).
- Enable Audit Logging: Forward logs to a SIEM or centralized log management system for monitoring and incident response (a starting-point audit policy is sketched below).
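Audit logging is driven by a policy file passed to the kube-apiserver via --audit-policy-file, together with a destination such as --audit-log-path. A minimal sketch to start from; tune the rules to your own noise tolerance:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Rules are evaluated top-down; the first match decides the audit level.
rules:
  # Drop noisy watch traffic from kube-proxy
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
  # Record who touched Secrets, but never write the secret payloads to the log
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Log request bodies for anything that changes state
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
  # Everything else at Metadata level
  - level: Metadata
```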
One last piece of advice:
Using CI/CD solutions like ArgoCD or FluxCD can allow developers to deploy applications without directly interacting with the API server. This reduces potential attack vectors but also introduces new components to secure. For example, as shown in this write-up, vulnerabilities in ArgoCD itself can be exploited. Always audit and secure any new tools you bring into your environment.
The etcd Server
For managed Kubernetes in the cloud, etcd is fully handled by your provider. However, these points apply if you’re running Kubernetes on-premises:
Keep etcd’s Private Key Safe
Only the API server communicates with etcd, so the client certificate and private key it uses are effectively the single key to the castle. Never allow them to leave the control plane.
Don’t Host Non-Essential Workloads on Control Plane Nodes
etcd runs on the control plane, so keep public-facing or non-critical workloads off these nodes to avoid unnecessary attack surface. This is non-negotiable.
Enable Encryption at Rest
Use an EncryptionConfiguration to ensure that etcd data (including Secrets) is encrypted at rest. Even if the client key used to authenticate with etcd is exposed, the encrypted secrets remain protected.
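A minimal sketch of what that configuration can look like; the key value is a placeholder, and the file is referenced from the kube-apiserver via --encryption-provider-config:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts new writes with the locally stored key below.
      # Placeholder value: generate your own base64-encoded 32-byte key,
      # or better, use a KMS provider so the key never sits on disk in plain text.
      - aescbc:
          keys:
            - name: key1
              secret: "<base64-encoded 32-byte key>"
      # identity must stay last so existing, still-unencrypted data can be read during migration
      - identity: {}
```

After enabling it, re-write existing Secrets (for example with kubectl get secrets --all-namespaces -o json | kubectl replace -f -) so they are actually stored encrypted.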
The Node
Kubernetes isn't some sort of wizardry. Someone once quipped, "Kubernetes is just a Linux distribution platform for microservices." That's a bit like saying "an airplane is just a bus with wings". Technically correct, but it misses all the complexity, layers, and features that make it truly powerful. Still, Kubernetes needs an operating system, hardware, cables, and electricity, just like any other platform.
- Run nothing on the nodes other than the Kubernetes components and their required dependencies.
- Do not expose any unnecessary ports.
- Keep SSH access to a minimum; ideally, disable traditional remote access entirely. Talos Linux, for example, uses its own TLS-secured API, removing the need for SSH-based remote management.
- The CA signing key is normally stored on the control plane node. Keep it safe; otherwise, attackers can sign their own certificates and stand up new control plane or worker nodes as they please.
- Never reuse the same CA across multiple clusters—if one cluster is compromised, attackers effectively get a VIP pass to the rest.
- As with any other host, solidify your patching routines and keep the underlying operating system up to date.
Wrapping Up
Securing the control plane is fundamental to protecting your Kubernetes cluster. As the heart of your infrastructure, a compromised control plane could grant attackers the ability to manipulate and control every aspect of your workloads.
Key takeaways:
- For Managed Kubernetes, focus on reducing your exposure by leveraging cloud provider tools like identity management, Pod Security Standards, and network restrictions.
- For On-premises Kubernetes, ensure strong internal security practices, including encryption, audit logging, and proper management of sensitive keys.
- Across both environments, minimize direct interaction with the API server by using tools like CI/CD systems, and always secure the nodes running your workloads.
The complexity of Kubernetes brings powerful capabilities, but it also introduces unique security challenges. By implementing the practices discussed here, you can significantly reduce your attack surface and build a resilient cluster.
Remember, Kubernetes security is a journey, not a destination. Stay vigilant, keep learning, and regularly audit your environment to ensure it remains secure.
To learn more about our Kubernetes Security offering: https://www.o3c.no/services/kubernetes-security-assessment