April 22nd, 2025 - Chris Aubuchon, Head of Customer Success

The Top 4 Kubernetes Misconfigurations You Can Avoid on Cycle

Most cloud infrastructure and deployment misconfigurations start innocently enough: a dev under pressure to ship quickly tweaks a configuration file or adjusts a permission setting to make something work. It's not malicious, and it might even be well thought out, but these small changes can cause a cascade of reactions that brings down production in seconds.

As engineering teams grow, these mistakes tend to multiply, while the number of people able to correctly spot very nuanced issues (see bus factor) stays the same. Add increasingly complex and bespoke environments like Kubernetes to the mix, and the potential for trouble also grows exponentially. Platforms like Kubernetes are known for their vast configurability. Kubernetes' rise as "the de facto container orchestration platform" was also a death sentence for many teams adopting it: to be a solution that works for EVERYONE, it had to support just about any configuration, and that brought a monumental shift in overall complexity.

On the other hand, Cycle.io approaches this problem directly. Instead of relying heavily on manual configurations, Cycle is designed from the start to minimize human error by using secure-by-default settings, automated updates, and strict access controls. It recognizes the need for guardrails while still maintaining composability.

In this article I'll walk through some of the most common Kubernetes misconfigurations, based on the OWASP Kubernetes Top 10, and show how Cycle handles things a bit differently.

#1 Overly Permissive Role-Based Access Controls

OWASP has this to say about the danger of misconfigured RBAC:

"Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. A RBAC misconfiguration could allow an attacker to elevate privileges and gain full control of the entire cluster."

Wait "take control of the cluster" from a pod… Lets take a peek at how that can happen.

In Kubernetes, each pod gets a service account bound to it, and a common anti-pattern is to attach either an overly permissive default or an overly permissive admin-type service account to a pod. The real thing to think about here is that if someone were able to breach that pod, they could do pretty much whatever they want to your cluster.

What does an overly permissive service account look like?

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pwned-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

This spec grants the default service account in the default namespace cluster-admin rights, so any pod running under that service account inherits full control of the cluster. I can see why this would be such a big deal to get wrong. So what are some approaches to mitigate this?

Principle of Least Privilege (PoLP)

Grant only the minimum set of permissions needed for users, pods, or services.

Pros:
  • Strongest security model
  • Limits blast radius
  • Reduces lateral movement in a compromise

Cons:
  • Tedious to manage manually
  • Hard to know exact permissions needed
  • Breaks apps if too strict
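
To make that concrete, here's a minimal sketch of a least-privilege Role. The name and the read-only pods rule are hypothetical for this example; scope yours to whatever the workload actually needs.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                    # hypothetical name for this example
  namespace: default
rules:
  - apiGroups: [""]                   # "" is the core API group (pods, services, etc.)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; no create, update, patch, or delete

Compare that to the cluster-admin binding above: compromising a pod bound to this role yields read access to pods in one namespace, nothing more.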

Use Namespaced Roles Instead of ClusterRoles

Confine roles to specific namespaces rather than cluster-wide.

Pros:
  • Easier to scope access
  • Promotes better isolation
  • Safer by default

Cons:
  • Doesn't help with cluster-scoped resources
  • Can lead to duplicated policies
  • More YAML to manage
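
Continuing the hypothetical pod-reader sketch from above, binding it with a RoleBinding keeps the grant inside a single namespace, unlike the ClusterRoleBinding in the pwned-binding example:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding      # hypothetical name
  namespace: default            # the grant stops at this namespace boundary
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role                    # a namespaced Role, not a ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io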

Automated RBAC Auditing Tools

Use scanners (like rakkess, kubeaudit, Polaris) to detect over-permissioning.

Pros:
  • Detects misconfigurations early
  • Speeds up reviews
  • Helps maintain compliance

Cons:
  • Adds operational tooling overhead
  • Only alerts, doesn't prevent
  • May generate false positives

I can see how this would be difficult, given that some teams run 1k+ Kubernetes clusters across a single organization. That's enough to keep you up at night.

So how does Cycle approach this? Well, to get started, containers themselves don't have roles, and the only API calls they inherit by default are calls to the internal API. Now, the internal API can still do some things, but 85% or more of its calls are reads, and the POSTs it can make are bound to the environment that container is in. So if someone were able to get into a container running on Cycle, they would at most have access to a purposely limited set of functions that almost all read information about the environment, not to a global scope (which doesn't really exist the way it does in Kubernetes).

In order to use the platform API, a user has to have an API key or be logged into the portal. Both logged-in users and API keys belong to Roles, and each Role is a combination of two things:

  1. Capabilities (what actions can you do in the Hub).
  2. Access Controls (what resources are you able to take those actions on).

So for real Hub-wide access, an attacker would have to either gain a key or be logged into the portal on a high-level account. Access for keys and portal users is also very simple to add and remove, which means any leak of keys, personal login information, and so on can be quickly countered by removing the membership or access of the user or key.

#2 Misconfigured Cluster Components

Why? Well, it might be the fact that the kubelet's default authentication mechanism allows anonymous requests. But beyond that, there are plenty of other places in the cluster that require a deep look and real understanding to configure correctly:

  • etcd
  • kube-apiserver
  • CoreDNS

Just to list a few.
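
As one concrete example of the kubelet piece, here's a minimal sketch of a KubeletConfiguration that closes the anonymous-auth gap. Defaults vary by distribution, so treat this as illustrative:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false              # reject requests with no client credentials
  webhook:
    enabled: true               # verify bearer tokens against the API server
authorization:
  mode: Webhook                 # authorize each request via the API server instead of AlwaysAllow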

With Cycle, the user doesn't need to manage any control plane components. Cycle manages this for the end user, providing a standardized and secure experience that is (as you'll hear me say a few times today) always up to date and federated.

#3 Out-of-Date Kubernetes Version

Now, many of you may want to argue that this doesn't really fall into the category of configuration, or "mis"configuration, because it's not something that can be accidentally toggled on or off so easily as to cause an accidental outage in prod on a whim. However, given the scope of the impact, and the fact that every other page I researched lists it as a misconfiguration, I'm going to include it.

The simple fact is, nobody seems to want to update their clusters and that's probably because it's so painful to do.

Don't take my word for it:

What you'll eventually find is that most organizations just roll a new cluster to upgrade. And if that's as good as it gets, I guess I'm glad it's not something I have to worry about.

How is Cycle different?

On Cycle, your cluster and worker nodes are always up to date. Platform releases come on average a few times a month without any need for input or planning on your end. The platform handles all the logic of updating the control plane and the software running on the worker nodes. This requires no restart of the node itself or the workloads running on top.

#4 Missing Network Segmentation Controls

We recently did a deep dive into this topic in Examining Network Architectures: Kubernetes and Cycle, so I'll keep this section very high level, almost a TL;DR.

Another configuration mechanic that needs to be taken seriously is network segmentation. The absence of proper segmentation can be disastrous for an organization's security posture. Kubernetes, by default, allows open communication between all pods unless network policies are carefully defined and enforced. Best practice relies on external tooling like CNIs, service meshes, and policy engines to establish meaningful boundaries.
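
In Kubernetes, the usual first step is a default-deny NetworkPolicy per namespace. The namespace below is hypothetical, and the policy only takes effect if your CNI actually enforces NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production         # hypothetical namespace
spec:
  podSelector: {}               # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                   # deny all inbound traffic by default
    - Egress                    # deny all outbound traffic by default

Every allowed flow then has to be declared with additional policies, which is roughly the posture Cycle's environments give you without the extra YAML.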

In contrast, Cycle's environment-centric model is secure by default. Every environment is created as an isolated network. Any cross-environment communication must be explicitly declared using Software Defined Networks (SDNs). The result is a system where boundaries are composable but not porous, and where misconfigurations are much harder to introduce accidentally.

Recap

Many of the most common Kubernetes misconfigurations—whether related to cluster components, RBAC, versioning, or network segmentation—stem from a combination of complexity and human error. These issues aren't always the result of poor engineering decisions but often emerge from the realities of operating under pressure in flexible, sprawling systems.

By looking at how Cycle addresses these same concerns—standardizing control planes, restricting default access, automating updates, and enforcing isolation by design—we can see a contrasting model that prioritizes predictability and safety. It's not about choosing sides, but about recognizing that architectural decisions have real consequences—especially when it comes to reducing the blast radius of inevitable mistakes.

💡 Interested in trying the Cycle platform? Create your account today! Want to drop in and have a chat with the Cycle team? We'd love to have you join our public Cycle Slack community!