Over the last few weeks I've been talking about the key differences between Amazon EKS and Cycle.
If you happened to miss it and want to catch up before diving into this post, you can check it out here:
This post will round out the series by looking at how worker nodes are added to a cluster and the major differences between EKS and Cycle in that area.
EKS worker nodes are the EC2 instances that run your containerized apps. These are nodes that you register with the EKS cluster, and they execute the workloads assigned to them through the control plane. Generally, a worker node will have:

- A kubelet agent that registers the node and manages the pods scheduled to it
- A container runtime (such as containerd) that actually runs the containers
- kube-proxy to handle Service networking on the node
In most cases, EKS worker nodes are part of a node group: a collection of one or more worker nodes within an EKS cluster that share a common configuration.
To set up a node group, navigate to the EKS cluster and select the Compute tab from the horizontal navigation. From there, there will be an option to "Add node group". Node groups need their own IAM role with the following AWS managed policies attached:

- AmazonEKSWorkerNodePolicy
- AmazonEC2ContainerRegistryReadOnly
- AmazonEKS_CNI_Policy
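For reference, the node IAM role also needs a trust policy that allows EC2 instances to assume it. A minimal version looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Without this trust relationship, the worker nodes can't pick up the role, and they'll fail to join the cluster.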
Node groups also require a compute configuration. First the user chooses an AMI from a list, then selects the capacity type, instance types, and disk size. Scaling options let you set minimum, maximum, and desired node counts. After setting the subnets and security groups, the node group can be created, and the minimum number of nodes will be provisioned.
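As an alternative to clicking through the console, the same settings can be declared with eksctl. This is a sketch with placeholder names, region, and sizes:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # placeholder cluster name
  region: us-east-1         # placeholder region
managedNodeGroups:
  - name: demo-workers      # placeholder node group name
    instanceType: m5.large
    volumeSize: 20          # disk size in GiB
    minSize: 1
    desiredCapacity: 3
    maxSize: 5
```

Running `eksctl create nodegroup --config-file=<file>` then provisions the group against the existing cluster.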
One of the key pieces of managing an EKS cluster is effectively scaling these worker nodes. EKS allows for both manual and automatic scaling. Manual scaling, done by adding nodes to the appropriate node groups, is best for workloads with predictable usage patterns. Autoscaling on EKS is a whole other story. While the worker nodes can be scaled using the Cluster Autoscaler, organizations will have to work hard to keep cluster-level and pod-level autoscaling tools in harmony. On top of that, you may have to fight Kubernetes' natural tendency to kill pods and move them around.
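Manual scaling of a managed node group, for example, comes down to updating the group's scaling configuration (cluster and node group names below are placeholders):

```shell
# Bump the desired node count on an existing managed node group
aws eks update-nodegroup-config \
  --cluster-name demo-cluster \
  --nodegroup-name demo-workers \
  --scaling-config minSize=1,maxSize=5,desiredSize=4
```

EKS will then launch or drain nodes until the group matches the new desired size.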
While EKS clusters can only add worker nodes from the region they're deployed to, Cycle users can provision from any region or provider with zero additional configuration. Since we're comparing Cycle to EKS, let's take a look at what it takes to add EC2 worker nodes to a Cycle hub.
It's important to note that while Cycle does offer a fully managed control plane as a service, all worker nodes (along with their networks, disks, etc.) are created in the organization's IaaS account. Organizations own all of these resources.
Connecting AWS as a provider is straightforward. The user goes to AWS IAM and creates a new user with a single managed policy attached: AmazonEC2FullAccess. Once the user is created, an access key is generated, and the key ID and secret are added to Cycle.
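Those console steps map to a few AWS CLI calls; the user name here is a placeholder:

```shell
# Create a dedicated IAM user for the Cycle provider integration
aws iam create-user --user-name cycle-provider

# Attach the single managed policy Cycle needs to manage EC2 resources
aws iam attach-user-policy \
  --user-name cycle-provider \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess

# Generate the access key ID and secret to paste into Cycle
aws iam create-access-key --user-name cycle-provider
```

Keeping the integration on its own user with a single policy makes it easy to audit or revoke later.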
With the AWS provider integrated, worker nodes can be added through the intuitive server deploy wizard or programmatically through the API. Where EKS has node groups, Cycle has clusters. Clusters can be created and destroyed freely, with no minimums or maximums on the number of servers they contain. For autoscaling, Cycle has auto scale groups, which let users define granular details about the type, location, and number of servers, as well as the events that trigger scaling.
One key difference between Cycle and EKS, when it comes to worker nodes, is that with Cycle you'll never need to worry about which AMI is running on your server or whether it needs updating. Along with the fully managed control plane, worker nodes receive automatic updates, and those updates cause zero downtime for existing workloads.
Like EKS, Cycle offers both manual and automatic scaling of worker nodes. Manual scaling on Cycle is as simple as dropping new worker nodes into an existing cluster; once provisioned, a node is ready to receive workloads and can be deleted at any time. Autoscaling, while inherently more complex than manual scaling, is much more straightforward on Cycle. Auto scale groups define how worker nodes should be added, and individual containers carry auto scale settings that tie them to those groups. If a scaling event happens and no worker nodes with the appropriate resources are available, the auto scale group will spin up additional worker nodes. When usage drops, those new nodes are released.
This series of blog posts has looked at a more nuanced view of Cycle vs EKS. While Kubernetes will always have its place in the world of container orchestration, alternatives like Cycle offer compelling options for organizations seeking simplicity, flexibility, and automation from their container orchestration and infrastructure management platform. Cycle's ability to streamline processes, from adding worker nodes to seamless autoscaling, provides a competitive advantage to users who would rather spend their time building core product features than fighting their orchestrator.
💡 Interested in trying the Cycle platform? Create your account today! Want to drop in and have a chat with the Cycle team? We'd love to have you join our public Cycle Slack community!