Nov 13, 2023 - Chris Aubuchon, Head of Customer Success

Overcoming Inconsistent Environments with Cycle

Maintaining a consistent environment from development to production is one of those challenges that's always easier said than done.

It used to be that a small hiccup, like a version mismatch or a misconfigured setting, could have you scrambling to figure out why everything worked perfectly on your local machine but fell apart everywhere else. Then containers came along and that all seemed to be “solved”, a word that gets used far more casually than it should be (much like “de facto”).

Containers really do help standardize libraries, dependencies, and portability. However, it's no secret that the environments (dev, staging, prod, etc.) you’re running those containers in can have inconsistencies in:

  • Network
  • Services
  • Infrastructure Configuration

So, even though we have a standardization layer for runtime and packaging, we still have to guarantee that the other pieces are in place the same way across environments.
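To make that concrete, here’s a minimal sketch (the service names and values are invented purely for illustration) of how the exact same container image can still behave differently once the environment around it drifts:

```python
# Hypothetical example: the container image is identical in dev and prod,
# but the environment around it is not. All names and values are made up.
dev = {
    "image": "registry.example.com/api:1.4.2",
    "dns_resolver": "10.0.0.2",
    "postgres_version": "15.3",
    "tls_termination": "load_balancer",
}
prod = {
    "image": "registry.example.com/api:1.4.2",  # same image...
    "dns_resolver": "172.16.0.53",              # ...different network
    "postgres_version": "14.9",                 # ...different service version
    "tls_termination": "container",             # ...different ingress behavior
}

# Naive drift check: anything that differs outside the image itself.
drift = {k: (dev[k], prod[k]) for k in dev if dev[k] != prod[k]}
print(drift)
```

Nothing about the image changed, yet three of the things it depends on did. That’s the gap containers alone don’t close.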

Not So Simple

Picture this scenario: Your development team is in a groove, working seamlessly with a specific version of a service. The tests are all coming back positive in the development environment, and there's a sense of accomplishment in the air.

But then, reality strikes—the same tests that sailed through without a hitch are now inexplicably failing in the production environment. Suddenly, your team is faced with the formidable task of combing through each layer of the technology stack to pinpoint the root cause.

Disturbances like this divert valuable resources from ongoing development efforts. Project timelines start to slip, and before you know it, costs are spiraling upward.

It's not just service versions that can trip you up, either. Imagine your development and production environments are like two houses built from the same blueprint but decorated differently. Configuration settings—be they network protocols, security policies, or even just minor software patches—can act like unique furnishings and finishes that differentiate the two homes.

These subtle differences may seem innocuous but can lead to unexpected incompatibilities. The burden often falls on the DevOps team to manually reconcile these variations, ensuring that the two environments are brought back in sync without causing additional issues. It's like trying to make the two houses look identical again, one piece of furniture at a time, and it adds another layer of complexity and overhead that no one really needs.

Consider Your Users

Inconsistent environments are more than just a technical headache; they're a significant risk to the reliability and trust that organizations strive to build.

This isn't a minor issue; it's a gaping hole between what everyone expects and what actually gets delivered. It can sour relationships across the board, creating tension not just among those in development and operations, but also with business strategy team members.

Ultimately, the users are the ones who experience the fallout when a service goes awry or app functionality doesn't live up to expectations. Teams must ensure that the face they present to the world, through their services and applications, is reliable and consistent.

Standardization as a Solution

The first step in overcoming inconsistent environments is standardization. That means looking beyond your container runtime and making sure that networks, services (like load balancers and discovery services), and infrastructure are treated in a standardized, even documented, way.

But what does enforceable standardization look like?

If you look at Cycle, standardization is enforced by the platform. For networking, you can rely on a simple model that’s always implemented in the same way.

Cycle uses environments as the model for implementing VPC-like networks and services.

For new users, think of an environment as something similar to a VPC, where the infrastructure of the cluster you choose is networked together over a new IPv6 network by default.

When we create a new environment, we get that IPv6 /80 network, and the thing to really home in on here is that the network is created and assigned the exact same way every single time, regardless of the provider(s), location(s), or types of servers that make up the cluster providing the underlying infrastructure.
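A quick way to see why that predictability matters: Python's ipaddress module can show what a /80 gives you, and the answer never changes from environment to environment. The prefix below is made up for illustration; only the /80 sizing comes from how Cycle carves environment networks.

```python
import ipaddress

# Illustrative only: this prefix is invented. The point is that an IPv6 /80
# is the same fixed, predictable address space no matter which providers,
# regions, or servers sit underneath the environment.
env_network = ipaddress.IPv6Network("fd00:1234:5678:9abc:de00::/80")

print(env_network.prefixlen)       # 80, every time
print(env_network.num_addresses)   # 2**48 addresses per environment

# Carving smaller blocks out of the environment network is just as
# predictable, and identical in every environment.
for subnet in list(env_network.subnets(new_prefix=96))[:3]:
    print(subnet)
```

Because the shape of the network is a constant, anything built on top of it can assume the same layout everywhere.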

So instead of relying on each developer, platform engineer, or ops person to commit to a single “style” of deployment, you get enforcement at the platform level itself.

But there's a bit more than just the network to think about. Controlling ingress through the load balancer is also important. I think we all know there’s about a billion different ways to configure ingress.

Cycle doesn’t force users into a single unified load balancing configuration (in fact with the new V1 load balancer we’re exposing more control than ever to the user).

However, the way each environment is constructed is standardized, so it becomes much easier to reason about, especially when you have tens or hundreds of environments. The ROI on that standardization becomes a lifesaver when you can reliably deduce possible issues across any environment quickly and easily.

Another thing to consider here: if an application doesn’t need to make ingress-level configuration changes beyond the defaults (which we’ve worked incredibly hard to make sane), teams simply don't have to change a thing… it just works.
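Here’s a small, hypothetical sketch of what that buys you operationally. The config shape and field names are invented, not Cycle’s actual load balancer schema; the point is that when every environment starts from the same platform defaults, surfacing the ones that deviate is trivial even at a hundred environments:

```python
# Hypothetical sketch: field names and shapes are invented for illustration,
# not taken from Cycle's actual load balancer configuration.
PLATFORM_DEFAULTS = {
    "port_443": {"mode": "tcp", "tls": True},
    "port_80": {"mode": "tcp", "redirect_to": 443},
}

# Per-environment overrides; empty means "just use the defaults".
environments = {
    "dev": {},
    "staging": {},
    "prod": {"port_443": {"mode": "http", "tls": True}},
}

def effective_config(overrides):
    """Platform defaults with any user overrides layered on top."""
    return {**PLATFORM_DEFAULTS, **overrides}

# Environments that differ from the defaults are easy to surface.
customized = [name for name, overrides in environments.items() if overrides]
print(customized)                              # ['prod']
print(effective_config(environments["prod"]))  # defaults + the one override
```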

Infrastructure

Adding and removing compute units is essential to any system. Being able to reliably comprehend the state of those units is paramount. There are many ways to approach this…

Configuration tools like Ansible, Chef, Puppet, etc. can be used to automatically install packages. Terraform and IaC promise to codify infrastructure at the repository level, which is a beautiful approach for each repo, but for many organizations it's a ticking time bomb of “who’s going to update all those repo configs?”, followed by the almost guaranteed, “I’m not sure, those were put in place before I got here, can you make a ticket?”

For Cycle users, reasoning about infrastructure itself (the compute units: a server, a cluster of servers, etc.) remains simple. And while automation exists through stacks (and their expanding feature set), it doesn’t require someone with DSL-specific experience to update, configure, or just get things working.

Out of the Weeds

Given the high rate of developer turnover, which sits above 21%, one question should always be prioritized when analyzing environment-level controls.

If I have to replace a quarter of my team each year, what is the chance that we lose sight of our best practices and drift toward eventual disaster?

Every team has hero-level senior, lead, and principal engineers they can lean on in times of need, but is that really the work you want your best engineers maintaining, translating, and updating?

The easier it is to reason about the parts, the higher the level of standardization you can sustain. That, combined with the need to enforce consistency across the organization at scale, should push you about as far as you’ll ever need to go toward a strategy that keeps inconsistent environments from turning into a nightmare.

Platforms like Cycle offer users a way to enforce environment-level standardization up front, at the platform level itself. For large teams, this can be a way to wrangle enforcement across the organization; for small and medium-sized teams, it's a way to set things up so they don't go haywire when you need them the most!

💡 Interested in trying the Cycle platform? Create your account today! Want to drop in and have a chat with the Cycle team? We'd love to have you join our public Cycle Slack community!