June 17th, 2025 - Chris Aubuchon, Head of Customer Success

Infrastructure Management: Containers vs Virtual Machines

Trends in tech come and go, but certain underlying primitives stick around forever.

In software, two such primitives are virtual machines and containers.

Virtualization paved the way for the cloud to become massive. Data centers would likely never have been commercially viable without it.

While still relatively new, containerization has already made a serious mark on the software engineering world. Container adoption has fueled the rise of DevOps and platform engineering, and a cultural shift toward higher velocity.

Cycle users now enjoy environments where containerized workloads and virtual machine workloads run side by side on the same compute nodes and the same networks.

Now more than ever, the choice of whether to run a workload in a container or a VM comes down to user preference. But at a time when many software decisions are driven by hype more than data, we decided to write this article to demystify how to choose.

Decision Drivers: Containers vs Virtual Machines

By default, most new workloads should go in containers. Containers have proven more efficient in tight release cycles, they're more portable, and it's easier to achieve higher usage density on compute nodes with them. So the simplest way to frame the containers vs. virtual machines decision is to ask: what are the most concrete cases where a virtual machine is the better choice, and why?

Before diving into the deep end, let's take a high-level look at the factors in play:

| Factor | Why It Matters | VM-Leaning Signals | Container-Leaning Signals |
|---|---|---|---|
| Isolation & Kernel / Driver Control | Custom kernels or proprietary drivers can destabilize a shared host; separate kernels shrink the blast radius. | Windows Server, vendor GPU modules, strict audit scope | Vanilla Linux, no special drivers, single-tenant stack |
| Workload Type | Some workloads (legacy apps, heavy hardware dependencies) are easier to keep in VMs. | Legacy ERP, telco NFV, vertical-scale DB | REST API, cron job, microservice |
| Data Persistence | Snapshot and recovery strategy changes when data is large or mission-critical. | Large monolithic DB, need for crash-consistent snapshots | Replicated stores, small or ephemeral data |
| Deployment Cadence | Faster deploy cycles benefit from lighter images and quicker restarts. | Monthly/quarterly releases | Daily/continuous releases |
| Operational Considerations | Day-to-day runtime behaviors: load profile, tenant separation, and team capability. | Predictable load; external audit or hard tenant isolation; small SRE team or limited container expertise | Spiky or bursty load where fast, lightweight scaling helps; tenant isolation handled by runtime policies |

Think of each factor as a lens; any one of them can be a solid reason to move a workload from containers to VMs. If two or more factors lean toward VMs, it's almost a guarantee that a VM will be the better technology for that workload.

Isolation, Security, Kernel Control and Workload Type

[Flowchart: kernel, isolation, and hardware decision factors]

Containers run directly on the host's kernel. Virtual machines run next to the host kernel with a kernel of their own. That one design choice can be the deciding factor in a majority of containers vs. VMs debates. It's especially relevant if you need to run a Windows or legacy application, which is almost certainly going in a virtual machine. If you're running a modern Linux workload, though, it's still worth probing deeper into the following points.
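
You can verify the shared-kernel model in seconds (a minimal sketch, assuming Docker and a stock Alpine image):

```bash
# On the host: report the running kernel version
uname -r

# Inside a container: the same kernel version, because containers
# share the host kernel rather than booting their own
docker run --rm alpine uname -r

# A VM guest, by contrast, reports whatever kernel its own OS booted
```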

Custom Kernel Modules or Drivers

If the workload demands custom drivers, a non-default module, or a kernel change, making those changes on a container host would affect every workload sharing that kernel. Generally that's enough to merit a VM, since one bad module can cascade into host-wide issues. A VM puts a wall between the host and those low-level changes.
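
The boundary is easy to demonstrate: an unprivileged container can't load kernel modules at all, because it shares the host kernel and drops the capability to modify it (a minimal sketch, assuming Docker; the dummy module is just an example):

```bash
# Fails in an unprivileged container: modprobe needs CAP_SYS_MODULE
# (and host module files), neither of which containers get by default
docker run --rm alpine modprobe dummy
# typical output: modprobe: can't change directory to '/lib/modules': No such file or directory

# The same command inside a VM loads the module into the guest's own
# kernel, leaving the host and its other workloads untouched
```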

Blast Radius Concerns

If kernel changes are needed and a bad change takes the kernel down, how big is the impact? A kernel issue taking all of your tenants offline is something to avoid at all costs, but if it's just an academic setup, there's room to take some risks.

Compliance Pressure

In the strictest compliance environments, auditors will sometimes write "own kernel" straight into the control description (PCI-DSS, HIPAA, FedRAMP High). While how to implement that is up for debate, if you feel you need strict kernel-level control, a VM might be your best choice.

Data Persistence and Operational Cadence

[Flowchart: data persistence and deployment cadence decision factors]

There is no doubt that stateful workloads run great in a container. There can be complexity in sharing a volume between containers or even compute nodes, but the tech behind those implementations is well documented, mature, and for the most part… straightforward.
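
For instance, sharing a named volume between containers is a one-liner per container (a minimal sketch, assuming Docker; the volume and container names are hypothetical):

```bash
# Create a named volume that outlives any single container
docker volume create appdata

# One container writes to the shared volume...
docker run -d --name writer -v appdata:/data alpine \
  sh -c 'while true; do date >> /data/log; sleep 5; done'

# ...and another reads from it
docker run --rm -v appdata:/data alpine tail -n 3 /data/log

# Cleanup
docker rm -f writer && docker volume rm appdata
```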

There is something to be said for being able to snapshot an entire virtual machine and roll its state back or forward quickly. For many who choose to run stateful workloads in a virtual machine instead of a container, the decision is heavily influenced by copy semantics and recovery guarantees (a sketch of the snapshot workflow follows the list):

  • VM snapshots are crash-consistent and cheap on block storage arrays; they're one click to revert, even when the disk image is 2 TB.
  • A container volume lives outside the image, so backing it up or restoring it means copying or replicating the data itself rather than reverting the whole machine.
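
As a concrete illustration of that workflow, here's what whole-machine rollback looks like on a libvirt-managed host (a minimal sketch; the guest name db-vm and snapshot name are hypothetical):

```bash
# Take a crash-consistent snapshot of the whole guest before risky work
virsh snapshot-create-as db-vm pre-migration --description "before schema change"

# List available snapshots for the guest
virsh snapshot-list db-vm

# Something went wrong? Roll the entire machine state back in one step
virsh snapshot-revert db-vm pre-migration
```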

So the data decision comes down to: is the dataset big enough that copying it across the network will slow things down, and would you rather just rely on hypervisor snapshots instead?

When comparing deployment mechanics between VMs and containers, containers cut a lot of steps that VMs just can't avoid. Let's take a very basic Docker-container-vs-VM look at the lifecycle (a command-level sketch follows the table):

| Stage | What Happens with a Container | What Happens with a VM |
|---|---|---|
| Build | docker build: bake app + deps into a small layered image. | Packer (or similar) bakes a full OS image with packages, services, the app, and cloud-init scripts. |
| Publish | Push a ~100 MB image to a registry; layers dedupe. | Upload a multi-GB image to a VM library or cloud snapshot store. |
| Deploy | Orchestrator pulls the layers it doesn't have and starts the container; cold start in seconds. | Hypervisor streams the whole disk, provisions the guest, and boots an OS; cold start in tens of seconds to minutes. |
| Rollback / Canary | Point the deployment at an older image hash or run side-by-side pods. | Restore a snapshot or cut DNS/traffic over to a new VM, then spin up another guest. |
| Patch Cycle | Ship a new container image; the host kernel stays put. | Patch the OS inside every guest and keep a golden image up to date. |
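
To make that concrete, here's what the container side of each stage looks like at the command line (a minimal sketch, assuming Docker; the registry URL, image names, and tags are hypothetical):

```bash
# Build: bake the app and its dependencies into a layered image
docker build -t registry.example.com/acme/api:1.4.2 .

# Publish: push to a registry; unchanged layers are deduplicated
docker push registry.example.com/acme/api:1.4.2

# Deploy: any host with a container runtime pulls missing layers and starts it
docker run -d --name api -p 8080:8080 registry.example.com/acme/api:1.4.2

# Rollback: point at the previous tag and restart
docker rm -f api
docker run -d --name api -p 8080:8080 registry.example.com/acme/api:1.4.1
```

The VM column replaces each of those steps with a full-image equivalent: a Packer build, a multi-gigabyte upload, and a guest boot, which is exactly where the extra minutes go.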

This is one of the biggest advantages of containers and a stark reminder that if you don't need VMs, containers can be a huge efficiency gain.

Operational Considerations

[Flowchart: operational considerations]

While audit scope will generally dominate a set of choices like this, there are several other operational factors to weigh.

Peak to Average Load

This sounds a lot more complex than it is. Basically, if you're torn between running a workload in a container or a virtual machine, remember that scaling containers is much faster and easier. So when you're on the fence with a workload that might need to scale quickly… choose containers.
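
The difference shows up in how little work a scale-out takes: with containers, adding replicas is a single command to the orchestrator, while scaling VMs means provisioning and booting whole new guests. A minimal sketch, using Docker Swarm as the orchestrator (the service name is hypothetical):

```bash
# Scale a container service up to 10 replicas; new replicas
# cold-start in seconds since the image layers are already cached
docker service scale api=10

# Scale back down just as quickly when the burst passes
docker service scale api=2
```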

Tenant Isolation per Compliance

This doesn't come up often outside of the most strictly regulated environments, but if you have an auditor that demands kernel-level isolation, you're almost definitely running your workloads in VMs. And if you're not, you're running them in containers on top of a VM, so there's a VM somewhere in the stack.

Team Shape

Containers have come far enough along that almost every non-junior engineer you talk to will have some experience with them. However, if you find yourself on a team of engineers who have never touched a container, it may make sense to move into containers more slowly. This is a great opportunity to use a platform like Cycle, where engineers can gradually move workloads from VMs to containers without the all-or-nothing pressure other platforms bring.

Wrapping Up

Containers or VMs

Linux or Mac (or Windows if you have to)

Software engineering and delivery have entered a golden age of choice, where things previously considered sacrilegious by some are now moot points to most.

These days, choices like this are a checklist. If a workload needs its own kernel, whole-machine snapshots, or strict audit boundaries, spin up that VM. Otherwise, most of the time, you'll want to reach for containers.

Either way, if you're not already deploying to Cycle, you may want to give it a try. Being able to run these workloads on the same network across an incredibly flexible assortment of architectures (multi-cloud, hybrid-cloud, on-prem, colo, etc.) is making developers' lives easier and putting pager companies out of business.

💡 Interested in trying the Cycle platform? Create your account today! Want to drop in and have a chat with the Cycle team? We'd love to have you join our public Cycle Slack community!