After months in development, we are thrilled to announce the launch of Cycle.io's support for NVIDIA GPUs (Beta). Building on a platform that already enables developers to focus on building rather than managing, the addition of GPUs further empowers the development of accelerated applications that require a higher level of compute power.
Whether you're building an application for AI, machine learning (ML), scientific computing, or any number of other workloads, Cycle has the tools needed to go from MVP to scale, all in one unified platform.
The migration to cloud services continues at pace, and with it, a growing demand for more compute power to support math-intensive processes. Providing a reliable process for GPUs is traditionally complicated with other container technologies, given the need to manage packages and dependencies or build deep learning frameworks from source. These complexities add technical debt and can greatly decrease the efficiency of new code deployments.
In a typical data center, there are servers with CPUs that handle common tasks like serving web pages or lightweight compute workflows. Then there are servers with GPUs, which handle more complex tasks like scientific computing or artificial intelligence workloads, where massive parallelization is needed to crunch through even the most basic algorithms. These two types of workloads do not mix well on a single server, because CPU-based applications cannot effectively use the same resources required by GPU-based applications.
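To make that parallelization point concrete, here's a minimal CUDA sketch (not tied to Cycle's platform) in which each GPU thread adds a single pair of numbers, so roughly a million additions run in parallel. The kernel name, array size, and launch configuration are illustrative assumptions, not anything prescribed by Cycle.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles exactly one element; the GPU schedules thousands of
// these threads at once, which is where the massive parallelism comes from.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // ~1 million elements (illustrative)
    const size_t bytes = n * sizeof(float);

    // Unified (managed) memory keeps the sketch short: the same pointers are
    // usable from both host and device code.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();                // wait for the kernel to finish

    printf("c[0] = %.1f (expected 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same addition on a CPU would loop through the elements one (or a few) at a time, which is why this kind of workload is a poor fit alongside ordinary web-serving traffic on the same box.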
Cycle's mission has always been to simplify the day-to-day efforts required by developers and DevOps teams alike. While we've already built a platform that enables organizations to mix-and-match infrastructure (bare metal vs. virtual machines) or consume multiple cloud/infrastructure providers in parallel, we're still just getting started.
As developers continued to build more complex and power-hungry applications on Cycle, it became obvious that we needed to enable these organizations to utilize GPUs, especially for ML workloads. While this sounds easy, it's quite an undertaking, especially to do correctly. For GPUs to meet the quality standards everyone expects from Cycle, we knew our implementation needed to be both super simple to integrate with and flexible enough to work with different hardware configurations.
Whether you're looking to utilize virtual machines or bare metal, Cycle has fully abstracted away the underlying infrastructure to create one standardized path to deployment. What's more, this integration supports applications built against both glibc (Ubuntu, CentOS, etc.) and musl (Alpine), giving developers the flexibility to continue building with the technologies they've come to love.
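As a rough illustration of the kind of sanity check a containerized workload might run to confirm it can see a GPU, here is a minimal CUDA device-query sketch. This is not Cycle's API; the program and its output format are assumptions, and the build details (base image, compiler, glibc vs. musl toolchain) depend entirely on your own setup.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Ask the CUDA runtime how many devices this environment can see.
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU is visible in this environment.\n");
        return 1;
    }

    // Print a short summary of each visible device.
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, %.1f GiB memory, compute capability %d.%d\n",
               d, prop.name, prop.totalGlobalMem / 1073741824.0,
               prop.major, prop.minor);
    }
    return 0;
}
```

Running a check like this as part of a deployment smoke test is one simple way to confirm the GPU is exposed to your workload, whatever the underlying provider or base image.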
By adding support for GPUs and, more importantly, making those GPUs easy to consume, regardless of the underlying infrastructure provider, we're empowering developers to focus on building, not managing.
As part of this GPU launch, we're excited to align this effort with Vultr's launch of their GPU-enabled line of cloud VMs, dubbed Vultr Talon Cloud GPU. Starting today, users can deploy servers with NVIDIA A100 Tensor Core GPUs at a price point that makes sense for a wide range of use cases. With entry-level GPU-powered VMs starting at $90 per month, Vultr is opening the door to developers everywhere who want to experience the power that GPUs can bring. Vultr is also the first cloud to offer virtualization of NVIDIA A100 GPUs to enable GPU sharing, ensuring optimal resource utilization.
💡 Interested in trying the Cycle platform? Create your account today! Want to drop in and have a chat with the Cycle team? We'd love to have you join our public Cycle Slack community!