This release adds a new type of access to scoped variables: the File type. Cycle will mount a special read-only volume at /var/run/cycle/variables, and each file's name will be the scoped variable's identifier. The load balancer is also in focus! Load balancers now have an improved syncing mechanism that is more efficient and takes new environment variables into account when deciding when to sync.
Scoped variables can now be injected, as files, into containers.
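As a minimal sketch of consuming a File-type scoped variable from inside a container (the variable identifier used here is hypothetical, and only the mount path comes from the release notes):

```python
from pathlib import Path

# Cycle mounts File-type scoped variables on a read-only volume here;
# each file is named after the scoped variable's identifier.
VARIABLES_DIR = Path("/var/run/cycle/variables")

def read_scoped_variable(identifier: str, base: Path = VARIABLES_DIR) -> str:
    """Return the contents of a scoped variable injected as a file."""
    return (base / identifier).read_text()

# Hypothetical usage inside a container:
# api_key = read_scoped_variable("API_KEY")
```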
Users can now set raw scoped variables to the blob type, indicating the intention to upload a large amount of text. The blob type also preserves newlines and spacing.
Load balancers are much more efficient with their updates which leads to less CPU usage during scaling events.
Users can now set a container's CPU resource limit to 0 in the container configuration, giving the container full, unlimited access to the CPU.
There was a race condition that would occasionally cause globally scoped variables to not be assigned during container creation. This issue has been resolved.
An issue existed that allowed a decommissioned node to check in after decommissioning. This issue has been resolved.
After months of work, we're excited to announce the launch of Cycle's Infrastructure Abstraction Layer functionality. Using the IAL, users can now utilize Cycle to manage infrastructure anywhere. Whether it's a cloud provider Cycle doesn't yet have a native integration with, a rack of servers in a colo data center, or a random server sitting in a closet, organizations can now connect and manage those servers with Cycle. This feature isn't alone. We're also excited to announce a new and wonderfully intuitive signup process, optimized VXLAN networks, more resilient consoles, comm protocol improvements, and much more.
Users can now implement the infrastructure abstraction layer, unlocking the ability to use Cycle anywhere. From providers that aren't natively supported by the platform to a rack of servers in colo/on-prem, organizations can now connect and manage those servers with Cycle.
To help new users learn Cycle, we've rebuilt our signup process as a highly interactive wizard that can take a developer from nothing to an active server with live applications.
Adopted a more efficient 'push' model for VXLAN network route updates. Instead of performing network-wide syncs for new container instances, Cycle compute nodes will receive route announcements and add routes directly. Full network syncs will still occur every 15 minutes, as is Cycle standard, as a failsafe.
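The push-plus-failsafe pattern described above can be sketched as follows. This is an illustrative model only, not Cycle's actual implementation; the class and field names are invented for the example, and only the 15-minute interval comes from the release notes:

```python
import time

class RouteTable:
    """Illustrative sketch of the 'push' route model.

    Route announcements are applied directly as they arrive; a full
    network sync still runs every 15 minutes as a failsafe.
    """
    FULL_SYNC_INTERVAL = 15 * 60  # seconds

    def __init__(self) -> None:
        self.routes: dict[str, str] = {}  # instance id -> next hop
        self.last_full_sync = time.monotonic()

    def on_announcement(self, instance_id: str, next_hop: str) -> None:
        # Push model: add the route immediately, no network-wide sync.
        self.routes[instance_id] = next_hop

    def maybe_full_sync(self, authoritative: dict[str, str]) -> bool:
        # Failsafe: periodically replace local state with the full picture.
        if time.monotonic() - self.last_full_sync >= self.FULL_SYNC_INTERVAL:
            self.routes = dict(authoritative)
            self.last_full_sync = time.monotonic()
            return True
        return False
```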
Under a limited number of scenarios, a console connection would disappear and never re-establish until the container was restarted. We've refactored our console code to automatically re-establish these connections should a connection drop occur.
Using the HA deployment strategy for containers will now take existing instances and their locations into account when determining where new instances will be deployed.
While Cycle has always supported IPv6 overlay networks for containers, the underlying hosts needed to use IPv4 to build these networks. Now, these networks can utilize the host's IPv6 address to establish overlay networks.
Using the Integrations configuration on a container, Cycle can pull files via an HTTP request and insert them into a container at start time. You can now append a duration (?cycle-cache=5m) to the URL to let Cycle know how long it should cache and reuse a downloaded file.
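A small sketch of appending the cache duration to an integration URL. The helper name is invented for illustration; only the cycle-cache query parameter and the 5m duration format come from the release notes:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_cycle_cache(url: str, duration: str = "5m") -> str:
    """Append a cycle-cache duration so Cycle knows how long to cache the file."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["cycle-cache"] = duration  # e.g. "5m" for five minutes
    return urlunparse(parts._replace(query=urlencode(query)))
```

For example, `with_cycle_cache("https://example.com/config.yaml")` yields `https://example.com/config.yaml?cycle-cache=5m`.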
Cycle uses a custom TCP protocol layer for communicating between the core platform and user compute nodes. In this update, we've added extra logic to ensure any potential 'hanging' listener won't prevent other traffic from successfully being delivered. Additionally, we've made some performance enhancements to handle more capacity per multiplexed TCP session.
Previously, some pages, especially the Environments list, would hang indefinitely and never finish loading. All pages now load as expected.
If a deletion attempt on a server fails, the server will revert to a 'live' state after 30 minutes, as that server is technically still usable.
This release features the ability for users to interactively search through the portal. Using CMD/Ctrl K now brings up a search modal allowing users to search for container, DNS, environment, image, server, and stack resources by name or ID. Alongside search, we've improved networking for instances that fail on start and fixed DNS bugs, adding to the platform's overall stability.
Users will now enjoy an interactive search feature that drastically reduces the time spent navigating the portal. Invoke the modal with CMD/Ctrl + K and search for DNS, environment, image, server, and stack resources by name and ID.
If a container instance failed on start, some VXLAN routes wouldn't be properly cleaned up, causing a pile-up of routes. This issue has been resolved.
Cycle’s public DNS servers now cache zones internally to enable faster response times.
Under certain circumstances, a wildcard renewal would not be applied to wildcard children DNS records. This has been resolved. If you encounter an expired certificate on a child record, recreating the record will resolve the issue.
Going forward, all servers will use chainloading to boot CycleOS. This enhancement allows the Cycle platform to have more flexibility with future updates without requiring changes for our users.
After months in development, we are thrilled to announce the launch of Cycle.io's support for NVIDIA GPUs (Beta). The addition of GPUs will further empower the development of accelerated applications which require a higher level of compute power. Alongside this major release is a fix for an intermittent builder error that caused "cannot connect to builder" to be reported, and more granular control over stateful instance configurations.
Users can now provision GPU powered infrastructure from AWS and Vultr using the API or portal.
A race condition that would occasionally cause the error "cannot connect to builder" has been solved.
Containers, like RabbitMQ, that parse the container hostname during their init process would fail to initialize because of the number literal placed before the hostname. Stateful hostnames can now be forced to whatever a user wants, fixing this issue.
Added an options struct to stateful instances that gives users the ability to define more granular settings for stateful deployments.
Users can now paste in sets of environment variables for a given container configuration, instead of adding them one by one, improving workflow efficiency.
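A pasted block of environment variables is typically a set of KEY=VALUE lines. As an illustrative sketch of how such a block could be parsed (the function and its handling of comments/quotes are assumptions, not Cycle's documented behavior):

```python
def parse_env_block(text: str) -> dict[str, str]:
    """Parse pasted KEY=VALUE lines into a mapping, skipping blanks and comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env
```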
Throughout the portal the UI has been cleaned up, removing graphical issues around borders, spacing, and icon sizes - leading to a more pleasant portal experience.
CycleOS now supports custom firmwares and drivers.
This release is just the start of a big summer for Cycle. Today we release our new pricing, making it easier than ever to understand when and why to use free, lite, or business tiers. On top of that, there have been major upgrades throughout the platform, including the ability to restart both a server and compute service right from the API or Portal. Users should also check out the completely redesigned providers page, which now includes the ability to verify a provider key or secret before updating credentials.
A new pricing model has been created that will better meet the needs of users by replacing GB RAM licensing with simple per node pricing, as well as removing all tier hard caps. New pricing maintains the great included resources you're used to with each tier.
The providers page is now more flexible and modular, with significant UI upgrades. Users can also now validate their provider keys and secrets in place.
Users can now restart a server without leaving the portal, or do so programmatically through the API.
Users can now restart the Cycle compute service on a given server through the portal, or programmatically through the API.
Cycle's load balancer now supports HTTP2 giving users more flexibility when routing traffic to their instances.
A bug that would cause the platform to yield a panic when Seccomp was disabled has been fixed.
The CycleOS kernel has been upgraded to 5.15.27 and now has support for more RAID controllers and NICs.
Implemented a VXLAN cache and certificate wrapper to accelerate network route discovery configurations.
Added an instance migration timeout that will fire 15 minutes after the start of a write on any instance migration.
Fixed an issue in TLS certificate renewal that would cause renewal to happen prematurely and silently fail.
The platform now supports Equinix Metal metros, giving users more flexibility when deploying infrastructure through EM.
Server provision options have been standardized to use storage_size as a provision option for providers that allow flexible disk volumes to be attached to servers, giving users simpler interoperability between calls.
This release enables seccomp on all containers by default, leading to an overall more secure experience for all users. This feature can be disabled from the container config menu under Runtime. The discovery service resolver will now use Cloudflare DNS and OpenDNS alongside the existing Google DNS for lookups.
Seccomp will now be enabled by default on all containers. This improvement will provide users with more control over the system calls their containers are able to make. Users will also be able to disable seccomp on a per-container basis.
The discovery service resolver has been improved and will now use both Cloudflare DNS and OpenDNS alongside the existing Google DNS for lookups.
As we move toward a summer packed with big releases, our team wanted to take a moment to refocus on improvements to the platform and portal. Users can expect to see more verbose activity logs, faster API lookups, and pipeline improvements alongside a slew of fixes that will keep things like stale networks and wildcard DNS issues off your plate.
Containers that failed to start would, in some limited cases, leave behind stale network routes. This issue has been resolved.
Improvements were added to the pipeline that allow a longer period of time before the pipeline times out, and pipelines will now be used as a 'creator' type when creating resources.
Environment service containers would no longer be authorized after 365 days and would stop communicating with the platform. We've restructured internal API authorization to be more secure while also ensuring long uptimes won't affect communication.
Changes to the caching layer have resulted in faster lookups when using the internal API.
An issue preventing certain symlinks from being properly transferred during container instance migration has been resolved.
Users will notice general updates to the UI, including a new environments dashboard view, a discovery offline warning panel, more consistent graphical state representations for services, and more verbose messaging around expired credits.
Fixed an issue where new DNS records would not properly be associated with a wildcard certificate, in some limited cases.
An issue that would prevent container tags from being applied to new containers created through the container create form has been resolved. Also, tags will now appropriately show in the deployment section of the container config.
Users will now see more events in the audit log. New events will cover changes to service containers which were previously unreported.
This release is focused on a few core fixes that bring even more stability to the platform. These include an authorization issue that prevented CycleOS from downloading during provision events on Vultr servers, and a slew of DNS upgrades.
Solved an authorization issue that could prevent Vultr servers from downloading CycleOS on boot.
Improved support for Wildcard TLS certificates to now support the root domain itself. New subdomains that utilize wildcard TLS certificates will properly associate with the latest certificate instead of the earliest.
Creating a new DNS record that requires a TLS certificate will now automatically initialize the job to create the certificate. No additional calls are needed to initiate the job.
Fixed an SFTP authorization issue that would temporarily allow an SFTP connection to a server that was no longer hosting a recently migrated volume. This improper authorization didn't allow access to any files or data; it simply returned an empty directory rather than erroring out.
Seccomp is making waves in the container space right now, and our team wanted to be sure that it wasn't a distraction for Cycle users. In this update, users can enable seccomp using the environment variable ENABLE_SECCOMP. This release also brings major improvements to the websocket connections that power the majority of data brokering in the portal, and we've made network recovery fixes to the platform that help ensure networks return to a normal state after a network drop.
The services throughout the portal, which rely on websocket connections to provide and update their data, are now more reliable thanks to a rework of the websocket connection layer in the portal. On websocket connection failures, users should expect to see a banner notifying them that the websocket connection was broken or failed.
A UI bug, which occasionally led to a blank screen in the portal, has been solved.
New users will no longer need to sign in after verifying their account; instead, they will be logged directly into the portal after verification.
Using the environment variable ENABLE_SECCOMP allows users to enable seccomp for that container and all of its instances. Seccomp is currently disabled by default; however, in a future release the team will move seccomp to enabled by default, bringing it in line with our secure-by-default philosophy.
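As a sketch of where the variable would be set, here is a hypothetical container-config fragment. The payload shape, field names, and image name are illustrative assumptions; only the ENABLE_SECCOMP variable itself comes from the release notes:

```python
# Hypothetical container-config fragment; field names are illustrative only.
container_config = {
    "image": "myorg/app:latest",  # hypothetical image name
    "environment": {
        # Read by Cycle; enables seccomp for this container
        # and all of its instances.
        "ENABLE_SECCOMP": "true",
    },
}
```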
Network drops would occasionally cause the platform state to become out of sync. With the latest release, network recovery is now far more resilient and the platform is able to recover from network drops to a normal state.
As we continue our mission to get developers writing code, this update provides a few small tweaks that make load balancers more flexible while also improving the accuracy of information within the portal.
Users are now able to customize public IPs for load balancers. Cycle will still default to dual stack, but it is now possible to configure load balancers for only IPv4 or only IPv6 as well.
The way Cycle's portal consumes and displays state updates from the notification pipeline has received phase 1 of a 3-phase update, which will lead to far more consistent state information on things such as container state, instance counts, and anything that relies on a job "eventually" completing or failing.
Fixed an issue that caused the Vultr server inventory to not properly display at times.