Refactored DNS Service to properly handle long TXT records and added support for CNAME->TXT resolution.
Improvements to the platform orchestration logic have reduced delegation time by 70%.
The platform now looks for a file at /etc/hosts.custom and merges it into the existing /etc/hosts file.
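For example, entries like the following in /etc/hosts.custom (the hostnames and addresses here are hypothetical) would be merged into the server's /etc/hosts:

```
# /etc/hosts.custom - hypothetical example entries
10.0.4.12   metrics.internal
10.0.4.13   billing.internal
```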
Cycle’s Environment Discovery service now uses compression for DNS resolution. This compression will be applied to public DNS at a later time.
Previously, unused IPv6 pools on Vultr were not properly cleaned up on server deletion. Although this caused no negative side effects, the issue is now fixed.
Cycle now supports the s3.xlarge and m3.large Equinix Metal servers.
The API now supports creating and deleting instances across multiple servers in a single call.
Instances spread across multiple servers weren’t always in sync due to a caching race condition within compute-proxy layers.
Updated the necessary API calls, identifiers, and logos for the transition from Packet to Equinix Metal following the acquisition.
Compute now caches instance meta information in RAM, greatly decreasing storage I/O.
Any servers that have more than 2TB of storage on dedicated drives will automatically be provisioned with a “base pool” and a “storage pool”. Instance storage can be allocated between these pools on a per-container-volume basis.
CycleOS has been upgraded to the 5.6 Linux kernel, which contains a number of security patches, new features, and performance optimizations.
Renamed the ‘Owner’ field to ‘Creator’ on all components.
Internal API clients weren’t properly closing request handlers, leaving thousands of file descriptors open and eventually hitting the ‘max open files’ limit.
This release focuses on optimizing and upgrading existing systems to enhance reliability, security, and portal speed.
The VPN is now stateful and will remember previously generated certificates.
Internal API counts were being tracked incorrectly; this has been resolved and counts are now accurate.
Container shims now carry their own state, improving reliability through platform updates.
Previously, the compute service would only perform 4 actions concurrently. It can now perform as many concurrent actions as there are cores on the host machine.
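As an illustration (a sketch, not Cycle's actual source), a dispatcher that sizes its worker pool to the host's core count could look like this in Go:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// process stands in for a single compute action (start, stop, migrate, etc.).
func process(action int) {
	fmt.Println("handling action", action)
}

func main() {
	actions := make(chan int)
	var wg sync.WaitGroup

	// One worker per core, instead of a fixed pool of 4.
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for a := range actions {
				process(a)
			}
		}()
	}

	for i := 0; i < 100; i++ {
		actions <- i
	}
	close(actions)
	wg.Wait()
}
```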
Initiating a migration previously had the chance of causing a race condition; this issue has been resolved.
Stateless volumes now appropriately reset volume data between restarts.
RunC has been updated to the latest version and all security patches have been applied.
The instance console connection now waits 5 seconds before connecting, reducing the overall number of sockets in use.
Server tags will now appear in alphabetical order when accessing them through dropdown menus such as the one on the container deploy wizard.
There are new usage meters available showing image storage and hub usage.
Invitations to new hub members are now shown on the hub members page, where they can also be managed.
Notifications generated programmatically from the API can now be silenced in account settings.
Container telemetry can now be configured in the deployment section of the container configuration.
The portal has been optimized to use fewer API requests.
Containers can now be configured to sample telemetry data at a one-second interval; the previous minimum was 10 seconds.
Fixed a bug that would stop users from adding new members to a hub.
Stateful migrations were occasionally failing due to a race condition.
Added the ability to configure RLimits on containers. [Available via Stacks / API, Portal next release]
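As a rough sketch only, an RLimit override in a stack file might look like the fragment below; the field names are hypothetical and not the documented schema:

```
// hypothetical stack fragment - field names are illustrative
"rlimits": {
  "nofile": { "soft": 4096, "hard": 8192 }
}
```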
Fixed a bug that prevented IP pools from being released after they were no longer in use.
Platform now properly handles and escapes dots in OCI labels during image uploads.
Users can now enable/disable node services as needed. This reinforces Cycle's philosophy of "Secure By Default".
Fixed a bug that prevented new tags from being assigned to servers via the API.
This release focuses on improving the user experience and administrative controls. In addition to controlling a user's environment access, activity feeds will allow you to see what actions and events are happening within your hub. We've also moved the container and instance view into a fullscreen modal to improve navigation.
The agent service, which runs on all Cycle servers, has been rebuilt to be more fault-tolerant during updates while also implementing the ability to perform future updates without restarting containers.
The SDN and VXLAN functionality in Cycle’s compute service has been rebuilt to better manage routing tables, allowing greater scaling of container instances.
The instance migration process has been rebuilt to be streamable. This has a number of benefits: migrations are faster, more fault tolerant, and no longer require extra disk space.
Fixed a buffer overflow issue within the console service where console output could deadlock, causing a node to experience a CPU spike and hang.
Containers can now be marked as ‘deprecated’. Deprecation is useful for keeping a container around without letting it start.
Admins can now limit other hub members to specific environments. Additionally, environment access can be set as ‘view only’ or ‘manage / view’.
The ‘Manual’ deployment strategy prevents Cycle from determining where container instances should exist and instead gives full delegation control to the user. This deployment strategy is particularly useful for API integrations with Cycle.
To streamline easy switching between containers and other management workflows, the portal now uses a full-screen modal for managing containers. This modal enables you to quickly interact with containers without requiring a full page navigation and multiple clicks.
When two containers with multiple instances need to communicate, Cycle will now automatically prioritize destination instances based on their expected latency.
All actions on Cycle now have a corresponding activity event. These events show everything happening on your hub and whether each event was initiated by a fellow hub member or by the platform itself.
CPU/RAM usage per instance can now be tracked and visualized from the container modal. Additionally, a webhook can be utilized to expand telemetry monitoring into other systems. A future release will have the ability to generate extended telemetry reports.
The process of building a stack and/or image from the API has changed. Before, one would submit a task to generate a stack/image and collect the ID on success. Going forward, integrators must generate a stack/image object first then submit a task to that object to generate the stack build or import an image.
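A minimal sketch of the new two-step flow (the base URL, endpoint paths, and payloads below are hypothetical, for illustration only):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

const api = "https://api.example.com/v1" // hypothetical base URL

func main() {
	// Step 1: create the stack object first.
	body := bytes.NewBufferString(`{"name": "my-stack"}`)
	resp, err := http.Post(api+"/stacks", "application/json", body)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	stackID := "returned-stack-id" // parse from the create response in practice

	// Step 2: submit a build task against that object.
	task := bytes.NewBufferString(`{"action": "build"}`)
	resp, err = http.Post(fmt.Sprintf("%s/stacks/%s/tasks", api, stackID), "application/json", task)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```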
Previously, a container that was scaled up, then scaled down, then scaled up again would experience network drops on newer instances.
When configuring stateful instances, you can now specify first start, auto start, and default start commands and environment variables.
Should an image fail to upload to our image storage backend after building, Cycle will now automatically retry the upload up to 3 times.
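Conceptually, the retry behaves like this bounded loop (a sketch under the assumption of a simple fixed cap, not the platform's code):

```go
package storage

import "fmt"

// uploadWithRetry retries a failed upload, giving up after maxRetries
// additional attempts (the release notes cap this at 3).
func uploadWithRetry(upload func() error, maxRetries int) error {
	err := upload()
	for attempt := 0; err != nil && attempt < maxRetries; attempt++ {
		err = upload()
	}
	if err != nil {
		return fmt.Errorf("upload failed after %d retries: %w", maxRetries, err)
	}
	return nil
}
```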
Tables on the portal now load faster, on both the first and subsequent loads of the table data.
There was an issue that prevented existing compute nodes from learning about other new compute nodes. This has been fixed.
Improvements were made to the instance console, providing a more consistent connection to the instance as well as better formatting.
All bulk delete buttons are now hold-to-click. Additionally, the infrastructure dashboard now shows clusters in alphabetical order and remembers your most recently selected cluster.
Using the prune stack builds button will delete all unused stack builds from a stack that are more than 30 minutes old.
The platform now supports the use of Certification Authority Authorization (CAA) records, which specify certificate authorities that are allowed to issue certificates for a domain.
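For example, a CAA record that permits only a single CA to issue certificates for a domain looks like this in standard zone-file syntax (the domain and CA below are placeholders):

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
```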
Fixed a bug that prevented 'first start commands' from running for stateful containers.
Converted Cycle's entire notification system to a publish/subscribe model as opposed to an observer model. This should allow us to handle future scaling with ease.
Starting with v2020.02.25.5, Cycle's default image storage provider is now Backblaze B2. All previously uploaded images have been migrated automatically. This should resolve the slow build and start times that were frequently experienced with Wasabi.
You can now try Cycle risk free for 90 days! In addition to this, we've decided to remove the two-factor authentication requirement when creating your account. If you wish to add 2FA, head to account settings.
Logging in now offers a “Remember Me” option, which keeps you logged in on the device for 30 days. If not selected, you will be logged out when you close your browser.
Two-factor auth can now be enabled or disabled from the account security section.
The signup flow has been reworked to no longer require two-factor authentication in order to create an account.
Added a base volume usage meter to the server dashboard. The base volume is where logs, images, and the compute service are stored.
Sorting by cluster on the infrastructure dashboard now shows only the servers that are part of that cluster.
The entire authorization system across the platform was reworked to be safer, more efficient, and scalable.
Most charts across the portal now allow selecting time ranges, and some have other filtering options. More will be added in the future.
Users new to the platform will now be able to create a hub under a "trial" plan that will allow them to test out Cycle risk free, with no credit card required.
Due to the unnecessary complexity and lack of consistency between providers, we have decided to remove SAN support for volumes. We will be replacing it with a much more streamlined and easy-to-use object storage implementation in the coming weeks.
Rebuilt the sync process on Cycle's agent to ensure that all service updates are fully atomic. This process guarantees that all customer nodes will always be the correct version.
Added a checksum to migrated instance volumes to guard against corruption. If the checksum doesn't match the original once migration is complete, the process is restarted automatically.
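The verification step amounts to comparing digests of the source and destination volumes; below is a minimal sketch using SHA-256 (the algorithm choice is an assumption, the notes don't name one):

```go
package migrate

import (
	"crypto/sha256"
	"io"
	"os"
)

// checksum returns the SHA-256 digest of a volume file's contents.
// During a migration, the digest of the destination copy would be
// compared against this value; on mismatch the migration restarts.
func checksum(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}
```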
This release adds the ability to migrate instances across servers.
The infrastructure dashboard now includes a much better look at total resource utilization per node, including the amount of shares used and available.
Container instances can now be migrated between servers by visiting the instance dashboard and expanding advanced options.
A checkbox has been added to server settings which allows you to enable the overcommitting of resources. This setting doubles the amount of shares the server is permitted to allocate.
The load balancer service now supports: sticky sessions, for persistent connections; health checks, to ensure routing to a healthy instance; and automatic HTTP to HTTPS redirection when a certificate is detected.