This release brings additional metadata functionality to the internal API and expands DNS capabilities with ALIAS record support. We've also fixed an edge case in load balancer configurations applied through stacks, where submitting a null config could reset a previously deployed load balancer.
The internal API now supports ?meta=sdn_pool_ips for containers that belong to SDN networks which utilize their own IP pools.
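For illustration, here is a minimal sketch of requesting that metadata from inside a container. The base URL, endpoint path, and response shape below are assumptions for this sketch, not the documented API surface:

```typescript
// Hypothetical sketch: fetch a container's SDN pool IPs via the internal API.
// The host, path, and response shape are assumptions for illustration only.
const INTERNAL_API = "http://internal-api.local"; // placeholder address

async function getSdnPoolIps(containerId: string): Promise<unknown> {
  const res = await fetch(
    `${INTERNAL_API}/v1/containers/${containerId}?meta=sdn_pool_ips`
  );
  if (!res.ok) throw new Error(`internal API returned ${res.status}`);
  const body = await res.json();
  // With ?meta=sdn_pool_ips, the payload should include the pool-assigned IPs
  // for containers attached to SDN networks that use their own IP pools.
  return body?.data?.meta?.sdn_pool_ips;
}
```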
ALIAS records, which are similar to CNAMEs but utilized for zone origins, are now supported.
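To make the distinction concrete, here is an illustrative comparison of an ALIAS record at the zone origin versus a CNAME on a subdomain; the field names are assumptions, not the exact record schema:

```typescript
// Illustrative only: an ALIAS lets the zone origin (apex) point at a hostname,
// which a CNAME cannot do. Field names are assumptions for this sketch.
const apexRecord = {
  type: "ALIAS",
  name: "@",                   // zone origin, e.g. example.com
  value: "lb.example-cdn.net", // target hostname, resolved at query time
};

// A CNAME is only valid on a subdomain, never at the apex:
const wwwRecord = {
  type: "CNAME",
  name: "www",
  value: "lb.example-cdn.net",
};
```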
If a stack specified a load balancer service but omitted a config, the load balancer's config would be reset on each subsequent deployment. Now, an empty config will no longer reset a previously deployed load balancer.
This release features a set of improvements, fixes, and additions focused on smarter IP handling, easier network scaling, and more reliable load balancer behavior. These changes continue our march toward simplifying operations on the platform while we work toward more monitoring and observability changes in the coming month.
Container instances that are deployed to virtual provider servers now retain their SDN static pool IPs as long as the migration stays within the same region.
The maximum number of environments that can be added to a layer 3 SDN network has been raised to 15.
Each monitoring tier now includes a higher total number of metrics in the tier package.
There was a bug that would cause load balancers on virtual provider nodes to not properly initialize when started without IPs. This issue has been resolved. This is valuable for users setting up environments that will exist exclusively on private networks, or behind a Cloudflare Tunnel.
There was an issue with patching raw stacks via API where the platform would not properly handle variables defined in the stack. This issue has been resolved.
A 5-second cache has been added to the load balancer that retains the information used to decide destination prioritization, greatly reducing the overall pressure on the load balancer during times of increased traffic.
Image sources with images that are not being used by any container can now be deleted without first needing to delete every image from the source.
In this update, users will see double the write speed performance on all virtual machines and can now utilize a VNC connection for VM interaction! We're also proud to announce that auto-scaling has moved out of beta and now supports custom webhooks for even more granular control. The portal was improved with new charts, richer container logs, and better storage visibility.
We’ve made major improvements to VM storage performance resulting in a doubling of write speeds.
Fixed an issue where VMs weren’t reachable over the VPN. They now route correctly.
Auto-scaling has been stable for a while and we've recently made major improvements to performance, reliability, and responsiveness. The beta tag has been removed.
You can now trigger custom webhooks when scale events occur, giving users full control over scaling logic.
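As a rough sketch of what consuming those webhooks might look like, here is a minimal receiver; the endpoint path and payload fields (containerId, direction) are assumptions, not the actual webhook schema:

```typescript
// Hypothetical sketch of a webhook receiver for scale events, using Node's
// built-in http module. The payload shape below is an assumption.
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/scale-events") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body);
    // Apply custom scaling logic here, e.g. notify on-call or gate the event.
    console.log(`scale event for ${event.containerId}: ${event.direction}`);
    res.writeHead(200).end();
  });
}).listen(8080);
```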
Virtual machines can now utilize a VNC connection for enhanced interaction.
Added a detailed storage breakdown on the server view. Useful for debugging disk issues and tracking down container file sprawl.
Log drain config is now applied at the environment level instead of per container.
Logs now show up with syntax highlighting and color-coded formatting, making them easier to read at a glance.
The DNS lookups chart will now show deeper information on cached and throttled hits, including success, failure, and not-found data points.
After our last update, this is a small quality-of-life patch. It's mainly focused on improvements to billing access and container networking visibility, and includes a nice fix for custom DNS resolvers.
A bug was uncovered that would cause custom resolvers to only work with CNAME records. This has been resolved.
Users can now download invoices directly from billing emails, forgoing the previous requirement of logging into the portal for the download.
The container instances page now shows all attached networks for a given instance in one succinct view, making it easier to quickly view network details.
The platform now supports downloading VPN configuration files through load balancers that have only IPv6 enabled.
This release marks a new era of hybrid infrastructure orchestration and cements the platform's status as a true alternative to both Kubernetes and VMware. It is easily the biggest release in years for our organization, and we couldn't be more excited to get it into the hands of our users! The biggest piece of this major release is the capability to now run any kind of workload anywhere -- while still maintaining the efficiency, standardization, and automation that the platform brings. We can't wait to see what you're able to build.
Virtual providers make it simple to add any x86-compatible (Intel, AMD, etc.) infrastructure to your Cycle clusters, unlocking the full potential of bare metal and massively reducing the technical lift for on-prem, colo, or non-native bare metal cloud offerings.
For workloads that don't play nicely with containers, we now support running virtual machines alongside your containerized workloads in environments. Great for legacy apps, maintaining hybrid stacks, and even running a full OS inside the environment.
Scoped variables can now be scoped to deployments. This gives users the flexibility to vary certain scoped variables per deployment without the headache of making dynamic, on-the-fly changes to scoped variables within the environment.
The V1 load balancer now supports fixed destination prioritization. This feature will mostly be used alongside source IP routing to help ensure that the same requesting IP is routed to the same container instance.
Users can now add IPs to virtual provider servers so that containers deployed to them with an L2 network can allocate their own IPs.
The platform now supports Layer 2 software-defined networking via its Networks primitive. This enables L2 connectivity across your infrastructure for more advanced networking needs.
Containers on Cycle can now connect directly to Layer 2 networks, not just at the environment level. This allows for tighter control over how workloads interact with external infrastructure or broadcast domains.
Users can now choose to expose the underlying host's cgroups to a container. This aids in building functionality such as monitoring.
Users can now give a container the ability to shut down a server via the internal API by opting into the expose power API setting.
The Linux kernel used by CycleOS has been upgraded to 6.6.17.
Each server now mounts a 10GB hard-capped log volume. This prevents containers with uncontrolled log output from filling the server's disk entirely. Once disk usage for this volume hits 90%, log retention is reduced from 72 to 48 hours.
An exciting release as we move into the end of April and prepare for an awesome summer of updates. Users can now mark instances to drain traffic, signaling the platform to stop routing new connections to them while existing sessions wind down safely. The V1 load balancer gets some nice flexibility improvements and servers now support nicknaming. New graphs for server telemetry have been added and container instance network telemetry graphs fixed. This release marks the beginning of an impressive schedule of releases we have moving into summer so keep your eyes peeled for changelog updates!
V1 load balancer routers now support source IP routing mode. This allows for more consistent and predictable routing to instances that require more durable sessions.
A new server telemetry graph has been added to the portal that shows transmit and receive bytes for individual nodes.
Container instances can now be marked for traffic draining, informing the platform that traffic should no longer be sent to that instance. For load balancers, the platform will stop traffic to that load balancer, making it safe to remove, restart, or reconfigure.
Container instance network telemetry data had an issue where transmit and receive data was flipped. This has been fixed and now shows correctly.
As always, SFTP on any server goes into lockdown after a spike in failed login attempts. Users who were successfully authorized prior to the lockdown can now continue their session uninterrupted.
All servers now support adding a nickname, making it simpler to track individual servers in a cluster and hub.
A button has been added for restarting containers. For containers with multiple instances, the restart stagger will also be automatically applied.
Load balancer IPs on the environment summary now show the exact assigned IP instead of the associated CIDR from which an IP is assigned.
The compute service now retries downloading container images from the factory service if there is an interruption.
Added support for specifying permissions and UID/GID for injected scoped variable files.
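As an illustration, a scoped variable injected as a file with explicit ownership and mode might be described like the sketch below; the field names are assumptions rather than the exact schema:

```typescript
// Illustrative only: a scoped variable injected as a file with explicit
// ownership and permissions. Field names are assumptions, not the real schema.
const scopedVariable = {
  identifier: "APP_CONFIG",
  access: {
    file: {
      path: "/etc/app/config.yaml",
      mode: "0640", // file permission bits
      uid: 1000,    // owning user ID inside the container
      gid: 1000,    // owning group ID inside the container
    },
  },
};
```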
Increased the console buffer on containers, making more room for logging during times when the compute service is updating or restarting.
This update brings a focus on flexibility in environments and pipelines. Users will enjoy a new pipeline step (deprecate container) as well as new ways to use named resource identifiers. Load balancers can now run without public IPs assigned to them, opening the door to more dynamic, zero-trust architectures. In the API, filtering got an upgrade with the ability to filter containers by their deprecated state. Finally, users who need deeper control of IPv6 settings can use the disable_ipv6 setting for further granularity in container networking.
Load balancers can now be enabled without public IPs. This is valuable for load-balancing private applications within an environment that might not need public internet access -- e.g. behind Cloudflare Tunnels.
We now support arguments like deployment.version and deployment.tag as parameters to a resource identifier in pipelines. With these arguments, teams can build significantly more flexible pipelines, furthering automation efforts.
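For example, a pipeline step might reference a deployment along these lines; the step name and option fields shown are hypothetical, used only to illustrate where the arguments slot in:

```typescript
// Hypothetical sketch: a pipeline step parameterized by deployment arguments.
// The action name and field layout are assumptions for illustration only.
const promoteStep = {
  action: "start-deployment", // placeholder step name
  options: {
    container: "api",
    // deployment.version and deployment.tag are resolved when the pipeline
    // runs, so one pipeline definition can promote any build.
    deployment: {
      version: "deployment.version",
      tag: "deployment.tag",
    },
  },
};
```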
Containers can now be deprecated via pipelines.
The jobs endpoint wasn't properly limited to the expected capability for API keys. This has been resolved.
In the API, containers can now be filtered by their deprecation state using ?filter[deprecated]=true/false
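For instance, listing only deprecated containers might look like the sketch below; the base URL and auth header are placeholder assumptions, not the documented endpoint:

```typescript
// Sketch only: list deprecated containers using the deprecated filter.
// The base URL and auth header below are placeholder assumptions.
async function listDeprecatedContainers(token: string): Promise<unknown[]> {
  const res = await fetch(
    "https://api.example.com/v1/containers?filter[deprecated]=true",
    { headers: { Authorization: `Bearer ${token}` } }
  );
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return (await res.json()).data;
}
```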
While we don't recommend disabling IPv6, there may be specific cases where it is required. By setting net.ipv6.conf.all.disable_ipv6 to 1, Cycle now fully disables IPv6 for a container.
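As a quick illustration, the relevant setting is the standard Linux sysctl; the surrounding config shape in this sketch is an assumption:

```typescript
// Illustrative only: disable IPv6 for a single container via the standard
// Linux sysctl. The surrounding config object shape is an assumption.
const containerRuntimeConfig = {
  sysctl: {
    // "1" disables IPv6 on all interfaces in the container's network namespace.
    "net.ipv6.conf.all.disable_ipv6": "1",
  },
};
```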
This release brings users a handful of solid improvements and a couple of needed fixes. The V1 load balancer routers got a fix to path matching in routes and also a major improvement to the predictability of the router chosen; now the first match will always win. On the security front, a now-deprecated cryptography algorithm was removed and the platform now enforces a higher minimum TLS version. Finally, hub billing now supports multiple billing contacts, expanding flexibility on who in an organization receives important emails.
Hub integrations can now be deprecated. This demarcation has no effect on existing integrations but will prevent new instances of that integration from being added.
Removed a now-deprecated cryptographic algorithm and now enforce a minimum TLS version of 1.3.
We've refactored how the LB makes routing decisions to eliminate a race condition that existed with path matching. Now, router matching is significantly faster and more predictable, with the first (top) match always winning.
Organizations can now update their billing and tax information within the portal. Additionally, organizations can subscribe additional email addresses to invoice notifications.
We've expanded the functionality of user uploaded certificates to work with wildcard certificates. While we've supported LetsEncrypt wildcards for years, our recently added 'user uploaded certificates' did not support wildcards until now.
This update features a slew of more granular load balancer metrics for Cycle's V1 load balancer. These new metrics also come with additional tooling in the portal that allows users to make specific filters when debugging network traffic. The log drain format can now be customized, offering higher flexibility for integrations with existing services, and the platform received some great fixes that should lead to even more stability.
The format of log output can now be customized via the container config integration.
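To give a sense of what a customized format could look like, here is a sketch; the option names and template placeholders are assumptions, not the actual integration schema:

```typescript
// Hypothetical sketch of a customized log drain format. Option names and
// template placeholders are assumptions for illustration only.
const logDrainIntegration = {
  type: "log-drain",
  options: {
    endpoint: "https://logs.example.com/ingest",
    // A custom line format so existing tooling can parse drained logs without
    // changes; placeholders would be filled in per log line by the platform.
    format: "{{timestamp}} {{container}} [{{level}}] {{message}}",
  },
};
```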
The V1 load balancer now collects more granular metrics that can be helpful in diagnosing application issues. A restart of the load balancer is required to gain these additional metrics. Additionally, users can now filter load balancer metrics based on domains and HTTP response codes in the load balancer's URLs tab.
We found a bug during our OCI image merging where, under certain conditions, files could lose their respective user/group ownership. This is fixed for all future image builds.
An instance can no longer be migrated if an existing migration is already in progress for that instance.
If an instance failed to migrate after 16 attempts, it could cause a node deadlock, preventing future actions on that server. This has been resolved.
In this release, users will find a wonderful new feature in stack build logs. These logs give insight for debugging stack builds that, in the past, have been cumbersome to unpack. Alongside the stack build logs are a slew of stability improvements, including a new agent logging mechanism that brings even more resiliency to each server node during times of high usage. Additionally, auto-scaling was improved, requiring fewer window intervals before a scaling event can happen, resulting in even more responsive auto-scaling from the platform.
We noticed a few inconsistencies in the naming conventions for metric/event labels. While they're now fixed, certain graphs within the portal will take a little time to populate with new data.
Stacks now generate build logs that detail the overall build of images, stack parsing, etc., making it significantly easier to debug variable/stack formatting issues.
Although this was released in beta in December's build, we've made a number of optimizations to provide more context (via HTTP headers) while also reducing the amount of redundant meta information in the POST body.
If a container was reimaged immediately after a scale event, any deleted instances would be undeleted.
While not accessible yet, we've added the ability to create volumes as raw block devices on compute nodes -- preparing for some soon to be announced features.
Added support for moving block storage volumes between compute nodes.
We've compiled the required kernel modules into CycleOS to support the generation 3 and 4 (c3, c4, n3, m3) and accelerator (GPU) GCP machine types.
We rebuilt the logic for network / bandwidth scaling thresholds to enable more responsive scaling. Previously, network scaling events required two interval windows to pass before a scaling event could occur.
Servers now report their CycleOS build version during their check-ins. This version is also displayed in the portal.
Following a restart of the server, CycleOS will now build a dedicated 2GB volume for storing logs. By moving logs to their own volume, nodes will no longer deadlock / become unresponsive if disk usage reaches 100%.
Hubs with active servers can no longer be deleted, unless the 'force' flag is specified.
When deploying servers at AWS or GCP, users can customize the size of the underlying block device. This functionality was broken in December's release and has now been restored.
Previously, it was possible to accidentally block VPN configuration via the portal by applying restrictive rules to the WAF (web application firewall). Now, the WAF will automatically detect the necessary Cycle IPs to allow VPN configuration without organizations needing to manage the IP list themselves.