Headlining this release is the ability to mount external file systems (like EFS from AWS) directly to servers on Cycle and let containers consume those mounts through shared directories. Along with that, the team has created a Console view in the portal, so users can now view direct console output from servers via the portal and API! The load balancer got awesome upgrades to telemetry, while the portal got great upgrades for handling that configuration as well as a refactored notification system.
Users can now get console output from compute nodes directly in the portal and through the API.
Users can now create host level mounts with external file systems (like EFS from AWS), which can then be mounted in containers on that host.
The V1 load balancer has been improved by optimizing the way telemetry is collected, yielding faster response times.
Users can now configure the granularity and staleness of telemetry data, allowing them to optimize for their use case.
The portal now has cleaner transport extension configuration interactions.
We've rebuilt the notification handler in the portal to be easier to interact with and take up less screen real estate.
Introduced an API call to fetch the active controllers from an LB.
Fixed a panic when updating scoped variables deployed via a stack.
Fixed a crash that occurred when adding auth to an image source that previously didn't have auth.
While it's not yet recommended for production use, the V1 load balancer is closer than ever to moving past the beta phase! We're happy to announce the addition of a load balancer metrics page where users can gain valuable insights on ingress traffic to their containers. Along with this, the team also added awesome new features like path matching, automatic domain sanitization, and the ability to opt in to automatic updates for service containers. If that wasn't enough for one release, we've also made improvements to scoped variables, adding valuable granular controls.
Cycle's new V1 native load balancer now tracks latency, response times, and more!
The V1 native load balancer now supports path matching on routers as well as automatic domain sanitization (removal of www., etc).
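For a sense of what the sanitization step does, here's a minimal sketch in TypeScript; the exact rules Cycle applies may differ:

```typescript
// Minimal sketch of domain sanitization as described above: lowercase,
// trim a trailing root dot, and strip a leading "www." prefix.
// Illustrative only; Cycle's actual rules may be more extensive.
function sanitizeDomain(domain: string): string {
  let d = domain.trim().toLowerCase();
  if (d.endsWith(".")) d = d.slice(0, -1); // drop trailing root dot
  if (d.startsWith("www.")) d = d.slice(4); // remove the www. prefix
  return d;
}

console.log(sanitizeDomain("WWW.Example.com.")); // "example.com"
```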
Users can now opt into getting automatic updates to service containers that would otherwise require manual restarts of those services.
Fixed a deadlock that could occur within telemetry collection under certain conditions.
More granular controls and integrations for how scoped variables are utilized with containers, now supporting config file injection to a defined path and configurable internal API durations.
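As a rough illustration, a scoped variable using file injection might be shaped something like the sketch below; every field name here is an assumption for illustration, not Cycle's documented schema:

```typescript
// Hypothetical scoped variable using config file injection. All field
// names are assumptions; consult the Cycle API docs for the real schema.
const scopedVariable = {
  identifier: "app-config",
  access: {
    file: { path: "/etc/app/config.json" }, // inject the value as a file here
    internal_api: { duration: "5m" },       // assumed knob for internal API duration
  },
  value: JSON.stringify({ logLevel: "info" }),
};

console.log(scopedVariable);
```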
Certain situations could previously allow an orphaned route to linger until a Compute service restart; this has been resolved.
A traffic/metrics tab has been added to the load balancer service modal.
All scoped variable forms and dashboards have been completely rebuilt to align with new functionality.
Users will have access to far more flexible caching through the native load balancer configuration and can now change transport-level settings without restarting. The VPN configuration settings and keys can now also be reset.
Native load balancer router extension configs are now more flexible than ever with cache settings.
Native load balancer transport configuration changes can now be made without restarting the load balancer.
The platform will no longer automatically attempt a server reprovision.
Fixed a bug that prevented some users from deploying new servers.
Added the ability for users to reset VPN config and keys through the portal.
Many users have requested lower-level controls on the host, especially to run agent-type services. We're glad to announce that we've expanded the platform to be more flexible than ever at the lowest level with the addition of a mountable host proc filesystem and shared directories. The Cycle native load balancer will now also support proxy and cache extensions on the router.
Users can now opt into exposing the host's proc directory to any given container through the container configuration.
Containers can now opt in to read or read/write access for directories directly on the underlying host.
Cycle's native load balancer can now be configured directly through the portal.
Cycle's factory service has been enhanced to better utilize the disk of the underlying host, improving the service's ability to handle more and larger builds.
Fixed a bug that prevented AWS servers in EU regions from listing properly on the server deploy form.
Implemented full IPv6 support for Google Cloud Platform.
Infrastructure abstraction layer API calls referencing provider identifiers have been standardized.
Improvements have been made to the handling and propagation of errors originating from auto-scaling.
Cycle's new native load balancer (beta) now supports proxying and resource caching.
With our first October release, we're happy to share that Cycle's auto scaling beta has arrived! This is a massive step forward for the platform and includes both container auto scaling and infrastructure auto scaling mechanisms. We've also included some improvements to password reset ergonomics as well as some new data types for servers.
Users can now set containers to automatically scale based on several different usage thresholds. Mechanics have also been added to automatically provision and remove infrastructure for these events.
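As a hedged sketch of what such a configuration could look like (all field names and threshold shapes are assumptions for illustration):

```typescript
// Hypothetical auto-scaling configuration. Field names and threshold
// shapes are assumptions; they are not Cycle's documented schema.
const scaling = {
  instances: { min: 2, max: 10, delta: 1 }, // bounds on instance count
  thresholds: [
    { type: "cpu", details: { utilization: 80 } }, // scale out above 80% CPU
    { type: "ram", details: { usage: "1G" } },     // or above 1 GB RAM per instance
  ],
};

console.log(JSON.stringify(scaling, null, 2));
```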
The forgot password form has been improved and the process is now simpler.
Early support has been added for short-lived (ephemeral) servers. This type of server will be treated differently than normal infrastructure and will have looser restrictions on being deleted/removed.
New 'Evacuate' and 'LatestInstance' flags have been added to servers. These flags will be used in conjunction with the new auto scaling features.
There's a building excitement around our V1 load balancer beta, which now supports websockets, streaming requests, and expanded metrics collection. These updates move the load balancer even closer to production-ready status! We've also introduced two new options in the container update configuration and a new healthcheck state. The portal sees a nice collection of UI/UX improvements.
The V1 load balancer now supports websockets, streaming requests, and collects a wider range of metrics.
Users can now define a more verbose update strategy that allows for a stagger time to be set, further helping confirm system readiness on reimage.
Cycle now supports a much wider range of AWS EC2 instances.
Platform and Portal websockets have gotten an upgrade to connection status management, resulting in more reliable connections and reconnections to essential notifications.
Reimage container functionality has been reworked so that images are fully downloaded, extracted, and verified before the shutdown signal is sent to any instance.
Users can now set a delay for their healthcheck configuration. This delay defers the start of healthchecks for a set duration after a container starts. Healthchecks also now support the health state of unknown, which enables users to implement probe-like patterns.
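A minimal sketch of a healthcheck using the new options (field names are assumptions; the duration strings follow the format described elsewhere in these notes):

```typescript
// Hypothetical healthcheck configuration using the new delay option.
// Field names are assumptions, not the documented schema.
const healthcheck = {
  command: "curl -f http://localhost:8080/healthz",
  delay: "30s",    // wait 30s after the container starts before the first check
  interval: "10s", // then check every 10 seconds
  retries: 3,
  restart: true,   // restart the instance if checks ultimately fail
};

console.log(healthcheck);
```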
During periods where an extreme number of notifications were being generated, the platform could occasionally deadlock. This issue has been resolved.
The portal will now show CPU usage on the server telemetry graph as a percentage.
Users can now manually re-attempt payment of an invoice through the portal.
An issue existed where the current container image being used was not properly showing when interacting with the reimage form. This issue has been resolved.
Users will notice some nice UI improvements in the portal such as: instance health state on the container modal and graph font size standardization.
After months of development we've launched a new portal into production. With it comes a swath of new features, enhancements, and fresh design that will bring better information and productivity to our customers. Alongside the portal release, users will also notice meaningful platform updates to stacks, VPN, and image name validation.
A new portal is released featuring a modern style, increased speed, more reliable state and websocket connections, and much more.
Users can now define VPN and load balancer services in stacks, bringing additional flexibility to the format.
Users can now programmatically interact with the VPN configuration.
The notification pipeline has been improved to further handle large spikes in notifications without any reduction in functionality.
Previously, certain images with periods in the image name would fail form validation. This issue has been resolved.
This update provides meaningful improvements to server/host CPUs and the visibility, configuration, and effectiveness of those resources. We've also made a new round of fixes to the V1 Cycle load balancer for those who've opted into its use, and finally, a new image source type of OCI Registry has been added.
The platform now tracks CPU utilization for the host itself. Previously, the platform would only track load, RAM, and storage usage.
Users can now designate a core range x-y for pinning a container to CPU cores.
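For example, pinning to cores 0 through 3 using the x-y range format might look like the following (the surrounding field names are assumptions):

```typescript
// Hypothetical container resources config pinning instances to a
// range of host CPU cores. Field names are assumptions.
const resources = {
  cpu: { cores: "0-3" }, // pin to cores 0 through 3 via the x-y range format
};

console.log(resources);
```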
Increased the period to match the default, which strikes a better balance of throughput and latency while decreasing overhead.
Fixed a panic that occurred when the LB failed to properly route traffic to a destination.
Added the ability to add OCI registries as image sources. This is a new feature in Cycle that provides support for any image registry conforming to the OCI spec. As a part of this, you can now natively add AWS ECR registries as sources to the platform.
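A hedged sketch of what adding an ECR-backed OCI source could look like (field names and shapes are assumptions, not the documented schema):

```typescript
// Hypothetical image source definition for an OCI-conformant registry
// (AWS ECR in this case). All field names are assumptions.
const imageSource = {
  identifier: "ecr-main",
  origin: {
    type: "oci-registry",
    details: {
      url: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app",
      auth: { provider: "aws-ecr" }, // credentials shape is assumed
    },
  },
};

console.log(imageSource);
```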
With this update, we introduce the foundations for the next major phase of our roadmap and mission. From the beta release of Cycle's first native load balancer and parallel pipeline builds to a beta release of our new portal and more, this update contains a number of components that have been in development for the last 6+ months. For those looking to gain access to the beta portal, please reach out to our team!
Pipelines can now run in parallel. For pipelines that depend on the same resource, a new state of "Acquired" has been created that signifies a pipeline is waiting for that resource before starting.
Increased the quota period so that the CFS scheduler creates less overhead when managing a server with high CPU usage.
Users can now opt into using Cycle's newly developed load balancer. For now, the Cycle load balancer is meant to be a 1:1 replacement for the HAProxy load balancer that is provisioned by default, but we expect a plethora of additional tooling for observability, deployment mechanics, and scaling to be built onto it in the coming months.
Users can now add additional accounts from the same natively supported provider (AWS, Vultr, Equinix, GCP) through IAL mechanics.
The limit for environments connected to a single SDN has been increased from 5 to 8.
Search indexing throughout the platform now returns more objects, faster than it previously did.
Added a state_previous field to notifications that tells you what the previous state was for a given resource that has been updated.
Cycle now supports the GCP regions Turin, Italy and Doha, Qatar.
All durations are expressed as strings instead of ints, giving the user better readability and simpler mechanics when setting longer timeframes.
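For example (the field name and accepted units are assumptions; the point is the readability gain):

```typescript
// Before: durations as raw integers (seconds). After: readable strings.
// Assumes h/m/s style units; the exact accepted format may differ.
const before = { retention: 259200 }; // 259200 seconds... which is 72 hours
const after = { retention: "72h" };   // immediately readable

console.log(before, after);
```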
This update includes 7 weeks worth of improvements across a number of areas, but most significantly: images. Users now have an image caching layer, support for pushing local images, simplified resource identification, backup retention policy, and container deletion prevention. The team also fit in some wonderful enhancements and needed fixes.
The image caching layer will allow for the use of images as they are built on the factory instead of waiting for the image to be fully uploaded to our backend image storage service. This update should greatly reduce the time it takes larger images to become live and also have a significant impact on reducing errors in uploads to the backend service, which, previously, could cause the entire image import to fail.
A new image source type has been added to the API called the "bucket" type. This type will allow users to upload images from their local machine directly to Cycle without needing to first push that image to a registry.
Environments, hubs, pipelines, and image sources will now support identifiers. Identifiers give users a simpler way to name a given resource without using the ID.
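To illustrate the difference (the URL shapes are assumptions):

```typescript
// Addressing the same environment by generated ID vs. a user-chosen
// identifier. Paths are illustrative assumptions.
const byId = "/v1/environments/64f1c9a7e2b1d0a3";    // opaque resource ID
const byIdentifier = "/v1/environments/production";  // human-readable identifier

console.log(byId, "->", byIdentifier);
```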
Users can now wrap long blocks of commands in quotes when setting a container override command, giving greater flexibility to this utility.
Backups will now have a retention policy that defaults to one year but can be set by the user. This policy determines how long a backup will be kept for a given container.
Users can now take advantage of the container's "Lock" field, a boolean where true means the container cannot be deleted.
An issue that prevented the correct price from being shown on certain GCP GPUs has been resolved.
An issue that would cause stack builds to panic when created as part of a pipeline has been resolved.
Our first release of April brings a fix to VXLan tagging that guarantees uniqueness of the tag among environments. Also, users of the API can now filter security/hub activity events.
VXLan tags are now guaranteed to be unique between environments.
Users can now filter API returns for security/hub activity events.
This minor update further optimizes IPSec and other container orchestration features within Cycle. Most notably, this update helps make Cycle networks more adaptive to host network changes.
If a neighbor significantly changes (IP, IPsec config, etc), Cycle will now automatically learn of those changes and rebuild any necessary networks and routes.
Previously, if too many servers were deployed too quickly within a single hub, there was a chance an ID conflict could occur, causing packet loss between servers. This has been resolved.
Batch instance deletes have been optimized to occur faster and use fewer lookup calls.
Our team is thrilled to announce full native support of IPSec on Cycle! Adding IPSec brings an additional layer of protection for the networks and infrastructure of our users and ensures data in transit is fully encrypted and secured. No additional configurations or changes are necessary to take advantage of this new feature, as it was important to our team to deliver it in a way that wouldn't require extra work from our users. Alongside IPSec come several improvements, including simplified filtering and parallel container actions, as well as a fix to stateful instance base hostnames.
Users will now enjoy an additional layer of protection for their networks and infrastructure through our native implementation of IPSec. This feature makes sure that sensitive data is fully encrypted and secured, and will also make it easier for businesses to meet compliance requirements and maintain regulatory standards.
There was an issue with the option to configure stateful instances to use their shared base hostname as opposed to unique hostnames. That issue has been resolved.
When using the filter query, users will now enjoy a simpler syntax of filter[range-start] instead of the previously more complex version filter[range][start].
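For example (the endpoint path is an assumption; the filter syntax is from this release):

```typescript
// Old vs. new filter syntax in a query string. The endpoint path is
// illustrative; the filter keys come from this release.
const oldStyle = "/v1/hubs/activity?filter[range][start]=2023-05-01";
const newStyle = "/v1/hubs/activity?filter[range-start]=2023-05-01";

console.log(oldStyle, "->", newStyle);
```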
Users will now notice that container reimaging, scaling, starting, and stopping will happen in parallel.
The load balancer service has been upgraded to use HAProxy 2.8. This upgrade requires a restart of the load balancer to take effect.
In this release, environment discovery services get improvements to event broadcasting and reconfiguration, leading to a solid increase to the accuracy of DNS lookups. The team has also made significant improvements to the mechanics used for syncing environment scoped variables to instances and solved a bug that could cause the API to return an error if the billing system was generating an invoice while users interacted with the API.
The platform now uses file mounts for discovery service resolution, enabling users to modify/migrate discovery services without having to restart containers.
When an environment discovery service instance is created or deleted, the platform will now immediately broadcast an Environment Services Reconfiguration notification.
The platform will now automatically sync environment scoped variables to the compute server(s) when a new instance that requires them is created in an environment.
Fixed a bug where it was possible for a hub to be temporarily not 'ready' while the hub invoice for that month was being generated.
This release adds support for Google Cloud Platform's A2 A100 line of GPU servers. This marks the 3rd natively supported provider for which Cycle supports GPU instances. Alongside the added GPUs, the team has also added two new container deployment strategies, node and edge! To round things out, the public API now also supports filtering on jobs.
The platform now natively supports the A2 A100 line of GPU servers from Google Cloud Platform. These servers offer configurations from the smaller side all the way up to massive boxes with over 190 vCPUs.
These two new container deployment strategies, node and edge, give users even more options when architecting their deployment. Node will deploy an instance to every server which matches the deployment criteria, while edge will prioritize geographic distribution of instances.
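A hedged sketch of choosing a strategy in a deployment config (field placement is an assumption; the strategy names and semantics come from this release):

```typescript
// Hypothetical deployment config selecting one of the new strategies.
// "node" = an instance on every matching server; "edge" = prioritize
// geographic distribution. Field placement is an assumption.
type Strategy = "node" | "edge";

const deployment: { strategy: Strategy } = {
  strategy: "edge",
};

console.log(deployment);
```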
Fixed an issue in VXLAN discovery that could cause a deadlock to happen.
Users querying jobs can now use the filter query to more granularly control the return data.
Previously, certain AWS servers were not appearing on the deploy infrastructure form. This has been fixed and the servers will now show as they are available.
We're happy to announce the full native support of Google Cloud Platform (GCP) as a provider on Cycle! This marks the 4th native integration available through Cycle and greatly expands the available types, locations, and configurations of compute resources supported. On top of the addition of GCP, the team has made huge strides in networking improvements.
We're happy to announce our official native support of Google Cloud Platform (GCP). Users can now add GCP through the providers interface and deploy GCP infrastructure with ease.
Users can now expect VXLAN networks to be created, configured, and pruned more efficiently, resulting in faster on-demand scaling.
Nodes now support an 'authorizing' state and servers support a 'configuring' state, giving users and API integrations more context into what state the resource is in.
The platform now runs additional logic when a server boots to ensure IPv6 connectivity exists prior to building all networks. This check will run on nodes where networks prefer IPv6 connections.
The task response now includes a Job struct that provides extra context around the acceptance of a scheduled job into a queue.
In this release, we are happy to announce the support of build time args through the API (and soon through the portal as well). Alongside this, users can now also pass reconfiguration settings for servers at provision time.
Users can now supply build args to the image create API call. This allows for build time arguments to be passed to images that might need these args to pass credentials, non-runtime environment variables, or other arguments.
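A hedged sketch of what such a call could look like (the request shape and URL are assumptions; consult the API docs for the real schema):

```typescript
// Hypothetical image-create call passing build-time args. The URL and
// body shape are assumptions, not the documented API.
const createImage = (sourceId: string, token: string) =>
  fetch("https://api.cycle.io/v1/images", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      source_id: sourceId,
      build: {
        args: { APP_ENV: "staging" }, // build-time args forwarded to the build
      },
    }),
  });
```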
Cycle will now match exact names described in the "Build File" setting. This gives users the flexibility to define multiple Dockerfiles in their repo in the same directory, by matching against the exact name provided.
Users can now describe "reconfigure" settings when provisioning a server, giving more granular control over SFTP settings and base volume size.
This release features solid improvements to our newest API call, which now includes the private key for generated certs. Also, we've again increased the scope of the TLS requirements for HAProxy.
TLS certificate requirements have again been tightened and are now more secure by default.
Improvements were made to the domain TLS API call and users will now receive additional private key information as part of the return.
This first release of the year brings with it a new API endpoint that allows users to export TLS certificates created on Cycle. On top of that, we've added a stricter TLS policy on load balancers and made massive improvements to the pipelines UI. Finally, stack build errors now properly show granular errors and logs for images that fail during stack builds.
A new endpoint, "/dns/tls/certificates/lookup", has been added. This endpoint allows users to fetch TLS certificates for certs generated through Cycle's DNS suite.
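A minimal sketch of calling it (the path comes from this release; the base URL, query parameter, and response shape are assumptions):

```typescript
// Hypothetical lookup of a Cycle-generated TLS certificate. Only the
// endpoint path is from the release notes; everything else is assumed.
const lookupCertificate = async (domain: string, token: string) => {
  const res = await fetch(
    `https://api.cycle.io/v1/dns/tls/certificates/lookup?domain=${domain}`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  return res.json(); // expected to include the cert and private key
};
```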
Previously, images that fail during stack builds would not show granular errors or give access to a build log. More granular error reporting has been put in place and build logs are now available for failed builds.
The pipelines user interface has been reworked so that previously saved step data now shows up when editing steps. Users should expect a far better experience working with existing pipelines.
The load balancer now enforces a stricter TLS policy, making the service more secure by default.