This latest update brings several new features and a host of improvements aimed at enhancing flexibility, integration, and usability. Servers deployed through AWS can now be pinned to an individual availability zone within the chosen region. We've also enabled several new AWS regions to deploy from. Log draining is now natively supported through webhooks, DNS caching has been improved, and load balancer metrics now auto-discover ports from container configuration.
While Cycle has always favored multi-region deployments, we now also support multi-availability-zone deployments on AWS. This brings our AWS integration in line with our GCP integration.
We realized there were four AWS regions we didn't support due to an API filtering limitation. That limitation has been resolved, and we now support all non-GovCloud AWS regions.
On a per-container basis, Cycle can now push your logs to an external destination via an HTTPS webhook, enabling better integration with Datadog, Middleware, and other log aggregators.
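For a rough sense of shape, a per-container log drain might look something like this (the keys below are illustrative, not Cycle's exact schema):

```yaml
# Illustrative sketch only -- these keys are hypothetical,
# not Cycle's actual container config schema.
container:
  name: api
  log_drain:
    type: webhook
    # HTTPS endpoint of your log aggregator (e.g., a Datadog intake URL)
    url: https://logs.example.com/ingest
    headers:
      Authorization: "Bearer <token>"   # placeholder credential
```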
You can now utilize pipeline variables in your POST/GET webhooks, enabling deeper integrations with your external development and DevOps tools.
We've tweaked our caching logic to support more generalized matching while still preventing DNS cache poisoning attacks.
Our US clients can now pay their invoices via ACH instead of credit card.
We now auto-discover controllers based on active ports, making it easier to observe traffic into your load balancer.
Hubs get deeper security options as this update introduces the ability to require two-factor authentication for all users to sign in. Monitoring also makes some major leaps forward with new graphs located throughout the portal, showcasing container- and instance-specific telemetry. Keep your eyes peeled: the next update will be one of our biggest yet!
Gain more visibility into your top container instances by RAM, CPU, and network usage, scoped to containers, environments, and clusters.
Previously, under times of high CPU load or RAM usage, Cycle agents would occasionally fail to check in. Now, we set aside a small portion of resources to ensure the agent can always maintain communication and control.
Need to use your own TLS certificates for compliance reasons? Now you can upload certificates via the Portal and API.
Improved resource allocation meters on both the cluster and server views using new monitoring data.
Hub owners can now enforce two-factor authentication for all users within a hub.
Git hashes on non-'main' branches would fail to resolve unless they were among the newest commits. This has been fixed.
When a container no longer has access to a scoped variable, the underlying file is now deleted. Previously, the file would remain with the old value.
While Cycle enables parallel runs of different pipelines, it queues the runs of a single pipeline. With pipeline subqueues, organizations can spin off parallel runs of the same pipeline. Note, this is an advanced feature for specific use cases.
Dozens of smaller UI tweaks within the portal.
The platform has moved from a 12-hour sync to a 10-minute sync, which should drastically reduce the chance of time drift between nodes.
Image management is in the spotlight with a completely re-written flattener and added caching functionality. These upgrades have led to 2-3x faster build cycles in our testing. CycleOS has been upgraded to use the latest 6.6.13 kernel as well as cgroups v2, stack builds now support more flexible build variables, servers can be evacuated, and pipelines support step reordering through the portal. For the native load balancer, the web application firewall has been improved to support more complex conditions and the proxy feature has been expanded.
Stack variables no longer get injected into the saved 'stack build run'; instead, they're stored separately and utilized only at runtime. This prevents sensitive variables and secrets from being visible when users look up previous stack builds.
We fixed a race condition that would occasionally leave some instances unreachable by other instances for roughly 60 seconds immediately following a deployment.
CycleOS has been upgraded to utilize the latest long term support (LTS) Linux Kernel.
With the latest kernel upgrade, we've also updated Cycle to utilize cgroups v2 which enables more efficient, and customizable, resource constraints for containers.
Supporting Image Source Create in pipelines encouraged a Cycle anti-pattern. We have since removed this setup to make it easier to adopt best practices.
CycleOS now supports the 560.35.03 Nvidia drivers. Cycle natively supports GPU passthrough for any datacenter-class Nvidia GPU.
The WAF now supports more complex rule conditionals along with more supported filters (URL Matching + Method Matching) for HTTP transports.
The proxy handler in the Native LB now supports URL rewriting based on the MatchPath for proxy requests.
The Functions Scheduler API now supports /active-instances, which returns, in real time, the functions that are currently processing workloads.
We've completely rewritten our image packer to handle layer extraction/compression in a parallel manner. This improvement decreases build times by 40% - 60% for most images.
We've added image caching into our build systems for hubs in the Standard or Scale tiers. This is a short-term ephemeral cache that will help with concurrent builds within an activity window. After 6 hours of inactivity, the caches are reset entirely.
Users can now migrate all of the instances from one server to another in a single action. Additionally, this puts the server into an 'evacuated' state which prevents it from accepting new instances. Load balancers must still be moved manually.
We've exposed the ability to increase an instance's volume via both the portal and API. Previously, this action required a Cycle team member to be involved.
Users can now reorder pipeline steps in the portal.
Stacks now support variables that are nested within a string, for example "api-{{version}}".
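For instance, a stack fragment embedding a variable inside a larger string (the surrounding keys here are just an illustrative sketch):

```yaml
# Hypothetical stack fragment -- only the {{version}} nesting
# is from this release note; the surrounding keys are illustrative.
containers:
  api:
    image:
      name: "api-{{version}}"   # resolves to e.g. "api-1.4.2" at build time
```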
All pipeline runs will now link to related resources which were created or consumed during the run and will also show build logs for image imports and stack builds.
This release marks the official beginning of Cycle's Logging Alpha, featuring the ability to see up to 72 hours of container instance logs with everything from simple to complex (regex) search. Users can also now export environments to stacks: any environment can be "exported" into a stack file and redeployed to create a new environment. Finally, a fix has been made to stacks so that scoped variables now properly import when using YAML markup.
The alpha release of logging is now live. From the container modal, gain access to the last 72 hours' worth of logs for your instances. You can also utilize this functionality for simple or complex (regex) search.
Configure an environment exactly how you want it, then utilize this feature (found within Environment Settings) to export the environment to a stack.
We recently discovered a bug that prevented scoped variables in YAML stacks from being properly imported; this has been resolved.
Deployments wouldn't properly display in the portal for environments that had more than 25 containers; this has been resolved.
In this release, users gain access to magic variables in pipelines. Dynamic pipelines can now take advantage of four magic variables that inject a special value, derived from the variable's definition, into the assigned field. This makes unique field entries tied to date-like values much simpler to work with. DNS zones can now be imported via file, greatly reducing the manual work of migrating to Cycle DNS. Finally, we've fixed a scoped-variable priority ordering issue that could result in certain environment variables not being correctly assigned when defined in both the container config and scoped variables.
Users can now match routes to include or exclude specific containers.
A bug that would cause environment variables with blank definitions to not be appropriately overwritten by scoped variables has been resolved.
Users can now take advantage of magic variables in pipelines: {{-date}}, {{-date-time}}, {{-time}}, and {{-time-rfc3339}}. Unlike normal pipeline variables, which require an input value on each pipeline run, these resolve their values automatically.
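As a sketch, a pipeline step could stamp a date into a field like so (the step layout and action name are hypothetical; only the magic-variable syntax is from this release):

```yaml
# Hypothetical pipeline fragment -- the step layout is illustrative;
# only the magic-variable syntax comes from this release.
steps:
  - action: deployment.tag            # hypothetical action name
    options:
      tag: "release-{{-date}}"        # e.g. "release-2024-05-01", no input prompt
```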
Users can now import a DNS Zone file to their zones on Cycle, making transferring domains much simpler.
The discovery service now throttles queries, protecting it from workloads that would otherwise overwhelm it with lookup requests.
Added the ability to Overwrite Runtime Config on the pipeline step Reimage Container, giving more flexibility to users looking to mutate the container runtime configuration on deployment.
Users will now notice a Router Response graph on the load balancer service dashboard for V1 load balancers. This graph shows HTTP response codes when the load balancer is in HTTP mode.
The team is excited to release the alpha version of our web application firewall, which includes the ability to block or allow traffic based on IP at the load balancer. This first release marks the beginning; the WAF will be expanded to support features like geographic matching, URL matching, and HTTP controls over headers and methods. Pipeline parallelism has been improved by rebuilding the locking/exclusion mechanisms, and the native load balancer now supports rewrite rules that allow for granular URL rewriting.
Using the Native Cycle LB? You can now block IPs and IP ranges at the load balancer. This is the first release with WAF functionality -- much more to come!
Users can now use URL rewriting (with variables!) in the Native Cycle LB. Use a regex, with groups, for a path match, then utilize the $$# format to insert the corresponding group matches.
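A rough sketch of such a rule, assuming hypothetical field names -- a regex with capture groups matches the path, and $$1, $$2, etc. insert the corresponding group matches:

```yaml
# Illustrative only -- the router/rewrite keys are hypothetical.
router:
  match:
    path: "^/api/v1/(.*)$"   # capture group 1 = everything after /api/v1/
  rewrite:
    url: "/internal/$$1"     # $$1 inserts the first group match
```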
Previously, non-UTF8 characters would cause the console to crash and reconnect in a loop. This has been fixed.
In the previous update, we started tagging servers at infrastructure providers. Under certain conditions, we were tagging servers with a character that wasn't allowed, preventing a provision. This has been resolved.
We discovered, and fixed, a race condition that occurred when a load balancer was instructed to serve traffic for both a wildcard domain and a non-wildcard domain that existed underneath the same wildcard.
We rebuilt the locking/exclusion mechanism in pipelines to better determine which resources need read locks vs. write locks. This improvement results in stronger data integrity while also enabling better performance when running concurrent pipelines.
This update marks the beginning of our march toward native monitoring, laying the groundwork for comprehensive metrics, events, logging, and alerting. Users will notice initial visual enhancements, with a full monitoring dashboard to follow. This release also introduces granular access control lists (ACLs), allowing detailed permission settings for several resource types. Geographic DNS (GeoDNS) has been added to support the most latency-sensitive applications, and other improvements include enhanced cluster management, customizable shared memory per container, a rebuilt autoscaler, and expanded support for scoped variables.
We've officially begun building Cycle's native monitoring (metrics, events, logging, and alerts) solution. This release contains the base primitives and data collection that the remainder of the monitoring solution will be built upon. You'll notice some new graphs and visual components in the portal, but a future release will contain a proper monitoring dashboard.
Building off the custom roles functionality we released in our previous update, organizations can now restrict access to clusters, environments, pipelines, and more via granular access control lists.
Deploying a latency-sensitive application? Enable GeoDNS on any environment with 3 or more load balancers (entrypoints) to ensure your users get routed to the closest load balancer(s).
Previously, Clusters were simply an identifier. Now, with Clusters being their own resource, they support ACLs with other new capabilities to come.
Users can now customize the size of the /dev/shm (shared host memory) device that gets attached to container instances.
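Illustratively, this might look like the following in a container's config (the key names are assumptions, not the actual schema):

```yaml
# Hypothetical container config fragment -- the key names are illustrative.
resources:
  shm_size: "256M"   # size of the /dev/shm device attached to each instance
```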
Since Cycle is IPv6 native, we previously delayed empty IPv4 lookup responses to assist with compatibility for older applications. We no longer enforce that delay for new environments, though it can be set via the discovery settings.
We've rebuilt the autoscaler to use the new monitoring primitives. These metrics enable us to perform autoscaling in a significantly more efficient, and predictable, manner.
Cycle will now generate the directories required to support the injection of scoped variables that utilize the 'file' access type.
Users can now specify container identifiers for limiting scoped variables, making gitops/pipeline automation easier.
When building images via pipelines, users can now pass variables to be injected into a container's build process.
Not a fan of JSON? Stacks can now be defined via YAML.
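A minimal YAML stack might look something like this (a simplified sketch, not the complete spec):

```yaml
# Simplified sketch of a YAML stack -- not the complete spec.
version: "1.0"
containers:
  web:
    image:
      name: nginx:latest
    config:
      network:
        ports:
          - "80:80"
```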
The container instance console now persists (up to 128kb) through updates and restarts.
The latest release of the Cycle Platform brings powerful support for native gaming and video streaming via UDP, more granular configuration for function containers, and massive performance gains when using Cycle's native load balancer.
Functions can now be triggered via pipeline. Have a migration/init script that needs to run as part of a deployment? This can now easily be accomplished with functions and pipelines.
The Native Load Balancer now supports UDP connections which can be useful for VPNs, video streaming, gaming, etc.
Max Runtime, Max Queue Time, and other settings can now be configured for functions within the container config.
The Native Load Balancer now supports, on a per-router basis, latency-based load balancing, ensuring that traffic is routed to the top quartile of instances by latency relative to the ingress load balancer. This can reduce bandwidth costs while also yielding performance increases within environments that have multiple load balancers.
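Conceptually, enabling this per router might look like the sketch below (field names are hypothetical):

```yaml
# Hypothetical per-router config -- field names are illustrative.
routers:
  - match:
      ports: [443]
    config:
      strategy: latency   # prefer the lowest-latency quartile of instances
```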
Users can now specify the path to a stack file inside of a repo, enabling teams to have multiple stacks from one repo source.
Functions that terminate will now autorelease from the scheduler so they can be used again.
We've made a few improvements within the Native LB to more quickly adapt to routing changes. Additionally, the Native LB now uses fewer internal API calls when aligning with container state changes.
We fixed a calculation error in our traffic graphs that was showing false traffic spikes.
Scoped variables that exist within stacks can now be configured via pipelines.
We're excited to introduce a beta release of Cycle Functions, bringing functionality to the platform that facilitates running lambda, batch, and serverless-like workloads. We've also made improvements and fixes across the board: the Scheduler API endpoint is now publicly accessible for applications needing external API interaction, we've refined our telemetry caching for LBv1 to prevent potential downtime in rapidly changing environments, and you can now configure load balancers to bind to a server's host IP for more efficient edge/CDN deployments.
Organizations can now run their functions (lambdas, batch jobs, and serverless-like microservices) natively on top of their Cycle infrastructure. This feature is currently in beta.
The scheduler can now be publicly accessible if your application needs to utilize the scheduler API from an external endpoint.
In the last update, we introduced a caching layer for LBv1 telemetry to persist it across LB restarts. Unfortunately, this caching could get out of hand for environments that changed often, which could lead to the load balancer not synchronizing new changes, yielding potential downtime for an environment. We've refactored this to be a bit more intelligent about when we should cache telemetry data.
You can now instruct a load balancer to bind to the server's host IP instead of acquiring new IPs. This is helpful in building edge/CDN like deployments where an environment may have dozens, or hundreds, of entrypoints.
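As a sketch (the key name is an assumption, not the documented setting):

```yaml
# Illustrative only -- the key name is hypothetical.
loadbalancer:
  bind_host: true   # reuse the server's host IP instead of acquiring new IPs
```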
Previously, a stack build would occasionally fail to clone via git if you used a custom git branch. This has been fixed.
We're happy to release this update, which includes a new, completely reworked role-based access control (RBAC) system. Users can now create custom roles, change the capabilities of default roles, and use existing roles as templates. Along with this major rework, we've also released a new integration with Depot.dev, allowing users to build their images using Depot's factory.
Previously, hubs supported only four default roles. Admins can now create custom roles, with associated capabilities, to further define users. Granular ACLs will be available in the next update.
Cycle now has a native integration with Depot.dev, enabling faster build times.
Previously, we were too aggressive with our throttles for service containers. Service containers should now see a slight improvement in performance, especially under high traffic demands.
Previously, we would remove the telemetry data associated with an instance once its container was deleted. This led to historical data being incomplete and negatively skewed.
The portal now supports more granular, role-based visibility and blocking on a per-"panel" basis.
In this release, users will find a completely rebuilt integrations section under the familiar hub integrations. This rework moves provider, object, TLS, and all other integrations on Cycle to the hub scope and unifies them in a single space for a better user experience and better hub management. We've also introduced a new healthcheck-focused pipeline step that allows users to wait for a deployment to be healthy before proceeding. The portal now features a network diagram on the container modal dashboard, helping users quickly diagnose and solve issues related to publicly accessible containers.
We completely rebuilt the way users and applications configure external integrations for Cycle. This new approach enables us to roll out new integrations with infrastructure providers, storage providers, and more, in one streamlined way.
When viewing a container that expects public ingress traffic, Cycle's portal now displays a topology diagram that can help in solving potential configuration issues.
Every container instance gets a .json file mounted into /var/run/cycle/metadata that gives context about the container and the environment hosting it.
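The exact schema isn't reproduced here, but conceptually the file carries context along these lines (every field below is purely illustrative):

```yaml
# Shape shown in YAML for readability -- the real file is JSON,
# and every field name below is purely illustrative.
container:
  id: "5f2a..."        # hypothetical container ID
  name: api
environment:
  id: "64c1..."
  cluster: production
```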
A new 'Healthcheck' step has been introduced which enables pipelines to wait for a deployment to check in and become healthy prior to continuing.
In the Native V1 Load Balancer, teams can now disable a controller from listening to public traffic -- even if a container desires a port/route to exist.
On the heels of the newly released Deployments feature, we've added new pipeline steps that make deployments more powerful than ever. Along with the new steps, users will find support for using SDNs to communicate between deployments, extended variable support in pipelines, mTLS support for the native load balancer, and optimizations to how the native load balancer treats unreachable destinations.
Added the ability to start/stop environment deployments via pipeline.
More fields now accept variables within pipelines, enabling users to build more powerful and dynamic automations.
Containers in deployments can now utilize SDNs to target containers in deployments from other environments.
Cycle's native load balancer now supports mutual TLS on a per-router basis.
If the native load balancer is unable to reach a destination, that destination will be temporarily marked as unavailable to decrease retry attempts on subsequent requests, ensuring lower latency routing.
As of a few hours ago, CVE-2024-21626, CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653 were made public. As reported by Snyk, the first of these vulnerabilities involves an issue with the runc runtime, and the other three affect BuildKit. Now, within just a few hours of notice, we bring our users this update, fully patching all of their infrastructure and protecting them from exposure to these exploits.
A number of vulnerabilities (CVE-2024-21626, CVE-2024-23651, CVE-2024-23652, and CVE-2024-23653) that affect almost all container platforms were announced on January 31st. This update addresses those vulnerabilities.
Similar to a container runtime override command, or a backup command, health checks now support commands that utilize subshells.
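For example, a health check might use a subshell to compose its probe (the surrounding keys are illustrative):

```yaml
# Hypothetical health-check fragment -- the keys are illustrative.
health_check:
  # The $(...) subshell runs first; its output feeds the comparison.
  command: '[ "$(curl -s -o /dev/null -w "%{http_code}" localhost:8080/health)" = "200" ]'
  interval: 10s
  retries: 3
```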
Previously, old routers weren't removed from the native load balancer and could occasionally cause race conditions; this has been resolved. The native load balancer is still in beta.
Users can now further customize the granularity/sensitivity of their telemetry collection. Additionally, proxy/forward handlers were improved to handle in-transit content modification.
This latest update introduces Deployments, allowing seamless management of application versions and rainbow deployments. We've also added variable support for both pipelines and stacks, boosting the flexibility of both resources and simplifying management. The new Deployments feature brings with it new pipeline steps that make it easier than ever to plug Cycle into your CI/CD workflow.
Teams can now deploy multiple versions of their applications into the same environments and manage which version is production, staging, development, etc. Organizations can then route traffic to specific versions based on a tag, enabling zero-downtime updates and rollbacks.
Pipelines now support variables for identifiers and deployment version tags, allowing teams to build one pipeline that can accomplish many unique tasks.
Users can now denote variables in their stacks and, at build/deploy time, specify the values for those variables, enabling stacks to be customized on the fly.
Similar to a /etc/hosts file on your machine, Cycle's discovery service now supports custom internal domain resolution for environments.
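Conceptually, it behaves like an environment-level hosts file; a sketch of such a mapping (key names are assumptions):

```yaml
# Hypothetical discovery config -- the key names are illustrative.
discovery:
  custom_resolutions:
    legacy-db.internal: 10.0.0.12   # resolves only within this environment
    cache.internal: "fd00::1:2"     # IPv6 targets work as well (Cycle is IPv6 native)
```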
We've introduced a few new pipeline steps around deployments, webhooks, and image imports to enable better automation when paired with a CI/CD entrypoint.
Introduced a new way of referencing resources/objects within a Cycle hub using a textual string as opposed to requiring an Object ID.
Previously, compute servers opened a couple of different ports for compute<->compute communication. We've now consolidated this into a single server/port to make it easier to enforce security policies.
Refactored all 'Add Item' forms to be more predictable and less prone to a user forgetting to add a new item to a given resource.
To prevent confusion, we've consolidated all stack deployment functionality to Stacks/Pipelines as opposed to having nearly half a dozen different ways of deploying a stack.