The HA orchestrator is now more strategic in how it deploys instances across locations, ensuring more servers are included in the deployment pool.
Once a stateless container is deployed, you can now scale the number of desired instances up or down using the settings tab.
After some network optimizations in our last update, an issue arose where VPN routes wouldn't always be configured. This has been resolved.
Fixed an issue that prevented inbound IPv6 connections on Vultr.
Created a status page, found by clicking the bell at the top of the portal, which will show platform-wide announcements. In the future, this page will also show the status of Cycle's core services.
The T3, C5, M5, R5, and I3 series of AWS instances are now supported.
The logic for the High Availability and Resource Density orchestrators was rebuilt. High Availability is now available for services.
Image builds on Standard, Growth, and Scale tiers now have access to more resources, resulting in much faster build times.
Cycle now supports wildcard domains on LINKED record types. For example, *.cycle.io would resolve portal.cycle.io, api.cycle.io, etc., and route the traffic to the same container.
CPU limits and reserves are now set as "shares" instead of cores/seconds. 10 shares are equivalent to 1 thread. If you had any CPU limits set, they have automatically been converted to equivalent shares.
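For example, a limit previously set to 2 threads now corresponds to 20 shares, and half a thread to 5 shares.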
Added two new options to server constraints (Infrastructure -> Server -> Settings). If "Allow Pool" is enabled, containers with no tags can be deployed to the server. If "Allow Services" is enabled, environment services such as load balancers and discovery can be deployed to the server.
Instead of immediately requesting desktop notification permissions, Cycle now waits for a user to sign in for the first time and shows a modal explaining why notifications are needed before asking.
The load balancer and discovery services can now be deployed in an HA configuration. Cycle will intelligently create and manage multiple instances of these services when enabled. To enable, go to your environment dashboard, check the HA box next to the desired service, and click save.
Added pagination to relevant data tables across the portal. Use the arrows in the top-right corner or your arrow keys to navigate.
Fixed an issue that would require a user to refresh the portal before new clusters would show up when creating a new environment.
Fixed an issue where pointing multiple LINKED records to the same container caused the container to appear that many times in the edit list.
Hub invitations now appear on the dashboard when you receive them.
Added the ability to delete your account. It can be found on the account settings page, reached by opening your account menu and clicking your name.
When reimaging a container, the container state will now be set to 'reimaging'. Additionally, compute nodes will now pre-download new images before restarting the existing containers, limiting the time they are offline.
Compute nodes will now perform actions on up to 3 instances of the same container concurrently. This should greatly accelerate starts, stops, and deletes.
Image builds that require Docker will now utilize Docker 19.03.
Fixed a race condition that existed with container state updates. Previously, container instances would start and stop properly, but the container state as a whole occasionally got stuck on 'stopping' or 'starting'.
A VXLAN route wasn't being properly configured for compute nodes running on AWS, preventing private networks from establishing connections.
Favorite an environment in the Environment Settings to have it always appear at the top of your environments list.
Container and instance hostnames can now be changed post-creation. Discovery services will automatically be updated with the new hostnames for resolution.
Support for AWS has been added to Cycle. AWS support is still in beta as of August 8th. During the beta, only deployments using t2 EC2 instances in N. California (us-west-1) are enabled.
To make debugging easier, the master console window now has a 'clear' button.
Changed all graphs and charts to use a cleaner, more flexible graphing library.
Containers in different environments can now communicate using DNS (<hostname>.<network-identifier>). Both environments must be in the same multi-environment network to communicate.
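For example (hostnames here are purely illustrative), a container with hostname api in an environment attached to a network whose identifier is shared would be reachable from the other environment at api.shared.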
Cycle will automatically send out emails for billing, server deployment, and hub invite notifications.
Instances could potentially deadlock by sending too much output to the console and filling the buffer; this has been resolved.
Certain container caches weren't being properly cleared on DNS updates, which could result in load balancers not accepting traffic on new domains.
Previously, ingress console service nodes would each have unique keys, which would require users to clear their known_hosts file on every SSH login. Now all consoles share a common identification key.
Using either the portal or API, create networks that consist of multiple environments. The containers within these environments can now communicate over their own private, isolated network.
Previously, container instances would discover routes to each other using BUM frames. As more containers were deployed, this discovery process could lead to network floods and instability. Going forward, Cycle automatically builds static L2 networks where all routes are defined by Cycle ahead of time. These static networks should result in faster packet delivery and greatly increased scalability and stability.
Cycle will now automatically determine startup delays for stacks that specify a startup order for containers
Graphs are now optimized to better handle environments that are running 20+ containers.
Added the ability to specify sysctl variables via stacks, the API, and the portal.
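A common use case, for example, is raising net.core.somaxconn for a container that accepts a large number of concurrent TCP connections.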
Tweaked some of the default sysctl values for overlay networks to assist with IPv6 discovery.
Invoices can now be paid manually from the invoice page.
Stacks that include file injections will now build unique images with those files when needed.
From the DNS dashboard, users can now see the status of TLS/SSL certificate generation requests.
Stacks that contained multiple Dockerfile builds were experiencing a race condition that prevented some builds from occurring.
Zones that did not contain AAAA records were returning improper answers when resolving AAAA records.
Although the load balancer would start and run properly, a race condition existed that would occasionally show the load balancer as infinitely 'starting' in the portal.
When a non-stack container was re-imaged with an image from a stack, that container wasn't properly associated with the stack.
For a small subset of Cycle servers, the compute services didn't properly clean up existing VXLAN neighbor associations, which led to IPv6 router advertisement floods.
Servers can now be grouped into clusters. When creating an environment, you now have the ability to choose which cluster the containers within that environment should be deployed to.
A number of improvements have been made to the container startup process: (1) containers now start 3x faster on average, (2) users can define the order in which containers in an environment should start, and (3) a delay can be added to a container's start to ensure any containers it depends on begin first.
The compute service was missing an inheritable kernel capability, which prevented 'privileged' containers from starting. We do not recommend setting a container to privileged unless there's a very specific reason for doing so, as the security risks to your servers greatly increase.
For some users, domain verification using NS records (not TXT records) has been broken since 2019.04.09. NS domain validation should now work for all users again.
A few optimizations were made to the graphs on the portal to decrease render time by over 70%.
Using the portal, API, or stacks, users can now configure inner-container health checks to ensure application integrity. Should a health check fail after the specified retry attempts, Cycle will automatically restart the container.
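As an illustrative example, a check configured to probe a local health endpoint every 10 seconds with 3 retries would restart the container only after three consecutive failures.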
Solved an issue within the portal where users might unexpectedly be logged out if they had 2 or more portal tabs open.
Added more visibility into error handling and increased the verbosity of instance errors written to container event logs.
Every thirty minutes a node will now perform a sync with Cycle's core to ensure the proper container instances, networks, and load balancer configurations exist.
Improved and optimized some code around reconnecting to Cycle's core in the event that a node experiences a connection loss.
A bug in CycleOS's startup code prevented the OS from rotating logs out of memory. After enough time (weeks to months), the server would run out of usable RAM. Although highly uncommon, this update will require manual restarts of customer nodes. Restarting nodes will automatically pull the latest version of CycleOS.
Cycle's primary DNS servers now download and maintain a cache of all live DNS records. If a record is updated via the portal or API, Cycle's DNS servers will recognize the change and update their caches within 5 seconds. By maintaining local caches, Cycle's DNS should continue to operate normally in the unlikely event that Cycle's core systems are unreachable.
Previously, Cycle's API & Portal did not support the ability to add or modify SRV records; they now do.
Hub invites are now shown directly on user profiles; notification improvements are coming soon.
Stack files can now be written in either JSON or YAML format.
To increase the isolation of container instances, cgroup namespacing has been enabled by default on all containers.
Permissions on overlayfs directories were too strict, which prevented container instances from running certain chown/chmod calls on persistent volumes within nested overlays.
For any container instance that doesn't have CPU resources defined, Cycle now enforces a default maximum usage of 50% of the total compute capacity. To override this, specify CPU resources as needed.
Containers that have been defined as highly available or stateful now have protection against being terminated by Linux's OOM (out of memory) killer. Should all stateless containers be terminated and a server still be saturated, stateful containers may then be terminated automatically.
When creating a container via the portal, the API, or a stack, a container's deployment strategy can be defined. As of 2019.04.01, the options are as follows: first-available, resource-density, and high-availability.
When creating a container via the API or a stack, arguments and environment variables can now be specified for targeted instances, allowing easier deployment of complex, cluster-ready applications.
Any load balancer handling TLS/SSL termination will automatically reload certificates when a new one is generated or an existing one is renewed.
Added a tooltip on hover to the container delete button to indicate that it links to the delete form and doesn't immediately delete the container.
Fixed a bug preventing environment variables from the container image from being displayed or edited on the container config form.
After a container or DNS action, there's a delay before load balancers recognize the change. This is done to prevent unnecessary queries if many containers are started at the same time. This delay has been decreased from 15 seconds to 5 seconds.
Update 2019.03.26.1-angel-lake ensured containers were cleaned up after they stopped unexpectedly, but this update also broke restart policies. Container restart policies are now utilized again.
Logging in to a two-way console previously invalidated the token/password used to log in. Now, the password can be used any number of times for up to 15 minutes. Generating a new token still invalidates all existing tokens.
If an instance unexpectedly stops and no restart policy exists, or a container fails to successfully start within the maximum number of restarts allowed, the instance's state is set to 'failed'.
If a container fails to start within 4 minutes, a timeout error will now be thrown rather than an 'unexpected EOF / cannot decode' error.
If an instance failed unexpectedly and didn't have a restart policy, Cycle's compute service wouldn't clean up any log/pid files.
To help ensure users have a clear picture of the state of their infrastructure, Cycle's portal dashboard now shows the Compute version running on each server.
After a user/API key has made more than 6,000 requests in one hour, any further requests during that one-hour window will receive a 429 rate-limited HTTP status.
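As a rough sketch of how a client might handle this limit (using Python's requests library; the endpoint and authorization header below are placeholders, not Cycle's actual API paths), a request helper can back off whenever a 429 is returned:

```python
import time
import requests  # third-party HTTP client: pip install requests

# Placeholder values; substitute your actual endpoint and credentials.
API_URL = "https://api.cycle.io/v1/example"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def get_with_backoff(url, headers, max_attempts=5):
    """GET a resource, backing off whenever the API answers with HTTP 429."""
    for attempt in range(max_attempts):
        resp = requests.get(url, headers=headers)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if provided, otherwise back off exponentially.
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")

response = get_with_backoff(API_URL, HEADERS)
```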
In an attempt to make it easier to see when an instance starts or stops, the compute service now inserts instance state entries into the master console for certain events.
Made performance improvements related to how the interface responds to rapid updates over the notification websocket.
The usage counter on the images table will now update properly when containers are created/deleted.
Communication between two containers within the same data center will now occur over the data center's physical private OOB network. Using this private network should yield a large increase in network performance while also greatly increasing security.
Certain VPN services deployed prior to 2019.03.18.1-angel-lake have been failing to authorize users attempting to log in over the last 3 days.