Exposing HD Configurations in panel
Would be nice to have a feature to verify that RAID configurations were set up properly during deployment
A handy feature would be a BASIC AUTH option on a web endpoint/load balancer. On nginx you would do something like:
server {
    listen 80;
    server_name your_domain.com;

    location / {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
Rather than having to deploy nginx into a Cycle environment and proxy all traffic through it just to get basic auth, it would be nice to have a "not intended for production use" option on an environment's load balancer/firewall to do basic auth.
Two choices would be available:
and finally a simple GUI to add basic auth users.
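Until a GUI exists, users for the `.htpasswd` file referenced above can be generated from the shell. This is a sketch using openssl's htpasswd-compatible apr1 hash (so the `apache2-utils` package isn't required); `demo-user` and `demo-pass` are placeholder credentials.

```shell
# Generate an htpasswd-style "user:hash" line with openssl's apr1 hash.
# Append the output to /etc/nginx/.htpasswd (or redirect it there).
printf 'demo-user:%s\n' "$(openssl passwd -apr1 'demo-pass')"
```

Each run produces a different hash (the salt is random), but nginx's `auth_basic_user_file` accepts any of them.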
Please consider adding a compress option to log drain form in environment settings panel.
Reference documentation here.
From my initial observation, compressed request bodies are unusual in HTTP traffic, but not impossible. When sending a request with a compressed body, the client must trust that the server is able to decompress it. The server can decompress the request body based on the Content-Encoding header sent by the client, e.g. Content-Encoding: gzip
The Cycle agent pushes logs in a highly compressible format (NDJSON). Client-side (in Cycle's case, agent-side) compression may reduce network traffic for logs by 10x or more.
Example curl for compressed request body:
# compress json data
gzip -9 -c body.json > body.gz
# send compressed data
curl -v --location 'http://my.endpoint.example.com' --header 'Content-Type: text/plain' --header 'Content-Encoding: gzip' --data-binary @body.gz
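The "10x or more" claim is easy to check locally. This sketch generates synthetic NDJSON-shaped log lines (stand-ins for real agent output, not an actual Cycle payload) and compares raw vs. gzipped sizes:

```shell
# Generate 1000 synthetic NDJSON log lines, then measure gzip's effect.
for i in $(seq 1 1000); do
  printf '{"time":"2025-08-07T11:11:11Z","source":"stdout","message":"some log message %s"}\n' "$i"
done > body.json
gzip -9 -c body.json > body.gz
echo "raw:        $(wc -c < body.json) bytes"
echo "compressed: $(wc -c < body.gz) bytes"
```

Because the lines share most of their structure, the compressed body comes out a small fraction of the original size.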
If the destination server does not support request decompression, Apache httpd can handle it with the following directives:
LoadModule deflate_module modules/mod_deflate.so
SetInputFilter DEFLATE
ProxyPass "/" "http://upstream/"
Please add auth option for external log drain requests. That way we can protect our log ingest endpoint by allowing only authorized agents.
Reference documentation here.
Proposed solution:
Example:
An auth field containing the value Basic YWRtaW46cGFzc3dvcmQ= results in the header Authorization: Basic YWRtaW46cGFzc3dvcmQ=
This also allows for other types of auth, like Bearer and Api-Key tokens.
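For reference, the Basic value in the example above is just base64("user:password"). The credentials behind the sample token can be reproduced like so:

```shell
# Base64-encode "user:password" credentials for a Basic auth header.
# The credentials here match the sample token used in the example above.
printf 'admin:password' | base64   # -> YWRtaW46cGFzc3dvcmQ=
```

Note the use of printf rather than echo, which would sneak a trailing newline into the encoded value.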
Please add environment_identifier to exported logs so we can have a name instead of hash for switching between environment log views in our Grafana log dashboard.
Reference documentation here.
Proposed fields:
The value is the same as the identifier field on the environment settings page.
Example NDJSON raw request body:
{
"time": "2025-08-07T11:11:11.12345678Z",
"source": "stdout",
"message": "some log message",
"instance_id": "instanceid",
"environment_id": "environmentid",
"environment_identifier": "my-environment", <---- please add this
"container_id": "containerid",
"container_identifier": "my-container",
"container_deployment": "my-deployment",
"server_id": "serverid"
}
*The JSON in the example above was formatted for readability. An actual NDJSON body contains one JSON object per line, each representing a log message.
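Once the field exists, downstream tooling can key on it directly. As a sketch, here's the proposed identifier being pulled out of a single NDJSON line with sed (a real pipeline would more likely use jq; the line below is a hand-written sample, not actual agent output):

```shell
# Extract the proposed environment_identifier field from one NDJSON line.
line='{"environment_id":"environmentid","environment_identifier":"my-environment","message":"some log message"}'
printf '%s\n' "$line" | sed -n 's/.*"environment_identifier":"\([^"]*\)".*/\1/p'   # -> my-environment
```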
Hey All! I love the automatic lockdown on SFTP as it seems like bots are crazier than ever these days; however, I'm having trouble seeing when my server is in lockdown and when it's out without reconciling with the activity event log. Would it be possible to make this change to the portal, where it is easy to tell the state of SFTP (Locked Down vs Not Locked Down)?
Hey Everyone!
Many of you have reported hearing that different providers are being affected by outages.
So far we've heard reports of:
These have been corroborated by our team via Downdetector and other avenues, but other than things like Google Meet not loading, we are not hearing of major interruptions to compute nodes running Cycle.
If you are having an issue with your compute, definitely let us know as we want to share that information within our ecosystem as much as possible and help each other.
If you go through this week and haven't even had to think about the word outage, consider posting something on LinkedIn about it and tag our official page.
Note before reading. If you're using a resource that uses SSH auth, the following will not retroactively affect anything. If a key has been working it will continue to work.
The official Cycle documentation suggests generating SSH keys using the following pattern.
ssh-keygen -t ecdsa -b 256 -m PEM -f your-filename.pem
This pattern will generate an ecdsa key, which we've recently found can cause compatibility issues with the Golang x509 package IF the ssh backend is using LibreSSL instead of OpenSSL.
LibreSSL is the default library used by ssh/ssh-keygen on Mac.
The Issue
Under LibreSSL, ssh-keygen writes the EC key with explicit curve parameters, while OpenSSL writes it with a named curve. While the two formats are functionally equivalent, they're not always compatible: Go's x509 parser only accepts the named-curve form.
If you've created a key and added it to a stack or image source and are getting an x509 error, use the following pattern to check your key.
openssl ec -in YOURKEYFILENAME -text -noout
And check whether the output contains the line: ASN1 OID: prime256v1
If you do, the key should work. If it still doesn't, ping us on Slack or leave a comment here; it's likely something else.
If you do not see that line and want to convert the private key to named curve from explicit parameters use the following pattern:
openssl ec -in YOURKEYFILENAME -out NEWFILENAME -param_enc named_curve
The other option is to use OpenSSL directly on Mac or generate the keys from within a container.
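The check-and-convert steps above can be exercised end-to-end without touching a Mac. This sketch deliberately generates a prime256v1 key with explicit parameters (mimicking the LibreSSL behavior), converts it to named-curve form, and verifies the result; the filenames are just examples.

```shell
# 1. Generate a key with explicit curve parameters (the problematic form).
openssl ecparam -name prime256v1 -genkey -noout -param_enc explicit -out explicit.pem
# 2. Convert it to named-curve encoding (the form Go's x509 accepts).
openssl ec -in explicit.pem -out named.pem -param_enc named_curve
# 3. Confirm the converted key now reports its curve by name.
openssl ec -in named.pem -text -noout | grep 'ASN1 OID: prime256v1'
```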
Hey everyone,
The next series of updates are all about improving monitoring across the platform.
To kick things off, we're cleaning up how log drains work. Starting with the next update, log drain configuration is moving from the container level to the environment level.
This means instead of setting log drain up for each container, you'll be able to drop in a single log destination for the entire environment.
Impact:
This will require everyone using log drains to update their config after the release. And if you've been using a custom URL format to tell downstream systems which container is sending logs, you'll need to rework that part a bit.
While this change is minor, it will help us lay the foundation for much stronger monitoring features coming soon.
Just a heads up for those who might run into the same issue:
I have an environment with a redis container named redis. In another container in the same environment, I had a nodejs server trying to connect to the redis server via the ioredis library. Basically I had a file like
import { Redis } from "ioredis";
export const redis = new Redis("redis://redis:6379");
On server start, I was seeing a stream of errors along the lines of
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
This was very strange as the two containers were on the same network and redis should have been (and turns out, was) a valid hostname. Furthermore I was able to run via SSH on the node server instance:
# redis-cli -h redis PING
PONG
So the hostname was valid. What was going on?
After some discussion with the cycle team (thanks!!), it turns out the issue is that the internal networking within the environment is all done via IPv6, and by default, the ioredis client doesn't allow DNS resolution into IPv6. For whatever reason 🤷🏻♂️. But the fix was very simple. This works:
import { Redis } from "ioredis";
export const redis = new Redis({ host: "redis", family: 6 });
Explicitly tell the client to use IPv6, and it will.
Perhaps this will come in handy for someone later 😄 again, thanks to the cycle team for finding the solution to this!
This release (2025.05.21.02) is a smaller quality of life improvement patch.
It's mainly focused on:
Sometimes invoices need to be collected by users who do not want a Cycle login. We've added the ability to download the invoice directly from the email without the need for the user to log in to do so!
Container instance networking has always been front and center on the containers modal and corresponding instances page, but there were some variables not shown when it came to SDN networks. You'll see that we've added a section under the instance console that shows all attached networks for the container instance.
A bug was found that would cause custom DNS resolvers to only work with CNAME records. This has been resolved.
Our team has always been a proponent of IPv6 adoption and most of the platform is built with an IPv6 native attitude (where possible). There was a case where, if a load balancer only had IPv6 enabled, VPN files could fail to download. So we added some new functionality that allows users to download the VPN config files through load balancers that only have IPv6 enabled. One more step in the right direction!
I have found that 50% of the time I connect to the container SSH endpoint it is to find an IP address on one of the interfaces. Most of my containers don't have the ip command so I have to install that, too. It would be great if we could see all interface IP assignments directly in the portal.
Just a friendly note to the community after talking it through with the support guys - slim-bookworm is having intermittent DNS resolver issues. We've narrowed one of our stack issues down to the actual image itself having issues and wanted to warn the community to save you a bit of aggravation. If you were debating moving everything to Alpine, let this be a final kick in that direction.
Our containers are generally built with minimal dependencies so as to minimize the attack surface. This means they don't normally have curl/wget/netcat. There is a funky shell trick, but it's .... ugly. Would it be possible to add a cycle-native HTTP/HTTPS health check?
Ugly Script
exec 3<>/dev/tcp/localhost/5000 && \
echo -e "GET /_ah HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n" >&3 && \
cat <&3 | grep 200
Hey team! A couple of questions about Cycle. We currently have an instance that we were testing in our Staging environment. For some reason that we are trying to figure out, it's running at 100% CPU and maybe 100% RAM as well.
So one of our servers is down in our cluster.
Questions about compute:
Questions about instances:
Some of you may have run into DNS issues when using a Debian based container.
This discussion is a place to discuss
Per my research:
From inside a container on Cycle I ran tcpdump -i any port 53 -vvv. This gave me the following interesting information.
someother.domain.com resulted in both A and AAAA requests being sent in parallel. So at this point I knew the internal resolver was working correctly and that the failure was happening inside the container's DNS client logic.
So I dove deeper into some research on glibc and specifically getaddrinfo() since it handles DNS resolution and found that:
And the second part there, where it prematurely fails seems to be the major issue.
Luckily, musl libc (Alpine's resolver) performs the same lookups serially and predictably, which has so far eliminated any occurrence of this error. So if you're in a position to use Alpine, it's more reliable (and generally more secure).
Looking forward to hearing some insights and opinions here!
Hey everyone! We're trying something new for this release, by creating a place for discussion around updates we push out for Cycle. It's a more discussion oriented version of our changelog, where we can engage with all of you about what's new.
This release (2025.04.24.02) is a huge release that has been in the works for nearly a month now. It brings with it a lot of stability and performance improvements, but also tons of feature requests we've received from all of you.
We've added a few new goodies to the platform based on your feedback.
We've added a new graph to the server dashboard that shows network traffic on a per-interface basis. It's now possible to see data transmitted over the private network, public network, or even SDNs.
Finally, right? Well, you could always stop and start them, but there was one major issue with this method - Cycle would restart all of them at once...
With the new "restart" functionality, the platform will respect the stagger set in the container's configuration, preventing downtime while your restart is in progress.
Instance states have a 'normal state', such as running, but also have health checks, migration state, traffic draining, and more. One thing we've heard from our users is that sometimes their instance state will still say 'running', but the server that instance was running on went offline. What gives?
Well, the TL;DR is that we don't actually know that it went offline. Cycle relies on checkins from the underlying host to know what state that instance is in, and if it misses a checkin, or the network drops, the instance may still be running, even if Cycle can't prove it.
This led to some uncertainty, but we didn't want to alert people that an instance was offline just because of a network hiccup. In this release, we've tackled the issue by introducing an 'uncertainty' marker on top of container instances where the underlying host has missed a couple checkins.
Now, you'll be alerted that something may be off about an instance even if we're not sure what state it might be in anymore. Here's what that looks like:

Last but certainly not least, we've added the ability to set a custom name on your servers that will be visible throughout the interface. It will appear anywhere a hostname previously did. If no nickname is set, you'll still see the same hostname from before.
(we wouldn't want to hide Michael T's latest server hostname).
Along with the new features, we've improved a handful of things as well.
We've introduced a new load balancer routing mode, dubbed Source IP. This mode will attempt to provide sticky sessions for all requests coming from a specific IP address.
Cycle has had SFTP lockdown intelligence for over a year now, but some clients would open up dozens of new connections when navigating or transferring files, possibly for better throughput. These clients would quickly put the SFTP connection into lockdown, blocking all new connections.
In this release, we've made it smarter - lockdown will not count new connections from a recently authenticated IP address toward the lockdown criteria. Clients can be greedy with new connections, while bad actors still get locked out.
We've added support for a UID, GID, and file permissions to be set on scoped variable files that are injected into the container. Some applications require specific permissions on files to play nice, and this alleviates the need for any funky workarounds that were previously required.
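For anyone curious what the new setting replaces, the old workaround was to fix up ownership and mode in an entrypoint script after the file was injected. A rough sketch (the filename and mode are examples, not anything Cycle-specific):

```shell
# Create a stand-in for an injected scoped variable file with mode 640,
# the kind of permission fix-up that previously had to happen at startup.
install -m 0640 /dev/null scoped-var.txt
stat -c '%a' scoped-var.txt   # -> 640 (GNU stat; BSD stat uses -f instead)
```

With UID/GID and permissions set on the scoped variable itself, this step disappears from the entrypoint entirely.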
Prior to this release, the environment dashboard would show the CIDR (the entire address space) allocated to a load balancer instance. While useful in some circumstances, most people (ourselves included) just wanted to see the specific IP attached to that load balancer instance. Now, when you go to an environment dashboard, you'll see the correct IP.
There were quite a few other minor tweaks and bug fixes, along with a LOT of work on something we'll be revealing very soon. Leave a comment with your thoughts on the latest update, questions you may have, or any issues you run into. (You can also message any of our team in slack)
Our next release will be historic...you won't want to miss it.
One of the deployment patterns we have been using from K8S is to generate unique configmaps per deployment of a service so that we can version variables with the code (but outside of the image). We have been able to achieve that using the existing Stack spec (nice work on this, btw), but it would be great if we could clean them up when the deployments get removed in the pipeline step.
What's the worst that could happen?
Microsoft recommends the XFS filesystem for SQL Server on Linux data volumes. Would it be possible to allow us to specify which filesystem should be used when provisioning volumes?
From https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-performance-best-practices?view=sql-server-ver16:
SQL Server supports both ext4 and XFS filesystems to host the database, transaction logs, and additional files such as checkpoint files for in-memory OLTP in SQL Server. Microsoft recommends using XFS filesystem for hosting the SQL Server data and transaction log files.