critical issue - cannot open Portal login page without errors
Hey all, today we are unable to log into the Cycle Portal because of an error on the login page. I've attached a screenshot showing 401 HTTP errors and a (possibly resulting) JS error.
Hey team, I'd love to see readiness checks added to the stack! While the LBs do a good job of assessing packet latency, they can't tell if a container is in trouble and just needs a moment to process/recover. A readiness check is a way for an instance to tell the deployment manager: "don't reboot me, I just need a second; stop sending me traffic." The readiness check is separate from the health check (which is really a liveness check): it purely indicates whether the instance can serve traffic at the moment.
We all need a moment to compose ourselves sometimes, and so do our instances. Give them a fighting chance!
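To illustrate the distinction, here's a rough sketch of the two checks from an instance's point of view (the port and endpoint paths are made up, not anything Cycle defines today):

#!/bin/sh
# Liveness: is the process responding at all? Failing this usually means "restart me".
curl -fsS --max-time 2 http://localhost:8080/health > /dev/null || exit 1

# Readiness: can I actually serve traffic right now (caches warm, migrations done)?
# Failing this should mean "stop routing to me for a bit", not "reboot me".
curl -fsS --max-time 2 http://localhost:8080/ready > /dev/null || exit 1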
For LB containers/instances, please add the source IP address (seen as CF-Connecting-IP) so that we can see the original IP of inbound connections in LB logs. The current logs limit us to seeing a proxy IP address (which is always Cloudflare, from a known set of IPs); when watching LB logs, it would be nice to see both the proxy IP and the original source IP.
See https://developers.cloudflare.com/fundamentals/reference/http-headers/ for more information on Cloudflare headers.
Please add a /health or /status endpoint to the Cycle.io API that returns the operational status of the service. This would enable proper health checking and monitoring for applications that integrate with Cycle.io.
Proposed endpoint: GET https://api.<customer_id>.cycle.io/health
Expected response:
{ "status": "ok", "timestamp": "2025-10-17T17:00:00Z" }
Use case: This endpoint would allow our services to implement readiness probes that verify Cycle.io API availability before accepting traffic, improving reliability and enabling circuit breaker patterns for graceful degradation when the API is unavailable.
HTTP status codes: 200 when the API is operational, 503 Service Unavailable when it is degraded or down.
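As a rough sketch of how we'd consume this on our side (the customer ID, the 5-second timeout, and the exit-code convention are placeholders, not part of the proposal):

#!/bin/sh
# Probe the proposed health endpoint and report ready/not-ready to whatever runs this script.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
  "https://api.example-customer.cycle.io/health")

if [ "$STATUS" = "200" ]; then
  exit 0  # API reachable and healthy: accept traffic
else
  exit 1  # unreachable or degraded: report not-ready / open the circuit breaker
fi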
It would be nice to have a feature to verify that RAID configurations were set up properly during deployment.
A handy feature would be a BASIC AUTH option on a web endpoint/load balancer. In nginx you would do something like:
server {
    listen 80;
    server_name your_domain.com;

    location / {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
Rather than having to deploy nginx into a Cycle env and proxy all traffic through it just to add basic auth, it would be nice to have a "not intended for production use" option on an environment's load balancer/firewall to do basic auth.
Two choices would be available:
and finally, a simple GUI to add basic auth users.
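In the meantime, for anyone stuck proxying through nginx as above, a minimal sketch of setting up the user file and checking the endpoint (the hostname and username are placeholders; htpasswd comes from the apache2-utils/httpd-tools package):

# create the password file with a bcrypt-hashed user (-c creates the file, -B uses bcrypt)
htpasswd -cB /etc/nginx/.htpasswd alice

# verify: without credentials you should get a 401, with them a 200
curl -i https://my-app.example.com/
curl -i -u alice https://my-app.example.com/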
Please consider adding a compress option to the log drain form in the environment settings panel.
Reference documentation here.
From my initial observation, compressed request bodies are unusual in HTTP traffic, but not impossible. When sending a request with a compressed body, the client must trust that the server is able to decompress the request body. The server can decompress the body based on the Content-Encoding header sent by the client, e.g. Content-Encoding: gzip.
The Cycle agent pushes logs in a highly compressible format (NDJSON). Client-side compression (in Cycle's case, agent-side) could reduce network traffic for logs by 10x or more.
Example curl for compressed request body:
# compress json data
gzip -9 -c body.json > body.gz
# send compressed data
curl -v --location 'http://my.endpoint.example.com' --header 'Content-Type: text/plain' --header 'Content-Encoding: gzip' --data-binary @body.gz
If the destination server does not support request decompression, Apache httpd can handle it with the following directives:
LoadModule deflate_module modules/mod_deflate.so
SetInputFilter DEFLATE
ProxyPass "/" "http://upstream/"
Please add an auth option for external log drain requests. That way we can protect our log ingest endpoint by accepting only authorized agents.
Reference documentation here.
Proposed solution:
Example:
An auth field containing the value Basic YWRtaW46cGFzc3dvcmQ= would result in the header Authorization: Basic YWRtaW46cGFzc3dvcmQ= being added to each drain request.
This also allows for other types of auth, like Bearer and Api-Key tokens.
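For example, a drain request carrying the proposed header could be tested against the ingest endpoint with something like this (the endpoint URL, credentials, and content type are placeholders):

# simulate an authorized drain push; the endpoint should reject requests missing the header
curl -v --location 'https://logs.example.com/ingest' \
  --header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' \
  --header 'Content-Type: application/x-ndjson' \
  --data-binary @body.ndjson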
Please add environment_identifier to exported logs so we can have a name instead of a hash when switching between environment log views in our Grafana log dashboard.
Reference documentation here.
Proposed fields:
The value would be the same as the identifier field on the environment settings page.
Example NDJSON raw request body:
{
  "time": "2025-08-07T11:11:11.12345678Z",
  "source": "stdout",
  "message": "some log message",
  "instance_id": "instanceid",
  "environment_id": "environmentid",
  "environment_identifier": "my-environment",   <---- please add this
  "container_id": "containerid",
  "container_identifier": "my-container",
  "container_deployment": "my-deployment",
  "server_id": "serverid"
}
*The JSON in the example above was formatted for readability. The actual NDJSON body contains one JSON object per line, each representing a log message.
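For context on why the identifier helps: once the field exists, a readable label can be pulled straight out of each drained line, falling back to the ID when it's missing (jq shown purely for illustration; the file name is a placeholder):

# print a human-readable environment label per log line, falling back to the raw ID
jq -r '.environment_identifier // .environment_id' drained-logs.ndjson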
Hey all! I love the automatic lockdown on SFTP, as it seems like bots are crazier than ever these days; however, I'm having trouble seeing when my server is in lockdown and when it's not without reconciling against the activity event log. Would it be possible to update the portal so it's easy to tell the state of SFTP (locked down vs. not locked down)?
Hey Everyone!
Many of you have reported hearing that different providers are being affected by outages.
So far we've heard reports of:
These have been corroborated by our team via Downdetector and other avenues, but other than things like Google Meet not loading, we are not hearing of major interruptions to compute nodes running Cycle.
If you are having an issue with your compute, definitely let us know as we want to share that information within our ecosystem as much as possible and help each other.
If you go through this week and haven't even had to think about the word outage, consider posting something on LinkedIn about it and tag our official page.
Note before reading: if you're using a resource that uses SSH auth, the following will not retroactively affect anything. If a key has been working, it will continue to work.
The official Cycle documentation suggests generating SSH keys using the following pattern.
ssh-keygen -t ecdsa -b 256 -m PEM -f your-filename.pem
This pattern will generate an ECDSA key, which we've recently found can cause compatibility issues with the Golang x509 package if the SSH backend is using LibreSSL instead of OpenSSL.
LibreSSL is the default library used by ssh/ssh-keygen on Mac.
The Issue
While the two formats (a named curve vs. explicitly encoded curve parameters) are functionally equivalent, they're not always compatible: Go's x509 parser expects a named curve and can reject keys written with explicit parameters.
If you've created a key and added it to a stack or image source and are getting an x509 error, use the following pattern to check your key.
openssl ec -in YOURKEYFILENAME -text -noout
And check to see if there is a line: ASN1 OID: prime256v1
If you do, the key should work; if it still doesn't, ping us on Slack or leave a comment here, as it's likely something else.
If you do not see that line and want to convert the private key from explicit parameters to a named curve, use the following pattern:
openssl ec -in YOURKEYFILENAME -out NEWFILENAME -param_enc named_curve
The other option is to use OpenSSL directly on Mac or generate the keys from within a container.
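For the container route, one possible pattern (assuming Docker is available locally; the image tag and file name are just examples, not an official recommendation):

# generate the ECDSA key pair inside an Alpine container (OpenSSL-based toolchain)
# and write the files into the current host directory
docker run --rm -v "$PWD":/out -w /out alpine:3.20 sh -c '
  apk add --no-cache openssh-keygen > /dev/null &&
  ssh-keygen -t ecdsa -b 256 -m PEM -f your-filename.pem -N ""
'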
Hey everyone,
The next series of updates are all about improving monitoring across the platform.
To kick things off, we're cleaning up how log drains work. Starting with the next update, log drain configuration is moving from the container level to the environment level.
This means instead of setting log drain up for each container, you'll be able to drop in a single log destination for the entire environment.
Impact:
This will require everyone using log drains to update their config after the release. And if you've been using a custom URL format to tell downstream systems which container is sending logs, you'll need to rework that part a bit.
While this change is minor, it will help us lay the foundation for much stronger monitoring features coming soon.
Just a heads up for those who might run into the same issue:
I have an environment with a Redis container named redis. In another container in the same environment, I had a Node.js server trying to connect to the Redis server via the ioredis library. Basically I had a file like:
import { Redis } from "ioredis";

export const redis = new Redis("redis://redis:6379");
On server start, I was seeing a stream of errors along the lines of
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
at GetAddrInfoReqWrap.callbackTrampoline (node:internal/async_hooks:130:17)
This was very strange, as the two containers were on the same network and redis should have been (and, it turns out, was) a valid hostname. Furthermore, via SSH on the Node server instance I was able to run:
# redis-cli -h redis PING
PONG
So the hostname was valid. What was going on?
After some discussion with the Cycle team (thanks!!), it turns out the issue is that internal networking within the environment is all done over IPv6, and by default the ioredis client doesn't resolve hostnames to IPv6 addresses (its family option defaults to 4). For whatever reason 🤷🏻♂️. But the fix was very simple. This works:
import { Redis } from "ioredis";

export const redis = new Redis({ host: "redis", family: 6 });
Explicitly tell the client to use IPv6, and it will.
Perhaps this will come in handy for someone later 😄 Again, thanks to the Cycle team for finding the solution to this!
This release (2025.05.21.02) is a smaller quality of life improvement patch.
It's mainly focused on:
Sometimes invoices need to be collected by users who do not want a Cycle login. We've added the ability to download the invoice directly from the email without the need for the user to log in to do so!
Container instance networking has always been front and center on the containers modal and corresponding instances page, but there were some variables not shown when it came to SDN networks. You'll see that we've added a section under the instance console that shows all attached networks for the container instance.
A bug was found that would cause custom DNS resolvers to only work with CNAME records. This has been resolved.
Our team has always been a proponent of IPv6 adoption and most of the platform is built with an IPv6 native attitude (where possible). There was a case where, if a load balancer only had IPv6 enabled, VPN files could fail to download. So we added some new functionality that allows users to download the VPN config files through load balancers that only have IPv6 enabled. One more step in the right direction!
I have found that 50% of the time I connect to the container SSH endpoint it is to find an IP address on one of the interfaces. Most of my containers don't have the ip command so I have to install that, too. It would be great if we could see all interface IP assignments directly in the portal.
Just a friendly note to the community after talking it through with the support guys: slim-bookworm is having intermittent DNS resolver issues. We've narrowed one of our stack issues down to the image itself and wanted to warn the community to save you a bit of aggravation. If you were debating moving everything to Alpine, let this be a final kick in that direction.
Our containers are generally built with minimal dependencies so as to minimize the attack surface. This means they don't normally have curl/wget/netcat. There is a funky shell trick (below), but it's... ugly. Would it be possible to add a Cycle-native HTTP/HTTPS health check?
Ugly Script
exec 3<>/dev/tcp/localhost/5000 && \
echo -e "GET /_ah HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n" >&3 && \
cat <&3 | grep 200
Hey team! A couple of questions about Cycle. We currently have an instance that we were testing in our Staging environment. For some reason that we are still trying to figure out, it's running at 100% CPU and maybe 100% RAM as well.
So one of our servers is down in our cluster.
Questions about compute:
Questions about instances:
Some of you may have run into DNS issues when using a Debian-based container.
This discussion is a place to discuss those issues and share what we find.
Per my research:
From inside a container on Cycle I ran tcpdump -i any port 53 -vvv, which gave me the following interesting information.
Lookups for someother.domain.com resulted in both A and AAAA requests being sent in parallel. So at this point I knew the internal resolver was working correctly and that the failure was happening inside the container's DNS client logic.
So I dove deeper into some research on glibc, specifically getaddrinfo() since it handles DNS resolution, and found that:
And the second part, where it prematurely fails, seems to be the major issue.
Luckily, Alpine's resolver (musl libc) performs the same lookups serially and predictably, which has so far eliminated any occurrence of this error. So if you're in a position to use Alpine, it's more reliable (and generally more secure).
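For anyone who has to stay on a glibc image in the meantime, one setting worth testing (this is a general glibc resolver option documented in resolv.conf(5), not something I've verified on Cycle specifically) is forcing the A and AAAA queries to go out one at a time:

# /etc/resolv.conf (glibc resolvers only)
# send the A and AAAA lookups sequentially instead of in parallel;
# "single-request-reopen" is a related variant that opens a new socket per query
options single-request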
Looking forward to hearing some insights and opinions here!