Feature Requests

feature-request

Volume or Disk level Encryption

Hi Cycle team 👋

We’d love to see support for encryption at rest — either at the server disk level or at the individual volume level.

For teams deploying workloads in third-party virtualized environments, this is becoming a pretty standard requirement.

Why We’re Asking

When running in a virtual provider environment, we don’t physically control the underlying hardware. Even though TLS handles encryption in transit, we still need guarantees around data stored on disk.

For many companies (especially those dealing with customer or regulated data), encryption at rest isn’t optional — it’s table stakes for production.

This impacts things like:

  • Enterprise security reviews
  • SOC 2 / ISO 27001 compliance
  • GDPR / HIPAA workloads
  • Internal security policies
  • Risk mitigation around snapshots / host access

Without it, some workloads just can’t move onto the platform.

What Would Help

Any of the following would be great:

1️⃣ Host-Level Disk Encryption

  • All server disks encrypted by default
  • Transparent to containers
  • Configurable per environment if needed

2️⃣ Volume-Level Encryption

  • Encryption on specific persistent volumes
  • Visible status in the UI and API
  • Clear documentation on how it's implemented

3️⃣ Key Management Options (Stretch Goal)

  • Bring Your Own Key (BYOK) support
  • Key rotation visibility
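For anyone unfamiliar with the property being requested: on Linux hosts, block-level encryption (LUKS/dm-crypt) is the usual mechanism, but the at-rest guarantee itself can be illustrated at the file level with openssl. The key and filenames below are made up for the demo; this is an illustration of the guarantee, not a suggested implementation.

```shell
# Encrypt a file at rest; without the key the ciphertext is unreadable.
echo 'customer data' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:example-key -in plain.txt -out cipher.bin

# Decrypt with the same key and confirm the round-trip.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-key -in cipher.bin -out roundtrip.txt
cmp plain.txt roundtrip.txt && echo 'round-trip ok'
```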

0 votes · feature-request

Storage configuration

While the current default of RAID1 configuration provides a reliable foundation for data integrity, it would be beneficial to have more flexibility in how Cycle handles storage.

In scenarios where we are utilizing distributed storage solutions like Garage, redundancy is already managed at the application level. In these cases, the ability to prioritize maximum storage capacity over local hardware redundancy would be highly advantageous. Furthermore, providing users with granular control over which specific storage devices are assigned to a container or VM would significantly improve resource optimization and environment customization.

2 votes · feature-request

Environment overview: Cores service status and updates available

With the new dashboard for environments, it would be super handy to add a dedicated circle for core services (like LBs, VPN, discovery, etc.) and also something on the cards, as well as in the dropdown list, to signal that a core service has updates available.

This would make it super easy to spot any updates/issues that aren't related to the hundreds-of-containers circle. Also, maybe show a smaller preview of the new Uptime bar in the dropdown list.

1 vote · feature-request

Instance Status on Containers Tab

Not a big deal, but one thing that often annoys me: when I restart an instance, I have to open it and watch the instances until they pass the health check and are ready. For services with defined checks, it would be nice if the Instances column (which currently shows a count of instances and a ring indicating how many are running and how many are not) could also somehow indicate how many are ready vs. just running, maybe with color-coding. Right now the ring shows green for running without regard for readiness; perhaps add an intermediate color like blue for running-but-not-yet-healthy, and only go green once the instance is actually ready?

2 votes · feature-request

Allow grouping of containers in stack definitions for pipelines

Hey all,

This one's a bit in the weeds but here's the context:

  • I use stack definitions for each of our services, and deploy those using a generic pipeline that uses a bunch of variables to determine what environment we're deploying to, what stack definition to use and a couple other things.
  • Each service has a couple of one-off jobs that need to run once, on deploy, and then basically never again. This is hard to model in the current Cycle approach because unless I explicitly stop each container, by name, after it's run once, each of these containers will get a platform health check every 10 minutes and a restart.
  • This is the only thing I can't dynamically control in the pipeline and as such the only thing that's preventing a single pipeline for all deploys.

What I would like:

  • A way to tag or otherwise mark a container in a stack definition such that I can act on all containers with that tag in a pipeline. In my specific case, imagine a tag called migration (the canonical example of this sort of workload is a database migration, hence the tag) where in a pipeline step I can just say "now stop all containers tagged migration". That'd very neatly solve my problem.

In my specific case I could model all of these as function containers and I could also use a 'now stop all function containers' type grouping but I'd imagine that'd be much less broadly useful to others.

3 votes · feature-request

Rolling Restarts

One feature I'd really love is the ability to execute a restart as a "rolling" restart. Right now, manual restarts (hitting the button, applying a config change, etc) stop all instances at once producing app downtime. And without a defined health check policy there's probably no way around that. But when a health check policy IS defined, I would love to be able to set the default restart method to a rolling restart where each subsequent instance restart does not begin until the previous instance reaches healthy status. That functionality would be incredibly valuable in such a wide variety of situations...

10 votes · feature-request

Liveness/Readiness checks

Hey team, I'd love to see readiness checks added to stacks! While the LBs do a good job of assessing latency for packets, they truly can't tell if a container is in trouble and 'just needs a moment to process/recover'. A readiness check is a way to tell the deployment manager "don't reboot me, I just need a second; stop talking to me". The readiness check is separate from the health check (which is really a liveness check): it purely indicates whether the instance can serve traffic at the moment.

We all need a moment to compose ourselves sometimes, and so do our instances. Give them a fighting chance!

3 votes · feature-request

Add in CloudFlare CF-CONNECTING-IP to LB logs

For LB containers/instances, please add the source IP address (sent by Cloudflare as CF-CONNECTING-IP) so that we can see the original IP of inbound connections in LB logs. The current logs limit us to the proxy IP address (which is always Cloudflare, on certain IPs), and when watching LB logs it would be nice to see both the proxy IP and the source IP.

See https://developers.cloudflare.com/fundamentals/reference/http-headers/ for more information on CloudFlare headers.

3 votes · feature-request

Health/Status Endpoint for API Monitoring

Please add a /health or /status endpoint to the Cycle.io API that returns the operational status of the service. This would enable proper health checking and monitoring for applications that integrate with Cycle.io.

Proposed endpoint: GET https://api.<customer_id>.cycle.io/health

Expected response:

{
  "status": "ok",
  "timestamp": "2025-10-17T17:00:00Z"
}

Use case: This endpoint would allow our services to implement readiness probes that verify Cycle.io API availability before accepting traffic, improving reliability and enabling circuit breaker patterns for graceful degradation when the API is unavailable.

HTTP status codes:

  • 200 - Service operational
  • 503 - Service unavailable (optional, for maintenance windows)
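The readiness-probe side of this can be sketched in shell. Note the endpoint does not exist yet; the URL shape is the proposal's, and the function name is made up for illustration.

```shell
# Hypothetical probe against the proposed /health endpoint.
# Returns success only when the endpoint answers with HTTP 200.
cycle_api_healthy() {
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1" 2>/dev/null)
  [ "$code" = "200" ]
}

# usage (hypothetical endpoint, from the proposal above):
# cycle_api_healthy "https://api.<customer_id>.cycle.io/health" || exit 1
```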
2 votes · feature-request

Exposing HD Configurations in panel

Would be nice to have a feature to verify that RAID configurations were set up properly during deployment.

1 vote · feature-request

BASIC AUTH option on environments/containers

A handy feature would be a BASIC AUTH option on a web endpoint/load balancer. In nginx you would do something like:

server {
        listen 80;
        server_name your_domain.com;

        location / {
            auth_basic "Restricted Access";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }
}

Rather than have to deploy nginx into a cycle env and proxy all traffic via it just to put basic auth, it would be nice to have a "not intended for production use" option on an environments load balancer/firewall to do basic auth.

Two choices would be available:

  1. Apply basic auth to the entire env
  2. Apply to selected containers

and finally, a simple GUI to add basic auth users.
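For the user-management side, the .htpasswd file referenced in the nginx snippet above can be generated without the apache2-utils package; "alice"/"s3cret" below are made-up example credentials.

```shell
# Create an htpasswd-style entry using openssl's apr1 (MD5-crypt) format,
# which nginx's auth_basic_user_file understands.
printf 'alice:%s\n' "$(openssl passwd -apr1 s3cret)" > .htpasswd
cat .htpasswd
```

The hash is salted, so the exact output varies per run, but it always takes the form alice:$apr1$<salt>$<hash>.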

1 vote · feature-request

Add compression option to external log drain

Please consider adding a compress option to the log drain form in the environment settings panel.

Reference documentation here.

From my initial observation, compressed request bodies are unusual in HTTP traffic, but not impossible. When sending a request with a compressed body, the client must trust that the server is able to decompress it. The server can decompress the request body based on the Content-Encoding header sent by the client, e.g.: Content-Encoding: gzip

The Cycle agent pushes logs in a format that is highly compressible (NDJSON). Client-side (in Cycle's case, agent-side) compression could reduce network traffic for logs by 10x or more.

Example curl for compressed request body:

# compress json data
gzip -9 -c body.json > body.gz
# send compressed data
curl -v --location 'http://my.endpoint.example.com' --header 'Content-Type: text/plain' --header 'Content-Encoding: gzip' --data-binary @body.gz

If destination server does not support request decompression, apache httpd can do it with the following directives:

LoadModule deflate_module modules/mod_deflate.so
SetInputFilter DEFLATE
ProxyPass "/" "http://upstream/"
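The 10x figure is easy to sanity-check locally with NDJSON-shaped data (the sample log line below is made up; real logs vary more between lines than this, so the real-world ratio will be lower than what identical lines produce):

```shell
# Generate 1000 NDJSON-style log lines, then compare raw vs gzip -9 sizes.
for i in $(seq 1 1000); do
  echo '{"time":"2025-08-07T11:11:11Z","source":"stdout","message":"some log message","instance_id":"abc"}'
done > body.json
gzip -9 -c body.json > body.gz
wc -c body.json body.gz
```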
1 vote · feature-request

Add auth option to external log drain

Please add an auth option for external log drain requests. That way we can protect our log ingest endpoint by allowing only authorized agents.

Reference documentation here.

Proposed solution:

  • Add an optional write-only auth field to the logs config form.
  • If the auth field is not empty, add an Authorization header with its value on requests to the external log drain endpoint.

Example:

If the auth field contains the value Basic YWRtaW46cGFzc3dvcmQ=, the result is a header Authorization: Basic YWRtaW46cGFzc3dvcmQ=

This also allows for other types of auth, like Bearer and Api-Key tokens.
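For anyone wiring this up, the Basic value in the example above is just the base64 of user:password (here the example's admin/password pair):

```shell
# Build the value for the Authorization header from the example credentials.
printf 'Basic %s\n' "$(printf 'admin:password' | base64)"
# prints: Basic YWRtaW46cGFzc3dvcmQ=
```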

2 votes · feature-request

Expand external log drain with environment identifier

Please add environment_identifier to exported logs so we can have a name instead of a hash for switching between environment log views in our Grafana log dashboard.

Reference documentation here.

Proposed fields:

  • If NDJSON - Headers is selected, X-Cycle-Environment-Identifier header is added.
  • If NDJSON - Raw is selected, environment_identifier field is added.

The value is the same as identifier field in environment settings page.

Example NDJSON raw request body:

{
  "time": "2025-08-07T11:11:11.12345678Z",
  "source": "stdout",
  "message": "some log message",
  "instance_id": "instanceid",
  "environment_id": "environmentid",
  "environment_identifier": "my-environment",    <---- please add this
  "container_id": "containerid",
  "container_identifier": "my-container",
  "container_deployment": "my-deployment",
  "server_id": "serverid"
}

*The JSON in the example above is formatted for convenience. An NDJSON body actually contains one JSON object per line, each representing a log message.

4 votes · feature-request

SFTP Lockdown State in Portal

Hey All! I love the automatic lockdown on SFTP as it seems like bots are crazier than ever these days; however, I'm having trouble seeing when my server is in lockdown and when it's out without reconciling with the activity event log. Would it be possible to make this change to the portal, where it is easy to tell the state of SFTP (Locked Down vs Not Locked Down)?

1 vote · feature-request

Show All IP Addresses in the portal

I have found that 50% of the time I connect to the container SSH endpoint it is to find an IP address on one of the interfaces. Most of my containers don't have the ip command so I have to install that, too. It would be great if we could see all interface IP assignments directly in the portal.

3 votes · feature-request

Built in HTTP Health Check

Our containers are generally built with minimal dependencies to minimize the attack surface. This means they don't normally have curl/wget/netcat. There is a funky shell trick, but it's... ugly. Would it be possible to add a Cycle-native HTTP/HTTPS health check?

Ugly Script

# Bash-only: /dev/tcp is a bash feature, not available in plain sh.
# Sends a minimal HTTP/1.1 request (headers must end with a blank line,
# i.e. \r\n\r\n) and checks the status line for 200.
exec 3<>/dev/tcp/localhost/5000 && \
  printf 'GET /_ah HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n' >&3 && \
  head -n 1 <&3 | grep -q ' 200 '
5 votes · feature-request

Deployment Scoped Variables

One of the deployment patterns we have been using from K8S is to generate unique configmaps per deployment of a service so that we can version variables with the code (but outside of the image). We have been able to achieve that using the existing Stack spec (nice work on this, btw), but it would be great if we could clean them up when the deployments get removed in the pipeline step.

1 vote · feature-request

Specify volume filesystem

Microsoft recommends the XFS filesystem for SQL Server on Linux data volumes. Would it be possible to allow us to specify which filesystem should be used when provisioning volumes?

From https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-performance-best-practices?view=sql-server-ver16:

SQL Server supports both ext4 and XFS filesystems to host the database, transaction logs, and additional files such as checkpoint files for in-memory OLTP in SQL Server. Microsoft recommends using XFS filesystem for hosting the SQL Server data and transaction log files.

1 vote · feature-request

Base image monitoring breakout

The current server view depicts base storage usage and trending over time; however, finding out what's currently consuming that space isn't possible. As you approach your threshold, there are no granular views to figure out what might be consuming the space.

Since base storage can be expanded but not decreased, we need a way to determine whether expansion is actually necessary: is a machine accumulating runaway logs (consuming base storage), or are its images simply too large for the machine?

1 vote
v2026.02.26.01 © 2024 Petrichor Holdings, Inc.
