Adding range of ports
I would like to add a range of ports. Is there a way to do this through the portal?
Hi!
I have a couple of questions:
The container I published earlier is giving an error. I tried simplifying and changing the Dockerfile, but nothing seems to change. The container builds and runs locally, and the CI pipeline also passes, but during deployment it throws the following error:

[Sep 23 12:23:36.956][ CYCLE COMPUTE] Console attached
mktemp: failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file or directory
chmod: missing operand after '755'
Try 'chmod --help' for more information.
[Sep 23 12:23:37.033][ CYCLE COMPUTE] Console disconnected (77.073087ms)
[Sep 23 12:26:49.663][ CYCLE COMPUTE] Console attached
mktemp: failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file or directory
chmod: missing operand after '755'
Try 'chmod --help' for more information.
[Sep 23 12:26:49.794][ CYCLE COMPUTE] Console disconnected (130.256738ms)

I even removed everything related to this from the Dockerfile (attached), but the issue persists.
FROM php:8.2-apache
RUN apt-get update && apt-get install -y libzip-dev zip unzip git && docker-php-ext-install zip
RUN a2enmod rewrite
RUN a2enmod ssl
ENV APACHE_RUN_USER=www-data
ENV APACHE_RUN_GROUP=www-data
COPY --chown=www-data:www-data ./app /var/www/html
RUN rm -rf /var/lib/apt/lists/*
WORKDIR /var/www/html
I created a second server in the Products cluster and increased the number of container instances to 2. The deployment happened, but both containers were placed on the same server. How can I ensure that containers are evenly distributed across servers? Afterward, I stopped the first server with the containers in the AWS console, but the containers didn’t automatically deploy on the second server. So, if the server crashes (not just the container), the application becomes unavailable. How can this issue be resolved?
My team recently encountered an issue with the Discovery Service where we received the following error in the console:
[Resolver Throttle] <ip here> has hit the max hit limit (250) and is being throttled.
After investigating, we discovered that our API was sending an excessive number of requests to a third-party service, which triggered the throttle in the Discovery Service. This throttling then impacted other API requests in our environment.
The Cycle team explained that the throttle is in place "to prevent getting banned from lookup services like Google's domain servers or other public nameservers." The throttle limit resets every five minutes.
We’ve since resolved the issue on our end, but I wanted to share this experience in case anyone else encounters a similar problem. Hopefully, this helps someone avoid the same situation.
Hello,
My team and I encountered an issue this week with containers running the MySQL 5.7 image from Dockerhub. After shutting them down, the containers failed to restart.
We’ve been using this same unmodified image across multiple containers for over a year without issue, but this problem started earlier this week.
This same issue also occurs across different cloud providers in our account. And the problem persists even when deploying the same MySQL 5.7 image to a new container, so this isn’t isolated to a single container.
Here are the errors displayed when attempting to restart the container:
2024-09-20T15:38:38.264529Z 0 [Warning] A deprecated TLS version TLSv1.1 is enabled. Please use TLSv1.2 or higher.
2024-09-20T15:38:38.265036Z 0 [Warning] CA certificate ca.pem is self signed.
2024-09-20T15:38:38.265070Z 0 [Note] Skipping generation of RSA key pair as key files are present in data directory.
2024-09-20T15:38:38.265316Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2024-09-20T15:38:38.265348Z 0 [Note] IPv6 is available.
2024-09-20T15:38:38.265360Z 0 [Note]
2024-09-20T15:38:38.265383Z 0 [Note] Server socket created on IP: '::'.
2024-09-20T15:38:38.265425Z 0 [ERROR] Could not create unix socket lock file /var/run/mysqld/mysqld.sock.lock.
2024-09-20T15:38:38.265435Z 0 [ERROR] Unable to setup unix socket lock file.
2024-09-20T15:38:38.265445Z 0 [ERROR] Aborting
We are able to solve this issue by upgrading to a new MySQL 8 image.
Could the TLS errors be related to CycleOS? Was there an update that potentially disables older versions of TLS?
Any guidance would be greatly appreciated!
Our format for stack variables has been slightly updated to support some new use-cases and to improve security.
Starting today, variables in stack files must be the value of a key (no more free-form variables making the JSON or YAML invalid). Entire subsections can still be replaced, for example: "environment_variables": "{{$variables}}".
The $ syntax allows for a literal string escape: anywhere a number, object, or array should be replaced with a value, throw a $ in front of the variable name and Cycle will do a literal replacement during the build. Without the $, the variable will still be treated as a string, e.g. "identifier": "{{id}}".
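As a sketch of the two forms described above (the variable names id and replica_count here are made up for illustration):

```json
{
  "identifier": "{{id}}",
  "config": {
    "deploy": {
      "instances": "{{$replica_count}}"
    }
  }
}
```

With id = web-1 and replica_count = 3, the first value would build as the string "web-1", while the $ form would be literally replaced, leaving the number 3 as the value of "instances" so the JSON type stays correct.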
Finally, the value of variables will not be embedded into build outputs anymore. This is to ensure that no secrets or keys are inadvertently leaked or made visible where they shouldn't be.
This change is breaking for some stacks
To update your stacks to ensure compatibility for your next build, make sure every variable sits as the value of a key, and add a $ prefix to any variable that should be literally replaced with a number, object, or array.
If you have any questions, please leave a comment here or reach out to our team on slack!
Thanks, have a great weekend.
Docker image has a HEALTHCHECK instruction in it.
HEALTHCHECK --interval=30s --timeout=5s --retries=5 CMD ["grpc_health_probe", "--addr=:9000"]
But it seems it is not picked up by Cycle to fill in the container's health check policy. As a workaround, we are explicitly setting containers.<container>.config.deploy.health_check in the Cycle stack JSON/YAML:
health_check:
command: "grpc_health_probe --addr=:9000"
delay: "30s"
interval: "30s"
restart: true
retries: 5
timeout: "5s"
Hello,
I'm integrating with the pipeline API, and it seems like pipeline runs don't get assigned a final state when they error out. I've created and queried a couple of errored-out runs, and they all have a state block like this:
"state": {
  "changed": "2024-08-21T04:07:43.786Z",
  "error": {
    "message": "could not find cycle.json in repo"
  },
  "current": "running"
},
Note that current is still running even though the run ended hours ago with a pretty permanent error. I would expect the current status to be something like failed, so I can pick it up and determine the run is done (and cooked).
How do I execute a function container (B) from another function container (A)? More details: I want to execute container A's function, and inside the execution process I want to decide (based on some parameters passed to the function and additional logic) how many instances of function B to execute (i.e., how many containers with function B to start).
I've never liked YAML ... probably for the same reason I've never liked python. Indent-sensitive configs? Gross.
... but I know so many of you love YAML. :(
Sometimes, especially when building new pipelines, I've found it all too easy to trigger a new pipeline run, only to realize shortly afterward that I need to make a minor update before being satisfied with the result.
In these situations, I often wish I could stop the ongoing pipeline run and trigger a fresh one immediately. Currently, I have to wait until the queued pipeline runs finish before I can test the most up-to-date version of the pipeline I'm working on.
Add support for Build Secrets (--secret) when building an image. Not sure how it'd work, but currently the only way I can see to pass authentication credentials that are usable downstream within the Dockerfile is to pass build args, which is insecure and ends up exposed in Depot, etc.
Specific use-case is npm authentication.
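For reference, this is what the requested feature looks like with plain Docker/BuildKit today (a sketch; the secret id npmrc and the node base image are just example choices):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20
WORKDIR /app
COPY package*.json ./
# The .npmrc containing the auth token is mounted only for this RUN step;
# it is never written into an image layer or the build cache.
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
```

Built locally with something like: docker build --secret id=npmrc,src=.npmrc . — the ask here is for Cycle's image builds to accept an equivalent secret input.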
I'd like to set up a pipeline which can take the branch name as an argument and spins up a new environment using that branch, however it appears that Magic Variables aren't supported in the "Git Repo Branch" or Ref Type -> "Ref Value"
Minor nitpick here, but it initially led me down the path of thinking that one couldn't use git as a source for Create Image in Pipelines. Please see attached images :)
If we can edit the original post, it kind of makes sense to me that we should be able to edit replies?
Also, the reply text box cannot be scrolled by swiping up and down on Android; you need to select the cursor and drag it up and down 👍
Currently when signing in on Android, the 2FA code cannot be pasted in (only pastes the first number in the first box).
I have a Docker build which takes more than 10 minutes, and it times out on Depot. I'd prefer to use some of the compute I'm already paying for in my worker nodes for building, so I think it'd make sense to be able to select one of the clusters in the hub as a target for where the Docker image will build. It would also potentially move some of the load off the built-in Cycle builder.
Saved Credentials/fields
→ Proposed basis
→ → Can be singular fields, not provider-specific
→ → Can be used anywhere text can be used
→ → Maybe two variants, one for cleartext fields and one for sensitive fields.
→ Use case
→ → For configuring multiple entities/objects that utilize the same keys, most providers don’t allow you to view credentials after generating, so would be handy.
I am looking to restart an instance using the Cycle API based on some application logic. I tried going through the documentation but could not find anything to restart an instance through the API. Am I missing something? Any help would be appreciated. Docs I was going through: https://api.docs.cycle.io/tag/Instances Thank you!
I'm using a stack file (cycle.json) in my repo and I have a container for grafana which needs a config file mounted in the container. I know I can update this file in the portal but is there a way to define the file in the stack file so I can just update it in code?
This post is made by a Cycle employee highlighting a commonly asked question. It's being placed here for visibility, questions, feedback, feature requests, and general discussion.
The most direct way to do this is by using the file type scoped variable. This allows a user to mount a file at either a default path, provided by the platform, or at the path of their choosing. For binary file types, there is a base64 decode feature that will automatically decode any base64 encoded file on read.
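For the grafana case above, a file-type scoped variable in the stack could look roughly like the following sketch. The field names are from memory and may not match the current stack spec exactly, so treat this as a starting point and check it against the stack file documentation:

```json
{
  "scoped_variables": [
    {
      "identifier": "grafana-ini",
      "scope": { "containers": { "global": false, "ids": ["grafana"] } },
      "access": {
        "file": { "path": "/etc/grafana/grafana.ini", "decode": false }
      },
      "source": { "type": "raw", "details": { "value": "<file contents here>" } }
    }
  ]
}
```

Since it lives in the stack file, the config travels with the repo and can be updated in code, which is exactly what the question asked for. Set decode to true if the value is base64-encoded binary content.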
Another popular way to mount files into container(s) is through shared mounts. This allows a remote filesystem (like EFS) to be mounted to a server and then allows containers on that server to opt into those files being available.
Have a specific use case you can't quite decide which approach is right for? Want some feedback on your implementation?
We'd love to hear your questions or successes here in the replies!