Scheduled Triggers for Functions
One of the most useful features of AWS Lambda is the ability to set a cron schedule that automatically triggers the function. That would be a really handy addition to the current Function configuration!
It would be nice to have an overview of the scoped variables currently attached to this instance. That would help with understanding which variables are and are not attached when there are conflicting (or wrongly configured) scoped variables that don't include this container in their scope. It helps even more when tags change and the scoped variables no longer apply to the container.
Hey all!
I've got a heterogeneous cluster where some nodes are CPU-heavy and others memory-heavy. I'd like to deploy particular workloads to particular nodes if there's space, but I'd much prefer that deploys didn't fail if there's space on another node, just not the one I want. One way I can imagine this working is to assign a list of tags like [cpu-pool, pool] and have the scheduler try to match as many tags as possible, failing only if it can't match a single one. So an ANY rule in addition to your current ALL implementation, I suppose? (There's a rough sketch of what I mean at the bottom of this post.)
As it stands I'm a bit nervous to configure my preferred split because breaking deploys is a bigger downside than optimising node usage is an upside.
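To make the idea concrete, here's a rough sketch of what I'm imagining in a stack's deploy section. This is hypothetical syntax, not something Cycle supports today, and the tag names are just examples:

deploy:
  constraints:
    node:
      tags:
        any:            # hypothetical: prefer nodes matching the most tags, fail only if none match
          - cpu-pool
          - pool
        # all:          # the current behaviour, as I understand it: every listed tag must match

The scheduler would then prefer a cpu-pool node when one has space, but still place the workload on any pool node rather than failing the deploy.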
I would like to submit a request for a feature. VMWare has a feature called DPM (distributed power management), I think a similar feature in Cycle could be very useful. Power up and down hosts then rebalance workloads as needed based on workload resource consumption. Thanks!
Hey all, bit of context: I have a generic pipeline that deploys all my services. This works well, but since only one run of a pipeline can be performed at a time, it leads to sequential runs that could otherwise be parallelised (to give a concrete example: a deploy to service A's staging environment will block a deploy to service B's UAT environment).
I'd like to opt in to some concurrency control on pipeline runs by providing a lock token of some kind (in my case this would probably be a string combining the target environment and target cluster) to guarantee that only one run can touch service A's staging environment at a time, for example.
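Purely to illustrate the request (hypothetical syntax, not an existing Cycle option; the names are made up), the trigger for a pipeline run could accept something like:

run:
  pipeline: generic-deploy                      # illustrative pipeline name
  variables:
    environment: staging
    cluster: cluster-a
  lock_token: "service-a:staging:cluster-a"     # hypothetical: only one active run per token, others queue

Runs with different tokens could then execute in parallel, while runs sharing a token would stay sequential.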
Bit of context: I've got a one-size-fits-all pipeline that deploys all our various services, as they're all very similarly shaped. This is working great as it keeps things very nice and standardised, and we're using variables to target specific environments and clusters and tags and such.
There's one small difference for a few services - they have a container that runs as part of the deploy process as essentially a one-off. There are a few containers like that across all services, and I explicitly stop them once the health check has passed and we're good, to avoid them getting kicked back to life every 10 minutes. But I can't do that with this specific container, as not all stacks I deploy have something like it.
So for this use case I'd love some kind of "stop this container if it exists", or even a "stop this container, but ignore any errors that might cause for the purposes of this pipeline". There are probably other ways to address this I haven't thought of as well.
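Just to illustrate, something like this as a pipeline step (hypothetical options; the step and container names are made up for the example):

- step: container.stop                # stop a one-off container after the deploy
  container: one-off-deploy-task      # illustrative container name
  options:
    if_exists: true                   # hypothetical: skip silently when the stack has no such container
    ignore_errors: true               # hypothetical: never fail the pipeline run because of this step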
A bit of context - to allow quick rollback to a known good deployment, we tag whatever deployment we're replacing as previous-[environment] and stop all containers in it as part of the deployment pipeline. That way, to do a fast rollback we'd just need to move the environment tag back to it and start the relevant containers, and we're back up and running.
This mostly works very well, but it does mean that all our services essentially seem to consume twice as many CPU allocations as they actually do, as each deployment gets counted twice (once for the live deployment and once for the stopped rollback option). It'd be nice if I could flag the old deployment as deprecated (or whatever you want to call it) to signal to Cycle that it's not going to be turned on in normal use, and it shouldn't be counted in server capacity.
The way we work around this right now is to enable overcommit on our cluster nodes and treat each as having twice the allocations it actually does, but a) that's difficult context to keep alive in a team's collective memory, and b) it throws off the cluster capacity graph.
When creating new image sources it would be handy to have a "clone as new" option. Clicking this would open a new source creation modal using the existing source as a template which can be modified before clicking create.
In my experience, most of the time only the name and image/tag differ between sources, so this would just speed up the creation process within the UI.
When working with multiple environments it would be nice to be able to collapse the list to only show the environments currently being worked with.
Similarly for stacks, images & sources, and pipelines, it would be good to be able to categorise these in some way. When running multiple projects where not all of the stacks and images are relevant to each project, it would be good to have a way to show only what you're interested in. I suppose tagging could be one way to achieve this, if we could add and filter on tags for these resources.
Would be handy to have a way as an admin to impersonate a role in order to test the access controls applied to that role easily.
Hey team! I can see the load averages on our servers, but I'm not sure what is too high or too low.
In the docs, I can see "Load Averages System load averages over time-frame selected. This indicates how busy the server's CPU is." But the scale of these load averages is 0, 1, 2, etc. And I see for example LOAD 0.74, 0.64, 0.61 in the right hand panel as well as the graph.
Can you give some guidance about what would be too little load (ie too big of a machine, could save $$) or too much load (ie too few CPUs, needs bigger machine), etc? In absence of this, I'm not sure what to do with these numbers.
Thanks!
Hi, we are trying to spin up new containers and we are getting this error. How do we resolve it?
Hey, I just wanted to let you know that when ports are added to a load balancer via the UI, the created port does not show in the port list after applying the changes with the "UPDATE LOAD BALANCER" button. After refreshing the page, the added port is correctly listed.
I would like to add a range of ports. Is there a way to do this through the portal?
Hi!
I have a couple of questions:
The container I published earlier is giving an error. I tried simplifying and changing the Dockerfile, but nothing seems to change. The container builds and runs locally, and the CI pipeline also passes correctly, but during deployment it throws the following error:

[Sep 23 12:23:36.956][ CYCLE COMPUTE] Console attached
mktemp: failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file or directory
chmod: missing operand after '755'
Try 'chmod --help' for more information.
[Sep 23 12:23:37.033][ CYCLE COMPUTE] Console disconnected (77.073087ms)
[Sep 23 12:26:49.663][ CYCLE COMPUTE] Console attached
mktemp: failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file or directory
chmod: missing operand after '755'
Try 'chmod --help' for more information.
[Sep 23 12:26:49.794][ CYCLE COMPUTE] Console disconnected (130.256738ms)

I even removed everything related to this from the Dockerfile (attached), but the issue persists.
FROM php:8.2-apache
RUN apt-get update && apt-get install -y libzip-dev zip unzip git && docker-php-ext-install zip
RUN a2enmod rewrite
RUN a2enmod ssl
ENV APACHE_RUN_USER=www-data
ENV APACHE_RUN_GROUP=www-data
COPY --chown=www-data:www-data ./app /var/www/html
RUN rm -rf /var/lib/apt/lists/*
WORKDIR /var/www/html
I created a second server in the Products cluster and increased the number of container instances to 2. The deployment happened, but both containers were placed on the same server. How can I ensure that containers are evenly distributed across servers? Afterward, I stopped the first server with the containers in the AWS console, but the containers didn’t automatically deploy on the second server. So, if the server crashes (not just the container), the application becomes unavailable. How can this issue be resolved?
My team recently encountered an issue with the Discovery Service where we received the following error in the console:
[Resolver Throttle] <ip here> has hit the max hit limit (250) and is being throttled.
After investigating, we discovered that our API was sending an excessive number of requests to a third-party service, which triggered the throttle in the Discovery Service. This throttling then impacted other API requests in our environment.
The Cycle team explained that the throttle is in place "to prevent getting banned from lookup services like Google's domain servers or other public nameservers." The throttle limit resets every five minutes.
We’ve since resolved the issue on our end, but I wanted to share this experience in case anyone else encounters a similar problem. Hopefully, this helps someone avoid the same situation.
Hello,
My team and I encountered an issue this week with containers running the MySQL 5.7 image from Dockerhub. After shutting them down, the containers failed to restart.
We’ve been using this same unmodified image across multiple containers for over a year without issue, but this problem started earlier this week.
This same issue also occurs across different cloud providers in our account. And the problem persists even when deploying the same MySQL 5.7 image to a new container, so this isn’t isolated to a single container.
Here are the errors displayed when attempting to restart the container:
2024-09-20T15:38:38.264529Z 0 [Warning] A deprecated TLS version TLSv1.1 is enabled. Please use TLSv1.2 or higher.
2024-09-20T15:38:38.265036Z 0 [Warning] CA certificate ca.pem is self signed.
2024-09-20T15:38:38.265070Z 0 [Note] Skipping generation of RSA key pair as key files are present in data directory.
2024-09-20T15:38:38.265316Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2024-09-20T15:38:38.265348Z 0 [Note] IPv6 is available.
2024-09-20T15:38:38.265360Z 0 [Note]
2024-09-20T15:38:38.265383Z 0 [Note] Server socket created on IP: '::'.
2024-09-20T15:38:38.265425Z 0 [ERROR] Could not create unix socket lock file /var/run/mysqld/mysqld.sock.lock.
2024-09-20T15:38:38.265435Z 0 [ERROR] Unable to setup unix socket lock file.
2024-09-20T15:38:38.265445Z 0 [ERROR] Aborting
We were able to resolve the issue by upgrading to a MySQL 8 image.
Could the TLS errors be related to CycleOS? Was there an update that potentially disables older versions of TLS?
Any guidance would be greatly appreciated!
Our format for stack variables has been slightly updated to support some new use-cases and to improve security.
Starting today, variables in stack files must be the value of a key (no more free-form variables making the JSON or YAML invalid). Entire subsections can still be replaced, for example "environment_variables": "{{$variables}}".

The $ syntax allows for a literal escape: anywhere a number, object, or array should be replaced with a value, throw a $ in front of the variable name and Cycle will do a literal replacement during the build. Without the $, the variable will still be treated as a string, e.g. "identifier": "{{id}}".
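As a minimal illustration of the two behaviours (the keys are just the examples from this post, shown in YAML stack syntax):

# Without the $, the variable is substituted as a string value:
identifier: "{{id}}"
# With the $, Cycle does a literal replacement at build time, so the variable
# can supply an entire object, array, or number for the key:
environment_variables: "{{$variables}}"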
Finally, the value of variables will not be embedded into build outputs anymore. This is to ensure that no secrets or keys are inadvertently leaked or made visible where they shouldn't be.
This change is breaking for some stacks
To update your stacks and ensure compatibility with your next build, make sure every variable is the value of a key, and add the $ prefix to any variable that should be replaced with a number, object, or array rather than a string.
If you have any questions, please leave a comment here or reach out to our team on slack!
Thanks, have a great weekend.
Our Docker image has a HEALTHCHECK instruction in it:
HEALTHCHECK --interval=30s --timeout=5s --retries=5 CMD ["grpc_health_probe", "--addr=:9000"]
But it seems it is not picked up by Cycle to fill in the health check policy for the container. As a workaround, we are explicitly setting containers.<container>.config.deploy.health_check in the Cycle stack JSON/YAML:
health_check:
  command: "grpc_health_probe --addr=:9000"
  delay: "30s"
  interval: "30s"
  restart: true
  retries: 5
  timeout: "5s"
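For anyone else doing the same, this is roughly where that block sits in a stack file, following the containers.<container>.config.deploy.health_check path above (the container name is just an example):

containers:
  my-service:                  # illustrative container name
    config:
      deploy:
        health_check:
          command: "grpc_health_probe --addr=:9000"
          # ...remaining fields (delay, interval, restart, retries, timeout) as above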
Hello,
I'm integrating with the pipeline API, and it seems like pipeline runs don't get a final state assigned when they error out. I've created and queried a couple of errored-out runs and they all have a state block like this:
"state":
{
"changed": "2024-08-21T04:07:43.786Z",
"error":
{
"message": "could not find cycle.json in repo"
},
"current": "running"
},
Note that current is still running even though the run ended hours ago with a pretty permanent error. I would expect the current status to be something like failed, so I can pick it up and determine the run is done (and cooked).