Scheduled Triggers for Functions
One of the most useful features of AWS Lambda is the ability to set a crontab-style schedule for automated triggering of a function. That would be a really handy addition to the current Function configuration!
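For illustration, the kind of schedule meant here is a standard five-field crontab expression (minute, hour, day-of-month, month, day-of-week). A minimal sketch of what such a trigger could look like on a function's config; the `schedule` field is hypothetical, not an existing Cycle option (JSONC-style comments mark the assumptions):

```json
{
  "trigger": {
    // hypothetical field: a standard crontab expression,
    // here "every day at 02:00 UTC"
    "schedule": "0 2 * * *"
  }
}
```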
Hey all!
I've got a heterogeneous cluster where some nodes are CPU-heavy and others memory-heavy. I'd like to deploy particular workloads to particular nodes if there's space, but I'd much prefer that deploys didn't fail when there's space on another node, just not the one I want. One way I can imagine this working is to assign a list of tags [cpu-pool, pool] and have the scheduler try to match as many tags as possible, failing only if it can't match a single one. So an ANY rule in addition to your current ALL implementation, I suppose? (There's a sketch of what I mean below.)
As it stands I'm a bit nervous to configure my preferred split because breaking deploys is a bigger downside than optimising node usage is an upside.
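A minimal sketch of how that preference could be expressed on a container's deployment constraints; the whole shape is illustrative rather than Cycle's actual schema, and the `match` option is the hypothetical part:

```json
{
  "constraints": {
    "node": {
      "tags": {
        // hypothetical: "all" is today's behaviour (fail unless every
        // tag matches); "any" would be the proposed best-effort rule
        "match": "any",
        "values": ["cpu-pool", "pool"]
      }
    }
  }
}
```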
It would be nice to have an overview of the scoped variables currently attached to this instance. This helps with understanding which variables are and are not attached when there are conflicting (or wrongly configured) scoped variables that don't have this container in their scope. It helps even more after changing tags, when variables that used to apply to containers no longer do.
I would like to submit a feature request. VMware has a feature called DPM (Distributed Power Management), and I think a similar feature in Cycle could be very useful: power hosts up and down and rebalance workloads as needed based on workload resource consumption. Thanks!
Hey all, bit of context: I have a generic pipeline that deploys all my services. This works well, but given only one run of a pipeline can be performed at a time, it leads to sequential runs when they could be parallelised (to give a concrete example: a deploy to service A's staging environment will block a deploy to service B's UAT environment).
I'd like to opt in to some control over the pipeline run by providing a lock token of some kind (in my case probably a string combining the target environment and target cluster) to guarantee that only one run can touch service A's staging environment at a time, for example.
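A minimal sketch of the idea on a pipeline definition; the `lock` field is hypothetical and the variable syntax is only illustrative (JSONC-style comments mark the assumptions):

```json
{
  "name": "deploy-service",
  // hypothetical field: a run whose resolved token matches one already
  // in progress queues behind it; runs with different tokens parallelise
  "lock": "{{environment}}-{{cluster}}",
  "stages": []
}
```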
Bit of context: I've got a one-size-fits-all pipeline that deploys all our various services, as they're all very similarly shaped. This is working great as it keeps things very nice and standardised, and we're using variables to target specific environments and clusters and tags and such.
There's one small difference for a few services: they have a container that runs as part of the deploy process as essentially a one-off. There are a few containers like that across all services, and I explicitly stop them once the health check has passed and we're good, to avoid them getting kicked back to life every 10 minutes. But I can't do that with this specific container, as not all stacks I deploy have something like it.
So for this use case I'd love some kind of "stop this container if it exists", or even a "stop this container, but ignore any errors that might cause for the purposes of this pipeline". There are probably other ways to address this I haven't thought of, as well.
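A minimal sketch of what a tolerant stop step could look like; the step shape is illustrative, the container name is made up, and the `skip_if_missing` flag is the hypothetical part:

```json
{
  "action": "container.stop",
  "details": {
    // hypothetical one-off container that only some stacks include
    "container": "one-off-migrator"
  },
  // hypothetical flag: if the deployed stack has no matching
  // container, skip this step instead of failing the run
  "skip_if_missing": true
}
```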
A bit of context: to allow quick rollback to a known good deployment, we tag whatever deployment we're replacing as previous-[environment] and stop all containers in it as part of the deployment pipeline. That way, to do a fast rollback we'd just need to move the environment tag back to it and start the relevant containers, and we're back up and running.
This mostly works very well, but it does mean that all our services appear to consume twice as many CPU allocations as they actually do, as each deployment gets counted twice (once for the live deployment and once for the stopped rollback option). It'd be nice if I could flag the old deployment as deprecated (or whatever you want to call it) to signal to Cycle that it's not going to be turned on in normal use and shouldn't be counted against server capacity.
The way we work around this right now is to enable overcommit on our cluster nodes and consider each to have twice the allocations it actually does, but a) that's difficult context to keep alive in a team's collective memory and b) it throws off the cluster capacity graph.
When creating new image sources it would be handy to have a "clone as new" option. Clicking this would open a new source creation modal using the existing source as a template which can be modified before clicking create.
In my experience, most of the time only the name and image/tag differ between many sources, so this would just speed up the creation process within the UI.
When working with multiple environments it would be nice to be able to collapse the list to only show the environments currently being worked with.
Similarly for stacks, images & sources, and pipelines: it would be good to be able to categorise these in some way. When running multiple projects where not all the stacks and images are relevant to each project, it would be good to have a way to show only what you're interested in. I suppose tagging could be one way to achieve this, if we could add and filter on tags for these resources.
Would be handy to have a way as an admin to impersonate a role in order to test the access controls applied to that role easily.
Sometimes, especially when building new pipelines, I've found it all too easy to trigger a new pipeline run, only to realize shortly afterward that I need to make a minor update before being satisfied with the result.
In these situations, I often wish I could stop the ongoing pipeline run and trigger a fresh one immediately. Currently, I have to wait until the queued pipeline runs finish before I can test the most up-to-date version of the pipeline I'm working on.
Add support for Build Secrets (--secret) when building an image. Not sure how it'd work, but currently the only way I can see to pass authentication credentials that are usable downstream within the Dockerfile is to pass build args, which is insecure and leaves the values visible in Depot etc.
Specific use-case is npm authentication.
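For reference, this is the BuildKit mechanism the request maps to: the secret is supplied at build time and mounted into a single RUN step, so it never lands in an image layer or in the build args. A minimal sketch for the npm case, assuming an `.npmrc` containing the auth token exists on the build host:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# Mount the secret for this step only; it is not baked into any layer.
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
CMD ["node", "index.js"]
```

Built with something like `docker build --secret id=npmrc,src=$HOME/.npmrc .`; the ask here is for Cycle's image builds to accept an equivalent of that `--secret` flag.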
I'd like to set up a pipeline which can take the branch name as an argument and spin up a new environment using that branch; however, it appears that Magic Variables aren't supported in the "Git Repo Branch" or Ref Type -> "Ref Value" fields.
Minor nitpick here, but it initially led me down the path of thinking that one couldn't use git as a source for Create Image in Pipelines. Please see attached images :)
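A minimal sketch of the desired usage on an image-creation step; the step shape and the `{{branch}}` variable name are illustrative, not Cycle's actual schema:

```json
{
  "action": "image.create",
  "details": {
    "origin": {
      "type": "git-repo",
      // the ask: allow a magic variable here instead of
      // requiring a hard-coded branch name
      "branch": "{{branch}}"
    }
  }
}
```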
If we can edit the original post, it kind of makes sense to me that we should be able to edit replies?
Also, the reply text box cannot be scrolled by swiping up and down on Android; you need to select the cursor and drag it up and down.
Currently when signing in on Android, the 2FA code cannot be pasted in (only the first digit gets pasted, into the first box).
I have a Docker build which takes more than 10 minutes, and it times out on Depot. I'd prefer to use some of the compute I'm already paying for in my worker nodes for building, so I think it'd make sense to be able to select one of the clusters in the hub as a target for where the Docker image will build. It would also potentially move some of the load off the built-in Cycle builder.
Saved Credentials/fields
- Proposed basis
  - Can be singular fields, not provider-specific
  - Can be used anywhere text can be used
  - Maybe two variants, one for cleartext fields and one for sensitive fields
- Use case
  - For configuring multiple entities/objects that utilize the same keys; most providers don't allow you to view credentials after generating, so this would be handy.
It is difficult to find the correct container once the number of containers in an environment reaches a certain point. Pagination is implemented, but you have to sift through page numbers to find the container you need.
Is there a chance to implement a search bar that will allow us to search for specific containers?
It's quite hard to edit pipelines in the UI once you've got them in use (and thus rely on the IDs staying the same, etc.). Being able to move steps around would allow me to add new types of steps through the UI rather than having to use the API / manually reorder some JSON.