feature-request

Allow grouping of containers in stack definitions for pipelines

Hey all,

This one's a bit in the weeds but here's the context:

  • I use stack definitions for each of our services and deploy them with a generic pipeline that uses a bunch of variables to determine which environment we're deploying to, which stack definition to use, and a couple of other things.
  • Each service has a couple of one-off jobs that need to run once, on deploy, and then basically never again. This is hard to model in the current Cycle approach because unless I explicitly stop each container, by name, after it's run once, each of these containers will get a platform health check every 10 minutes and a restart.
  • This is the only thing I can't dynamically control in the pipeline and as such the only thing that's preventing a single pipeline for all deploys.

What I would like:

  • A way to tag or otherwise mark a container in a stack definition so that I can act on all containers with that tag in a pipeline. In my specific case, imagine a tag called migration (the canonical example of this sort of workload is a database migration, hence the tag), where in a pipeline step I can just say "now stop all containers tagged migration". That would very neatly solve my problem.

In my specific case I could model all of these as function containers and use a 'now stop all function containers' type grouping, but I'd imagine that would be much less broadly useful to others.

  • Hey Thomas, not sure how this one slipped by without response. Sorry about that.

    This makes sense. I'm not sure where the right place to mark that is, but the stack does seem like a logical one.

    Currently you could use container stop with the resource path identifier in the format environment:example,container:container-identifier(deployment.tag=yourtag) or (deployment.version=version), but you'd have to do this for each container you want to interact with. Your way would allow for fewer actions on the user side, which could be really nice.

  • Looking at your answer I realized I missed the primary reason this would be so useful for me: different services have different migration containers. At last count I have 4 different types of one-off jobs that can run, but most services only have one or two of them (and there are several different configurations). To the best of my knowledge, Cycle protests quite vigorously if I try to stop a container in a pipeline that doesn't actually exist in the stack it is deploying.

    So to add the context that makes my question actually make sense (sorry I missed that!): what I really want is a way to say 'stop this container if it's present, but be chill about it if it's not'. I've just been brainstorming various generic ways to express that which might be useful to others outside my specific use case.
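    The "stop it if it's present, be chill if it's not" behavior boils down to something like the sketch below. The client and its list_containers()/stop_container() methods are hypothetical stand-ins for whatever talks to the platform API, not real Cycle calls; this just illustrates the intent:

```python
def stop_matching(client, tag):
    """Stop every container carrying `tag`; quietly do nothing if none match.

    `client` is a hypothetical stand-in for the platform API client:
    list_containers() returns dicts with "id" and "tags" keys, and
    stop_container(id) issues the stop. Neither name is a real Cycle API.
    """
    stopped = []
    for container in client.list_containers():
        if tag in container.get("tags", []):
            client.stop_container(container["id"])
            stopped.append(container["id"])
    # An empty list is a perfectly fine outcome, not an error --
    # that's the "be chill about it if it's not there" part.
    return stopped
```

    The key design point is that "nothing matched" falls through silently instead of raising, so the same pipeline works for services with zero, one, or several of these one-off containers.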

  • Ahh yes, this makes more sense: basically a try / continue instead of a try / error / stop. I actually believe this is in the plan for this year, slated for early Q2/Q3 rather than later, but I will definitely keep you in the loop when we get close to make sure it fits what you're talking about.

    Another thing we're looking at, possibly this year, is conditional steps in pipelines. That's still in the planning phase, but I think it will be really powerful.

    One thing I'd never thought of before, but that just came to mind, would be to use the POST webhook step against a service in the environment (a VERY small API). POST to that webhook with the data from the pipeline, and the service can decide what to do with that step (pass or fail) via the HTTP response code it returns. This really only works if the steps after it are not dependent on the outcome, BUT I believe what you're saying is they are not.

    That would basically look like:

    • POST to the webhook
    • parse the data at the API
    • if the call is successful, return a 200 and mark the step a success
    • if not 200, decide if you want a retry (that's part of the step mechanics)
    • rest of pipeline

    then on the API side

    • grab the request body
    • make a few simple API calls
    • wait on the job response (for example, a container start/stop)
    • if all good, move on
    • if it fails, post to Slack or something like that

    Basically this works 99% of the time, and when a job does fail for some reason you get a note in Slack and someone can manually finish that action off.

    The nice thing here would be: if you get an error from the API on the job because the resource doesn't exist, you can safely pass that step without impacting the rest of the pipeline.
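    The API side of that could be as small as one decision function. A minimal Python sketch, assuming run_job does the real work (e.g. the container stop) and notify posts to Slack; both are injected, hypothetical stand-ins, and the function just returns the status code your HTTP framework of choice should respond with:

```python
import json

def handle_webhook(raw_body, run_job, notify):
    """Turn a pipeline webhook POST into a pass/fail decision.

    run_job(payload) performs the real work (e.g. stop the migration
    container); notify(message) alerts a human (e.g. a Slack post).
    Both are hypothetical injected callables, not real Cycle/Slack APIs.
    Returns the HTTP status code the tiny API should reply with.
    """
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400  # malformed request: fail the step, retry mechanics apply
    try:
        run_job(payload)
    except LookupError:
        # The resource doesn't exist in this stack -- the "be chill" case.
        return 200
    except Exception as exc:
        notify(f"pipeline job failed: {exc}")  # someone finishes it manually
        return 500  # non-200: the step fails and may retry
    return 200  # job ran: mark the step a success
```

    Returning 200 when the resource simply isn't there is what lets the step pass cleanly; any other failure returns a non-200 so the step's retry mechanics (plus the Slack ping) take over.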

