Introduction to Pipelines
A pipeline is an organized group of stages and steps for automating any number of tasks within a hub, kicked off by a trigger.
Generally, pipelines are used to automate container deployments, but they can also orchestrate complex sequences that affect entire clusters. This type of automation is commonly known as GitOps.
Anatomy of a Pipeline
Pipelines are divided into stages, each containing a series of steps.
Pipeline Runs
Each invocation of a pipeline is referred to as a run. The run data encapsulates all pipeline variables, the pass/fail result of each stage and step, and the time of the run.
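As a rough illustration, the shape of that run data might look like the following sketch. The field names here are assumptions for illustration, not the exact API schema.

```python
# A minimal sketch of the data a pipeline run encapsulates.
# Field names are illustrative assumptions, not the exact Cycle schema.
run = {
    "id": "run-abc123",                    # hypothetical run identifier
    "state": "complete",
    "started_at": "2024-01-01T12:00:00Z",  # the time of the run
    "variables": {"image_tag": "v1.2.3"},  # variables the run was invoked with
    "stages": [
        {
            "identifier": "release",
            "success": True,
            "steps": [
                {"identifier": "import-image", "success": True},
                {"identifier": "reimage-container", "success": True},
            ],
        }
    ],
}
```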
Stages
Stages are intended to be logical groupings of steps that achieve a goal.
Planned Feature
Support for running pipeline stages concurrently is planned for the future.
Steps
Steps are the specific tasks that the pipeline will execute when run. Steps within a stage are run in series.
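To make the relationship concrete, here is a minimal sketch of a pipeline laid out as nested data: stages contain steps, and the steps inside each stage execute in order. The action names are assumptions for illustration; the step list linked below has the real options.

```python
# A minimal sketch of a pipeline's anatomy. Stages run one after another,
# and the steps inside each stage run in series, top to bottom.
# The "action" values below are illustrative assumptions.
pipeline = {
    "name": "deploy-api",
    "stages": [
        {
            "identifier": "release",
            "steps": [
                # step 1: pull the newly pushed image into Cycle
                {"action": "image.create", "details": {"name": "api:v1.2.3"}},
                # step 2: roll the running container over to the new image
                {"action": "container.reimage", "details": {"container": "api"}},
            ],
        }
    ],
}
```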
For a complete list of all available steps and their details, see here.
Run Components
Each run contains a list of components pointing to the resources that were created or changed during the run.
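For example, a run that imported an image and reimaged a container might record components along these lines (field names are assumptions for illustration):

```python
# Illustrative sketch of run components: pointers to the resources
# the run created or changed. Field names are assumptions.
components = [
    {"type": "image", "id": "img-456"},      # image imported during the run
    {"type": "container", "id": "cont-123"}, # container that was reimaged
]
```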
Trigger Keys
Trigger keys are used to begin a run of a pipeline from outside Cycle.
Example Workflow
If using GitHub Actions, the action can focus on testing the newly submitted code, building and pushing a Docker image to a registry, and then calling the trigger key via cURL to start the deployment on Cycle. From there, the pipeline automates importing the image and deploying or reimaging a container.
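As a sketch, the final job of such a workflow could call the trigger endpoint over HTTPS. A Python equivalent of the cURL call is shown below; the endpoint path, payload shape, and field names are assumptions for illustration, and the trigger key documentation linked below has the authoritative details.

```python
import requests

# A hedged sketch of starting a pipeline run from CI with a trigger key.
# The endpoint path and payload fields are illustrative assumptions.
PIPELINE_ID = "5f1e9c..."      # hypothetical pipeline ID
TRIGGER_KEY_SECRET = "tk_..."  # hypothetical trigger key secret

resp = requests.post(
    f"https://api.cycle.io/v1/pipelines/{PIPELINE_ID}/trigger",
    json={
        "secret": TRIGGER_KEY_SECRET,
        "variables": {"image_tag": "v1.2.3"},  # pipeline variables, if any
    },
    timeout=30,
)
resp.raise_for_status()
print("run started:", resp.json())
```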
Read more about trigger keys here.
ACLs
Pipelines have full support for role-based ACLs.
Locking and Resource Acquisition
Pipelines that access resources for write operations will put a 'lock' on that resource. This simply means that other pipelines wanting to write to that resource will wait until the first pipeline has finished running before starting, and will have the state of 'acquiring' during this period.
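The behavior is analogous to a mutex guarding each resource. The following is a minimal Python sketch of the queueing semantics, not Cycle's actual implementation:

```python
import threading
import time

resource_lock = threading.Lock()  # stands in for Cycle's per-resource lock

def run_pipeline(name: str) -> None:
    print(f"{name}: acquiring")   # analogous to the 'acquiring' state
    with resource_lock:           # blocks until the earlier run releases it
        print(f"{name}: running")
        time.sleep(1)             # pretend to perform write operations
    print(f"{name}: finished")

first = threading.Thread(target=run_pipeline, args=("run-a",))
second = threading.Thread(target=run_pipeline, args=("run-b",))
first.start()
second.start()
first.join()
second.join()
```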
Sub-Queueing / Parallelizing Dynamic Pipelines
Advanced Feature
Sub-queueing isn't often necessary. It's normally utilized with generic pipelines that rely heavily on variables and can operate on a number of different resources, and it will be apparent when it is needed.
When utilizing pipeline variables, Cycle will continue to utilize its dependency verification to prevent conflicts, but there's an inherent risk with runtime planning for complex pipelines. Cycle will sometimes lack the information necessary to know when it can parallelize and when it cannot, and will err on the side of caution.
To get around this, sub-queueing can be opted into when triggering a pipeline, letting Cycle know that any pipeline runs with different sub-queues can be parallelized.
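Building on the trigger sketch above, opting into a sub-queue might look like the following. The 'sub_queue' field name is an assumption for illustration; the trigger key documentation linked below covers the real mechanics.

```python
import requests

# A hedged sketch: runs triggered with different sub-queue values tell Cycle
# they touch disjoint resources and can be parallelized. The field names,
# endpoint path, and credentials below are illustrative assumptions.
PIPELINE_ID = "5f1e9c..."      # hypothetical pipeline ID
TRIGGER_KEY_SECRET = "tk_..."  # hypothetical trigger key secret

for tenant in ("tenant-a", "tenant-b"):
    resp = requests.post(
        f"https://api.cycle.io/v1/pipelines/{PIPELINE_ID}/trigger",
        json={
            "secret": TRIGGER_KEY_SECRET,
            "sub_queue": tenant,             # distinct sub-queues may run in parallel
            "variables": {"tenant": tenant},
        },
        timeout=30,
    )
    resp.raise_for_status()
```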
Use Caution
The responsibility falls on the user to ensure that, when utilizing sub-queues, two runs of a pipeline will not touch the same resource. Failure to do so could result in unintended consequences.
Read how to use sub-queues when running a pipeline with a trigger key here.