Latest Threads

question

Load balancer - traffic distribution between containers according to URL.

Hi!

How can I route traffic based on URL?

For example:

https://<my_domain>/ - goes to one container

https://<my_domain>/<path1> - goes to a second container

https://<my_domain>/<path2> - goes to another container

Thanks!
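The idea behind path-based routing can be sketched as longest-prefix matching of the request path to a backend. This is a generic illustration, not Cycle configuration; the container names and paths below are placeholders:

```python
# Map route prefixes to backend containers (names are made-up placeholders).
ROUTES = {
    "/path1": "container-b",
    "/path2": "container-c",
    "/": "container-a",  # catch-all for the root
}

def route(path: str) -> str:
    """Return the backend for the longest route prefix matching `path`."""
    # Longest prefix wins, so "/path1/x" matches "/path1" before "/".
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return ROUTES["/"]
```

In practice the load balancer's router does this matching for you; the sketch just shows the longest-prefix-wins rule most routers apply.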

feature-request

Allow enforcement of 2FA on members

Allow the enforcement of 2FA on member invites and accounts. Ensuring that downstream DevOps engineers and admins have 2FA enforced is mission-critical for compliance.

feature-request

Add support for variables in body of Webhooks (Pipeline stage -> 'Post to webhook')

To complete the notification process for internal/external teams on upgrades to environments, it would be nice to be able to use the pipeline variables. Use cases include posting the previous revision/build ID and the upgraded one or to indicate the cluster/environment modified.

question

Server storage

We have a notification on our server saying server storage is almost full.

"Server Storage Pool Full: There is less than 10% of total storage available on server."

But when I look at the Server Details -> base volume on the right it says 16GB/29GB used.

What is the actual usage percentage?

Also, there is an option to increase the storage size. How do I check the maximum storage available for my server?
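For reference, the base-volume figure in the question works out as follows (plain arithmetic on the numbers shown, assuming the 16GB/29GB reading is used/total):

```python
# Arithmetic on the figures from the question: 16 GB used of 29 GB total.
used_gb, total_gb = 16, 29
used_pct = used_gb / total_gb * 100
print(f"base volume: {used_pct:.1f}% used, {100 - used_pct:.1f}% free")  # ~55.2% used
```

That roughly 45% free does not match the alert's "less than 10%" figure, which suggests the storage-pool alert and the base-volume reading may measure different things.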

feature-request

Basic conditional (optional) logic in stacks/pipelines

With stacks/pipelines, you could avoid adding near-duplicate pipelines by having conditional logic. Consider this scenario:

  • You define a DEBUG_LEVEL scoped variable as GLOBALLY scoped.
  • You define a specific workload to override the DEBUG level conditionally, IF it is set.
  • You could then conditionally set the debug-level variable on a pipeline: if you need the variable overridden for a given deployment, you could set the debug level to 'high' and deploy, overriding the variable for that deployment.

Currently there are limitations: you need to stop an instance for ENV variables to take effect, and redeploying isn't viable because, as it stands, that always requires the environment variable to be set.

Implementation suggestions (should you permit these :))

  • Mark build variables for stacks as 'optional', as well as in pipelines. If the variable is set, you'll send the value to the pipeline/stack, and then the stack can optionally define the variable and even a fallback if necessary.
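The "optional with fallback" semantics being asked for can be sketched as plain environment-variable handling. The variable name and default are illustrative, not existing Cycle behaviour:

```python
import os

# If the deployment provided DEBUG_LEVEL, use it; otherwise fall back
# to a default defined by the stack. This sketches the requested
# "optional variable with fallback" behaviour.
def effective_debug_level(default: str = "low") -> str:
    return os.environ.get("DEBUG_LEVEL", default)
```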
question

SSL Certificates

Hello!

How can I upload my SSL certificate (from GoDaddy) for DNS zone?


Thanks!

question

How do I turn on legacy mode (ipv4) for an existing environment?

I'm deploying containers that need to connect to each other locally via ipv4. How do I turn on legacy mode for an existing environment?

feature-request

Scheduled Triggers for Functions

One of the most useful features on AWS Lambda is the ability to set a CRONTab schedule for automated triggering of the function. That would be a really handy update to the current Function configuration!
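For context, a cron schedule is five space-separated fields (minute, hour, day of month, month, day of week). A minimal matcher handling only numeric fields and `*` sketches the idea; real cron syntax also supports ranges, lists, and steps, which this omits:

```python
from datetime import datetime

# Minimal cron-style matching sketch: each field is either "*" or a
# single number. Note: real cron uses 0=Sunday for weekdays, while
# Python's weekday() uses 0=Monday; this sketch uses Python's convention.
def cron_matches(expr: str, when: datetime) -> bool:
    fields = expr.split()  # minute, hour, day, month, weekday
    actual = [when.minute, when.hour, when.day, when.month, when.weekday()]
    return all(f == "*" or int(f) == a for f, a in zip(fields, actual))
```

A scheduler would evaluate this every minute; `cron_matches("0 2 * * *", now)` is true once a day, at 02:00.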

feature-request

Scoped variable overview in container config section

It would be nice to have an overview of the currently attached scoped variables that apply to this instance. This helps with understanding which variables are / are not attached in the event there are conflicting (or wrongly configured) scoped variables that do not have this container in their scope. It helps even more when tags change and the variables no longer apply to containers.

feature-request

Allow deployments to prefer a node tag but fall back gracefully.

Hey all!

I've got a heterogeneous cluster where some nodes are CPU-heavy and others memory-heavy. I'd like to deploy particular workloads to particular nodes if there's space, but I'd much prefer that deploys didn't fail if there's space on another node, just not the one I want. One way I can imagine this working is to assign a list of tags [cpu-pool, pool] and have the scheduler try to match as many tags as possible, failing only if it can't match a single one. So an ANY rule in addition to your current ALL implementation, I suppose?

As it stands I'm a bit nervous to configure my preferred split because breaking deploys is a bigger downside than optimising node usage is an upside.
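The proposed ANY rule could score nodes by how many requested tags they match, preferring the best match and failing only on zero matches. A sketch (node names and tags are made up, and a real scheduler would also check capacity):

```python
# Sketch of the proposed ANY scheduling rule: prefer the node matching
# the most requested tags, but accept any node matching at least one,
# instead of failing unless all tags match.
def pick_node(requested_tags, nodes):
    """nodes: dict of node name -> set of tags. Returns best node or None."""
    want = set(requested_tags)
    best, best_score = None, 0
    for name, tags in nodes.items():
        score = len(want & tags)
        if score > best_score:
            best, best_score = name, score
    return best  # None only if no node matched a single tag

nodes = {
    "node-a": {"cpu-pool", "pool"},
    "node-b": {"pool"},
    "node-c": {"gpu"},
}
```

With the tag list from the post, `pick_node(["cpu-pool", "pool"], nodes)` prefers node-a (two matches) but would fall back to node-b (one match) if node-a were removed or full.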

feature-request

Automated resource and power management

I would like to submit a request for a feature. VMWare has a feature called DPM (distributed power management), I think a similar feature in Cycle could be very useful. Power up and down hosts then rebalance workloads as needed based on workload resource consumption. Thanks!

feature-request

Subqueues in Pipelines

Hey all, bit of context: I have a generic pipeline that deploys all my services. This works well, but given only one run of a pipeline can be performed at a time, it leads to sequential runs when they could be parallelised (to give a concrete example: a deploy to service A's staging environment will block a deploy to service B's UAT environment).

I'd like to opt in to some control in the pipeline run and provide a lock token of some kind (which would in my case probably be a string that combines the target environment and target cluster, say, to guarantee only one run can touch service A's staging environment at a time, for example).
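The lock-token idea maps to a per-token mutex: runs with different tokens proceed in parallel, while runs sharing a token serialize. A sketch (the token format and function names are illustrative, not a Cycle API):

```python
import threading
from collections import defaultdict

# One lock per token; the token combines target environment and cluster,
# following the suggestion in the post.
_locks = defaultdict(threading.Lock)

def run_pipeline(environment: str, cluster: str, job) -> None:
    token = f"{environment}:{cluster}"  # e.g. "staging:eu-1" (made up)
    with _locks[token]:  # only one run per token at a time
        job()
```

Two runs with tokens `staging:eu-1` and `uat:eu-1` would no longer block each other, which addresses the service-A-staging vs. service-B-UAT example above.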

feature-request

Conditional actions in pipelines

Bit of context: I've got a one-size-fits-all pipeline that deploys all our various services, as they're all very similarly shaped. This is working great as it keeps things very nice and standardised, and we're using variables to target specific environments and clusters and tags and such.

There's one small difference for a few services - they have a container that runs as part of the deploy process as essentially a one-off. There's a few containers like that for all services, and I explicitly stop them once the health check has passed and we're good, to avoid them getting kicked back to life every 10 minutes, but I can't do that with this specific container as not all stacks I deploy have something like it.

So for this use case I'd love some kind of "stop this container if it exists", or even a "stop this container, but ignore any errors that might cause for the purposes of this pipeline". There are probably other ways to address this I haven't thought of, as well.
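The "stop if it exists, ignore errors" step boils down to a try/except around the stop call. A sketch (`stop_container` and `ContainerNotFound` are placeholders standing in for whatever the pipeline step would call, not a real Cycle API):

```python
# Generic sketch of "stop this container, but ignore a missing one".
class ContainerNotFound(Exception):
    pass

def stop_if_exists(stop_container, name: str) -> bool:
    """Attempt to stop `name`; return False instead of raising if absent."""
    try:
        stop_container(name)
        return True
    except ContainerNotFound:
        return False
```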

feature-request

Allow deprecation of a deploy (to avoid double counting allocations on cluster nodes)

A bit of context - to allow quick rollback to a known good deployment, we tag whatever deployment we're replacing as previous-[environment] and stop all containers in it as part of the deployment pipeline. That way, to do a fast rollback we'd just need to move the environment tag back to it and start the relevant containers and we're back up and running.

This mostly works very well, but it does mean that all our services essentially seem to consume twice as many CPU allocations as they actually do, as each deployment gets counted twice (once for the live deployment and once for the stopped rollback option). It'd be nice if I could flag the old deployment as deprecated (or whatever you want to call it) to signal to Cycle that it's not going to be turned on in normal use, and it shouldn't be counted in server capacity.

The way we work around this right now is to enable overcommit on our cluster nodes and consider each to have twice the allocations they actually do, but a) that's difficult context to keep alive in a team's collective memory and b) it throws off the cluster capacity graph.

feature-request

Duplicate and Create new button

When creating new image sources it would be handy to have a "clone as new" option. Clicking this would open a new source creation modal using the existing source as a template which can be modified before clicking create.

In my experience, most of the time only the name and image/tag are different between many sources, this would just speed up the creation process within the UI.

feature-request

Collapse Environments list

When working with multiple environments it would be nice to be able to collapse the list to only show the environments currently being worked with.

Similarly for stacks, images & sources, and pipelines: it would be good to be able to categorise these in some way. When running multiple projects where not all the stacks and images are relevant to each project, it would be good to have a way to show only what you're interested in. I suppose tagging could be one way to achieve this, if we can add and filter on tags for these resources.

feature-request

Impersonation views

Would be handy to have a way as an admin to impersonate a role in order to test the access controls applied to that role easily.

question

Load Averages - What are reasonable levels?

Hey team! I can see the load averages on our servers, but I'm not sure what is too high or too low.

In the docs, I can see "Load Averages System load averages over time-frame selected. This indicates how busy the server's CPU is." But the scale of these load averages is 0, 1, 2, etc. And I see for example LOAD 0.74, 0.64, 0.61 in the right hand panel as well as the graph.

Can you give some guidance about what would be too little load (ie too big of a machine, could save $$) or too much load (ie too few CPUs, needs bigger machine), etc? In absence of this, I'm not sure what to do with these numbers.

Thanks!
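A common rule of thumb (general Unix behaviour, not Cycle-specific): a load average roughly counts tasks wanting CPU time, so it should be read relative to the CPU count. Around 1.0 per CPU the machine is saturated; well below that, it is mostly idle and may be oversized:

```python
import os

# Normalize a load average by the number of CPUs: ~1.0 per CPU means
# the machine is saturated; well under 1.0 means spare capacity.
def load_per_cpu(load, cpus=None):
    cpus = cpus or os.cpu_count() or 1
    return load / cpus

# The 0.74 from the question on, say, a 2-CPU server:
# load_per_cpu(0.74, 2) -> 0.37, i.e. roughly 37% busy on average.
```

So the 0.74 / 0.64 / 0.61 readings are only "too high" if the server has one CPU and the trend keeps climbing; on a multi-CPU machine they suggest comfortable headroom.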

question

servers matching criteria do not contain enough unallocated resources for deployment

Hi, we are trying to spin up new containers, we are getting this error. How do we resolve this?

feedback

adding port to load balancer via UI

Hey, I just wanted to let you know that when ports are added to a load balancer via the UI, the created port does not show in the port list after applying the changes with the "UPDATE LOAD BALANCER" button. After refreshing the page, the added port is correctly listed.

v2024.10.16 © 2024 Petrichor Holdings, Inc.