WAF questions
Hi!
Could you provide more details about the WAF? We are experiencing constant attempts at malicious activity.
I’m particularly interested in protection against:
Directory Traversal
Code Injection
SQL Injection
XSS
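For illustration only, here is a minimal sketch (in Go, not Cycle's actual rule set) of the kinds of request payloads a WAF typically flags for those four categories; the regexes are deliberately simplified examples, and a real WAF normalises and inspects far more than this.

```go
package main

import (
	"fmt"
	"regexp"
)

// Simplified, illustrative-only detection patterns for the four attack
// classes listed above. A production WAF uses normalised, far more
// sophisticated matching than these regexes.
var patterns = map[string]*regexp.Regexp{
	"directory traversal": regexp.MustCompile(`\.\./|\.\.\\|%2e%2e%2f`),
	"code injection":      regexp.MustCompile(`(?i)\b(eval|exec|system)\s*\(`),
	"sql injection":       regexp.MustCompile(`(?i)('\s*or\s*'1'\s*=\s*'1)|union\s+select|;\s*drop\s+table`),
	"xss":                 regexp.MustCompile(`(?i)<script\b|javascript:|onerror\s*=`),
}

// classify reports which attack classes a raw request fragment matches.
func classify(input string) []string {
	var hits []string
	for name, re := range patterns {
		if re.MatchString(input) {
			hits = append(hits, name)
		}
	}
	return hits
}

func main() {
	fmt.Println(classify(`/files?path=../../etc/passwd`))      // [directory traversal]
	fmt.Println(classify(`q=' OR '1'='1`))                      // [sql injection]
	fmt.Println(classify(`comment=<script>alert(1)</script>`)) // [xss]
}
```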
Hi!
I deployed a container with a high availability deployment strategy on two instances, assigning a tag that I previously associated with the servers.
To manage the containers, I created Scoped Variables using Container Identifiers as the binding. However, the containers share the same identifier. How can I use one of the variables with different values depending on the container instance?
Thx
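Not a Cycle-specific answer, but one common workaround is to keep a single shared variable and let each container pick its own value at startup, assuming the platform injects some per-instance identifier into the environment. Everything below is a hypothetical sketch: the names INSTANCE_ID and PER_INSTANCE_CONFIG are made up for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Hypothetical per-instance identifier injected by the platform.
	instanceID := os.Getenv("INSTANCE_ID")

	// Hypothetical shared scoped variable holding a JSON map of per-instance
	// values, e.g. {"instance-a":"value-1","instance-b":"value-2"}.
	var perInstance map[string]string
	if err := json.Unmarshal([]byte(os.Getenv("PER_INSTANCE_CONFIG")), &perInstance); err != nil {
		fmt.Fprintln(os.Stderr, "invalid PER_INSTANCE_CONFIG:", err)
		os.Exit(1)
	}

	value, ok := perInstance[instanceID]
	if !ok {
		fmt.Fprintf(os.Stderr, "no value configured for instance %q\n", instanceID)
		os.Exit(1)
	}
	fmt.Println("using value:", value)
}
```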
Hey all,
I wanted to let you know that we are experiencing two GUI issues in the load balancer modal.
Best,
Tom
To bring this conversation into the open, I thought I would throw a question out to the senior staff at Cycle regarding kernel/OS updates. Let's kick this thread off with a few questions around server security.
I think these types of questions serve as a baseline for determining how folks can address security updates and keep their servers up to date.
Hello, we are experiencing timeouts on our API calls and are trying to identify the source. We can see that the Cycle load balancer has multiple timeout fields, and we are unsure which needs to be set so that a REST API call is allowed 90 seconds before timing out.
We see 2 options:
Which one needs to be set such that our timeouts are 90 seconds for calls on port 443?
Thank you!
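Without knowing exactly how Cycle's two fields map, the distinction that usually matters is between a timeout that covers an in-flight request/response (this one must be at least 90 seconds) and an idle/keep-alive timeout that only applies between requests. A minimal sketch using Go's standard HTTP server, purely to illustrate the two kinds of timeout rather than Cycle's configuration:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr: ":8443", // illustrative port; TLS termination omitted for brevity
		// These bound how long a single request/response may take, so they
		// are the ones that must exceed a 90-second API call.
		ReadTimeout:  100 * time.Second,
		WriteTimeout: 100 * time.Second,
		// This only applies to kept-alive connections with no request in
		// flight; it does not cut off a slow in-progress call.
		IdleTimeout: 60 * time.Second,
	}
	log.Fatal(srv.ListenAndServe())
}
```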
Hi!
How can I route traffic based on URL?
For example:
https://<my_domain>/ - goes to one container
https://<my_domain>/<path1> - goes to a second container
https://<my_domain>/<path2> - goes to another container
Thanks!
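As a generic illustration of the concept (this is not Cycle's load balancer configuration), path-based routing with a reverse proxy looks roughly like this; the upstream container addresses are hypothetical:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo returns a handler that forwards requests to the given upstream.
func proxyTo(upstream string) http.Handler {
	target, err := url.Parse(upstream)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	mux := http.NewServeMux()
	// Longest-prefix matching: /path1/* and /path2/* go to their own
	// containers, everything else falls through to the default container.
	mux.Handle("/path1/", proxyTo("http://container-one:8080"))
	mux.Handle("/path2/", proxyTo("http://container-two:8080"))
	mux.Handle("/", proxyTo("http://container-default:8080"))
	log.Fatal(http.ListenAndServe(":80", mux))
}
```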
Allow the enforcement of 2FA on member invites and accounts. Ensuring that downline DevOps engineers and admins have 2FA enforced is mission-critical for compliance.
To complete the notification process for internal/external teams on upgrades to environments, it would be nice to be able to use the pipeline variables. Use cases include posting the previous revision/build ID and the upgraded one or to indicate the cluster/environment modified.
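As a sketch of the kind of templated notification this would enable: the variable names below (environment, previous_build, new_build) are hypothetical placeholders, not actual Cycle pipeline variables.

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical pipeline variables; the real names and values would come
	// from the pipeline run itself.
	vars := map[string]string{
		"environment":    "production-eu",
		"previous_build": "build-1412",
		"new_build":      "build-1413",
	}
	msg := template.Must(template.New("notify").Parse(
		"Environment {{.environment}} upgraded: {{.previous_build}} -> {{.new_build}}\n"))
	if err := msg.Execute(os.Stdout, vars); err != nil {
		panic(err)
	}
}
```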
We have a notification on our server saying server storage is almost full.
"Server Storage Pool Full: There is less than 10% of total storage available on server"
But when I look at the Server Details -> base volume on the right it says 16GB/29GB used.
What is the actual usage percentage?
Also, there is an option to increase the storage size. How do I check the maximum storage available for my server?
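For what it's worth, 16 GB used of 29 GB works out to roughly 55% used (16 / 29 ≈ 0.55), i.e. about 45% free on that base volume alone, so the pool-level alert presumably also counts other volumes or reserved space on the server.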
With stacks/pipelines, you could avoid adding near-duplicate pipelines by having conditional logic. Consider this scenario:
Currently there are limitations: you need to stop an instance for ENV variables to take effect, and redeploying isn't viable because, currently, that always requires the environment variable to be set.
Implementation suggestions (should you permit these :))
Hello!
How can I upload my SSL certificate (from GoDaddy) for my DNS zone?
I'm deploying containers that need to connect to each other locally via IPv4. How do I turn on legacy mode for an existing environment?
One of the most useful features of AWS Lambda is the ability to set a cron schedule for automated triggering of the function. That would be a really handy addition to the current Function configuration!
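For illustration, this is the kind of schedule the request is about, sketched with the widely used robfig/cron library rather than anything Cycle-specific:

```go
package main

import (
	"log"

	"github.com/robfig/cron/v3"
)

func main() {
	c := cron.New()
	// Standard five-field cron expression: minute hour day-of-month month day-of-week.
	// "30 2 * * *" fires every day at 02:30.
	if _, err := c.AddFunc("30 2 * * *", func() {
		log.Println("triggering the function")
	}); err != nil {
		log.Fatal(err)
	}
	c.Start()
	select {} // keep the scheduler running
}
```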
It would be nice to have an overview of the currently attached scoped variables that apply to this instance. This helps with understanding which variables are and are not attached when there are conflicting (or wrongly configured) scoped variables that do not have the container in their scope. It helps even more when tags change and the variables no longer apply to the container.
Hey all!
I've got a heterogeneous cluster where some nodes are CPU-heavy and others memory-heavy. I'd like to deploy particular workloads to particular nodes if there's space, but I'd much prefer that deploys didn't fail if there's space on another node, just not the one I want. One way I can imagine this working is to assign a list of tags [cpu-pool, pool] and have the scheduler try to match as many tags as possible, failing only if it can't match a single one. So an ANY rule in addition to your current ALL implementation, I suppose?
As it stands I'm a bit nervous to configure my preferred split because breaking deploys is a bigger downside than optimising node usage is an upside.
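A rough sketch of the proposed ANY rule, ignoring capacity checks and using made-up node names: rank candidate nodes by how many of the requested tags they carry, and fail only when no node matches any tag.

```go
package main

import (
	"errors"
	"fmt"
)

type node struct {
	name string
	tags map[string]bool
}

// pickNode prefers the node matching the most requested tags and fails only
// when no node matches any tag at all (ANY semantics rather than ALL).
func pickNode(nodes []node, wanted []string) (string, error) {
	best, bestScore := "", 0
	for _, n := range nodes {
		score := 0
		for _, t := range wanted {
			if n.tags[t] {
				score++
			}
		}
		if score > bestScore {
			best, bestScore = n.name, score
		}
	}
	if bestScore == 0 {
		return "", errors.New("no node matches any requested tag")
	}
	return best, nil
}

func main() {
	nodes := []node{
		{name: "cpu-node", tags: map[string]bool{"cpu-pool": true, "pool": true}},
		{name: "mem-node", tags: map[string]bool{"pool": true}},
	}
	fmt.Println(pickNode(nodes, []string{"cpu-pool", "pool"})) // cpu-node <nil>
}
```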
I would like to submit a feature request. VMware has a feature called DPM (Distributed Power Management), and I think a similar feature in Cycle could be very useful: power hosts up and down, then rebalance workloads as needed based on workload resource consumption. Thanks!
Hey all, bit of context: I have a generic pipeline that deploys all my services. This works well, but given only one run of a pipeline can be performed at a time, it leads to sequential runs when they could be parallelised (to give a concrete example: a deploy to service A's staging environment will block a deploy to service B's UAT environment).
I'd like to opt in to some control in the pipeline run and provide a lock token of some kind (which would in my case probably be a string that combines the target environment and target cluster, say, to guarantee only one run can touch service A's staging environment at a time, for example).
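A sketch of the locking semantics being proposed, with the lock token built from the target environment and cluster so that only runs sharing the same token serialise while everything else runs in parallel; the token format here is just an example.

```go
package main

import (
	"fmt"
	"sync"
)

// runLocks holds one mutex per lock token.
var runLocks sync.Map // token -> *sync.Mutex

// withLock runs fn while holding the mutex for the given token, so two runs
// with the same token (e.g. "service-a/staging") cannot overlap, while runs
// with different tokens proceed concurrently.
func withLock(token string, fn func()) {
	m, _ := runLocks.LoadOrStore(token, &sync.Mutex{})
	mu := m.(*sync.Mutex)
	mu.Lock()
	defer mu.Unlock()
	fn()
}

func main() {
	var wg sync.WaitGroup
	for _, token := range []string{"service-a/staging", "service-b/uat", "service-a/staging"} {
		wg.Add(1)
		go func(t string) {
			defer wg.Done()
			withLock(t, func() { fmt.Println("deploying", t) })
		}(token)
	}
	wg.Wait()
}
```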
Bit of context: I've got a one-size-fits-all pipeline that deploys all our various services, as they're all very similarly shaped. This is working great as it keeps things very nice and standardised, and we're using variables to target specific environments and clusters and tags and such.
There's one small difference for a few services: they have a container that runs as part of the deploy process as essentially a one-off. There are a few containers like that across all services, and I explicitly stop them once the health check has passed and we're good, to avoid them getting kicked back to life every 10 minutes, but I can't do that with this specific container because not all stacks I deploy have something like it.
So for this use case I'd love some kind of "stop this container if it exists", or even a "stop this container, but ignore any errors that might cause for the purposes of this pipeline". There are probably other ways to address this that I haven't thought of, as well.
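A sketch of the semantics being asked for, with a stand-in stopContainer function in place of any real platform API: stop the container if it exists and treat "not found" as a successful no-op.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found")

// stopContainer is a hypothetical stand-in for a real platform API call.
func stopContainer(name string) error {
	if name != "one-off-migrator" {
		return errNotFound
	}
	fmt.Println("stopped", name)
	return nil
}

// stopIfExists implements the proposed step: a missing container is not a
// pipeline failure, only genuine stop errors are.
func stopIfExists(name string) error {
	err := stopContainer(name)
	if errors.Is(err, errNotFound) {
		fmt.Println("container", name, "not present, skipping")
		return nil
	}
	return err
}

func main() {
	stopIfExists("one-off-migrator")
	stopIfExists("something-else")
}
```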
A bit of context: to allow quick rollback to a known good deployment, we tag whatever deployment we're replacing as previous-[environment] and stop all containers in it as part of the deployment pipeline. That way, to do a fast rollback we'd just need to move the environment tag back to it and start the relevant containers, and we're back up and running.
This mostly works very well, but it does mean that all our services essentially seem to consume twice as many CPU allocations as they actually do, as each deployment gets counted twice (once for the live deployment and once for the stopped rollback option). It'd be nice if I could flag the old deployment as deprecated (or whatever you want to call it) to signal to Cycle that it's not going to be turned on in normal use, and it shouldn't be counted in server capacity.
The way we work around this right now is to enable overcommit on our cluster nodes and consider each to have twice the allocations it actually does, but (a) that's difficult context to keep alive in a team's collective memory and (b) it throws off the cluster capacity graph.
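As a concrete illustration of the effect: a service whose containers reserve 2 vCPU ends up counted as 4 vCPU of allocations once its stopped previous-[environment] copy is included, so a node that is really at 50% of its usable capacity appears full unless overcommit is enabled.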
When creating new image sources it would be handy to have a "clone as new" option. Clicking this would open a new source creation modal using the existing source as a template which can be modified before clicking create.
In my experience, most of the time only the name and image/tag differ between sources; this would just speed up the creation process within the UI.