Load Balancer Service
Every environment on Cycle comes equipped with a load balancer service for managing ingress traffic, which is deployed automatically as part of creating the environment. Each instance of the load balancer receives a dedicated IPv4 and IPv6 address and acts as a gateway into the environment.
How the Load Balancer Works
The load balancer is a container that runs on nodes deployed within a hub, and can be configured to run in high availability (HA) mode across multiple nodes, datacenters, or cloud providers, depending on scale requirements. It is always recommended to run critical services such as the load balancer in HA mode, so that a single node failure does not bring down the service and prevent traffic from reaching containers.
The load balancer can be started manually, or it will start automatically when a container with public networking set to enable is started.
Ingress Traffic
When traffic comes from the public internet (ingress), it always passes through the load balancer of the relevant environment. The load balancer has a dedicated IP address per instance of the service. The load balancer service is responsible for routing traffic to the correct container and balancing the load between instances of that container.
In addition, the load balancer can be configured to handle traffic in a custom manner based on port, destination, or other criteria, giving users full flexibility over how traffic is handled while maintaining sane default configurations.
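As an illustration of that flexibility, the sketch below models per-port handling rules in TypeScript. The rule shape and field names are hypothetical and do not mirror Cycle's actual configuration schema; they only show the kind of decision the load balancer makes before falling back to its defaults.

```typescript
// Hypothetical rule shape -- not Cycle's real schema, just an illustration of
// deciding how to handle traffic based on the port it arrives on.
interface PortRule {
  port: number;                  // ingress port on the load balancer
  mode: "http" | "tcp" | "udp";  // how traffic arriving on this port is treated
  destination?: string;          // optionally pin the port to one container
}

const rules: PortRule[] = [
  { port: 80, mode: "http" },                            // default web traffic
  { port: 5432, mode: "tcp", destination: "postgres" },  // raw TCP pinned to a database container
];

// Pick the rule for an incoming connection, if any; otherwise defaults apply.
function ruleFor(port: number): PortRule | undefined {
  return rules.find((r) => r.port === port);
}

console.log(ruleFor(5432)); // { port: 5432, mode: "tcp", destination: "postgres" }
```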
Egress Traffic
When traffic leaves a container (egress), the traffic is routed out through the instance's host server IP address, skipping the load balancer.
Routing Traffic to Containers
Cycle vastly simplifies networking, taking advantage of declarative configurations and sensible defaults to reduce the complexity of setting up and managing traffic flow to and between containers. Any changes made to the container's network configuration or LINKED records are automatically picked up by the load balancer without any manual intervention required.
Containers exposing port 80 are automatically configured to run in HTTP mode, allowing a single environment to expose multiple public containers behind the same IP. In addition, the load balancer supports TLS/SSL termination, simplifying the handling of encrypted HTTPS traffic.
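For example, the sketch below models two containers in the same environment that both expose port 80. The { public, hostname, ports } shape approximates Cycle's container network config, but treat the exact field names as assumptions rather than a definitive schema.

```typescript
// Approximate shape of a container's network config; field names are assumptions.
interface NetworkConfig {
  public: "enable" | "disable";
  hostname: string;
  ports: string[]; // "loadBalancerPort:containerPort"
}

// Two public containers in the same environment, both exposing port 80.
const api: NetworkConfig = { public: "enable", hostname: "api", ports: ["80:80"] };
const web: NetworkConfig = { public: "enable", hostname: "web", ports: ["80:80"] };

// Because both expose port 80, the load balancer handles them in HTTP mode and
// routes requests by domain/hostname, so they can share a single public IP.
for (const c of [api, web]) {
  console.log(`${c.hostname}: forwarding ${c.ports.join(", ")}`);
}
```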
Connecting Domains
The first step to getting public traffic into a container is to create a LINKED Record and point it at the desired container or deployment. Cycle will maintain the LINKED Record so that all traffic hitting that domain is routed to the load balancer sitting in front of the target container.
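Conceptually, a LINKED Record simply ties a domain to a container, and Cycle keeps the record resolving to the right load balancer. The sketch below is only a conceptual model; the field names are hypothetical and do not reflect Cycle's actual DNS API.

```typescript
// Conceptual model of a LINKED Record: field names here are hypothetical and
// do not reflect Cycle's actual DNS API.
interface LinkedRecord {
  domain: string;  // e.g. "app.example.com"
  target: string;  // the container (or deployment) the record points at
  tls: boolean;    // whether the platform should provision a certificate
}

const record: LinkedRecord = {
  domain: "app.example.com",
  target: "web",
  tls: true,
};

// Cycle keeps the record resolving to the load balancer in front of `target`,
// so traffic for the domain always enters through that environment's load balancer.
console.log(`${record.domain} -> load balancer -> ${record.target}`);
```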
Declarative Port Forwarding
When ports are exposed on a container with public networking set to enable (via the container's network config), the load balancer automatically forwards inbound traffic from a target domain on the specified port(s) to an instance of that container. Any changes made to the container's network configuration are picked up automatically by the load balancer.
For example, say a container has a LINKED Record pointing to it for the domain cycle.io, the container has public networking set to enable, and the port 80:80 exposed. With this configuration, all traffic hitting http://cycle.io will be forwarded to an instance of the load balancer in the environment where that container is deployed. From there, the traffic is passed from the load balancer (coming in over port 80) to an instance of the container over the private network behind the load balancer, entering the container over port 80.

If we change the ports to 80:3000, the ingress traffic from http://cycle.io (over port 80) will be routed to the container over the private environment network on port 3000.
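The sketch below restates that example. The network config shape approximates Cycle's, but the exact field names are assumptions; the point is that the left side of a port mapping is the load balancer's ingress port and the right side is the container's port.

```typescript
// Approximate container network config; field names are assumptions.
interface NetworkConfig {
  public: "enable" | "disable";
  hostname: string;
  ports: string[]; // "ingressPort:containerPort"
}

// http://cycle.io -> load balancer :80 -> container :80
const sameStack: NetworkConfig = { public: "enable", hostname: "web", ports: ["80:80"] };

// http://cycle.io -> load balancer :80 -> container :3000
// Only the right-hand side changes: ingress still arrives on port 80, but the
// load balancer hands it to the container on port 3000 over the private network.
const remapped: NetworkConfig = { public: "enable", hostname: "web", ports: ["80:3000"] };

for (const cfg of [sameStack, remapped]) {
  const [ingress, container] = cfg.ports[0].split(":");
  console.log(`LB port ${ingress} -> container port ${container}`);
}
```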
Features
In addition to the declarative and automated network management, Cycle load balancers provide many additional features with little or no extra configuration required.
Automatic TLS/SSL Termination
For LINKED records with TLS enabled, a TLS/SSL certificate is automatically generated by the platform and injected into the load balancer, where automatic TLS termination is performed for traffic directed to a container with port 443:80 exposed.
This configuration means that any ingress traffic to the load balancer over port 443 (HTTPS) is automatically decrypted and forwarded to the container over port 80 (HTTP), reducing code complexity since the container process does not need to know or care about HTTPS traffic.
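As a sketch (using the same assumed config shape as above), a container behind a TLS-enabled LINKED Record would expose 443:80 so the load balancer terminates HTTPS on 443 and forwards plain HTTP to the container on 80.

```typescript
// Approximate container network config; field names are assumptions.
interface NetworkConfig {
  public: "enable" | "disable";
  hostname: string;
  ports: string[]; // "ingressPort:containerPort"
}

// HTTPS arrives at the load balancer on 443, is decrypted there using the
// platform-provisioned certificate, and reaches the container as HTTP on 80.
const web: NetworkConfig = {
  public: "enable",
  hostname: "web",
  ports: ["443:80"],
};

console.log(`TLS terminated at LB :443, container receives HTTP on :${web.ports[0].split(":")[1]}`);
```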
Sticky Sessions
The Cycle load balancer supports sticky sessions: the ability for the load balancer to maintain a persistent connection to an instance of the destination container. Sticky sessions ensure that an in-progress session is not lost as a result of subsequent requests being routed to a different instance.
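Conceptually, sticky sessions work by keying on something stable about the client so that repeat requests land on the same instance. The sketch below uses a simple hash of the client address; it is only a conceptual illustration and not necessarily how Cycle's load balancer implements stickiness.

```typescript
// Conceptual sticky-session routing: hash something stable about the client so
// repeat requests map to the same instance. Not Cycle's actual implementation.
const instances = ["instance-a", "instance-b", "instance-c"];

function hash(s: string): number {
  let h = 0;
  for (const ch of s) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

function pickInstance(clientAddress: string): string {
  return instances[hash(clientAddress) % instances.length];
}

// The same client address always maps to the same instance, so an in-progress
// session isn't handed to a different container mid-conversation.
console.log(pickInstance("203.0.113.7"));
console.log(pickInstance("203.0.113.7")); // same result every call
```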
Automatic HTTP->HTTPS Redirection
Hosted Zone Required
Automatic redirection to HTTPS is only supported for DNS records under hosted zones.
By default, Cycle configures load balancers to automatically redirect traffic from HTTP to HTTPS if there is a TLS certificate configured for the record. The container will need to expose 443:80 to handle the automatically decrypted traffic.
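Conceptually, the redirect is a small check at the edge: if a request arrives over plain HTTP and a certificate exists for the host, the load balancer answers with a redirect instead of forwarding it. The sketch below is a conceptual illustration only, not Cycle's implementation.

```typescript
// Conceptual HTTP -> HTTPS redirect at the load balancer; illustration only.
const hostsWithCertificates = new Set(["app.example.com"]);

interface Request {
  host: string;
  path: string;
  scheme: "http" | "https";
}

// Returns a redirect Location when the request should be upgraded, or null
// when it can be forwarded to the container as-is.
function redirectLocation(req: Request): string | null {
  if (req.scheme === "http" && hostsWithCertificates.has(req.host)) {
    return `https://${req.host}${req.path}`;
  }
  return null;
}

console.log(redirectLocation({ host: "app.example.com", path: "/login", scheme: "http" }));
// -> "https://app.example.com/login"
```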
Destination Pool Management
By utilizing metrics from destination instances, the load balancer will automatically remove unhealthy instances from the pool of instances that it can route traffic to. This ensures all incoming traffic will hit healthy instances.
The load balancer will also optimize routing based on destination latency, prioritizing faster connections. This is configurable in the load balancer settings.
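The sketch below illustrates the idea: drop instances reporting as unhealthy, then prefer lower-latency destinations among the rest. It is a conceptual model only; the metrics and selection logic Cycle actually uses are internal to the platform.

```typescript
// Conceptual destination pool: remove unhealthy instances, then prefer the
// lowest-latency healthy destination. Illustration only, not Cycle's internals.
interface Destination {
  instance: string;
  healthy: boolean;
  latencyMs: number; // recent observed latency to this instance
}

const pool: Destination[] = [
  { instance: "web-1", healthy: true, latencyMs: 4 },
  { instance: "web-2", healthy: false, latencyMs: 2 }, // failing health checks
  { instance: "web-3", healthy: true, latencyMs: 9 },
];

function pickDestination(destinations: Destination[]): Destination | undefined {
  return destinations
    .filter((d) => d.healthy)                      // unhealthy instances never receive traffic
    .sort((a, b) => a.latencyMs - b.latencyMs)[0]; // prioritize the fastest connection
}

console.log(pickDestination(pool)?.instance); // "web-1"
```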
Native Load Balancer (v1) vs HAProxy
Cycle supports two load balancer implementations that can be selected on a per-environment basis.
Native Load Balancer (v1)
The native load balancer (v1) was built in-house by the Cycle team. It is designed to integrate with the platform at a very low level in order to provide features and functionality that are unavailable with HAProxy.
The native load balancer has been thoroughly tested in production use cases, often exceeding the performance of HAProxy, while providing advanced features and constant updates and improvements.
Some of the notable features of the native load balancer are:
- Integrated advanced metrics
- mTLS support
- Built-In web application firewall (WAF)
- Zero-downtime route additions
- Support for proxying, caching, and rule-based redirects
- Support for TCP, UDP, and HTTP/S controllers
HAProxy
HAProxy was the original load balancer used by the platform before the native load balancer was released.
HAProxy can still be selected as an environment's load balancer, though it lacks many features supported by the native load balancer and will eventually be phased out. It also has no support for monitoring or metrics.
One of the main drivers for phasing out HAProxy is that it does not integrate well into the platform, and requires "stop the world" updates when new instances come online, which can lead to a few milliseconds of downtime.
Which Load Balancer Should I Choose?
You should always opt to use the native load balancer (v1). HAProxy is slowly being phased out, while the native load balancer is constantly gaining new functionality and performance enhancements.
Binding to the Host IP
The load balancer can be bound to the underlying server's IP address. This is useful in a few circumstances, namely when traffic should leave the host from the same IP address on which it entered, or when it's not possible to allocate additional floating IPs (such as when dealing with on-premises infrastructure).
When binding a load balancer to the server's IP address, that server will only support one environment.
Configuring Load Balancers on Cycle
Learn how to set up, configure and run load balancers on the Cycle platform.