RabbitMQ is a messaging broker that helps different parts of a software application communicate with each other. Think of it as a middleman that takes care of sending and receiving messages so that everything runs smoothly. Since its release in 2007, it's gained a lot of traction for being reliable and easy to scale. It's a solid choice if you're dealing with complex systems and want to make sure data gets where it needs to go.
If you're reading this article, you've probably already decided to use RabbitMQ, so we'll keep the introduction short and move into configuration and deployment. As a final check before moving forward, here's what you'll need to complete this guide:
If you're new to Cycle, you may want to keep a link to the Cycle documentation open. This blog moves quickly through basic tasks like creating containers and environments, as well as updating configurations. More detailed information and screenshots can be found in the docs. If you get caught up at any point, feel free to reach out to us on our public Slack channel.
Today we'll walk through both a single node deployment and clustering.
The image we'll be working with is rabbitmq:3.11-management, which is available from Docker Hub. Create an image source from that target and import the image.
To get started on deploying the single node RabbitMQ instance, create an environment to work out of. Then use the “Deploy Container” form to create the stateful (disc) RabbitMQ instance. Public Network can remain disabled and the management UI will still be reachable when we're done.
After creating the container, navigate to the container config and move down to the deployment configuration options. Enable the “Stateful Instances” option and check the box under “Use Base Hostname”. This will use the hostname you've assigned to the container instead of prepending it with an integer (as is normally the case with stateful instances on Cycle). This is only needed because RabbitMQ actually parses the provided hostname on startup to use during initialization. Make sure to save the configuration changes.
Now that we have our container, we're going to add two scoped variables. Head over to the environment scoped variables and create the following two variables.
| Identifier | rabbitmq-conf |
|---|---|
| Source Type | raw |
| Source | `management.tcp.port = 15672`<br>`management.tcp.ip = ::` |
| Scope | Containers > *your container* |
| Access | File |
| Identifier | RABBITMQ_CONFIG_FILE |
|---|---|
| Source Type | raw |
| Source | `/var/run/cycle/variables/rabbitmq-conf` |
| Scope | Containers > *your container* |
| Access | Environment Variable |
The first scoped variable is a simple configuration file that sets the management UI port to 15672 and binds to all addresses on that port within the container. The second variable tells RabbitMQ where on the file system to find that configuration.
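The injected rabbitmq.conf uses RabbitMQ's sysctl-style `key = value` format. As a rough illustration of how such directives break down into key/value pairs (this is our own sketch, not RabbitMQ's actual parser, which is stricter):

```python
# Rough sketch: split sysctl-style "key = value" directives, like the
# ones in the rabbitmq-conf scoped variable above, into key/value pairs.
# Illustrative only; RabbitMQ's real config parser handles more cases.
def parse_conf(text):
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

sample = """\
management.tcp.port = 15672
management.tcp.ip = ::
"""
print(parse_conf(sample))
# {'management.tcp.port': '15672', 'management.tcp.ip': '::'}
```
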
With all this in place, head to the environment dashboard and use the “Start All” button to start the containers. This will start the discovery container and the RabbitMQ container. We want to be able to interact with the RabbitMQ management UI, but we also don't want to make the instance publicly available. The solution is to enable the VPN container.
From the environment dashboard (which you should already be on), scroll down until you see a section called “Services”. You will see options to “Manage” the Load Balancer, Discovery, and VPN. In order to enable the VPN, the Load Balancer must be running. To manually start the Load Balancer, click “Manage” and then hold down the start button at the top of the modal. Press escape to exit the modal.
Click on the “Manage” button next to VPN. In the modal, click “Enable”, and then from the modal navigation on the left select “Auth”. In the Auth section under “Access Control”, check the Allow Cycle Logins checkbox and click “Update VPN Access”. Navigate back to the modal's Dashboard; after about 3-5 minutes you should be able to “Generate VPN files” and then download them. Once you have them downloaded, add the connection information to your local VPN client and connect to the VPN.
If you've enabled Cycle logins on the Auth page, you can log in with the same credentials you use to log into Cycle. Once connected, navigate to http://*hostname*:15672 (where hostname is the hostname of your container). There you should find the RabbitMQ management UI login. The default login information is:
username: guest
password: guest
And that's all there is to it for a single node. If you'd like, use the two-way console to log into the instance and explore the CLI a bit.
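The same default credentials also work against the management HTTP API, which is handy for scripting checks over the VPN. A hedged sketch (hostname is a placeholder for your container's hostname):

```shell
# Query the management HTTP API over the VPN using the default
# guest/guest credentials (replace hostname with your container's hostname).
curl -u guest:guest http://hostname:15672/api/overview
```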
Onto the fun part, clustering.
We're going to use similar scoped variables in this setup as we did with the single node setup. For simplicity's sake, create a new environment.
In the new environment, create three new containers using the rabbitmq management image we imported and used for the single instance. Give the containers the hostnames mqnode1, mqnode2, and mqnode3, respectively. For each container, enable config > deployment > stateful > use base hostname.
Create the scoped variables:
| Identifier | rabbitmq-conf |
|---|---|
| Source Type | raw (blob) |
| Source | `cluster_formation.peer_discovery_backend = classic_config`<br>`cluster_formation.classic_config.nodes.1 = rabbit@mqnode1`<br>`cluster_formation.classic_config.nodes.2 = rabbit@mqnode2`<br>`cluster_formation.classic_config.nodes.3 = rabbit@mqnode3`<br>`management.tcp.port = 15672`<br>`management.tcp.ip = ::` |
| Scope | Containers > mqnode1, mqnode2, mqnode3 |
| Access | File |
| Identifier | RABBITMQ_CONFIG_FILE |
|---|---|
| Source Type | Raw |
| Source | `/var/run/cycle/variables/rabbitmq-conf` |
| Scope | Containers > mqnode1, mqnode2, mqnode3 |
| Access | Environment Variable |
| Identifier | RABBITMQ_ERLANG_COOKIE |
|---|---|
| Source Type | Raw |
| Source | `G4J9kFQbwE28ClnSZXT9` |
| Scope | Containers > mqnode1, mqnode2, mqnode3 |
| Access | Environment Variable |
| Identifier | RABBITMQ_CTL_ERL_ARGS |
|---|---|
| Source Type | Raw |
| Source | `-proto_dist inet6_tcp` |
| Scope | Containers > mqnode1, mqnode2, mqnode3 |
| Access | Environment Variable |
| Identifier | RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS |
|---|---|
| Source Type | Raw |
| Source | `-kernel inetrc '/etc/rabbitmq/erl_inetrc' -proto_dist inet6_tcp` |
| Scope | Containers > mqnode1, mqnode2, mqnode3 |
| Access | Environment Variable |
If you remember, in the single instance config we set the management port to bind all (listen on all IPv4 and all IPv6 addresses). With the clustering setup, we need to direct the instances to allow the clustering mechanisms to work on bind all as well (Cycle's default environment networking is IPv6!). This is set by the CTL_ERL and ADDITIONAL_ERL args. The ERLANG_COOKIE is used for authentication between nodes and is described here in the RabbitMQ docs.
If you're paying close attention, you'll see that the configuration file has changed: we've added clustering directives and used the classic config peer discovery backend. This works well here because we know the hostnames of the nodes in the cluster up front, so it's easy to define those values ahead of time.
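Since classic config peer discovery just enumerates node names (rabbit@hostname), the Source block above is easy to generate for any known set of hostnames. A small sketch; the helper name is our own, not part of any RabbitMQ tooling:

```python
# Hypothetical helper: render classic_config peer discovery directives
# for a known list of container hostnames. Node names take the form
# rabbit@<hostname>, matching the config file shown above.
def classic_config_nodes(hostnames):
    lines = ["cluster_formation.peer_discovery_backend = classic_config"]
    for i, host in enumerate(hostnames, start=1):
        lines.append(
            f"cluster_formation.classic_config.nodes.{i} = rabbit@{host}"
        )
    return "\n".join(lines)

print(classic_config_nodes(["mqnode1", "mqnode2", "mqnode3"]))
```
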
Head to the environment dashboard and start the containers using “Start All”. Follow the steps from the previous section to start the Load Balancer and VPN, then connect to the VPN. Log into the management interface using the mqnode1 hostname and view the configuration. From here you can even set up quorum queues or other additional configuration.
If you want to verify that the nodes are clustered, you can use the two-way console to log into any of the instances and run rabbitmqctl cluster_status.
And that's all there is to it. After clustering the nodes and getting everything online, you'll still need to tune the configuration to your needs, but this should get you 95% of the way there.
If you do take this starting point and tune it further we'd love to hear about your experience and configuration in our public Slack, listed below.
💡 Interested in trying the Cycle platform? Create your account today! Want to drop in and have a chat with the Cycle team? We'd love to have you join our public Cycle Slack community!