In the modern web-app space, there's been a trend going around that I like to describe as “getting back to basics”. Over the years, the tooling around building web apps has grown more and more complex. In that time, we've strayed further from browser primitives toward highly abstracted, JavaScript-heavy solutions to problems our browsers solved back in the '90s.
Remix is a framework that prides itself on taking advantage of the things browsers do best, with a bit of modern syntax and some quality-of-life improvements on top. The result is a powerful way to build server-side rendered applications that leans on native browser forms and the standard request-response model, making it remarkably easy to produce full-stack web applications that feel polished and work well.
Remix is the perfect framework for small to medium-high complexity web applications. It's a bit overkill if you're just doing a static site, and can be missing some things you'd want in an extremely complex app, particularly around data streaming, but for the 80% use case, it is an ideal choice. I personally use Remix for the vast majority of things I work on.
When paired with containers, Remix apps become self-contained and can be easily scaled. Creating a tiny, optimized container image for a Remix app, however, can be a bit of a challenge, and deploying it to scalable infrastructure, even more so.
Let's learn how to build an optimized container image for a Remix app and get it online in a production environment.
There are two things I reach for when building optimized container images: a tiny base image, and build stages. In addition, any Node application should be very careful about which dependencies are dev dependencies vs. production dependencies, and the `package.json` should reflect that. When combined with build stages, this ensures only the absolute minimum required packages are included in our final image.
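For example, in a default Remix template the split looks roughly like this (exact packages and versions will vary by project; this is just an illustration):

{
  "dependencies": {
    "@remix-run/node": "^1.16.0",
    "@remix-run/react": "^1.16.0",
    "@remix-run/serve": "^1.16.0",
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "devDependencies": {
    "@remix-run/dev": "^1.16.0",
    "eslint": "^8.38.0",
    "typescript": "^5.0.0"
  }
}

Anything under devDependencies only matters at build time and never needs to ship in the final image.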
.dockerignore
Before getting started, let's create a `.dockerignore` file in the root of our project. It ensures that, as we build our image, we don't copy in any unnecessary files that would increase our build time. Add the following lines to it:
.cache/
node_modules/
build
We don't want to pull in any cache or build files, and we definitely don't want our local `node_modules`.
With that out of the way, let's get started.
base
The base stage defines the common image that all other stages build on, so if we ever need to upgrade or swap it, we only have to change it in one place instead of in every stage.
As mentioned earlier, a key decision that will affect our final container image is our choice of base image. The most common choice is a flavor of Linux called Alpine. The alpine container image is a bare-bones distro that ships only the necessities, and weighs in at just about 5MB. That sounds good, but installing Node and everything it needs to run would be pretty tedious. Luckily for us, there is a node container image based on Alpine we can build upon. So, without further ado, let's start our `Dockerfile`.
Create a file called `Dockerfile` (no extension) in the root directory of your Remix app, and add the following:
FROM node:20.2.0-alpine3.18 AS base
Great job! Wipe the sweat from your brow, and let's break this line down.
The key here is the `AS base` bit, which creates a new stage for our build and lets our other stages inherit from this image. In addition, I've specified the `node` image, pinned it to version `20.2.0`, and chosen the Alpine flavor at version `3.18`. This gives us a great starting point to flesh out the rest of the image.
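If you're curious just how much the base image choice matters, you can pull both flavors and compare them locally (exact sizes vary by version):

docker pull node:20.2.0-alpine3.18
docker pull node:20.2.0
# lists both images with their sizes; the alpine variant is a fraction of the Debian-based default
docker images node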
In this example, I'll be using `npm` as the package manager, since it's the default Node package manager. If you're using yarn or pnpm, you should be able to adjust the instructions in the next section accordingly. If you're using Remix + Deno, the base image will also need to be updated to support Deno.
deps
The `deps` stage will set up our `node_modules` dependencies. Here we take advantage of the second key concept for optimized images: layer caching.
As I'm sure you're well aware, there is hardly anything more massive than a `node_modules` folder. When building our Docker container, the worst thing that could happen is an uncached `npm install` every single time we build. Instead, we'll copy in the `package.json` and `package-lock.json` files separately and run `npm install` before copying in the rest of the project. This way, even if we change our source files, we won't need to reinstall `node_modules` unless the `package.json` file is modified. Docker is smart enough to reuse the cached layer if the underlying files weren't modified.
Paste the following into the `Dockerfile` after the previous stage:
FROM base AS deps
WORKDIR /app
COPY package*.json ./
RUN npm install
We set up our new stage as a layer on top of `base`. You'll also notice that we're setting a working directory of `/app`. This can be anything you want, but it's good practice not to operate in the root directory of the container. Note that `npm install` here pulls in everything, dev dependencies included - that's fine, since this stage feeds the build, and we'll install production-only dependencies separately later. With our dependencies cached, we can move on to the next stage.
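One optional tweak worth considering: since we copy in `package-lock.json`, you could use `npm ci` here instead of `npm install`. It performs a clean install pinned exactly to the lockfile and fails fast if the lockfile is out of sync, which is usually what you want inside a container build:

# drop-in replacement for the RUN line above
RUN npm ci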
builder
Next up, we need to build our Remix project so that we get our build directory, and have something to run in the container:
FROM deps AS builder
WORKDIR /app
Just as before, we set our `WORKDIR` to `/app`. This time, however, we're layering on top of our previous stage, so we inherit everything inside of it, such as the `node_modules` we installed. We'll need to copy in the remainder of our project files and run the build script provided by Remix:
COPY . .
RUN npm run build
This copies all the files from our build context into the image and runs the Remix build script defined in the `package.json` file (based on the default Remix template).
At this point, you can test that everything is working correctly by running `docker build -t remix-test .` and checking that an image was created. Then, modify a source file under `app/`. When you run the build again, you'll notice that Docker has cached our `npm install` command and only executed the `COPY` and `RUN` commands after it. Updating files and testing our container locally should be very speedy now.
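For example, assuming the default Remix layout where source files live under `app/`, you can watch the cache at work:

# cold build: runs npm install
docker build -t remix-test .
# change a source file, then rebuild; only layers after COPY . . rerun
touch app/root.tsx
docker build -t remix-test .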
prod-deps
This is where things get interesting. If we installed the production dependencies at the end of the previous stage with `npm install --production`, any time a source file changed, we'd need to reinstall the production deps on every build.
Instead, we're going to build a parallel stage off of `deps` and install our production dependencies there. Since this stage doesn't rely on a previous `COPY` step, it can be cached so that only changing `package.json` will trigger a reinstall.
FROM deps AS prod-deps
WORKDIR /app
RUN npm install --production
Now, our `builder` stage and `prod-deps` stage can run in parallel (with BuildKit, the default builder in modern Docker, independent stages are built concurrently). Sharing a common base layer allows us to do some neat tricks like this in the name of speed.
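One caveat: on newer versions of npm, the `--production` flag is deprecated in favor of `--omit=dev`. If your base image ships a recent npm, the equivalent stage would be:

FROM deps AS prod-deps
WORKDIR /app
# modern spelling of --production; installs only non-dev dependencies
RUN npm install --omit=dev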
runner
What good is a container that doesn't run? Our final stage will instruct the underlying container runtime what to…well, run.
We're going to build off our very first `base` stage and copy in only what we need from the other stages, constructing the absolute minimum needed to run our application.
First things first, let's set up our stage:
FROM base AS runner
WORKDIR /app
Again, we pull from the base stage and set our working directory to `/app`. Now, since this is a production container, we want to set up another user so we're not running things as root. In Alpine Linux, that's achieved with the following:
RUN addgroup --system --gid 1001 remix
RUN adduser --system --uid 1001 remix
USER remix
We've created a new group and a new user, both named `remix`, and then instructed Docker to use our new user going forward. The `--system` flag sets up the group and user without a home directory, and `--gid`/`--uid` hard-code the group and user IDs to 1001 (IDs of 1000 and above are the Linux convention for “normal” user accounts).
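As a small optional optimization, the two `RUN` instructions can be collapsed into a single layer; the behavior is identical:

RUN addgroup --system --gid 1001 remix && \
    adduser --system --uid 1001 remix
USER remix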
Next, we need to pull in our build artifacts from the previous stages. Under the hood, each stage is just another image Docker has built, so we're able to copy what we want out of the previous stages and pull it into our current one. The advantage is that we only pull out the results of the build, and don't need to install any dev dependencies into our final stage, significantly reducing the size of the final container.
To achieve this, we use the `--from=<stage>` syntax of the `COPY` command:
COPY --from=prod-deps --chown=remix:remix /app/package*.json ./
COPY --from=prod-deps --chown=remix:remix /app/node_modules ./node_modules
COPY --from=builder --chown=remix:remix /app/build ./build
COPY --from=builder --chown=remix:remix /app/public ./public
This copies our `node_modules` folder and the `package.json` and `package-lock.json` files into our current working directory from the `prod-deps` stage, and the actual build artifacts in the `build` and `public` folders from the `builder` stage. These files are essentially all we need for a working Remix app.
You'll also notice that we're passing the `--chown=remix:remix` flag. This just tells Docker to set the owner and group to our previously created `remix` user, preventing any permission issues.
Finally, we need to set up an entrypoint into the container - basically, what the container process runs when started.
ENTRYPOINT [ "node", "node_modules/.bin/remix-serve",
"build/index.js"]
Normally you could use `npm run start`, since that's what the default Remix template provides, but in general it's good practice to run the `ENTRYPOINT` directly through `node` instead of `npm`. npm doesn't reliably forward signals like SIGTERM and SIGINT to the child process, which makes it hard to shut the server down gracefully. This way, we use the `remix-serve` binary that was placed in `node_modules/.bin` during `npm install`, and execute our `build/index.js` file directly with `node`.
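One more wrinkle: inside the container, `node` runs as PID 1, which doesn't get the default signal dispositions a normal process would. If your server doesn't register its own SIGTERM handler, you can pass Docker's `--init` flag at runtime, which wraps the process in a minimal init that forwards signals correctly:

docker run --init -p 3000:3000 --rm remix-test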
To give it a run, first rebuild with `docker build -t remix-test .`, and then run `docker run -p 3000:3000 -it --rm remix-test`. Navigate to `localhost:3000` and you should see your application running.
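You can also sanity-check that the app isn't running as root by overriding the entrypoint; this should print `remix`:

docker run --rm --entrypoint whoami remix-test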
Putting it all together, here's our final Dockerfile:

FROM node:20.2.0-alpine3.18 AS base

FROM base AS deps
WORKDIR /app
COPY package*.json ./
RUN npm install

FROM deps AS builder
WORKDIR /app
COPY . .
RUN npm run build

FROM deps AS prod-deps
WORKDIR /app
RUN npm install --production

FROM base AS runner
WORKDIR /app
RUN addgroup --system --gid 1001 remix
RUN adduser --system --uid 1001 remix
USER remix
COPY --from=prod-deps --chown=remix:remix /app/package*.json ./
COPY --from=prod-deps --chown=remix:remix /app/node_modules ./node_modules
COPY --from=builder --chown=remix:remix /app/build ./build
COPY --from=builder --chown=remix:remix /app/public ./public
ENTRYPOINT ["node", "node_modules/.bin/remix-serve", "build/index.js"]
Congratulations! You've got a working Docker build that is optimized for production deployment. Now, as you modify your source files, only the `builder` and `runner` stages should rebuild, while the stages that install `node_modules` are preserved. This gives you quick iteration when testing production builds. In addition, the final image size is relatively small: mine weighs in at about 253MB. Most of that overhead is just the Node runtime; an equivalent image on an Ubuntu base could easily scale to over 1GB.
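If you want to see exactly where those megabytes come from, `docker history` breaks the image down layer by layer; expect the base image and the `node_modules` COPY to dominate:

docker images remix-test
docker history remix-test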
I'm also going to push our image to Docker Hub so that it can be deployed to Cycle. You could instead have Cycle import the Dockerfile from a git repository and do the build there, or push to another registry, but for simplicity, we'll use Docker Hub.
Retag the image so we can push it to our Docker Hub account, replacing `my-account` with your username:

docker tag remix-test my-account/remix-test
docker push my-account/remix-test
Now that we've got our production container, let's put our app on the internet. I'm going to use https://cycle.io to run it, so if you don't have an account yet, be sure to set one up! Cycle is a fantastic choice for running production-ready apps and platforms that need to be reliable and scalable. While our app may be small right now, I'm sure you're doing some big things and need a powerful DevOps platform to run it.
First things first, let's get a server to run our app on. Cycle is multi-cloud, but I'm going to use a Vultr VM to save on cost. You can choose whichever provider(s) you want. Navigate to the providers section under “Infrastructure” and click “Add”, then enter your provider's credentials.
Then, select what type of server you'd like to deploy by navigating to “Infrastructure” > “Servers” and clicking “+ Deploy”.
Select a location, then pick a server:
Finally, click and hold “+ Deploy”!
Next, we need to get our image imported into Cycle. After you've logged in, navigate to Images > Sources on the left, and then click the “+ Create” button.
Put in a name, such as Remix Test, and a description if desired. Select Docker Hub from the “Type” dropdown, fill in `my-account/remix-test` for the image name and `latest` for the tag, being sure to replace `my-account` with your actual username.
Now that our source is set up, let's pull our image in from Docker Hub by clicking and holding the “Import Image” button.
The next step is to create an environment, which is basically everything our container needs to run and be accessible via the internet. Navigate to “Environments” and click “+ Create Environment”.
Add a name, and select the cluster you created when you deployed your server:
Finally, it's time to get our container online. In the environment, click “Deploy Container” and fill out the form like below:
We'll use the platform defaults, and use our recently imported image.
Finally, set networking to “Enable”, and add the ports as `80:3000`. This will map all normal HTTP traffic to our Remix server running on port `3000` inside the container. You can easily change this later.
Finally, hit “+ Deploy Container” and you'll be taken to the container modal, where we can hit the “start” button in the top right-hand corner.
In a minute or two, the container will be online and we can visit it via the auto-generated “Domain” on the dashboard!
Getting a production-ready Remix App containerized and deployed in a professional capacity is a little bit of upfront work, but will pay off in the long run as your application scales in complexity. Using multi-stage builds, we're able to efficiently cache our image layers for quick rebuilds and testing, and keep our final image as tiny as possible to reduce resource consumption and deployment times.
Using Cycle, we're able to easily deploy our container to infrastructure we own, and have all the underlying complexity of networking handled for us. As our application grows, we'll be able to add a domain name and TLS certificates, add a database and connect it by deploying into the same environment, scale our app across multiple cloud providers, and a lot more. We can set up a pipeline for automated redeployment while we continue to develop as well. Cycle makes the DevOps side easy, so we can focus on building our application - hence the term “LowOps”.
💡 Interested in trying the Cycle platform? Create your account today! Want to drop in and have a chat with the Cycle team? We'd love to have you join our public Cycle Slack community!