A Brief Introduction to Containers
Whether you’re new to development or a seasoned developer, containers have proven to be game-changing in building, testing, and deploying applications.
This article is meant as a quick introduction to the world of containers.
To get started, you’ll need to install Docker to follow along with the examples. If you haven’t installed Docker yet, head over to their website to get it installed. There are free versions available for all major operating systems.
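Once Docker is installed, you can confirm it’s working by checking the version from a terminal:
docker --version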
1. What is a Container?
A container is a software package that offers maximum portability while being infrastructure agnostic. By containerizing your application, you’re building a package that holds not only your application, but all of the dependencies needed for that application to run.
From an execution standpoint, containers are a major divergence from traditional virtual machines. Where virtual machines host entire guest operating systems that act like stand-alone servers, containers are simply isolated processes running directly on the host operating system via a shared kernel.
The nature of this packaging allows you to import what is known as a container image and use that to deploy your software. Let’s take a closer look at what a container process looks like running next to a system process.
As you can see, the container process acts just like a normal system process, but it utilizes its own thin operating system layer (a rootfs), as opposed to the host operating system itself.
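A quick way to see this for yourself, assuming Docker is installed and you’re on a Linux host (where containers share the host’s kernel directly), is to start a throwaway container and then look for its process in the host’s process list. The container name here is arbitrary:
docker run -d --name sleepy alpine sleep 300
ps aux | grep sleep
The sleep command running inside the container shows up like any other process on the host.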
2. Creating a Container
The three things you’ll need to know to create a container are:
- What is a container image?
- What is a Dockerfile?
- How do I build the image?
To create a container image, you’ll use a Dockerfile which is a list of instructions used to move files, install dependencies, and attach volume/network meta information. Container images are atomic, which means they will not change between deployments — think of burning an OS image to a physical disk. Each time you start a container, you’ll get the exact same result, regardless of the underlying environment.
What happens when you build an image:
- The set of instructions used to make your image is read and processed.
- After each instruction is processed, the result is stored as a layer.
- Once all instructions have been executed, the resulting layers are merged into a container image.
- This atomic container image can now be deployed to Cycle, hosted locally, or shared with other developers.
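If you’re curious what those layers actually look like, docker history lists the layers of any image you have locally. For example, after pulling the public node:alpine image:
docker pull node:alpine
docker history node:alpine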
In the past, standardizing development, testing, and production environments was a major problem. Now you can be confident that you’ll observe the same behavior on any system and that’s a really fantastic benefit!
The set of instructions we need in order to build the image is called a Dockerfile.
Let’s pretend for a minute you just installed a new virtual machine on your host operating system. You want to see how your application will run on CentOS 7, so you log in, upload your application, install your dependencies, and secure the network endpoints.
With an individual server, the process above is easy — but when you need to reproduce this result thousands of times, host it on multiple infrastructure providers, or share it with other developers, this can become very challenging.
Automation scripts can help, but what happens if you hit a network timeout while installing dependencies? What if the underlying OS at different infrastructure providers ships slightly different versions of its system binaries?
Containers are the answer to the above problems. By writing a Dockerfile with instructions similar to the process above, developers can easily create a fully atomic image that can be deployed with ease.
Docker describes a Dockerfile as, “A text document that contains all the commands a user could call on the command line to assemble an image”.
Below, we’ll dive into a few basic Dockerfile concepts.
Picking a Base Image
A Dockerfile will have a base or parent image, declared by the FROM instruction. This is like saying, “Hey Docker, I know I at least want these dependencies, so let’s start here.” For example, most Node applications will start with a line like:
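FROM node:alpine
The exact tag will vary by project; node:alpine is just the variant used in the example Dockerfile later in this article.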
Starting Your Main Process
The command used to start your application goes at the end of the Dockerfile, so that when the image is run, the container runtime knows how to start your process. Following our Node example, you could use this command to start the node process:
CMD ["node", "index.js"]
Copy, Run, and Other Commands
As you progress on your image-building journey, you’ll use other instructions such as COPY (copy files into the image), RUN (execute a command during the build), WORKDIR, ENV, and EXPOSE.
If you want to take a deeper dive, the full Dockerfile reference is available in Docker’s documentation. Now that we have a basic Dockerfile, we can build our image.
Building Your Image
- Make a new directory with a Dockerfile in it.
- On the first line of the Dockerfile, write a FROM instruction such as FROM node:alpine.
- Save the Dockerfile and then run:
docker build -t myfirstimage .
Now you have your first image; it’s that easy. Feel free to modify your new Dockerfile, and when you want to rebuild the image, use the docker build command from above. I’ve included a basic example of a Dockerfile below.
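A minimal sketch of such a Dockerfile for a simple Node app might look like this (the port and entry point are placeholders; adjust them for your own project):

```Dockerfile
# Start from the official Node image built on Alpine Linux
FROM node:alpine

# Set the working directory inside the image
WORKDIR /app

# Copy the package manifests first so this layer can be cached
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application source
COPY . .

# Document the port the app listens on (placeholder)
EXPOSE 3000

# Start the app when a container is created from this image
CMD ["node", "index.js"]
```

Once it’s built, docker run myfirstimage will start a container from the image (assuming your app’s entry point really is index.js).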
There are two ways to approach building different versions of your image.
- Create a different image for each build. 👎
- Tag the image with a container image tag name. 👍
To tag an image, simply append the tag name to the image name when running the docker build command:
docker build -t myfirstimage:v1 .
Tagging an image can convey information like the version of the application or the underlying operating system that’s being used. In the example Dockerfile above, you can see that the node base image I’m using is tagged with alpine. This lets developers know that this node image was built on top of Alpine Linux, a minimalist distribution known for its small size. Building an image without specifying a tag name will result in a tag of latest.
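You can check the repository and tag names of your local images at any time with:
docker images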
When you build an image several times, Docker will use cached layers to build your image if possible; this is called the build cache.
As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image. ~Docker
You can use the build cache to your advantage by copying the files that change most often closest to the end of your Dockerfile. Then, when you rebuild your image, Docker will reuse the cached layers that haven’t changed, so you won’t have to wait for them to build again.
For example, when building a Node app, copy your package.json files and run npm install before copying the rest of the app. That way, if you need to rebuild due to changes in your source code, your image can reuse the layers created by npm install thanks to the cache. This can save a few minutes per build.
This pattern is included in the example Dockerfile above.
Next, let’s take a look at how to share our images so others can create containers from them.
Docker registries are awesome! There is a free and easy-to-join public registry hosted by Docker called DockerHub. After you sign up, you can log in by following these steps:
- Run docker login in your terminal.
- Enter your username and password (this is the username and password you used to create your DockerHub profile).
Next, we’ll take a look at tagging your images so they can be pushed to DockerHub.
Tag Properly for Docker Hub
Earlier we mentioned adding a tag name to a container image. The Docker CLI also includes another form of tagging: you run docker tag with an existing image:tag name followed by the new image:tag name you want to give it. It looks like this:
docker tag image:tag newname:newtag
We need to use this form of tagging to prepare an image for DockerHub. Tag your image using the format username/image:tag, where username is your DockerHub username.
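For example, to retag the image we built earlier (substitute your own DockerHub username for username):
docker tag myfirstimage:v1 username/myfirstimage:v1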
Pushing to Docker Hub
To push a container image to DockerHub simply enter this command:
docker push username/image:tag
If it’s the first time this image has been pushed to DockerHub, a new repository will be created. Pushing the same image with a new tag name adds that tag to the existing repository.
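Anyone who wants to use your image can then pull it down with the matching command:
docker pull username/image:tag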
If you’d like to have full control over your registry, you may want to consider a self-hosted option. A private registry gives you more control over who can access your images and where they are stored.
Today we took a very quick look at containers, walking through what you need to create and share them with your peers. I hope you will take this information and continue on your journey towards becoming a container expert. If you’d like to learn more, be sure to check out these two great articles:
- Your First Website on Cycle — Using React and NGINX to serve a static website
- Stateful and Stateless Containers on Cycle
Still Have Questions?
If you want to dive in and learn more, head over to our Slack channel. Our community is growing, and our team hangs out there daily. Feel free to shoot us a message any time with your questions and we’ll be sure to respond!
Of course, for a more in-depth look at how to use Cycle, check out our documentation.