
Getting started with Docker on a VPS

Tristan P

6 Oct 2022 • 10 min read • Devops & Infrastructure, Open Source, Servers, Tutorials, Web Hosting

Chances are you’ve heard of Docker at some point in your professional life. You’re also likely aware that Docker has become an integral part of application development.

If you haven’t, I have great news for you - You’re in the right place!

In this post we’ll cover:

  • What is Docker?
  • What is a container?
  • How does Docker work?
  • Use cases for Docker
  • Installing Docker on your VPS
  • Post installation tips
  • What to learn next
  • Additional resources

We’re going to have a whale of a time!

Getting started

In order to follow along with this post, you'll want to have a fresh VPS that you plan to install Docker on.

I would suggest taking some time to look through the Katapult VPS configurator.

Why Katapult?

Well, that's easy!

It's a fully scalable VPS platform that can grow alongside your project and offers the following benefits:

  • Unlimited Bandwidth
  • Free monthly backups
  • Dedicated IP address included
  • PCI-DSS scan compliance
  • Complete flexibility
  • Best-in-class hardware
  • Real-time, automated DDoS protection
  • Super fast 100% NVMe flash storage
  • 60 Day Money back guarantee

Katapult's software-defined storage and enterprise hardware, combined with our enterprise-level network and unlimited bandwidth, make a Katapult VPS hard to beat!

The wonderful world of Docker

First things first, what is Docker and why should we care?

Docker themselves describe it as follows:

… an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.

Cutting through the jargon, Docker is an open-source container runtime that runs on Linux, Windows and macOS.

This means you can use Docker to package your application and its dependencies into containers (we’ll cover these in more detail shortly).

So why should you care?

The main draw of Docker is that it allows you to separate your applications from your infrastructure. Docker virtualizes the operating system of the computer it’s installed on, which makes your applications extremely portable.

Don’t just take my word for it either!

Today Docker is a market leader, and companies that you interact with every day incorporate it into their production and deployment pipelines, including big names like Spotify, Yelp, Sage, Shopify, Uber and eBay, just to name a few.

What is a container?

It's a commonly held belief that Docker is the first of its kind; however, this is not true. Containers have actually existed since the late 1970s!

In its simplest form, a container allows us to isolate kernel processes and trick these processes into believing that they are running on a completely new computer.

Some of you might be wondering whether this is just a virtual machine with extra steps, but containers actually differ from virtual machines in one key way - they share the kernel of the host operating system while only having their own specific binaries and libraries loaded with them.

What this means is that you don’t have to have a completely separate operating system (otherwise known as a guest OS) installed within your host OS. This makes your containers much smaller, faster and significantly more efficient.

A virtual machine will generally take around a minute to spin up and can be several gigabytes in size, whereas a container is typically only 400 - 600 MB and takes seconds to spin up rather than minutes. This is because a container doesn’t have to boot an entire guest operating system before running the process you actually intend it to run!

How does Docker work?

So far we have discussed what Docker is and given a general overview of containers; this is where we get a little more technical and look specifically at how Docker containers work.

Containers:

We already know that a container packages your application and all of its dependencies into a single unit that you can run on any Linux server. Docker utilises these containers with the following elements:

  • A daemon - This is used to build, run and manage containers.
  • A high-level API (Application Programming Interface) - This allows clients to ask the daemon to carry out complex operations with single requests.
  • A CLI (Command Line Interface) - This is the command-line tool we use to talk to the API and, through it, the daemon.
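
Once Docker is installed (we'll do this below), you can see this split between the CLI and the daemon for yourself. A quick, hedged example - the exact output will vary with your Docker version:

$ docker version   # the CLI calls the daemon's API and prints both client and server versions
$ docker info      # summarises the daemon's configuration: storage driver, containers, images and more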

Images and Layers

Docker containers package all of your code, libraries and dependencies together. This means that you can have multiple containers running on the same host, allowing you to use the host’s resources more efficiently.

This works by running each container as an isolated process in user space while taking up less space than a traditional VM, thanks to something known as layered architecture.

These layers are known as intermediate images, and a new one is created for each instruction in your Dockerfile when the image is built.

Below is an example of a Dockerfile with several commands:

# syntax=docker/dockerfile:1
FROM node:18-alpine
COPY . /app
RUN npm install --prefix /app
CMD ["node", "/app/app.js"]

Each instruction creates one layer:

FROM creates a layer from the node:18-alpine Docker image.

COPY adds files from your Docker client’s current directory.

RUN installs your application’s dependencies, in this case with npm.

CMD specifies what command to run within the container.

In practical terms, this allows Docker to cache each instruction as its own layer. The next time you build an image based on node:18-alpine, Docker will not need to pull those base layers again, as they are already stored locally.
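
To make this concrete, here is a hedged example (the image name my-app is hypothetical, and it assumes the Dockerfile above sits in your current directory):

$ docker build -t my-app .    # builds the Dockerfile above; each instruction becomes a layer
$ docker history my-app       # lists those layers, their sizes and the instructions that created them
$ docker image ls             # node:18-alpine is now cached locally for future builds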

Now this is only just scratching the surface of layers. If you are curious and would like to dig deeper into them, this article gives a lot of information on finding, listing and managing layers.

Use cases for Docker

So now that we know what Docker is and how it works, it’s time to discuss where it can actually be used and why you’d want to do so.

Anyone who has had to do any form of troubleshooting has likely come across the classic phrase “Well, it works on my machine”. While this is unhelpful at the time, it raises an interesting question - why don’t we just give that machine to the customer?

That's where Docker comes in.

As mentioned, a Docker container is a packaged collection of your application, its libraries and its dependencies, already pre-built and ready to be executed.

Many companies have started to transition away from traditional VMs and adopt containers instead. This is in part due to how much lighter and faster they are to spin up, but also because of how much easier they are to maintain.

This leads us to the reason why Docker has become so popular with large companies - Cost.

Containers are a significantly cheaper alternative to virtual machines. This is not because the infrastructure or hardware is cheaper, but rather because you need fewer people to housekeep the containers.

This means that you can better organise your team to focus on developing the product rather than focusing on housekeeping your containers.

Due to how flexible Docker containers are, they can be used for a wide range of tasks. This covers everything from running a simple Golang CLI script to hosting a server for your favourite game to play with your friends.

From an enterprise perspective, Docker can be used to run:

  • Ephemeral databases
  • Persistent databases
  • One-use tools
  • Entire tech stacks

Just to name a few!
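
As an example of the ephemeral database case, the sketch below (the container name, password and image tag are just examples) spins up a throwaway PostgreSQL instance for testing and removes it again as soon as it stops:

$ docker run --rm -d --name test-db -e POSTGRES_PASSWORD=example -p 5432:5432 postgres:14
$ docker stop test-db    # --rm means the container and its data disappear as soon as it stops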

Installing Docker on your VPS

Now that you have a good understanding of the technology behind Docker and containers, it's time to take the leap and install Docker on your VPS!

If you haven’t yet purchased your VPS, take a look at our Katapult VPS configuration tool here!

Before we begin, note that the installation instructions below are specifically for Debian, Ubuntu, CentOS and Alma Linux.

Installing Docker on a VPS is a very easy process, and in this case we will be installing it using the convenience script. In order to install Docker, you will need to connect to your server via SSH as a privileged user.

The Docker convenience script

Before we go any further into the installation process, we first need to cover some information and warnings surrounding using the convenience script.

Docker provides a convenience script at get.docker.com to install Docker into development environments quickly and non-interactively. The convenience script is not recommended for production environments, but can be used as an example to create a provisioning script that is tailored to your needs. The source code for the script is open source, and can be found in the docker-install repository on GitHub.

Always examine scripts downloaded from the internet before running them locally. Before installing, make yourself familiar with potential risks and limitations of the convenience script:

  • The script requires root or sudo privileges to run.
  • The script attempts to detect your Linux distribution and version and configure your package management system for you, and does not allow you to customize most installation parameters.
  • The script installs dependencies and recommendations without asking for confirmation. This may install a large number of packages, depending on the current configuration of your host machine.
  • By default, the script installs the latest stable release of Docker, containerd, and runc. When using this script to provision a machine, this may result in unexpected major version upgrades of Docker. Always test (major) upgrades in a test environment before deploying to your production systems.
  • The script is not designed to upgrade an existing Docker installation. When using the script to update an existing installation, dependencies may not be updated to the expected version, causing outdated versions to be used.

Depending on your environment and use case, the best practice is to install Docker from a repository. For this we suggest using Docker’s documentation to find the installation instructions for your chosen Linux distro.
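
As an abbreviated, hedged sketch of the repository route on Ubuntu (based on Docker's documentation at the time of writing - the exact steps change over time and differ per distro, so always follow the current instructions for your system):

$ apt install ca-certificates curl gnupg lsb-release
$ mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
$ apt update
$ apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin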

Installing Docker using the convenience script:

The first step is to update the packages on your server. This section has been divided based on your chosen distro:

Note: Upgrading your packages is only intended for a fresh VPS. If you are installing Docker on an existing VPS, you may wish to skip this step due to potential package incompatibilities with your currently installed software.

For Debian/Ubuntu:

$ apt update

This will update the package list for your distro rather than upgrading the packages directly. Once this has been updated, if you are using a fresh VPS, you will then need to upgrade your packages by running the following command:

$ apt upgrade

For CentOS/Alma:

$ yum update

Then run the following command:

$ curl -sS https://get.docker.com/ | sh

This script will automatically detect your operating system and install the needed packages before installing Docker itself.
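
If you would prefer to examine the script before executing it (as recommended above), you can download it first and run it once you are happy with its contents:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ less get-docker.sh    # review what the script is going to do
$ sh get-docker.sh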

Note: If curl is not installed on your server, you can install it using the following command:

For Debian/Ubuntu:

$ apt install curl

For CentOS/Alma:

$ yum install curl

Then run the docker installation command again.

Testing our installation

The team at Docker recommend testing your Docker installation with a simple hello-world image.

To do this, run the following command:

$ docker run hello-world

If everything is installed correctly and working properly, you should see the following output:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete

Digest: sha256:31b9c7d48790f0d8c50ab433d9c3b7e17666d6993084c002c2ff1ca09b96391d
Status: Downloaded newer image for hello-world:latest

 Hello from Docker!
 This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:

 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
...
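
You can also confirm what the test left behind - the hello-world container has run and exited, and its image is now cached locally:

$ docker ps -a        # lists all containers, including the exited hello-world container
$ docker image ls     # shows the hello-world image stored locally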

Congratulations!

You have successfully installed Docker on your VPS and are now ready to start spinning up containers and discovering why Docker is so beloved. However, I would suggest sticking around a little longer, as there are a few post-installation tips and best practices worth following before diving head first into Docker.

Post-installation tips

Make Docker run at OS startup

Depending on your distro, Docker may not start automatically after a reboot, meaning you would need to manually restart Docker and your containers; in many cases it is more convenient to have Docker start automatically when your OS boots.

You can do this by running the following command:

$ systemctl enable docker
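
Docker's documentation also suggests enabling the containerd service alongside Docker. You can then confirm that both are set to start on boot:

$ systemctl enable containerd.service
$ systemctl is-enabled docker containerd    # both should report "enabled"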

Non-root access to Docker

Provided you have followed the instructions above, you will currently be logged into your server as root and have permission to manage the Docker application.

Generally speaking, it’s best practice to manage Docker as a non-root user. This is possible by adding sudo before your commands to escalate privileges; however, the constant use of sudo can be avoided.

To achieve this, you will need to grant your non-root user access to the Docker management commands by running the following:

$ groupadd docker
$ usermod -aG docker YOUR_USER

Running the above commands grants your user access to the Docker daemon, so there is no need to use sudo every time you wish to make a change to Docker.
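
Note that the new group membership only applies to new login sessions. A quick way to check (YOUR_USER is the same placeholder as above):

$ su - YOUR_USER          # start a fresh login session so the group change takes effect
$ docker run hello-world  # should now work without sudo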

What to learn next

Now that you have an overview of Docker, how it works and where it can be used, the best way to learn more is to dive head first into building a project that you can add to your professional portfolio, as this will give you a chance to gain real-world experience in an environment you control.

Before you get started, there are a few key areas of Docker that you should learn more about. I have separated these by knowledge level so you can progress without getting stuck on concepts that are too advanced:

Beginner:

  • Container lifecycles
  • Docker commands
  • Creating containers
  • Running containers

Intermediate:

  • Docker Compose
  • Portainer

Advanced:

  • Docker Swarm
  • Docker volumes and networks
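
To give a small taste of the intermediate topics above, here is a minimal, hypothetical docker-compose.yml (the service names, ports and image tags are just examples) that starts a web server and a database together. If you installed Docker via the convenience script earlier, the Compose plugin should already be available:

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example

$ docker compose up -d    # starts both services in the background
$ docker compose down     # stops and removes them again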

Additional Resources

Now that we have covered the basics of what Docker is and how it works, I have curated a few sources of information to help you take your Docker expertise to the next level.

First and foremost, I would strongly recommend reading through Docker's own documentation as this will cover more advanced topics than we have done here.

Docker has also created a handy CLI cheat sheet that is an invaluable resource while you’re getting started with Docker. It contains the installation instructions and the CLI commands for managing images, Docker Hub and containers, along with a few more general commands.
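
To give a flavour of the day-to-day commands you will find on that cheat sheet (a small, non-exhaustive sample):

$ docker ps                       # list running containers
$ docker images                   # list images stored locally
$ docker logs <container>         # view a container's output
$ docker exec -it <container> sh  # open a shell inside a running container
$ docker stop <container>         # stop a running container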

If you prefer a more traditional learning style, there are many books written by Docker Captains that will explore Docker on a deeper level and help you master this technology:

  • Learn Docker in a Month of Lunches, Elton Stoneman
  • Docker in Action (2nd Edition), Jeff Nickoloff, Oct 2019
  • Docker Deep Dive, Nigel Poulton, Mar 2018

This list is certainly not exhaustive and thanks to the popularity of Docker, there is a wealth of free learning resources available online for you to deep-dive into any aspect of Docker that particularly interests you.

Community Support

Did you know we have a Krystal community?

This is a community filled with Krystal staff and like-minded individuals who use Krystal products and have a wealth of knowledge across a vast range of subjects.

If you would like a place to bounce ideas and speak to others that have been in this position and may be able to share some advice from their experience, I invite you to join the Krystal Community Discord, say hello, and ask away!

Conclusion

Docker has become widely adopted by small start-ups and established technology giants alike, who recognise it as a practical solution to the problems of development and deployment. Hopefully this article has laid the foundation for you to take your next steps into the world of Docker.

As with most things worth learning, Docker can have a steep learning curve, particularly when you move into more advanced topics than we have covered in this article.

This initial learning curve, while steep, is heavily outweighed by the potential you unlock in your applications and the cost and time savings you will see once Docker is fully implemented. You have already taken the first step towards adopting this fantastic technology, and it only gets better from here!

About the author

Tristan P

I'm Tristan! I'm Krystal's Technical Community Manager and self-proclaimed documentation wizard. When I'm not writing, you'll normally find me playing some form of instrument or harnessing the power of the internet to pretend to drive a truck with my little plastic wheel.