Deploy Your Products More Easily with Docker

Muhamad Yoga Mahendra
6 min read · May 24, 2021


Picture from: Deploy, by Asmir Mustafic

In software development, after a long (sometimes very long) process of design, implementation, testing, and fixes, your software product is finally ready to be used. If you’re building a SaaS (Software as a Service) product, you should be familiar with the terms Deploy and Deployment. Deployment is the activity in which software is “deployed” to a dedicated environment (i.e. remote servers or machines).

This deployment process is usually done in three parts (or stages): Preparation, Installation, and Running. Preparation is the stage where everything the software needs in order to run properly is prepared (downloaded and installed). Installation is where the software is unpacked, loaded, and checked for integrity failures; testing is also usually done in this stage. The third and final stage is actually running the software inside the environment. Once deployment is finished, the software is practically ready to use from anywhere, at any time.
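As a rough, self-contained sketch of the three stages (illustrative only: it uses a throwaway temp directory and a dummy one-line app instead of a real release, so nothing is actually installed on your system):

```shell
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Preparation: stage everything the software needs in order to run
mkdir -p release deploy
echo 'echo "app running"' > release/app.sh

# Installation: pack, unpack into the target location, check integrity
tar -czf app.tar.gz -C release app.sh
tar -xzf app.tar.gz -C deploy
sha256sum release/app.sh deploy/app.sh

# Running: start the software inside its environment
sh deploy/app.sh
```

In a real deployment, Preparation would install runtime dependencies on the server and Running would start a long-lived service, but the shape of the process is the same.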

Since this process can take a long time (it’s a tedious activity), automated deployment is now common. With CI/CD (Continuous Integration/Continuous Deployment) tools such as GitLab CI, Jenkins, and Travis CI, we can integrate our updates and deploy them automatically.

However, before we actually deploy our software, we need to know what specific environment it’s going to run in. Development and deployment environments can differ by a lot — let’s say you developed your app on Windows, but the server just bought for the software runs Alpine Linux. Without proper configuration and environment setup, the software will fail to run. Most people know that getting the same app to run on two different operating systems is a pain. And Docker is here to save your day!

What is Docker?

Docker is a tool used to create “containers”: a special place where OS type and version don’t matter, ensuring your software can run on any OS — as long as Docker is installed. Docker is designed to make software creation, development, deployment, and running easier. Remember: no matter the OS type and version, your app can run. Containerization lets you package your software, along with all its requirements and dependencies, in one place. It runs isolated from everything else, just like a Virtual Machine — but better.

Docker vs Virtual Machine

Image: comparison of Virtual Machine and Docker architectures

The image above explains the difference between VMs and Docker. One of the main features of Docker is that it requires only one OS. Unlike VMs, where you need N virtual machines (each with its own guest OS) for N pieces of software, you only need one OS with N containers. While the containers are isolated, they are much lighter than VMs (some images, like the Alpine Linux image, are less than 20 MB to download), and they share the host OS’s kernel and, where applicable, its libraries and binaries. Many different kinds of apps can run side by side and communicate with each other inside a single machine.

However, the use of both VM and Docker Containers can provide even more flexibility in deployment and management of applications.

How does Docker work?

To use Docker, we need to know the following concepts and terms first:

  • Docker Image. An image file that contains everything needed for the software to run (including the software itself). Each image is identified by a unique ID and tags.
  • Docker Container. The virtualized runtime environment where Docker runs software from a Docker Image, isolated from the rest of the system. Containers are very lightweight and portable, and can be deployed quickly and easily.
  • Docker Repository. An online repository where Docker images are stored and hosted. All base images can be downloaded from here (Docker Hub is the default public one).
  • Docker Commands. The set of commands used to tell Docker to do its job.
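Assuming Docker is already installed (see the next section), these concepts map directly onto a few inspection commands — a sketch that requires a running Docker daemon:

```shell
# List the images stored locally (Docker Images)
docker images
# List all containers, running or stopped (Docker Containers)
docker ps -a
# Search Docker Hub, the default public repository, for an image
docker search alpine
```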

Start using Docker

Before using Docker, we need to install it first. Here’s how:

  1. Identify your host’s OS. Note that this is different from the image used to run software: the host OS is what runs Docker itself.
  2. Download the Docker release that matches your OS. You can download it from here: Install Docker Engine | Docker Documentation
  3. Install Docker. Note that different OSes have different installation steps, so check the instructions for your OS.
  4. After installing, try running the following command. If Docker is installed successfully, it should show something like this:
$ docker --version
Docker version 20.10.5, build 55c4c88

Here’s some basic commands that should get you started right away:

  • To download an existing image (or a basic image) from the repository, run:
docker pull <image name>
  • To create and run a new container from an image, run:
docker run <image name>
  • To create an image manually (must create a Dockerfile beforehand, explained later), run:
docker build -f <path/to/Dockerfile> -t <tag name> <build context>
# -f tells Docker to use a specific Dockerfile for its configuration.
# It's optional.
# -t tells Docker to give the resulting image the specified tag.
# The build context is the location of your software's files
# (everything needed to build and run it).
  • To start an existing (stopped) container, run:
docker start <container id or name>
  • To stop a running container, run:
docker stop <container id or name>
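Putting the commands above together, a minimal end-to-end session might look like this (requires a running Docker daemon; the container name `demo` is arbitrary):

```shell
# Download the small Alpine Linux image from the repository
docker pull alpine
# Create and run a container from it, detached, with a name for easy reference
docker run -d --name demo alpine sleep 300
# Stop it, then start the same container again
docker stop demo
docker start demo
# Remove the container when done
docker rm -f demo
```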

Use of Docker in PPL

In my PPL Project, we develop a web app called Crowd+, a marketplace for data annotation services. We use both Docker and GitLab’s CI/CD feature for deployment.

Before we start deploying, we first need to create a Dockerfile, a file made to automate and simplify the task of building containers. The Dockerfile contains all the commands needed to build an image. In this case, I developed the back-end side of the service in Python, so I need to include the database client and other system libraries as needed. After those dependencies are installed, the software’s requirements are installed using pip. The resulting Dockerfile looks like this:

FROM python:alpine
ARG BUILD_ENV=staging
WORKDIR /opt/app
COPY . .
ENV APP_ENV=$BUILD_ENV
RUN apk add -u --no-cache tzdata gcc musl-dev linux-headers \
    libffi-dev postgresql-dev jpeg-dev zlib-dev && \
    pip install -r requirements.txt
ENV PORT=8080
ENTRYPOINT ["/bin/sh",""]
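To try an image like this locally before handing it to CI/CD, you can build and run it by hand (a sketch: the tag `crowdplus-be` and the port mapping are illustrative, and the commands assume you run them from the project root next to the Dockerfile, with a Docker daemon available):

```shell
# Build the image, passing the build argument used by the Dockerfile above
docker build -t crowdplus-be --build-arg BUILD_ENV=staging .
# Run it locally, mapping the app's port 8080 to the host
docker run -p 8080:8080 crowdplus-be
```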

After the image is built, it is passed to GitLab’s CI/CD automation to deploy and run the software. To do this, we need to create a new file named .gitlab-ci.yml. This file includes all the instructions the CI/CD pipeline will follow: stage names, the image to use, before_script, script, and others. The CI/CD file should look like this (only a snippet, since it’s quite lengthy):

stages:
  - Build
  - Release

Build-Staging:
  stage: Build
  image:
    name:
    entrypoint: [ "" ]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"\":{\"username\":\"_\",\"password\":\"$HEROKU_TOKEN\"}}}" >
    - |-
      /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination
  only:
    - staging

Release-Staging:
  stage: Release
  image: ubuntu:latest
  before_script:
    - apt-get update && apt-get upgrade -y && apt-get install gpg wget curl -y
    - wget -qO- | sh
  script:
    - export HEROKU_API_KEY=$HEROKU_APIKEY
    - |-
      heroku container:release -a datalyst-be web
  only:
    - staging
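The `echo` line in the build job just writes a small Docker auth-config JSON that kaniko reads before pushing. A minimal sketch of its shape in Python, with placeholder values (the real registry host and token were stripped from the snippet above and come from CI variables in practice):

```python
import json

# Placeholder values; in the pipeline these come from the CI configuration
registry = "registry.example.com"
token = "example-token"

# Shape of the auth config kaniko reads from its .docker/config.json
auth_config = {"auths": {registry: {"username": "_", "password": token}}}
print(json.dumps(auth_config))
```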

In the GitLab CI/CD build stage, we use kaniko to build the Docker image and push it to the Heroku registry (our hosting destination), and in the release stage, Heroku is told to use only the latest image.