This article explains what Docker is and why it is used, and points you to resources for getting started. Developers use Docker for many reasons, most commonly to build, deploy, and share applications quickly. Docker packages your application into a container that runs the same way everywhere, allowing developers on Mac, Windows, and Linux to share their code without conflicts. For more information, check out Amazon's Intro to Docker.
- Container: A package of code bundled by Docker that runs as a process isolated from the rest of your machine. The package of code can be pretty much anything: a single Python file, an API, a full-stack web application, etc. A container is also referred to as a containerized application.
- Image: A template with a set of instructions for creating a container. Think of it as a blueprint from which multiple containers can be instantiated. Images are built from Dockerfiles and are essential for running your applications in Docker.
- Dockerfile: A text document that contains all the commands a user could call on the command line to assemble an image. It's a recipe for creating Docker images.
For more detailed explanations, you can refer to Docker's own resources here.
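To make these three terms concrete, here is a minimal, hypothetical Dockerfile for containerizing a single Python file (the file name app.py is an assumption for illustration):

```dockerfile
# Start from an official Python base image
FROM python:3.11-slim

# Set the working directory inside the image
WORKDIR /app

# Copy a single Python file into the image
COPY app.py .

# The command a container built from this image runs on start
CMD ["python", "app.py"]
```

Building this file produces an image, and running that image produces a container.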
To start using Docker, you will have to download Docker Engine. It comes bundled with Docker Desktop (a graphical user interface for Docker), which I strongly recommend for Windows and macOS users. Download Docker here.
For detailed installation instructions based on specific operating systems click here: Mac, Windows, Linux
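Once installed, you can verify that everything works from a terminal:

```bash
# Print the installed Docker version
docker --version

# Run Docker's official test image; it prints a welcome message and exits
docker run hello-world
```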
Once you've installed Docker, to see it in action you can follow any one of these quick tutorials on creating a Dockerfile that builds a Docker image:
- Dockerizing a React App (A simple and quick tutorial for containerizing a React app, with explanations where needed. I recommend this if you want to get something running quickly and see what the next steps look like.)
- Dockerize a Flask App (A super-detailed, step-by-step tutorial for containerizing a Flask app. I recommend this if you want to understand the process in detail.)
- Docker's official tutorial for containerizing an application (Can't go wrong with the official tutorial.)
An alternative to manually creating images is to use existing images from Docker Hub. Chances are that, whatever your purpose, there is already a Docker image for it: from database images such as MySQL, MongoDB, and PostgreSQL, to out-of-the-box images such as WordPress for a WordPress website and Nextcloud for a Google Drive-esque cloud storage system. If you are unsure about creating your own image, you can always check out ones that other people have published.
For a quick tutorial on how to use an image:
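To give you a taste of what using a published image looks like, here is a sketch that starts the official PostgreSQL image (the container name and password are placeholders):

```bash
# Download (if needed) and start a PostgreSQL container in the background
docker run -d \
  --name my-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -p 5432:5432 \
  postgres:16
```

Docker pulls the image from Docker Hub automatically if it is not already present locally.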
Since Docker is widely used, there is a lot of Dockerfile-related knowledge in ChatGPT's training data, and the model is capable of generating Dockerfiles for most software architectures. If you want to containerize your app with minimal effort, you can use OpenAI's ChatGPT (GPT-3.5-turbo) to generate the Dockerfile for you. To do this, first gather a tree of your project directory so ChatGPT can better understand your project architecture (on Linux/macOS, run `tree -I node_modules` in your project directory). Then ask ChatGPT using something similar to the following prompt:
Please write a Dockerfile for my project. I use the command `python3 -m projectname` to start my app. My project file structure is specified by the tree below. Please make sure that the Dockerfile is optimized with best practices from the industry and that the image size is as small as possible.
```text
.
├── README.md
├── backend
│   ├── __init__.py
│   ├── database
│   │   ├── __init__.py
│   ├── flow.py
│   ├── rapidpro.py
│   └── user.py
├── poetry.lock
├── pyproject.toml
└── tests
    ├── RapidProAPI_test.py
    ├── __init__.py
    └── flow_test.py
```
I have the following runtime dependencies that might require APT packages: psycopg2
This method will typically generate something far more optimized than what a beginner would write. For example, it will clear the APT cache after installing dependencies and use separate builder and runtime images to reduce image size, which requires understanding Docker's intricate image layering mechanism. You can learn a lot from reading and understanding the generated Dockerfile.
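For reference, the output might resemble the sketch below: a multi-stage Dockerfile for the Poetry project above. This is an illustrative example, not actual ChatGPT output; the base image, Poetry invocation, and APT package names are assumptions.

```dockerfile
# --- Builder stage: compile and install Python dependencies ---
FROM python:3.11-slim AS builder

# Build-time packages needed to compile psycopg2 against libpq;
# the APT cache is cleared in the same layer to keep it small
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc libpq-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN pip install --no-cache-dir poetry \
    && poetry config virtualenvs.in-project true \
    && poetry install --only main --no-root

# --- Runtime stage: only what is needed to run the app ---
FROM python:3.11-slim

# Runtime library for psycopg2
RUN apt-get update \
    && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
# Copy the pre-built virtual environment from the builder stage
COPY --from=builder /app/.venv ./.venv
COPY . .
ENV PATH="/app/.venv/bin:$PATH"

CMD ["python3", "-m", "projectname"]
```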
When you start using Docker, you'll come across two key terms: Dockerfile and Docker image. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Essentially, it's a set of instructions for Docker to build the image.
A Docker image, on the other hand, is an executable package that includes everything needed to run an application - the code, a runtime, libraries, environment variables, and config files. You can think of an image as a blueprint for a container. Docker builds an image based on the instructions provided in a Dockerfile. Once the image is built, Docker can create a container from this image.
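Concretely, you turn a Dockerfile into an image with `docker build`, and an image into a running container with `docker run` (the image name myapp below is just a placeholder):

```bash
# Build an image named "myapp" from the Dockerfile in the current directory
docker build -t myapp .

# Start a container from that image; --rm removes the container when it exits
docker run --rm myapp

# List running containers
docker ps
```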
Congratulations! You have successfully learned how to Dockerize an app. In the process, you have learnt what a Dockerfile is, how to create one, how to build a Docker image, and how to start a Docker container. Now what's next?
Now you might want a React container to communicate with a containerized Flask API. How do we do this? This is where Docker Compose comes in. It allows you to define and control multiple containers at once. Your next goal should be defining a `docker-compose.yml` for your project and seeing if you can get multiple services/containers to communicate successfully.
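As a starting point, a sketch of such a file might look like this (the service names, build paths, and ports are assumptions for illustration):

```yaml
version: "3.9"
services:
  frontend:
    build: ./frontend # React app with its own Dockerfile
    ports:
      - 3000:3000
    depends_on:
      - api
  api:
    build: ./api # Flask app with its own Dockerfile
    ports:
      - 5000:5000
```

Within the Compose network, containers can reach each other by service name (e.g. http://api:5000).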
You can use various CI (Continuous Integration) tools to automatically build, push, and deploy your Docker images. While the official Docker Hub is a great place to store your images for free, automated builds are only available for paid accounts. Here, we will guide you on how to use GitHub Actions to automatically build and push your Docker images to Docker Hub. You will need:
- A Docker Hub account
- A GitHub account and a repository
- A Dockerfile
- First, you need to create a Docker Hub token to allow GitHub Actions to push to your Docker Hub repository. To do this, go to your Docker Hub Account Settings and click on "New Access Token". Give it a name and click on "Create". (Don't close this page yet; you will need the token in the next step.)
- Open your GitHub repository and go to the "Settings" tab. Click on "Secrets" and then "New repository secret". Give it a name (e.g. `DOCKERHUB_TOKEN`) and paste the token you just created in the previous step. Click on "Add secret".
- Do the same for `DOCKERHUB_USERNAME`, using your Docker Hub username as the value.
- In your GitHub repository, create a new folder called `.github/workflows`.
- Paste the following file under `.github/workflows/docker.yml` (make sure to replace `YOUR_IMAGE_NAME_HERE` with your image name).
- Commit and push your changes.
- Profit!
```yaml
name: Build and Push Docker Image
on:
  push:
    branches:
      - main # Change this to your default branch
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/YOUR_IMAGE_NAME_HERE:latest
```
Note: This workflow will automatically build and push after each commit to the `main` branch. This is ideal for development, assuming that your main branch is the staging branch. However, you might want to change it, or create a separate workflow with a separate image name, to build only on tags (releases) for production so that deployment is more controlled.
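For example, swapping the trigger for a tag filter like the one below would build only when you push a version tag (the v*.*.* pattern is just one common convention):

```yaml
on:
  push:
    tags:
      - "v*.*.*" # e.g. v1.2.3
```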
Now that you have a Docker image on Docker Hub, you can deploy it to a server. There are many ways and platforms that allow you to do this. You can rent a minimal Linux VPS or a Docker server for $4-6 per month on various platforms. One platform I recommend is DigitalOcean, as they have a very intuitive web interface and very good documentation for beginners. You can click the referral link (icon) below to get a free $200 credit for 60 days, what a deal!
Once you have a Linux server, the easiest way to deploy a Docker image is to use Docker Compose. You can define your services in a `docker-compose.yml` file and then run `docker-compose up -d` to start the containers in the background.
- Install Docker Engine on your server (official guide for Ubuntu)
- Install the Docker Compose plugin (official guide).
- Create a `docker-compose.yml` file on your server and paste the following (make sure to replace `YOUR_IMAGE_NAME_HERE` with your image name)
- Run `docker-compose up -d` to start the containers in the background
version: "3.9"
services:
app:
image: YOUR_IMAGE_NAME_HERE:latest
restart: always
ports:
- 80:80 # Expose any ports you need
If you need a database, you can add it to the composition as well! For example, if you want to use PostgreSQL, you can add the following to your `docker-compose.yml` file:
version: "3.9"
services:
app:
image: YOUR_IMAGE_NAME_HERE:latest
restart: always
ports:
- 80:80 # Expose any ports you need
depends_on:
- db
environment: # In your program, use these environment variables to connect to the database
DB_HOST: db
DB_PORT: 5432
DB_USER: postgres
DB_PASSWORD: postgres
DB_NAME: postgres
db:
image: postgres:13
restart: always
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: postgres
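Inside your application, you would then read those environment variables to open the database connection. Here is a hypothetical Python sketch using psycopg2 (the variable names match the compose file above):

```python
import os
import psycopg2

# Read the connection details injected via docker-compose.yml
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    port=os.environ["DB_PORT"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    dbname=os.environ["DB_NAME"],
)
```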
For an alternative, you can check out another example using MongoDB.
Since the database is contained within the Docker Compose network and its port is not published, it cannot be reached from the wider internet, so using the default `postgres` user and password is acceptable here. However, if you want to expose your database (which is not recommended), you can add the port mapping `5432:5432` to the `db` service and use a stronger password.
If you are using any other database, you can find its image on Docker Hub and follow the instructions there. If you want a quick-start tutorial, you can check out the MongoDB tutorial above. Please be sure to read the image's documentation carefully! Most questions regarding database images can be answered by reading the documentation.
To automatically update your deployment when you push a new image to Docker Hub, you can use Watchtower. It is a simple container that monitors your other containers and updates them when a new image is available. You can add it to your `docker-compose.yml` file like this:
```yaml
services:
  # ...
  watchtower:
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 30
```
This will check for updates every 30 seconds. You can change the interval to whatever you want. You can also add the `--cleanup` flag to remove old images after updating.
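With both options combined, the Watchtower service's command line in the compose file would read:

```yaml
command: --interval 30 --cleanup
```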
If you have multiple services and want to deploy them on the same server with different domain names or set up SSL to make your services secure from MITM (man-in-the-middle) attacks, you can use Traefik or Nginx as a reverse proxy. This is a more advanced topic and is out of the scope of this article. However, you can find many tutorials online on how to do this, such as this DigitalOcean article: How To Use Traefik v2 as a Reverse Proxy for Docker Containers on Ubuntu 20.04
Here's a cheat sheet of all useful Docker CLI commands and here's a cheat sheet for docker-compose which should help you in your future endeavours. All the best!