Ultimate Docker Guide

Docker is easy to use and helpful for getting things up and running fast. If you aren't using Docker and you want to, this guide will get you started.

In a world where data privacy and security are becoming increasingly important, many individuals and organizations are turning to self-hosting as a way to take control of their online presence. With the rise of containerization technologies like Docker, setting up a self-hosted environment has become easier than ever.

In this article, we'll walk through the basics of setting up a server and using Docker containers to create a self-hosted environment. Whether you're looking to host your own personal blog, collaborate on a project with a small team, or build a fully functional web application, self-hosting with Docker can be a great way to achieve your goals.

We will be using Ubuntu Server 22.04 LTS; all references will be made accordingly, and any applicable documentation will be noted as we go along. This should work similarly on other server distributions, but the commands may differ.

You can choose to use Docker Desktop instead; however, please read its documentation before going down that route, and make sure that you have a compatible system and access to the internet.

Brief Explanation of Docker Commands

In the paragraphs that follow, you are going to encounter code. It could be a little confusing, so here is a brief annotated example.

docker run -d \
# docker is the command and run launches a container. The -d flag runs it detached (in the background) so you don't need to keep the console open; without it, closing the console stops the container.
  --name mariadb_server \
# each trailing \ continues the command onto the next line. Using this format, we can easily see the container we are building, and the CLI will accept it when we paste it in.
# --name is the container name
  -p 3306:3306 \
# unless you specify a network, the container will always run in bridge mode, so you need to bind host ports to the container ports. If the host ports are free, you can use them as-is; otherwise, change the host side of the mapping.
  --restart always \
# --restart always forces the container to restart on failure and also to come back up after the system is powered off.
  -e MYSQL_ROOT_PASSWORD=rootpassword \
# -e sets an environment variable, which is how most containers are configured. In this case we are specifying the root password for MySQL.
  -v mariadb_storage:/var/lib/mysql \
# -v mounts storage. We can use a Docker-managed volume or bind a path on the local host. This is very useful if you need persistent storage; otherwise the container's data is lost when it is removed.
  mariadb:latest
# this is the last line of the command. You need to specify the container image from the repo, and the version tag. In this case we are using latest because we want the current version.

Initialize Docker

Docker Install

Let's begin by installing Docker, and getting the right tools with permissions installed alongside.

sudo apt-get install docker.io
sudo systemctl enable docker

Next, we need to give the account we are using for Docker access to run Docker without sudo privileges. Keep in mind that membership in the docker group is effectively root-equivalent, so this could be a security issue; proceed only if you are comfortable with it. You can add multiple accounts as well.

sudo adduser dockeruser
sudo usermod -aG docker dockeruser
sudo reboot
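
Once you're logged back in as that account, you can sanity-check the setup; hello-world is Docker's standard smoke-test image:

```shell
# Confirm the account is in the docker group (output should include "docker")
id -nG dockeruser

# Run Docker's test image without sudo; it prints a greeting and exits,
# and --rm removes the container afterwards
docker run --rm hello-world
```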

Now that we have Docker installed, we need to install Docker Compose. This allows you to define and spin up your own custom containers using a configuration file. Please check the Docker Compose releases page on GitHub for the current version before running the command below. At the time of this document, the latest is 2.16.0.

sudo curl -SL https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose

We need to make the file executable:

sudo chmod +x /usr/local/bin/docker-compose

To test the installation you can run the following:

docker-compose --version

If it fails, you can symlink the binary onto the default path:

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Docker Maintenance

Unfortunately, Docker doesn't maintain itself. If you don't keep an eye on it, it can fill up your storage fast. I learned this the hard way.

Image Maintenance

docker image prune will remove all dangling images, meaning those that are untagged and not referenced by any container. Add -a to also remove images that are simply unused.

Volume Maintenance

docker volume prune will remove all volumes not being used and not attached to containers.

Network Maintenance

docker network prune will remove all unused networks.

Container Maintenance

docker container prune will remove all stopped containers. Be careful with this one.
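
If you'd rather not run the four prune commands separately, Docker bundles them into a single command; a sketch, with an optional cron entry whose weekly schedule is just an example:

```shell
# Remove stopped containers, unused networks, dangling images,
# and build cache in one pass (-f skips the confirmation prompt).
# Note: volumes are NOT touched unless you also pass --volumes.
docker system prune -f

# Example crontab entry (add via `crontab -e`) to run the cleanup
# every Sunday at 03:00:
# 0 3 * * 0 /usr/bin/docker system prune -f
```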

Docker Uninstall

Uninstalling Docker is pretty straightforward. You will need to run the following command to figure out what Docker packages are installed:

dpkg -l | grep -i docker

Next is to run the following to uninstall:

sudo apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli
sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce
sudo rm -rf /var/lib/docker /etc/docker
sudo rm /etc/apparmor.d/docker
sudo groupdel docker
sudo rm -rf /var/run/docker.sock
sudo rm /usr/local/bin/docker-compose

Data Folder Creation

In the root directory, it is best to create a folder that you can use for data. You should also create a group to control access to this folder. If you are going to be using FTP, it is best practice to create another account to maintain permissions.

sudo addgroup data
sudo adduser username data
cd /
sudo mkdir data
sudo chown username:data /data
sudo chmod -R 775 /data
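
To double-check the permission scheme before pointing containers at the folder, you can read the mode bits back with stat. The snippet below rehearses the same 775 setup on a scratch directory under /tmp, so it is safe to run anywhere; substitute /data and your real user and group on the server:

```shell
# Rehearse the /data permission scheme on a throwaway directory
mkdir -p /tmp/data-demo
chmod 775 /tmp/data-demo

# Print the octal mode: 775 = owner rwx, group rwx, others r-x
stat -c '%a' /tmp/data-demo
```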


WatchTower

WatchTower is a Docker container that will monitor all images and update containers periodically. This is meant to be a zero-touch solution. For this to work, we will need to mount the Docker socket so WatchTower can talk to the Docker daemon.

With that being said, updating a container recreates it and wipes out non-persistent data, so it would be wise to make sure you have the proper persistent storage in place for your Docker containers.

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock --restart always containrrr/watchtower
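
By default WatchTower watches every container on the host. If that is too aggressive, its documentation describes passing container names as arguments to limit the scope, and an --interval flag (in seconds) to control how often it checks; the names and interval below are just examples:

```shell
# Watch only the npm and portainer containers, checking every 6 hours
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --restart always \
  containrrr/watchtower --interval 21600 npm portainer
```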

NGINX (engine-x) Proxy Manager or NPM

This is a container running NGINX as a reverse proxy, with a graphical interface to manage it. It also handles Let's Encrypt SSL certificates.

Spin Up Container

# create data folder for npm
sudo mkdir -p /data/npm

# install npm with ports
docker run -d --name npm --restart always -p 80:80 -p 81:81 -p 443:443 -v /data/npm/data:/data -v /data/npm/letsencrypt:/etc/letsencrypt jc21/nginx-proxy-manager:latest

# install npm attached to host
docker run -d --name npm --restart always --network host -v /data/npm/data:/data -v /data/npm/letsencrypt:/etc/letsencrypt jc21/nginx-proxy-manager:latest

Make sure you allow ports 80, 81, and 443 through your machine's firewall if you are running one. You can log in using the IP of the machine or localhost, i.e. http://192.168.1.#:81 or http://localhost:81 depending on your setup. The default login is admin@example.com and the password is changeme. After logging in, the system will prompt you to change it.
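
Since Docker Compose was installed earlier, the same container can also be described declaratively. A sketch of an equivalent docker-compose.yml, assuming the same /data/npm paths as above:

```yaml
version: '3'
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: npm
    restart: always
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - /data/npm/data:/data
      - /data/npm/letsencrypt:/etc/letsencrypt
```

Save it in its own folder and run docker-compose up -d from there; docker-compose down stops and removes the container.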

Add a Wildcard Certificate

Once you log in, go to SSL Certificates. Create a new one and, for the domain names, use this format: *.example.com example.com. Test server reachability to make sure it works, then fill in the rest. You will need to select your DNS provider and supply its API token for the DNS challenge. This proves you own the domain and will allow you to grab certificates on the fly.


Add New Website

You can add a new Proxy Host by going to the Proxy Hosts menu.


Click on Add Proxy Host


Custom Locations

If you have a path that is being directed elsewhere then you can specify it inside Custom Locations in NGINX Proxy Manager.


For example, a Ghost CMS blog redirection:

location ^~ /blog {
	proxy_pass http://localhost:2369;
	proxy_set_header Host $http_host;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header X-Forwarded-Proto $scheme;
	proxy_redirect off;
	proxy_read_timeout 2000;
	proxy_send_timeout 2000;
	proxy_connect_timeout 600;
	client_max_body_size 1024m;
	client_body_buffer_size 512m;
}


Portainer

Run the following to start an instance of Portainer.

docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest

You will need to navigate to https://ipaddress:9443 to finish the setup (Portainer serves its UI over HTTPS on that port).

NGINX Proxy Manager Setup

Create a new site with the subdomain you want like portainer.example.com and point it to the server that is hosting it with the 9443 port. Set your SSL and save.

Portainer Edge Endpoint Setup

If you are going to use Docker on other servers, you can set them up as endpoints, so you only have to log in to one portal to see all of your Docker containers and their statuses. In NGINX Proxy Manager, proxy to the IP address with no port, and if you are using Cloudflare, make sure the record is not proxied. If you are only using Portainer locally, you can skip this part and just use the IP of the server for the connection instead.

Portainer Server URL is the URL of the main server

  • Log in to your Portainer panel.
  • Go to Endpoints.
  • Click Add.
  • Click Edge Agent.

Public IP is the URL of the endpoint you want to manage.


Make sure Docker Standalone is selected. Then press the copy button to copy the command and paste it on the server.


Update the public IP with the address of the server.



SQL Database

Having an SQL database is useful for so many applications, and it's easy to get one up and running. In addition to setting up the database, we will also launch a management tool, phpMyAdmin. You can choose to skip this and use something else; I like to use DBeaver.

MariaDB container

docker run -d \
  --name mariadb_server \
  -p 3306:3306 \
  --restart always \
  -e MYSQL_ROOT_PASSWORD=rootpassword \
  -v mariadb_storage:/var/lib/mysql \
  mariadb:latest
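
Once the container is up, you can verify the database is accepting connections by opening a SQL shell inside it; recent MariaDB images ship the client as mariadb (older ones also keep a mysql alias):

```shell
# Open an interactive SQL shell inside the running container;
# enter the MYSQL_ROOT_PASSWORD value from above when prompted
docker exec -it mariadb_server mariadb -uroot -p
```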

phpMyAdmin container

docker run -d \
  --name pma \
  -e PMA_ABSOLUTE_URI=https://pma.example.com/ \
  -e PMA_HOST= \
  -e PMA_PORT=3306 \
  --restart always \
  -p 8080:80 \
  phpmyadmin:latest

Reverse Proxy Config

Using your reverse proxy manager, you shouldn't need to do anything special: use the IP address and the port, make sure you attach your SSL certificate, and you'll be good to go.


Docker is a powerful tool that simplifies the process of creating and managing software applications. It makes it possible to self-host applications across various platforms, making it an ideal choice for those who need flexibility and ease of use. With a vast library of pre-built images and strong community support, Docker has become a popular choice for developers and organizations alike. It also provides a great way to start an online presence, enabling users to quickly deploy and scale applications with minimal effort. Overall, Docker is beneficial in numerous ways, and containerization may be our generation's industrial revolution.

Full Disclosure

All information and images have been provided by Docker. I am using the provided press kit for this article. Their website and my use of their programs are conveyed in this article. Most of this article is comprised of facts and opinions. The featured background image was created by andyoneru and is available on Unsplash. I added a blur and a gradient overlay with the program logo for this blog post. The rest of the images are from screenshots documenting relative processes.
