Deeper understanding of containers
So far we have focused on using Docker as a tool to solve various types of problems, while postponing some issues until later and ignoring others entirely.
The goal for this part is to look into best practices and improve our processes.
In part 1 we talked about how alpine can be a lot smaller than Ubuntu, but we didn't really look at why we'd choose one over the other. On top of that, we have been running the applications as root, which is potentially dangerous. In addition, we're still restricting ourselves to one physical computer. Unfortunately, that last problem is out of scope for this course, but we will get to learn about the available solutions.
Let's look at the ubuntu image on Docker Hub.
The description/readme says:
What's in this image? This image is built from official rootfs tarballs provided by Canonical (specifically, https://partner-images.canonical.com/core/).
From the links on the Docker Hub page we can guess (but not truly know) that the image is built from https://github.com/tianon/docker-brew-ubuntu-core - that is, from a repository owned by a person named "Tianon Gravi".
Step 7 in that git repository's README says:
Some more Jenkins happens
This step implies that somewhere there is a Jenkins server that runs this script, builds the image, and publishes it to the registry - but we have no way of knowing whether this is true.
The first line of the Dockerfile states that the image starts FROM a special image, "scratch", that is completely empty. Then the file
ubuntu-bionic-core-cloudimg-amd64-root.tar.gz is added to the root from the same directory.
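Based on that description, the Dockerfile should look roughly like this (a sketch reconstructed from the text above, not copied verbatim from the repository; the actual file may contain additional directives such as CMD):

```dockerfile
# Start from the special, completely empty base image
FROM scratch
# Place the Canonical rootfs tarball at the image root
ADD ubuntu-bionic-core-cloudimg-amd64-root.tar.gz /
```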
This file should be one of the "official rootfs tarballs provided by Canonical" mentioned earlier, but it's not actually coming from Canonical; it has been copied into the repository owned by "tianon". We could verify the file's checksums if we were interested.
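Such a verification could be sketched like this in Python (the filename matches the one above; the expected digest is a placeholder, not a real value published by Canonical):

```python
import hashlib

def sha256sum(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# expected = "<digest published by Canonical>"  # placeholder
# assert sha256sum("ubuntu-bionic-core-cloudimg-amd64-root.tar.gz") == expected
```

If the digest computed from the tarball in the git repository matches the one Canonical publishes, the file has not been tampered with in transit.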
Notice how the file is never explicitly extracted. The ADD instruction's documentation explains why: "If src is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory."
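In other words, a single ADD line both copies and unpacks a recognized archive, whereas COPY would leave it as-is. A hypothetical illustration (demo.tar.gz is a made-up filename):

```dockerfile
FROM alpine
# ADD detects the gzip format and unpacks the archive's contents into /data/
ADD demo.tar.gz /data/
# COPY, in contrast, would place the unextracted archive at /data/demo.tar.gz:
# COPY demo.tar.gz /data/
```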
Before getting stressed about the potential security issues here, we should remind ourselves:
"You can't trust code that you did not totally create yourself." - Ken Thompson (1984, Reflections on Trusting Trust).
However, we will assume that the ubuntu:18.04 image we downloaded was built from this Dockerfile. The docker image history command supports this assumption:
$ docker image history --no-trunc ubuntu:18.04
The output of image history matches the directives specified in the Dockerfile. If that isn't convincing enough, we could also build the image ourselves. The build process is, as we saw, completely open, and there is nothing that makes the "official" image special.
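A rough sketch of what building it ourselves could look like (the bionic/ subdirectory is an assumption based on the release codename; check the repository layout before running):

```shell
# Clone the repository that the Docker Hub page points to
git clone https://github.com/tianon/docker-brew-ubuntu-core.git
# The bionic/ subdirectory is assumed to hold the 18.04 Dockerfile
cd docker-brew-ubuntu-core/bionic
# Build a local image and compare its history to the official one
docker build -t my-ubuntu:18.04 .
docker image history --no-trunc my-ubuntu:18.04
```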