Part 1

This part introduces containerization with Docker and relevant concepts such as images and volumes. By the end of this part, you will be able to run, manage, and build containers and images on your own machine.

What is DevOps?

Before we get started with Docker, let's lay the groundwork for the right mindset. Defining DevOps is not a trivial task, but the term itself consists of two parts, Dev and Ops. Dev refers to the development of software and Ops to operations. A simple definition of DevOps is that the release, configuration, and monitoring of software is in the hands of the very people who develop it.

A more formal definition is offered by Jabbari et al.: “DevOps is a development methodology aimed at bridging the gap between Development and Operations, emphasizing communication and collaboration, continuous integration, quality assurance and delivery with automated deployment utilizing a set of development practices”.

Image of the DevOps toolchain by Kharnagy, from Wikipedia

Sometimes DevOps is regarded as a role that one person or a team can fill. Here’s some external motivation to learn DevOps skills: Salary by Developer Type in StackOverflow survey. You will not become a DevOps specialist solely from this course, but you will get the skills to help you navigate in the increasingly containerized world.

During this course we will focus mainly on the packaging, releasing and configuring of applications. You will not be asked to plan or create new software. We will go over Docker and a few technologies that you may encounter in your daily life, such as Redis and Postgres. See the StackOverflow survey for how closely these technologies correlate.

What is Docker?

“Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.” - from Wikipedia.

So, stripping away the jargon, we get two definitions:

  1. Docker is a set of tools to deliver software in containers.
  2. Containers are packages of software.

The above image illustrates how containers include the application and its dependencies. These containers are isolated so that they don’t interfere with each other or the software running outside of the containers. In case you need to interact with them or enable interactions between them, Docker offers tools to do so.

Benefits from containers

Containers package applications. Sounds simple, right? To illustrate the potential benefits let’s talk about different scenarios.

Scenario 1: Works on my machine

Let’s first take a closer look into what happens in web development without containers following the chain above starting from “Plan”.

First you plan an application. Then your team of 1-n developers creates the software. It works on your computer. It may even go through a testing pipeline working perfectly. You send it to the server and…

…it does not work.

This is known as the “works on my machine” problem. The only way to solve this is by finding out what in tarnation the developer had installed on their machine that made the application work.

Containers solve this problem by allowing the developer to personally run the application inside a container, which then includes all of the dependencies required for the app to work.

You may still occasionally hear about “works in my container” issues - these are often just usage errors.

Scenario 2: Isolated environments

You have 5 different Python applications. You need to deploy them to a server that already has an application requiring Python 2.7 and, of course, none of your applications works with 2.7. What do you do?

Since containers package the software with all of its dependencies, you package the existing app and all 5 new ones with their respective Python versions, and that's it.

I can only imagine the disaster that would result if you tried to run them side by side on the same machine without isolating the environments. It sounds more like a time bomb. Sometimes different parts of a system change over time, possibly breaking the application. These changes may be anything from an operating system update to changes in dependencies.
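To make the isolation concrete, here is a hedged sketch of containerizing one of those applications; the file name app.py and the exact base tag are assumptions for illustration:

```dockerfile
# Each application pins the exact Python version it needs in its own image.
FROM python:2.7
WORKDIR /usr/src/app
# Only this application's code goes into this image.
COPY app.py .
CMD ["python", "app.py"]
```

Each of the six applications would get its own Dockerfile with its own FROM line, so the Python versions never conflict on the server.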

Scenario 3: Development

You are brought into a dev team. They run a web app that uses other services when running: a Postgres database, MongoDB, Redis and a number of others. Simple enough, you install whatever is required to run the application and all of the services it depends on…

What a headache to start installing and then managing the development databases on your own machine.

Thankfully, by the time you are told to do that, you are already a Docker expert. With one command you get an isolated application, like Postgres or Mongo, running on your machine.

Scenario 4: Scaling

Starting and stopping a docker container has little overhead. But when you run your own Netflix or Facebook, you want to meet the changing demand. With some advanced tooling that we will learn about in parts 2 and 3, we can spin up multiple containers instantly and load balance traffic between them.

Container orchestration will be discussed in parts 2 and 3. But the simplest example: what happens when one application dies? The orchestration system notices it, splits traffic between the working replicas, and spins up a new container to replace the dead one.

Virtual machines

Isn’t there already a solution for this? Virtual Machines are not the same as Containers - they solve different problems. We will not be looking into Virtual Machines in this course. However, here’s a diagram to give you a rough idea of the difference.

The difference between a virtual machine and docker solutions after moving Application A to an incompatible system “Operating System B”. Running software on top of containers is almost as efficient as running it “natively” outside containers, at least when compared to virtual machines.

Running containers

You already have Docker installed so let’s run our first container!

The hello-world is a simple application that outputs “Hello from Docker!” and some additional info.

Simply run docker container run hello-world; the output will be the following:

$ docker container run hello-world
  Unable to find image 'hello-world:latest' locally
  latest: Pulling from library/hello-world
  b8dfde127a29: Pull complete
  Digest: sha256:308866a43596e83578c7dfa15e27a73011bdd402185a84c5cd7f32a88b501a24
  Status: Downloaded newer image for hello-world:latest
  Hello from Docker!
  This message shows that your installation appears to be working correctly.
  To generate this message, Docker took the following steps:
   1. The Docker client contacted the Docker daemon.
   2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
   3. The Docker daemon created a new container from that image which runs the
      executable that produces the output you are currently reading.
   4. The Docker daemon streamed that output to the Docker client, which sent it
      to your terminal.
  To try something more ambitious, you can run an Ubuntu container with:
   $ docker run -it ubuntu bash
  Share images, automate workflows, and more with a free Docker ID:
  For more examples and ideas, visit:

If you already ran hello-world previously it will skip the first 5 lines. Those lines tell us that the image “hello-world:latest” wasn't found locally, so it was downloaded. Try it again:

$ docker container run hello-world
  Hello from Docker!

It found the image locally, so it skipped right to running hello-world. So that's an image?

Image and containers

Since we already know what containers are it’s easier to explain images through them: Containers are instances of images. A basic mistake is to confuse images and containers.

Cooking metaphor:

  • An image is the pre-cooked, frozen treat.
  • A container is the delicious, warmed-up treat.

With the cooking metaphor, the difficult task is creating the frozen treat while warming it up is relatively easy. Images require some work and knowledge to be created, but when you know how to create images you can leverage the power of containers in your own projects.


A Docker image is a file. An image never changes; you cannot edit an existing image. Creating a new image happens by starting from a base image and adding new layers to it. We will talk about layers later, but for now think of images as immutable: they cannot be changed after they are created.

List all your images with docker image ls

$ docker image ls
  REPOSITORY      TAG      IMAGE ID       CREATED         SIZE
  hello-world     latest   d1165f221234   9 days ago      13.3kB

Containers are created from images, so when we ran hello-world twice, we downloaded one image and created two containers from that single image.

Well then, if images are used to create containers, where do images come from? This image file is built from an instructional file named Dockerfile that is parsed when you run docker image build.

A Dockerfile is a file, conventionally named Dockerfile, that looks something like this


FROM <image>:<tag>

RUN <install some dependencies>

CMD <command that is executed on `docker container run`>

and is the instruction set for building an image. We will look into Dockerfiles later when we get to build our own image.

If we go back to the cooking metaphor, Dockerfile is the recipe.


Containers only contain that which is required to execute an application; and you can start, stop and interact with them. They are isolated environments in the host machine with the ability to interact with each other and the host machine itself via defined methods (TCP/UDP).

List all your containers with docker container ls

$ docker container ls

Without the -a flag it will only print running containers. The hello-world containers we ran have already exited.

$ docker container ls -a
  CONTAINER ID   IMAGE           COMMAND      CREATED          STATUS                      PORTS     NAMES
  b7a53260b513   hello-world     "/hello"     5 minutes ago    Exited (0) 5 minutes ago              brave_bhabha
  1cd4cb01482d   hello-world     "/hello"     8 minutes ago    Exited (0) 8 minutes ago              vibrant_bell

Docker CLI basics

We are using the command line to interact with the “Docker Engine” that is made up of 3 parts: CLI, a REST API and docker daemon. When you run a command, e.g. docker container run, behind the scenes the client sends a request through the REST API to the docker daemon which takes care of images, containers and other resources.

You can read the docs for more information. But even though you will find over 50 commands in the documentation, only a handful of them are needed for general use. There's a list of the most commonly used basic commands at the end of this section.

One of them is already familiar: docker container run <image>, which instructs the daemon to create a container from the image, downloading the image first if it is not available locally.

Let’s remove the image since we will not need it anymore, docker image rm hello-world sounds about right. However, this should fail with the following error:

$ docker image rm hello-world 
  Error response from daemon: conflict: unable to remove repository reference "hello-world" (must force) - container <container ID> is using its referenced image <image ID>

This means that a container that was created from the image hello-world still exists and that removing hello-world could have consequences. So before removing images, you should have the referencing container removed first. Forcing is usually a bad idea, especially as we are still learning.

Run docker container ls -a to list all containers again.

$ docker container ls -a
  CONTAINER ID   IMAGE           COMMAND        CREATED          STATUS                      PORTS     NAMES
  b7a53260b513   hello-world     "/hello"       35 minutes ago   Exited (0) 35 minutes ago             brave_bhabha
  1cd4cb01482d   hello-world     "/hello"       41 minutes ago   Exited (0) 41 minutes ago             vibrant_bell

Notice that containers have a CONTAINER ID and NAME. The names are currently autogenerated. When we have a lot of different containers, we can use grep (or another similar utility) to filter the list:

$ docker container ls -a | grep hello-world
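Since this is ordinary shell plumbing, you can try the filtering on any text. As a stand-alone sketch (the listing below is a saved sample, not live docker output):

```shell
# Save a sample container listing to a file, then filter it the same
# way we would filter `docker container ls -a` output.
cat > sample-containers.txt << 'EOF'
b7a53260b513   hello-world   "/hello"   Exited (0) 5 minutes ago    brave_bhabha
1cd4cb01482d   hello-world   "/hello"   Exited (0) 8 minutes ago    vibrant_bell
c7749cf989f6   nginx         "nginx"    Up 34 seconds               blissful_wright
EOF

grep hello-world sample-containers.txt   # prints only the two hello-world lines
```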

Let’s remove the container with docker container rm command. It accepts a container’s name or ID as its arguments.

Notice that the command also works with the first few characters of an ID. For example, if a container’s ID is 3d4bab29dd67, you can use docker container rm 3d to delete it. Using the shorthand for the ID will not delete multiple containers, so if you have two IDs starting with 3d, a warning will be printed, and neither will be deleted. You can also use multiple arguments: docker container rm id1 id2 id3

If you have hundreds of stopped containers and you wish to delete them all, you should use docker container prune. Prune can also be used to remove “dangling” images with docker image prune. Dangling images are images that do not have a name and are not used. They can be created manually and are automatically generated during build. Removing them just saves some space.

And finally you can use docker system prune to clear almost everything. We aren’t yet familiar with the exceptions that docker system prune does not remove.

After removing all of the hello-world containers, run docker image rm hello-world to delete the image. You can use docker image ls to confirm that the image is not listed.

You can also use the image pull command to download images without running them: docker image pull hello-world

Let’s try starting a new container:

$ docker container run nginx

Notice how the command line appears to freeze after pulling and starting the container. This is because Nginx is now running in the current terminal, blocking the input. You can observe this with docker container ls from another terminal. Let’s exit by pressing control + c and try again with the -d flag.

$ docker container run -d nginx

The -d flag starts a container detached, meaning that it runs in the background. The container can be seen with

$ docker container ls
  CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
  c7749cf989f6        nginx               "nginx -g 'daemon of…"   35 seconds ago      Up 34 seconds       80/tcp              blissful_wright

Now if we try to remove it, it will fail:

$ docker container rm blissful_wright
  Error response from daemon: You cannot remove a running container c7749cf989f61353c1d433466d9ed6c45458291106e8131391af972c287fb0e5. Stop the container before attempting removal or force remove 

We should first stop the container using docker container stop blissful_wright, and then use rm.

Forcing is also a possibility, and in this case we can safely use docker container rm --force blissful_wright. Again, for both commands we could have used the ID or part of it instead of the name, e.g. c77.

It’s common for the docker daemon to become clogged over time with old images and containers.

Most used commands

| command | explanation | shorthand |
| --- | --- | --- |
| docker image ls | Lists all images | docker images |
| docker image rm <image> | Removes an image | docker rmi |
| docker image pull <image> | Pulls image from a docker registry | docker pull |
| docker container ls -a | Lists all containers | docker ps -a |
| docker container run <image> | Runs a container from an image | docker run |
| docker container rm <container> | Removes a container | docker rm |
| docker container stop <container> | Stops a container | docker stop |
| docker container exec <container> | Executes a command inside the container | docker exec |

For all of these, <container> can be either the container ID or the container name; the same goes for images. In the future we may use the shorthands in the material.

Some of the shorthands are legacy ways of doing the same thing. You can use either form.

1.1: Getting started

Since we already did “Hello, World!” in the material let’s do something else.

Start 3 containers, detached, from an image that does not automatically exit, such as nginx.

Stop 2 of the containers leaving 1 up.

Submit the output for docker ps -a which shows 2 stopped containers and one running.

1.2: Cleanup

We’ve left behind containers and an image that won’t be used anymore and are taking up space, as docker ps -as and docker images will reveal.

Clean the Docker daemon of all images and containers.

Submit the output for docker ps -a and docker images

Running and stopping containers

Next we will start using a more useful image than hello-world. We can run ubuntu just with docker run ubuntu.

$ docker run ubuntu
  Unable to find image 'ubuntu:latest' locally
  latest: Pulling from library/ubuntu
  83ee3a23efb7: Pull complete 
  db98fc6f11f0: Pull complete 
  f611acd52c6c: Pull complete 
  Digest: sha256:703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715
  Status: Downloaded newer image for ubuntu:latest

Anticlimactic, as nothing really happened. The image was downloaded and run, and that was the end of that. It actually tried to open a shell, but we need to add a few flags to interact with it. -t will create a tty.

$ docker run -t ubuntu

Now we’re inside the container, but if we input ls and press enter… nothing happens, because our terminal is not sending the input to the container. The -i flag instructs Docker to pass our STDIN to the container. If you’re stuck with the unresponsive terminal, you can just stop the container from another terminal.

$ docker run -it ubuntu
  root@2eb70ecf5789:/# ls
  bin  boot  dev  etc  home  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Great! Now we know at least 3 useful flags. -i (interactive), -t (tty) and -d (detached).

Let’s throw in a few more and run a container in the background:

$ docker run -d -it --name looper ubuntu sh -c 'while true; do date; sleep 1; done'

If you are a Command Prompt (Windows) user, you must use double quotes around the script, i.e. docker run -d -it --name looper ubuntu sh -c "while true; do date; sleep 1; done". The distinction between single and double quotes may haunt you later during the course.

  • The first part, docker run -d, should be familiar by now: run a container detached.

  • Next, -it is short for -i and -t. Also familiar: -it allows you to interact with the container through the command line.

  • Because we ran the container with --name looper, we can now reference it easily.

  • The image is ubuntu and what follows it is the command given to the container.
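The command given to the container is plain POSIX shell, so you can try a bounded variant of it directly on your host (three iterations and no sleep, instead of an infinite loop):

```shell
# Same shape as the looper's script, but it stops after 3 rounds.
sh -c 'i=0; while [ "$i" -lt 3 ]; do date; i=$((i+1)); done'
```

Each iteration prints one timestamp, which is exactly what we will see in the container's logs.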

And to check that it’s running, run docker container ls

Let’s follow -f the output of logs with

$ docker logs -f looper
  Thu Feb  4 15:51:29 UTC 2021
  Thu Feb  4 15:51:30 UTC 2021
  Thu Feb  4 15:51:31 UTC 2021

Let’s test pausing the looper without exiting or stopping it. In another terminal run docker pause looper. Notice how the logs output has paused in the first terminal. To unpause run docker unpause looper.

Keep the logs open and attach to the running container from the second terminal using ‘attach’:

$ docker attach looper 
  Mon Jan 15 19:26:54 UTC 2018 
  Mon Jan 15 19:26:55 UTC 2018 

Now you have the process logs (STDOUT) running in two terminals. Press control+c in the attached window. The container is stopped because its process is no longer running.

If we want to attach to a container while making sure we don’t close it from the other terminal we can specify to not attach STDIN with --no-stdin option. Let’s start the stopped container with docker start looper and attach to it with --no-stdin.

Then try control+c.

$ docker start looper 

$ docker attach --no-stdin looper 
  Mon Jan 15 19:27:54 UTC 2018 
  Mon Jan 15 19:27:55 UTC 2018 

The container will continue running. Control+c now only disconnects you from the STDOUT.

To enter a container, we can start a new process in it.

$ docker exec -it looper bash 

  root@2a49df3ba735:/# ps aux 

  root         1  0.2  0.0   2612  1512 pts/0    Ss+  12:36   0:00 sh -c while true; do date; sleep 1; done
  root        64  1.5  0.0   4112  3460 pts/1    Ss   12:36   0:00 bash
  root        79  0.0  0.0   2512   584 pts/0    S+   12:36   0:00 sleep 1
  root        80  0.0  0.0   5900  2844 pts/1    R+   12:36   0:00 ps aux

From the ps aux listing we can see that our bash process got the PID (process ID) 64.

Now that we’re inside the container it behaves as you’d expect from ubuntu, and we can exit the container with exit and then either kill or stop the container.

Our looper won’t stop for a SIGTERM signal sent by a stop command. To terminate the process, stop follows the SIGTERM with a SIGKILL after a grace period. In this case, it’s simply faster to use kill.

$ docker kill looper 
$ docker rm looper 

Running the previous two commands is basically equivalent to running docker rm --force looper
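Why isn't SIGTERM always enough? Because a process may trap it — and even ignore it. You can reproduce the behaviour without docker at all; a minimal sketch in plain shell:

```shell
# Start a shell that ignores SIGTERM, redirecting its output to a file.
sh -c 'trap "" TERM; echo started; sleep 2; echo survived' > out.txt &
pid=$!

sleep 1
kill -TERM "$pid"   # SIGTERM is ignored because of the trap
wait "$pid"         # the process still finishes and prints "survived"

cat out.txt
```

SIGKILL, on the other hand, cannot be trapped or ignored, which is why stop's fallback (and docker kill) always works.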

Let’s start another process with -it and add --rm in order to remove it automatically after it has exited. The --rm ensures that there are no garbage containers left behind. It also means that docker start can not be used to start the container after it has exited.

$ docker run -d --rm -it --name looper-it ubuntu sh -c 'while true; do date; sleep 1; done' 

Now let’s attach to the container and hit control+p, control+q to detach us from the STDOUT.

$ docker attach looper-it 

  Mon Jan 15 19:50:42 UTC 2018 
  Mon Jan 15 19:50:43 UTC 2018 
  ^P^Qread escape sequence

Instead, if we had used ctrl+c, it would have sent a kill signal, followed by the removal of the container, since we specified --rm in the docker run command.

1.3: Secret message

Now that we’ve warmed up it’s time to get inside a container while it’s running!

Image devopsdockeruh/simple-web-service:ubuntu will start a container that outputs logs into a file. Go inside the container and use tail -f ./text.log to follow the logs. Every 10 seconds the clock will send you a “secret message”.

Submit the secret message and command(s) given as your answer.

1.4: Missing dependencies

Start a ubuntu image with the process sh -c 'echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;'

You will notice that a few things required for proper execution are missing. Be sure to remind yourself which flags to use so that the read actually waits for input.

Note also that curl is NOT installed in the container yet. You will have to install it from inside of the container.

Test inputting into the application. It should respond with something like


  <title>301 Moved Permanently</title>

  <h1>Moved Permanently</h1>
  <p>The document has moved <a href="">here</a>.</p>


This time return the command you used to start process and the command(s) you used to fix the ensuing problems.

This exercise has multiple solutions; if the curl works, it’s done. Can you figure out other (smart) solutions?

In depth dive to images

Images are the basic building blocks for containers and other images. When you “containerize” an application you work towards creating the image.

By learning what images are and how to create them you are ready to start utilizing containers in your own projects.

Where do the images come from?

When running a command such as docker run hello-world, Docker will automatically search Docker Hub for the image if it is not found locally.

This means that we can pull and run any public image from Docker’s servers. For example‚ if we wanted to start an instance of the PostgreSQL database, we could just run docker run postgres, which would pull the postgres image and run a container from it.

We can search for images in the Docker Hub with docker search. Try running docker search hello-world.

The search finds plenty of results, and prints each image’s name, short description, number of stars, and “official” and “automated” statuses.

$ docker search hello-world

  NAME                         DESCRIPTION    STARS   OFFICIAL   AUTOMATED
  hello-world                  Hello World!…  699     [OK]
  kitematic/hello-world-nginx  A light-weig…  112
  tutum/hello-world            Image to tes…  56                 [OK]

Let’s examine the list.

The first result, hello-world, is an official image. Official images are curated and reviewed by Docker, Inc. and are usually actively maintained by the authors. They are built from repositories in the docker-library.

When browsing the CLI’s search results, you can recognize an official image from the “[OK]” in the “OFFICIAL” column and also from the fact that the image’s name has no prefix (i.e. no organization or user before the name). When browsing Docker Hub, the page will show “Docker Official Images” as the repository, instead of a user or organization. For example, see the Docker Hub page of the hello-world image.

The third result, tutum/hello-world, is marked as “automated”. This means that the image is automatically built from the source repository. Its Docker Hub page shows its previous “Builds” and a link to the image’s “Source Repository” (in this case, to GitHub) from which Docker Hub builds the image.

The second result, kitematic/hello-world-nginx, is neither an official nor an automated image. We can’t know what the image is built from, since its Docker Hub page has no links to any repositories. The only thing its Docker Hub page reveals is that the image is 6 years old. Even if the image’s “Overview” section had links to a repository, we would have no guarantees that the published image was built from that source.

There are also other Docker registries competing with Docker Hub, such as Quay. However, docker search will only search Docker Hub, so we will need to use the registry’s web pages to search for images. Take a look at the page of the nordstrom/hello-world image on Quay. The page shows the command to use to pull the image, which reveals that we can also pull images from hosts other than Docker Hub:

docker pull quay.io/nordstrom/hello-world

So, if the host’s name (here: quay.io) is omitted, it will pull from Docker Hub by default.

NOTE: The above command may fail with a manifest error, as the default tag latest is not present in the image. Specifying a correct tag, for example with docker pull and an explicit :tag, will pull the image without any errors.

A detailed look into an image

Let’s go back to a more relevant image than hello-world. The ubuntu image is one of the most common Docker images to use as a base for your own image.

Let’s pull Ubuntu and look at the first lines:

$ docker pull ubuntu
  Using default tag: latest
  latest: Pulling from library/ubuntu

Since we didn’t specify a tag, Docker defaulted to latest, which is usually the latest image built and pushed to the registry. However, in this case, the repository’s README says that the ubuntu:latest tag points to the “latest LTS” instead since that’s the version recommended for general use.

Images can be tagged to save different versions of the same image. You define an image’s tag by adding :<tag> after the image’s name.

Ubuntu’s Docker Hub page reveals that there’s a tag named 18.04 which promises us that the image is based on Ubuntu 18.04. Let’s pull that as well:

$ docker pull ubuntu:18.04 

  18.04: Pulling from library/ubuntu 
  c2ca09a1934b: Downloading [============================================>      ]  34.25MB/38.64MB 
  d6c3619d2153: Download complete 
  0efe07335a04: Download complete 
  6b1bb01b3a3b: Download complete 
  43a98c187399: Download complete 

Images are composed of different layers that are downloaded in parallel to speed up the download. Images being made of layers also have other aspects and we will talk about them in part 3.

We can also tag images locally for convenience, for example, docker tag ubuntu:18.04 ubuntu:bionic creates the tag ubuntu:bionic which refers to ubuntu:18.04.

Tagging is also a way to “rename” images. Run docker tag ubuntu:18.04 fav_distro:bionic and check docker images to see what effects the command had.

To summarize, an image name may consist of 3 parts plus a tag, usually like the following: registry/organisation/image:tag. But it may be as short as ubuntu, in which case the registry defaults to Docker Hub, the organisation to library and the tag to latest. The organisation may also be a user, but calling it an organisation may be clearer.
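As a hedged illustration of those defaults, here is a toy shell function (not how docker actually parses names, which is more involved) that expands a short name into the full registry/organisation/image:tag form:

```shell
# Toy sketch: apply Docker Hub's naming defaults to a short image name.
expand_image() {
  name="$1"
  case "$name" in
    *:*) ;;                       # a tag is already present
    *)   name="$name:latest" ;;   # default tag
  esac
  case "$name" in
    */*/*) ;;                                 # full registry/organisation/image
    */*)   name="docker.io/$name" ;;          # default registry
    *)     name="docker.io/library/$name" ;;  # default registry and organisation
  esac
  echo "$name"
}

expand_image ubuntu         # → docker.io/library/ubuntu:latest
expand_image ubuntu:18.04   # → docker.io/library/ubuntu:18.04
```

docker.io is the hostname behind the Docker Hub default; the two-case expansion here is a simplification for illustration.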

1.5: Sizes of images

In a previous exercise we used devopsdockeruh/simple-web-service:ubuntu.

Here is the same application but instead of ubuntu is using alpine: devopsdockeruh/simple-web-service:alpine.

Pull both images and compare the image sizes. Go inside the alpine container and make sure the secret message functionality is the same. The Alpine version doesn’t have bash, but it has sh.

1.6: Hello Docker Hub

Run docker run -it devopsdockeruh/pull_exercise.

It will wait for your input. Navigate through Docker Hub to find the docs and Dockerfile that were used to create the image.

Read the Dockerfile and/or docs to learn what input will get the application to answer a “secret message”.

Submit the secret message and command(s) given to get it as your answer.

Building images

Finally, we get to build our own images and get to talk about Dockerfile and why it’s so great.

Dockerfile is simply a file that contains the build instructions for an image. You define what should be included in the image with different instructions. We’ll learn about the best practices here by creating one.

Let’s take the simplest possible application and containerize it first. Here is a script, which we will call hello.sh:


#!/bin/sh
echo "Hello, docker!"

First, we will test that it even works. Create the file, add execution permissions and run it:

$ chmod +x hello.sh

$ ./hello.sh
  Hello, docker!

If you’re using Windows you can skip these two commands and add the chmod +x hello.sh step to the Dockerfile instead.

And now to create an image from it. We’ll have to create the Dockerfile that declares all of the required dependencies. At the least, it depends on something that can run shell scripts. So I will choose Alpine, a small Linux distribution that is often used to create small images.

Even though we’re using alpine here, you can use ubuntu during exercises. Ubuntu images by default contain more tools to debug what is wrong when something doesn’t work. In part 3 we will talk more about why small images are important.

We will choose exactly which version of a given image we want to use. This makes it so that we don’t accidentally update through a breaking change, and we know which images need updating when there are known security vulnerabilities in old images.

Now create a file named “Dockerfile” and let’s put the following instructions inside it:


# Start from the alpine image that is smaller but no fancy tools
FROM alpine:3.13

# Use /usr/src/app as our workdir. The following instructions will be executed in this location.
WORKDIR /usr/src/app

# Copy the hello.sh file from this location to /usr/src/app/ creating /usr/src/app/hello.sh
COPY hello.sh .

# Alternatively, if we skipped chmod earlier, we can add execution permissions during the build.
# RUN chmod +x hello.sh

# When running docker run the command will be ./hello.sh
CMD ./hello.sh

If you’re now getting “/bin/sh: ./hello.sh: Permission denied”, it’s because the chmod +x was skipped earlier. You can simply uncomment the RUN instruction between the COPY and CMD instructions.

Great! By default docker build will look for a file named Dockerfile. Now we can run docker build, telling it where to build (.) and giving the image a name (-t <name>):

$ docker build . -t hello-docker
  Sending build context to Docker daemon  54.78kB
  Step 1/4 : FROM alpine:3.13
   ---> d6e46aa2470d
  Step 2/4 : WORKDIR /usr/src/app
   ---> Running in bd0b4e349cb4
  Removing intermediate container bd0b4e349cb4
   ---> b382ca27c182
  Step 3/4 : COPY hello.sh .
   ---> 7fbc1b6e45ab
  Step 4/4 : CMD ./hello.sh
   ---> Running in 24f28f026b3f
  Removing intermediate container 24f28f026b3f
   ---> 444f21cf7bd5
  Successfully built 444f21cf7bd5
  Successfully tagged hello-docker:latest

$ docker images
  REPOSITORY            TAG          IMAGE ID       CREATED         SIZE
  hello-docker          latest       444f21cf7bd5   2 minutes ago   5.57MB

Now executing the application is as simple as running docker run hello-docker. Try it! During the build we see that there are multiple steps with hashes and intermediate containers. The steps here represent the layers so that each step is a new layer to the image.

The layers have multiple functions. We often try to limit the number of layers to save on storage space, but layers can also work as a cache during build time. If we just edit the last lines of the Dockerfile, the build command can start from the previous layer and skip straight to the section that has changed. COPY automatically detects changes in the files, so if we change the copied file it’ll run from step 3/4, skipping steps 1 and 2. This can be used to create faster build pipelines. We’ll talk more about optimization in part 3.
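As a sketch of why ordering matters for the cache (the instructions here are illustrative, and run.sh is a hypothetical name, not part of our hello-docker image):

```dockerfile
# Cached after the first build:
FROM alpine:3.13
# Stays cached as long as the lines above are unchanged:
WORKDIR /usr/src/app
# Re-run only when the copied files change - and every layer below it
# is then re-run too, which is why stable instructions go near the top:
COPY . .
CMD ./run.sh
```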

The intermediate containers are containers created from the image in which the command is executed. Then the changed state is stored into an image. We can do a similar task and create a new layer manually. Create a new file called additional.txt and let’s copy it inside the container, learning a new trick while we’re at it! We’ll need two terminals, so I will prefix the lines with 1 and 2 to indicate which terminal they belong to.

1 $ docker run -it hello-docker sh
1 /usr/src/app # 

Now we’re inside of the container. We replaced the CMD we defined earlier with sh and used -i and -t to start the container so that we can interact with it. In the second terminal we will copy the file here.

2 $ docker ps
    9c06b95e3e85   hello-docker   "sh"      4 minutes ago   Up 4 minutes             zen_rosalind

2 $ touch additional.txt
2 $ docker cp ./additional.txt zen_rosalind:/usr/src/app/

I created the file with touch right before copying it in. Now it’s there and we can confirm that with ls:

1 /usr/src/app # ls
1 additional.txt

Great! Now we’ve made a change to the container. We can use diff to check what has changed:

2 $ docker diff zen_rosalind
    C /usr
    C /usr/src
    C /usr/src/app
    A /usr/src/app/additional.txt
    C /root
    A /root/.ash_history

The character in front of the file name indicates the type of the change in the container’s filesystem: A = added, D = deleted, C = changed. The additional.txt was created, and our shell session created .ash_history. Next we will save the changes as a new layer!

2 $ docker commit zen_rosalind hello-docker-additional
2 $ docker images
    REPOSITORY                   TAG          IMAGE ID       CREATED          SIZE
    hello-docker-additional      latest       2f63baa355ce   3 seconds ago    5.57MB
    hello-docker                 latest       444f21cf7bd5   31 minutes ago   5.57MB

We will actually never use docker commit again. This is because defining the changes in the Dockerfile is a much more sustainable method of managing them: no magic actions or scripts, just a Dockerfile that can be version controlled.

Let’s do just that and create hello-docker with v2 tag that includes additional.txt.


# Start from the alpine image
FROM alpine:3.13

# Use /usr/src/app as our workdir. The following instructions will be executed in this location.
WORKDIR /usr/src/app

# Copy the file from this location to /usr/src/app/ creating /usr/src/app/

# Execute a command with `/bin/sh -c` prefix.
RUN touch additional.txt

# When running docker run the command will be ./
CMD ./

Build it with docker build . -t hello-docker:v2 and we are done! Let’s compare the output of ls:

$ docker run hello-docker-additional ls

$ docker run hello-docker:v2 ls                            

Now we know that all instructions in a Dockerfile except CMD (and one other that we will learn about soon) are executed during build time. CMD is executed when we call docker run, unless we overwrite it.
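A throwaway Dockerfile (purely illustrative) makes the distinction concrete: the RUN line executes once during docker build, while the CMD line executes on every docker run.

```dockerfile
FROM alpine:3.13
# Executed at build time - this timestamp is baked into the image:
RUN date > /built-at.txt
# Executed at run time - prints the build timestamp next to the current one:
CMD cat /built-at.txt && date
```

Building once and running the image twice would show the same build timestamp but two different run timestamps.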

1.7: Two line Dockerfile

By default our devopsdockeruh/simple-web-service:alpine doesn’t have a CMD. It instead uses ENTRYPOINT to declare which application is run.

We’ll talk more about ENTRYPOINT in the next section, but you already know that the last argument in docker run can be used to give a command.

As you might’ve noticed it doesn’t start the web service even though the name is “simple-web-service”. A command is needed to start the server!

Try docker run devopsdockeruh/simple-web-service:alpine hello. The application reads the argument but will inform that hello isn’t accepted.

In this exercise create a Dockerfile and use FROM and CMD to create a brand new image that automatically runs the server. Tag the new image as “web-server”.

Return the Dockerfile and the command you used to run the container.

Running the built “web-server” image should look like this:

$ docker run web-server
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /*path                    --> server.Start.func1 (3 handlers)
[GIN-debug] Listening and serving HTTP on :8080

We don’t have any method of accessing the web service yet. As such confirming that the console output is the same will suffice.

1.8: Image for script

Now that we know how to create and build Dockerfiles we can improve previous works.

Create a Dockerfile for a new image that starts from ubuntu:18.04.

Make a script file on your local machine with content such as echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;. Transfer this file to an image and run it inside the container using CMD. Build the image with the tag “curler”.

Run the new curler image with the correct flags and give it input. The output should match that of exercise 1.4.

Return both Dockerfile and the command you used to run the container.

More complex image

Next, we will start moving towards a more meaningful image. youtube-dl is a program that downloads youtube videos. Let’s add it to an image - but this time, we will change our process. Instead of our current process, where we add things to the Dockerfile and hope it works, let’s try another approach: we will open up an interactive session and test things before “storing” them in our Dockerfile. By following the youtube-dl install instructions we will see that:

$ docker run -it ubuntu:18.04
  root@8c587232a608:/# curl -L -o /usr/local/bin/youtube-dl 
  bash: curl: command not found 

..and, as we already know, curl is not installed - let’s add curl with apt-get again.

$ apt-get update && apt-get install -y curl 
$ curl -L -o /usr/local/bin/youtube-dl 

At some point, you may have noticed that sudo is not installed either, but since we are root we don’t need it.

Next, we will add permissions and run it:

$ chmod a+rx /usr/local/bin/youtube-dl 
$ youtube-dl
  /usr/bin/env: 'python': No such file or directory 

Okay - On the top of the youtube-dl download page we notice this message:

Remember youtube-dl requires Python version 2.6, 2.7, or 3.2+ to work except for Windows exe.

So we will add python:

$ apt-get install -y python

And let’s run it again

$ youtube-dl 

  WARNING: Assuming --restrict-filenames since file system encoding cannot encode all characters. Set the LC_ALL environment variable to fix this. 
  Usage: youtube-dl [OPTIONS] URL [URL...] 

  youtube-dl: error: You must provide at least one URL. 
  Type youtube-dl --help to see a list of all options. 

It works (we just need to give an URL), but we notice that it outputs a warning about LC_ALL. In a regular Ubuntu desktop/server install the localization settings are (usually) set, but in this image they are not set, as we can see by running env in our container. To fix this without installing additional locales, see this:

$ LC_ALL=C.UTF-8 youtube-dl 

And it works! Let’s persist it for our session and try downloading a video:

$ export LC_ALL=C.UTF-8 
$ youtube-dl

So now we know exactly what we need. Starting FROM ubuntu:18.04, let’s add the steps to our Dockerfile. We should always try to keep the rows most prone to change at the bottom; this way we can preserve our cached layers - a handy practice to speed up creating the initial version of a Dockerfile when it has time-consuming operations like downloads. We also add WORKDIR, which ensures the videos will be downloaded there.

FROM ubuntu:18.04

WORKDIR /mydir

RUN apt-get update && apt-get install -y curl python 
RUN curl -L -o /usr/local/bin/youtube-dl 
RUN chmod a+x /usr/local/bin/youtube-dl 

# Store the locale setting directly in the image instead of exporting it
ENV LC_ALL=C.UTF-8


CMD ["/usr/local/bin/youtube-dl"] 
  • Instead of using RUN export LC_ALL=C.UTF-8 we store the environment directly in the image with ENV

  • We also override bash as our image command (set on the base image) with youtube-dl itself. This will not work, but let’s see why.

When we build this as youtube-dl and run it:

$ docker build -t youtube-dl . 

$ docker run youtube-dl 

  Usage: youtube-dl [OPTIONS] URL [URL...] 

  youtube-dl: error: You must provide at least one URL. 

  Type youtube-dl --help to see a list of all options. 

So far so good, but now the natural way to use this image would be to give the URL as an argument:

$ docker run youtube-dl 

  /usr/local/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"\": stat no such file or directory": unknown. 

  ERRO[0001] error waiting for container: context canceled 

As we now know, the argument we gave is replacing the command, or CMD. We need a way to have something before the command. Luckily we have a way to do this: we can use ENTRYPOINT to define the main executable, and then Docker will combine our run arguments with it.

FROM ubuntu:18.04

WORKDIR /mydir

RUN apt-get update && apt-get install -y curl python 
RUN curl -L -o /usr/local/bin/youtube-dl 
RUN chmod a+x /usr/local/bin/youtube-dl 

# Store the locale setting directly in the image instead of exporting it
ENV LC_ALL=C.UTF-8


# Replacing CMD with ENTRYPOINT
ENTRYPOINT ["/usr/local/bin/youtube-dl"] 

And now it works like it should:

$ docker build -t youtube-dl . 
$ docker run youtube-dl

  [Imgur] JY5tHqr: Downloading webpage
  [download] Destination: Imgur-JY5tHqr.mp4
  [download] 100% of 190.20KiB in 00:00

With ENTRYPOINT, docker run now executed /usr/local/bin/youtube-dl combined with our argument inside the container!

ENTRYPOINT vs CMD can be confusing - in a properly set up image, such as our youtube-dl, the command represents an argument list for the entrypoint. By default the entrypoint is /bin/sh -c, which is used if no entrypoint is set. This is why giving a path to a script file as CMD works: you’re giving the file as a parameter to /bin/sh -c.

In addition, there are two ways to set them: exec form and shell form. We’ve been using the exec form where the command itself is executed. In shell form the command that is executed is wrapped with /bin/sh -c - it’s useful when you need to evaluate environment variables in the command like $MYSQL_PASSWORD or similar.

In the shell form the command is provided as a string without brackets. In the exec form the command and its arguments are provided as a list (with brackets); see the table below:

Dockerfile instructions                    Resulting command

ENTRYPOINT /bin/ping -c 3
CMD localhost                              /bin/sh -c '/bin/ping -c 3' /bin/sh -c localhost

ENTRYPOINT ["/bin/ping","-c","3"]
CMD localhost                              /bin/ping -c 3 /bin/sh -c localhost

ENTRYPOINT /bin/ping -c 3
CMD ["localhost"]                          /bin/sh -c '/bin/ping -c 3' localhost

ENTRYPOINT ["/bin/ping","-c","3"]
CMD ["localhost"]                          /bin/ping -c 3 localhost
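The practical difference also shows up with environment variables; a sketch (MYSQL_PASSWORD stands in for any variable):

```dockerfile
ENV MYSQL_PASSWORD=secret
# Shell form: wrapped in /bin/sh -c, so $MYSQL_PASSWORD is expanded at run time.
CMD echo "password is $MYSQL_PASSWORD"
# Exec form: no shell is involved, so the variable would be printed literally.
# CMD ["echo", "password is $MYSQL_PASSWORD"]
```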

As the command at the end of docker run will be the CMD, we want to use ENTRYPOINT to specify what to run, and CMD to specify which arguments (in our case the url) to pass to it.

Most of the time we can ignore ENTRYPOINT when building our images and only use CMD. For example, the ubuntu image leaves ENTRYPOINT unset and defaults the CMD to bash, so we do not have to worry about it. This also gives us the convenience of overwriting the CMD easily, for example with bash, to go inside the container.

We can test how some other projects do this. Let’s try python:

$ docker pull python:3.8
$ docker run -it python:3.8
  Python 3.8.2 (default, Mar 31 2020, 15:23:55)
  [GCC 8.3.0] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> print("Hello, World!")
  Hello, World!
  >>> exit()

$ docker run -it python:3.8 --version
  docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "--version": executable file not found in $PATH: unknown.

$ docker run -it python:3.8 bash

From this experimentation we learned that the image does not set python as the ENTRYPOINT; python is only the CMD, which we can overwrite, here with bash. If the ENTRYPOINT were python we’d be able to run --version directly. We can create our own image for personal use, as we did in a previous exercise, with a new Dockerfile.

FROM python:3.8
ENTRYPOINT ["python3"]
CMD ["--help"]

The result is an image that has python as ENTRYPOINT, and you can add commands at the end, for example --version to see the version. Without overwriting the command, it will output the help.

Now we have two problems with the youtube-dl project:

  • Major: The downloaded files stay in the container

  • Minor: Our container build process creates many layers resulting in increased image size

We will fix the major issue first. The minor issue will get our attention in part 3.

By inspecting docker container ls -a we can see all our previous runs. When we filter this list with

$ docker container ls -a --last 3 

  CONTAINER ID        IMAGE               COMMAND                   CREATED                  STATUS                          PORTS               NAMES 
  be9fdbcafb23        youtube-dl          "/usr/local/bin/yout…"    Less than a second ago   Exited (0) About a minute ago                       determined_elion 
  b61e4029f997        f2210c2591a1        "/bin/sh -c \"/usr/lo…"   Less than a second ago   Exited (2) About a minute ago                       vigorous_bardeen 
  326bb4f5af1e        f2210c2591a1        "/bin/sh -c \"/usr/lo…"   About a minute ago       Exited (2) 3 minutes ago                            hardcore_carson 

We see that the last container was be9fdbcafb23 or determined_elion for us humans.

$ docker diff determined_elion 

  C /mydir 
  A /mydir/Imgur-JY5tHqr.mp4 

Let’s try the docker cp command to copy the file. We can use quotes if the filename has spaces.

$ docker cp "determined_elion:/mydir/Imgur-JY5tHqr.mp4" . 

And now we have our file locally. Sadly, this is not sufficient to fix our issue, so let’s continue:

Volumes: bind mount

By bind mounting a host (our machine) folder to the container we can get the file directly onto our machine. Let’s start another run with the -v option, which requires an absolute path. We mount our current folder as /mydir in our container, overwriting everything that we have put in that folder in our Dockerfile.

$ docker run -v "$(pwd):/mydir" youtube-dl

So a volume is simply a folder (or a file) that is shared between the host machine and the container. If a file in a volume is modified by a program running inside the container, the changes are preserved when the container is shut down, as the file exists on the host machine. This is the main use for volumes, as otherwise the files wouldn’t be accessible when restarting the container. Volumes can also be used to share files between containers and to run programs that are able to reload changed files.

In our youtube-dl case we wanted to mount the whole directory, since the files are fairly randomly named. If we wish to create a volume with only a single file we can do that as well by pointing -v at the file directly; that way we could edit the file locally and have it change in the container (and vice versa). Note also that -v creates a directory if the file does not exist.
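As a sketch of the single-file case (the image and file names here are hypothetical):

```shell
# Mount one file instead of a whole directory; note the absolute path.
$ docker run -v "$(pwd)/settings.conf:/usr/src/app/settings.conf" my-image

# Append :ro to make the mount read-only inside the container.
$ docker run -v "$(pwd)/settings.conf:/usr/src/app/settings.conf:ro" my-image
```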

1.9: Volumes

In this exercise we won’t create a new Dockerfile.

Image devopsdockeruh/simple-web-service creates a timestamp every two seconds to /usr/src/app/text.log when it’s not given a command. Start the container with bind mount so that the logs are created into your filesystem.

Submit the command you used to complete the exercise.

Allowing external connections into containers

The details of how programs communicate are not covered in this course; Operating Systems and Networking courses explain these subjects in detail. In this course you only need to know the following simplified basics:

  • Sending messages: Programs can send messages to URL addresses consisting of three parts: http is the protocol, then comes an ip address, and finally a port such as 3000. Note the ip part could also be a hostname; the loopback address is also called localhost, so instead you could use http://localhost:3000.

  • Receiving messages: Programs can be assigned to listen to any available port. If a program is listening for traffic on port 3000, and a message is sent to that port, it will receive it (and possibly process it).

The address and hostname localhost are special ones, they refer to the machine or container itself, so if you are on a container and send message to localhost, the target is the same container. Similarly, if you are sending the request from outside of a container to localhost, the target is your machine.

You can map your host machine port to a container port.

Opening a connection from outside world to a docker container happens in two steps:

  • Exposing port

  • Publishing port

Exposing a container port means that you tell Docker that the container listens to a certain port. This doesn’t actually do much, except help us humans with the configuration.

Publishing a port means that Docker will map host ports to the container ports.

To expose a port, add the line EXPOSE <port> to your Dockerfile.

To publish a port, run the container with -p <host-port>:<container-port>.

If you leave out the host port and only specify the container port, docker will automatically choose a free port as the host port:

$ docker run -p 4567 app-in-port

We could also limit connections to a certain protocol only, e.g. udp, by adding the protocol at the end: EXPOSE <port>/udp and -p <host-port>:<container-port>/udp.
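Putting the two steps together (app-in-port is the example image from above; the port numbers are arbitrary):

```shell
# Assuming the Dockerfile contains: EXPOSE 3000

# Publish host port 8080 to container port 3000:
$ docker run -p 8080:3000 app-in-port

# Restrict the mapping to loopback only, so other machines cannot connect:
$ docker run -p 127.0.0.1:8080:3000 app-in-port

# When Docker chose the host port for us, check which one it picked:
$ docker port <container-name>
```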

1.10: Ports open

In this exercise we won’t create a new Dockerfile.

Image devopsdockeruh/simple-web-service will start a web service in port 8080 when given the command “server”. From 1.7 you should have an image ready for this. Use -p flag to access the contents with your browser. The output to your browser should be something like: { message: "You connected to the following path: ...

Submit your used commands for this exercise.

Technology agnostic

As we’ve already seen, it should be possible to containerize almost any project. As we are in between Dev and Ops, let’s pretend again that some developer teammates of ours did an application with a README that instructs what to install and how to run the application. Now we, as the container experts, can containerize it in seconds. Open the project, read through the README, and think about how to transform it into a Dockerfile. Thanks to the README we should be able to decipher what needs to be done even if we have no clue about the language or technology!

We will need to clone the repository, which you may have already done. Once we have the project, let’s start with a Dockerfile. We know that we need to install ruby and whatever dependencies it has. Let’s place the Dockerfile in the project root.


# We need ruby 3.0.0. I found this from docker hub
FROM ruby:3.0.0


WORKDIR /usr/src/app

# The README tells us the application uses port 3000
EXPOSE 3000

Ok, these are the basics: we have FROM a ruby version, EXPOSE 3000 was mentioned at the bottom of the README, and WORKDIR /usr/src/app is the convention.

# Install node, found from the internet
RUN curl -sL | bash -
RUN apt install -y nodejs

# Install yarn, found from readme
RUN npm install -g yarn

# Install the correct bundler version
RUN gem install bundler:2.2.11

Nodejs required a little bit of googling, but the instructions sound promising. The next steps were told to us by the README. We won’t need to copy anything from outside of the container to run these.

# Copy all of the content from the project to the image
COPY . .

# Install all dependencies
RUN bundle install

# We pick the production mode since we have no intention of developing the software inside the container.
# Run database migrations by following instructions from README
RUN rails db:migrate RAILS_ENV=production

# Precompile assets by following instructions from README
RUN rake assets:precompile 

# And finally the command to run the application
CMD ["rails", "s", "-e", "production"]

And finally, we copy the project, install all of the dependencies and follow the instructions in the README.

Ok. Let’s see how well monkeying the README worked for us: docker build . -t rails-project && docker run -p 3000:3000 rails-project. After a while of waiting, the application starts in port 3000 in production mode.

1.11: Spring

Let’s create a Dockerfile for a Java Spring project: github page

The setup should be straightforward with the README instructions. Tips to get you started:

Use the openjdk image, FROM openjdk:<tag>, to get java instead of installing it manually. Pick the tag by using the README and the Docker Hub page.

You’ve completed the exercise when you see a ‘Success’ message in your browser.

Submit the Dockerfile you used to run the container.

The next three exercises will start a larger project that we will configure in parts 2 and 3. They will require you to use everything you’ve learned up until now.

1.12: Hello, frontend!

This exercise is mandatory

A good developer creates well written READMEs that can be used to create Dockerfiles with ease.

Clone, fork or download the project from

Create a Dockerfile for the project (example-frontend) and give a command so that the project runs in a docker container with port 5000 exposed and published, so that when you start the container and navigate to http://localhost:5000 you will see a message if you’re successful.

Submit the Dockerfile.

As in other exercises, do not alter the code of the project

TIP: The project has install instructions in README.

TIP: Note that the app starts to accept connections only after “Accepting connections at http://localhost:5000” has been printed to the screen; this takes a few seconds.

TIP: You do not have to install anything new outside containers.

1.13: Hello, backend!

This exercise is mandatory

Clone, fork or download a project from

Create a Dockerfile for the project (example-backend) and give a command so that the project runs in a docker container with port 8080 published.

When you start the container and navigate to http://localhost:8080/ping you should get a “pong” as response.

Submit the Dockerfile and the command used.

Do not alter the code of the project

1.14: Environment

This exercise is mandatory

Start both frontend-example and backend-example with correct ports exposed and add ENV to Dockerfile with necessary information from both READMEs (front,back).

Ignore the backend configurations until the frontend sends requests to <backend-url>/ping when you press the button.

You know that the configuration is ready when the button for 1.14 of frontend-example responds and turns green.

Do not alter the code of either project

Submit the edited Dockerfiles and commands used to run.

The frontend will first talk to your browser. Then the code will be executed from your browser and that will send a message to backend.

TIP: When configuring web applications keep browser developer console ALWAYS open, F12 or cmd+shift+I when the browser window is open. Information about configuring cross origin requests is in README of the backend project.

TIP: Developer console has multiple views, most important ones are Console and Network. Exploring the Network tab can give you a lot of information on where messages are being sent and what is received as response!

Publishing projects

Go to Docker Hub to create an account. You can configure Docker Hub to build your images for you, but using push works as well.

Let’s publish the youtube-dl image. Log in, navigate to your dashboard and press Create Repository. The namespace can be either your personal account or an organization account. For now, let’s stick to personal accounts and write something descriptive, such as youtube-dl, as the repository name. We will need to remember it in part 2.

Set visibility to public.

And the last thing we need is to authenticate our push by logging in:

$ docker login

Next, you will need to rename the image to include your username, and then you can push it:

$ docker tag youtube-dl <username>/<repository>

$ docker push <username>/<repository>
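The tag defaults to latest on both sides; you can also push an explicit version tag so that users can pin it, just as we pinned alpine:3.13 earlier. A sketch with placeholder names:

```shell
# An explicit tag is appended after a colon; without it, :latest is used.
$ docker tag youtube-dl <username>/<repository>:1.0
$ docker push <username>/<repository>:1.0
```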

1.15: Homework

Create Dockerfile for an application or any other dockerised project in any of your own repositories and publish it to Docker Hub. This can be any project except clones / forks of backend-example or frontend-example.

For this exercise to be complete you have to provide the link to the project in docker hub, make sure you at least have a basic description and instructions for how to run the application in a README that’s available through your submission.

1.16: Heroku

Pushing to heroku happens in a similar way. A project has already been prepared at devopsdockeruh/heroku-example, so let’s pull that first. Note that the image of the project is quite large.

Go to Heroku, create a new app there and install the heroku CLI. You can find additional instructions in the Deploy tab under Container Registry. Tag the pulled image as instructed there; the process-type can be web for this exercise. The app should be your project name in heroku.

Then push the image to heroku with docker push and release it using the heroku CLI: heroku container:release web --app <app> (you might need to login first: heroku container:login)

Heroku might take some time to get the application up and running.

For this exercise return the url in which the released application is.

You could also use the heroku CLI to build and push, but since we didn’t want to build anything this time it was easier to just tag the image.


We started by learning what Docker container and image mean. Basically we started from an empty ubuntu with nothing installed in it. It’s also possible to start from something else, but for now ubuntu has been enough.

This meant that we had to install almost everything manually, either from the command line or by using a setup file “Dockerfile” to install whatever we needed for the task at hand.

The process of dockerizing the applications meant a bit of configuration on our part, but now that we’ve done it and built the image anyone can pick up and run the application; no possible dependency or versioning issues.

Understanding the architecture and the technologies used is also part of making correct choices with the setup. This led us to read the READMEs and documentation of the software involved in the setup, not just Docker. Fortunately, in real life it’s often us who are developing and creating the Dockerfile.

The starting and stopping of containers is a bit annoying, not to mention running two applications at the same time. If only there was some way, a tool, to make it simpler… to compose.

Remember to mark your exercises into the submission application! Instructions on how and what to submit are on the exercises page.

ECTS Credits

Enrolling after each part is required for the ECTS credits. Now that you have completed part 1, use the following link to enroll in this course:

If you wish to stop at this part and not do the following parts, follow the instructions at the bottom of the exercises page.


  • Enrollment for the course through the Open University is possible until Dec 12, 2021.

  • Credits for the course are only available to those students who have successfully enrolled on the course through the Open University and have completed the course according to the instructions.

Electronic enrollment is available if you meet one of the following criteria:

  • You have a Finnish personal identity number

  • you are a student at the University of Helsinki, or

  • you are a student at a HAKA member institution.

If you are not a student in Finland and want to enroll on the course and receive ECTS credits, read the guide under “Registration without a Finnish personal identity code or online banking ID at the University’s Admissions Services”: