Part 2

This part introduces container orchestration with docker-compose and related concepts such as Docker networks. By the end of this part you will be able to:

  • run multi-container applications with docker-compose
  • connect services to each other over Docker networks
  • persist data with volumes in docker-compose
  • scale services and place them behind a reverse proxy
  • use containers in a development environment

Migrating to docker-compose.yml

Even with a simple image, we’ve already been dealing with plenty of command line options when building, pushing and running the image.

Next we will switch to a tool called docker-compose to manage these.

docker-compose is designed to simplify running multi-container applications using a single command.

In the folder where we have our Dockerfile with the following content:

FROM ubuntu:18.04

WORKDIR /mydir

RUN apt-get update
RUN apt-get install -y curl python
RUN curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
RUN chmod a+x /usr/local/bin/youtube-dl

ENV LC_ALL=C.UTF-8

ENTRYPOINT ["/usr/local/bin/youtube-dl"]

we create a file called docker-compose.yml:

version: '3' 

services: 
    youtube-dl-ubuntu:  
      image: <username>/<repositoryname>
      build: . 

The version setting is not very strict; it just needs to be above 2, because below that the syntax is significantly different. See https://docs.docker.com/compose/compose-file/ for more info. The build key’s value can be set to a path (such as ubuntu), to an object with context and dockerfile keys, or to the URL of a git repository.
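
For example, the object form would look roughly like this (a sketch that just restates our current setup explicitly):

version: '3'

services:
    youtube-dl-ubuntu:
      image: <username>/<repositoryname>
      build:
        context: .
        dockerfile: Dockerfile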

Now we can build and push with just these commands:

$ docker-compose build
$ docker-compose push

Volumes in docker-compose

To run the image as we did previously, we will need to add the volume bind mounts. Volumes in docker-compose are defined with the syntax location-in-host:location-in-container. Compose can work without an absolute path:

version: '3.5' 

services: 

    youtube-dl-ubuntu: 
      image: <username>/<repositoryname> 
      build: . 
      volumes: 
        - .:/mydir
      container_name: youtube-dl

With container_name we can also give the container a name it will use when running. The service name can then be used to run it:

$ docker-compose run youtube-dl-ubuntu https://imgur.com/JY5tHqr
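
For comparison, this replaces roughly the following plain docker command from before (a sketch):

$ docker run -v "$(pwd):/mydir" <username>/<repositoryname> https://imgur.com/JY5tHqr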

2.1

Exercises in part 2 should be done using docker-compose

Without a command, devopsdockeruh/simple-web-service will create logs into its /usr/src/app/text.log.

Create a docker-compose.yml file that starts devopsdockeruh/simple-web-service and saves the logs into your filesystem.

Submit the docker-compose.yml and make sure that it works simply by running docker-compose up and checking that the log file appears on your filesystem.

Web services in docker-compose

Compose is really meant for running web services, so let’s move from simple binary wrappers to running an HTTP service.

https://github.com/jwilder/whoami is a simple service that prints the current container id (hostname).

$ docker container run -d -p 8000:8000 jwilder/whoami 
  736ab83847bb12dddd8b09969433f3a02d64d5b0be48f7a5c59a594e3a6a3541 

Navigate with a browser or curl to localhost:8000; both will answer with the id.

Take down the container so that it’s not blocking port 8000.

$ docker container stop 736ab83847bb
$ docker container rm 736ab83847bb  

Let’s create a new folder and a docker-compose file, whoami/docker-compose.yml, based on the command line options above.

version: '3.5'  

services: 
    whoami: 
      image: jwilder/whoami 
      ports: 
        - 8000:8000 

Test it:

$ docker-compose up -d 
$ curl localhost:8000 

Environment variables can also be given to the containers in docker-compose. Listing only the name (VARIABLE) passes the variable’s value from the host environment to the container.

version: '3.5'

services:
  backend:
      image:  
      environment:
        - VARIABLE=VALUE
        - VARIABLE 
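
For example, assuming the backend service above, the pass-through form can be used like this (a sketch with a made-up value):

$ VARIABLE=somevalue docker-compose up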

2.2

Read from the documentation how to add a command to docker-compose.yml.

The familiar image devopsdockeruh/simple-web-service can be used to start a web service.

Create a docker-compose.yml and use it to start the service so that you can use it with your browser.

Submit the docker-compose.yml and make sure that it works simply by running docker-compose up.

2.3

This exercise is mandatory

As we saw previously, starting an application with two programs was not trivial and the commands got a bit long.

In the previous part we created Dockerfiles for both frontend and backend. Next, simplify the usage into one docker-compose.yml.

Configure the backend and frontend from part 1 to work in docker-compose.

Submit the docker-compose.yml

Docker networking

Connecting two services such as a server and its database in docker can be achieved with docker-compose networks. In addition to starting the services listed in docker-compose.yml, the tool automatically creates a network with a DNS server and joins both containers into it. Each container gets a DNS name matching its service name, so containers can reference each other simply by name.

Consider two services in a single network: webapp and webapp-helper. The webapp-helper has a server listening for requests on port 3000 that webapp wants to access. Because they are defined in the same docker-compose.yml file, the access is trivial: docker-compose has already taken care of creating the network, and webapp can simply send a request to webapp-helper:3000. The internal DNS translates the name to the correct container, and the ports do not have to be published outside of the network.
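
A minimal sketch of such a docker-compose.yml, with hypothetical image names just for illustration:

version: '3.5'

services:
  webapp:
    image: example/webapp # hypothetical image name
  webapp-helper:
    image: example/webapp-helper # hypothetical image name, listens on port 3000 inside the network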

2.4

Add Redis to the example backend.

Redis is used to speed up some operations. The backend uses a slow API to get information. You can test the slow API by requesting /ping?redis=true with curl. The frontend program has a button to test this.

Configure a redis container to cache information for the backend. Use the documentation if needed when configuring: https://hub.docker.com/_/redis/

The backend README should have all the information needed to connect.

When you’ve configured it correctly, the button will turn green.

Submit the docker-compose.yml

restart: unless-stopped can help if Redis takes a while to get ready.

TIP: If you’re stuck check out tips and tricks

You can also manually define the network and its name. A major benefit of defining the network is that it makes it easy to set up a configuration where other containers connect to an existing network as an external network. This is useful when a container needs to interact with a container defined in another docker-compose file.

Networks are defined in docker-compose.yml, and services can be added to a network by adding networks into the definition of the service:

version: "3.8"

services:
  db:
    image: postgres:13.2-alpine
    networks:
      - database-network # Name in this docker-compose file

networks:
  database-network: # Name in this docker-compose file
    name: database-network # Name that will be the actual name of the network

This defines a network called database-network which is created with docker-compose up and removed with docker-compose down.
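
You can verify this yourself: after docker-compose up the network shows up in the listing (output abbreviated):

$ docker network ls
  NETWORK ID          NAME                  DRIVER              SCOPE
  ...                 database-network      bridge              local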

To connect to an external network (possibly defined in another docker-compose.yml):

version: "3.8"

services:
  db:
    image: backend-image
    networks:
      - database-network

networks:
  database-network:
    external:
      name: database-network # Must match the actual name of the network

By default all services are added to a network called default. The default network can be configured, which makes it possible to connect to an external network by default as well:

version: "3.8"

services:
  db:
    image: backend-image

networks:
  default:
    external:
      name: database-network # Must match the actual name of the network

Scaling

Compose can also scale the service to run multiple instances:

$ docker-compose up --scale whoami=3 

  WARNING: The "whoami" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash. 

  Starting whoami_whoami_1 ... done 
  Creating whoami_whoami_2 ... error 
  Creating whoami_whoami_3 ... error 

The command fails due to a port clash, as each instance will attempt to bind to the same host port (8000).

We can get around this by only specifying the container port. As mentioned in part 1, when leaving the host port unspecified, Docker will automatically choose a free port.

Update the ports definition in docker-compose.yml:

    ports: 
      - 8000

Then run the command again:

$ docker-compose up --scale whoami=3
  Starting whoami_whoami_1 ... done
  Creating whoami_whoami_2 ... done
  Creating whoami_whoami_3 ... done

All three instances are now running and listening on random host ports. We can use docker-compose port to find out which ports the instances are bound to.

$ docker-compose port --index 1 whoami 8000 
  0.0.0.0:32770 

$ docker-compose port --index 2 whoami 8000 
  0.0.0.0:32769 

$ docker-compose port --index 3 whoami 8000 
  0.0.0.0:32768 

We can now curl these ports:

$ curl 0.0.0.0:32769 
  I'm 536e11304357 

$ curl 0.0.0.0:32768 
  I'm 1ae20cd990f7 

In a server environment you’d often have a load balancer in front of the service. For a local environment (or a single server) one good solution is https://github.com/jwilder/nginx-proxy, which configures nginx based on the docker daemon as containers are started and stopped.

Let’s add the proxy to our compose file and remove the port bindings from the whoami service. We’ll mount our docker.sock inside the container in :ro (read-only) mode.

version: '3.5' 

services: 
    whoami: 
      image: jwilder/whoami 
    proxy: 
      image: jwilder/nginx-proxy 
      volumes: 
        - /var/run/docker.sock:/tmp/docker.sock:ro 
      ports: 
        - 80:80 

When we start this and test:

$ docker-compose up -d --scale whoami=3 
$ curl localhost:80 
  <html> 
  <head><title>503 Service Temporarily Unavailable</title></head> 
  <body bgcolor="white"> 
  <center><h1>503 Service Temporarily Unavailable</h1></center> 
  <hr><center>nginx/1.13.8</center> 
  </body> 
  </html> 

It’s “working”, but nginx just doesn’t know which service we want. The nginx-proxy works with two environment variables: VIRTUAL_HOST and VIRTUAL_PORT. VIRTUAL_PORT is not needed if the service has EXPOSE in its docker image. We can see that jwilder/whoami sets it: https://github.com/jwilder/whoami/blob/master/Dockerfile#L9

Note:

  • Mac users with the M1 chip may see the following error message: runtime: failed to create new OS thread. In this case you can use the docker image ninanung/nginx-proxy instead, which offers a temporary fix until jwilder/nginx-proxy is updated to support M1 Macs.

The domain colasloth.com is configured so that all subdomains point to 127.0.0.1. More information about how this works can be found at colasloth.github.io, but in brief it’s a simple DNS “hack”. Several other domains serving the same purpose exist, such as localtest.me, lvh.me, and vcap.me, to name a few. In any case, let’s use colasloth.com here:

version: '3.5' 

services: 
    whoami: 
      image: jwilder/whoami 
      environment: 
       - VIRTUAL_HOST=whoami.colasloth.com 
    proxy: 
      image: jwilder/nginx-proxy 
      volumes: 
        - /var/run/docker.sock:/tmp/docker.sock:ro 
      ports: 
        - 80:80 

Now the proxy works:

$ docker-compose up -d --scale whoami=3 
$ curl whoami.colasloth.com 
  I'm f6f85f4848a8 
$ curl whoami.colasloth.com 
  I'm 740dc0de1954 

Let’s add a couple more containers behind the same proxy. We can use the official nginx image to serve a simple static web page. We don’t even have to build the container images; we can just mount the content into the container. Let’s prepare some content for two services called “hello” and “world”.

$ echo "hello" > hello.html
$ echo "world" > world.html

Then add these services to the docker-compose.yml file, mounting just the content as index.html in the default nginx path:

    hello: 
      image: nginx:1.19-alpine
      volumes: 
        - ./hello.html:/usr/share/nginx/html/index.html:ro 
      environment: 
        - VIRTUAL_HOST=hello.colasloth.com 
    world: 
      image: nginx:1.19-alpine
      volumes: 
        - ./world.html:/usr/share/nginx/html/index.html:ro 
      environment: 
        - VIRTUAL_HOST=world.colasloth.com 

Now let’s test:

$ docker-compose up -d --scale whoami=3 
$ curl hello.colasloth.com 
  hello 

$ curl world.colasloth.com 
  world 

$ curl whoami.colasloth.com 
  I'm f6f85f4848a8 

$ curl whoami.colasloth.com 
  I'm 740dc0de1954 

Now we have a basic single machine hosting setup up and running.

Try updating hello.html without restarting the container. Does it work?
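
If you want to try it from the command line, something like this works (a sketch):

$ echo "hello again" > hello.html
$ curl hello.colasloth.com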

2.5

A project over at https://github.com/docker-hy/material-applications/tree/main/scaling-exercise has a barely working application. Go ahead and clone it for yourself. The project already includes docker-compose.yml, so you can start it by running docker-compose up.

The application should be accessible through http://localhost:3000. However, it doesn’t work well enough, and I’ve added a load balancer for scaling. Your task is to scale the compute containers so that the button in the application turns green.

This exercise was created with Sasu Mäkinen

Please return the used commands for this exercise.

Larger application with volumes

Next we’re going to set up Redmine, a PostgreSQL database and Adminer. All of them have official docker images available, as we can see from Redmine, Postgres and Adminer respectively. The officiality of the images is not that important; it just means we can expect them to have some support. We could also, for example, set up WordPress or MediaWiki inside containers in the same manner if you’re interested in running existing applications inside docker. You could even set up your own personal Sentry.

In https://hub.docker.com/_/redmine there is a list of different variants in “Supported tags and respective Dockerfile links” - most likely for this testing we can use any of the images. From “Environment Variables” we can see that all variants can use the REDMINE_DB_POSTGRES or REDMINE_DB_MYSQL environment variables to set up the database, or they will fall back to SQLite. So before moving forward, let’s set up Postgres.

In https://hub.docker.com/_/postgres there’s a sample compose file under “via docker stack deploy or docker-compose” - let’s strip that down to:

version: '3.5' 

services:
  db:
    image: postgres:13.2-alpine
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: example
    container_name: db_redmine

Note:

  • restart: always was changed to unless-stopped, which keeps the container running unless it is explicitly stopped. With always a manually stopped container is started again after a reboot, for example.

Under “Caveats - Where to Store Data” we can see that /var/lib/postgresql/data can be mounted separately to preserve data in an easy-to-locate directory, or we can let Docker manage the storage. We could use a bind mount like previously, but let’s first see what “let Docker manage the storage” means. Let’s run the docker-compose file without setting anything new:

$ docker-compose up 

  Creating network "redmine_default" with the default driver
  Creating db_redmine ... done
  Attaching to db_redmine
  db_redmine | The files belonging to this database system will be owned by user "postgres".
  ...
  db_redmine | 2019-03-03 10:27:22.975 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
  db_redmine | 2019-03-03 10:27:22.975 UTC [1] LOG:  listening on IPv6 address "::", port 5432
  db_redmine | 2019-03-03 10:27:22.979 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
  db_redmine | 2019-03-03 10:27:22.995 UTC [50] LOG:  database system was shut down at 2019-03-03 10:27:22 UTC
  db_redmine | 2019-03-03 10:27:23.002 UTC [1] LOG:  database system is ready to accept connections

The image initializes the data files on the first start. Let’s terminate the container with ^C. Compose uses the current directory as a prefix for container and volume names so that different projects don’t clash. The prefix can be overridden with the COMPOSE_PROJECT_NAME environment variable if needed.
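
For example, the prefix could be overridden like this (a sketch):

$ COMPOSE_PROJECT_NAME=myproject docker-compose up -d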

Let’s check whether a volume was created with docker container inspect db_redmine | grep -A 5 Mounts

"Mounts": [
    {
        "Type": "volume",
        "Name": "794c9d8db6b5e643865c8364bf3b807b4165291f02508404ff3309b8ffde01df",
        "Source": "/var/lib/docker/volumes/794c9d8db6b5e643865c8364bf3b807b4165291f02508404ff3309b8ffde01df/_data",
        "Destination": "/var/lib/postgresql/data",

Now if we check out docker volume ls we can see that a volume with name “794c9d8db6b5e643865c8364bf3b807b4165291f02508404ff3309b8ffde01df” exists.

$ docker volume ls
  DRIVER              VOLUME NAME
  local               794c9d8db6b5e643865c8364bf3b807b4165291f02508404ff3309b8ffde01df

There may be more volumes on your machine. If you want to get rid of them you can use docker volume prune. Let’s put the whole “application” down now with docker-compose down. Then, this time, let’s define a named volume for the data:

version: '3.5'

services:
  db:
    image: postgres:13.2-alpine
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: example
    container_name: db_redmine
    volumes:
      - database:/var/lib/postgresql/data

volumes:
  database:

Running docker-compose up again and listing the volumes now shows a human-readable name:

$ docker volume ls
  DRIVER              VOLUME NAME
  local               redmine_database

$ docker container inspect db_redmine | grep -A 5 Mounts
"Mounts": [
    {
        "Type": "volume",
        "Name": "redmine_database",
        "Source": "/var/lib/docker/volumes/ongoing_redminedata/_data",
        "Destination": "/var/lib/postgresql/data",

Ok, this looks a bit more human-readable, even if it isn’t any more accessible than bind mounts. Now that Postgres is running, let’s add Redmine. The container seems to require just two environment variables.

redmine: 
  image: redmine:4.1-alpine
  environment: 
    - REDMINE_DB_POSTGRES=db
    - REDMINE_DB_PASSWORD=example
  ports: 
    - 9999:3000 
  depends_on: 
    - db

Notice the depends_on declaration. This makes sure that the db service is started first. depends_on does not guarantee that the database is up, just that the service is started first. The Postgres server is accessible with the DNS name “db” from the redmine service, as discussed in the “Docker networking” section.
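
If the application really needs the database to accept connections before it starts, one option is a healthcheck; newer Compose versions can also gate the dependent service on it with a depends_on condition. A sketch, assuming a recent Compose version (this is not something our file above requires):

  db:
    # ...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  redmine:
    # ...
    depends_on:
      db:
        condition: service_healthy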

Now when you run docker-compose up you will see a bunch of database migrations running first.

  redmine_1  | I, [2019-03-03T10:59:20.956936 #25]  INFO -- : Migrating to Setup (1)
  redmine_1  | == 1 Setup: migrating =========================================================
  ...
  redmine_1  | [2019-03-03 11:01:10] INFO  ruby 2.6.1 (2019-01-30) [x86_64-linux]
  redmine_1  | [2019-03-03 11:01:10] INFO  WEBrick::HTTPServer#start: pid=1 port=3000

We can see that the image also creates files in /usr/src/redmine/files that need to be persisted. The Dockerfile has a line where it declares that a volume should be created. Docker will create the volume, but it will be handled as an anonymous volume that is not managed by Compose, so it’s better to be explicit about it. With that in mind, our final file should look like this:

version: '3.5'

services:
  db:
    image: postgres:13.2-alpine
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: example
    container_name: db_redmine
    volumes:
      - database:/var/lib/postgresql/data
  redmine:
    image: redmine:4.1-alpine
    environment:
      - REDMINE_DB_POSTGRES=db
      - REDMINE_DB_PASSWORD=example
    ports:
      - 9999:3000
    volumes:
      - files:/usr/src/redmine/files
    depends_on:
      - db

volumes:
  database:
  files:

Now we can use the application with our browser through http://localhost:9999. After making some changes inside the application, we can inspect what changed in the container compared to the image and check that no extra meaningful files got written to it:

$ docker container diff $(docker-compose ps -q redmine) 
  C /usr/src/redmine/config/environment.rb
  ...
  C /usr/src/redmine/tmp/pdf

Probably not.

Next, we will add Adminer to the application. We could also just use psql to interact with the postgres database with docker container exec -it db_redmine psql -U postgres. (The command executes psql -U postgres inside the container.) The same method can be used to create backups with pg_dump: docker container exec db_redmine pg_dump -U postgres > redmine.dump.
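
Restoring from such a dump works in the other direction (a sketch):

$ docker container exec -i db_redmine psql -U postgres < redmine.dump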

This step is straightforward; we actually had the instructions open back before we set up Postgres. Checking the documentation, we see that the following will suffice:

adminer:
  image: adminer:4
  restart: always
  environment:
    - ADMINER_DESIGN=galkaev
  ports:
    - 8083:8080

Now when we run the application we can access Adminer at http://localhost:8083. Setting up Adminer is straightforward since it can access the database through the Docker network.

2.6

Add a database to the example backend.

Let’s use a Postgres database to save the messages. We won’t need to configure a volume since the official postgres image sets a default volume for us. Let’s use the postgres image documentation to our advantage when configuring: https://hub.docker.com/_/postgres/. Especially the “Environment Variables” section is of interest.

The backend README should have all the information needed to connect.

The button won’t turn green but you can send messages to yourself.

Submit the docker-compose.yml

TIP: When configuring the database, you might need to destroy the automatically created volumes. Use command docker volume prune, docker volume ls and docker volume rm to remove unused volumes when testing. Make sure to remove containers that depend on them beforehand.

restart: unless-stopped can help if Postgres takes a while to get ready.

2.7

Configure a machine learning project.

Look into the machine learning project created with Python and React, split into three parts: frontend, backend and training.

Note that the training requires 2 volumes and that the backend should share the volume /src/model with training.

The frontend will display on http://localhost:3000 and the application will tell whether the subject of an image looks more like a cucumber or a moped.

Submit the docker-compose.yml

This exercise is known to have broken for some attendees depending on their CPU. The error looks something like “Illegal instruction (core dumped)”. Try downgrading / upgrading the tensorflow version found in requirements.txt or join the Discord channel and message Jakousa#1337.

Note that the generated model is a toy and will not produce good results.

It will take SEVERAL minutes to build the docker images, download training pictures and train the classifying model.

This exercise was created by Sasu Mäkinen

2.8

Add Nginx to the example frontend + backend.

Accessing your service from an arbitrary port is counterintuitive since browsers use 80 (http) and 443 (https) by default. And having the service refer to two origins when there’s only one backend isn’t desirable either. We will skip the SSL setup for https/443.

Nginx will function as a reverse proxy for us (see the image above). Requests arriving at anything other than /api will be proxied to the frontend container, and requests to /api will be proxied to the backend container.

At the end you should see that the frontend is accessible simply by going to http://localhost and the button works. Other buttons may have stopped working, do not worry about them.

As we will not start configuring reverse proxies on this course, you can use a simple config file:

The following file should be placed at /etc/nginx/nginx.conf inside the nginx container. You can use a file bind mount where the contents of the file are the following:

  events { worker_connections 1024; }

  http {
    server {
      listen 80;

      location / {
        proxy_pass _frontend-connection-url_;
      }

      location /api/ {
        proxy_set_header Host $host;
        proxy_pass _backend-connection-url_;
      }
    }
  }

Nginx, backend and frontend should be connected in the same network. See the image above for how the services are connected.

Submit the docker-compose.yml

Tips for making sure the backend connection works:

Try using your browser to access http://localhost/api/ping and see if it answers pong.

It might be an nginx configuration problem: add a trailing / to the backend URL in the nginx.conf.

2.9

The Postgres image uses a volume by default. Manually define volumes for the database in a convenient location such as ./database. Use the image documentation (postgres) to help you with the task. You may do the same for Redis as well.

After you have configured the volume:

  • Save a few messages through the frontend
  • Run docker-compose down
  • Run docker-compose up and see that the messages are available after refreshing browser
  • Run docker-compose down and delete the volume folder manually
  • Run docker-compose up and the data should be gone

Maybe it would be simpler to back them up now that you know where they are.

TIP: To save you the trouble of testing all of those steps, just look into the folder before trying the steps. If it’s empty after docker-compose up then something is wrong.

TIP: Since you may have broken the buttons in the nginx exercise, you should test with a version of docker-compose.yml that doesn’t break the buttons.

Submit the docker-compose.yml

2.10

Some buttons may have stopped working in the frontend + backend project. Make sure that every button for exercises works.

This may need a peek into the browser’s developer console again, like back in part 1. The buttons of the nginx exercise and the first button behave differently, but you want them to match.

If you had to do any changes explain what you had to change.

Submit the docker-compose.yml and both Dockerfiles.

Containers in development

Containers are not only great in production. They can be used in development environments as well and offer a number of benefits. The same works-on-my-machine problem is often faced when a new developer joins the team. Not to mention the headache of switching runtime versions or a local database!

In our team at the University of Helsinki, the target for all project development environments is to have the setup so that a new developer only needs to install docker to get started. Of course, the target is usually missed as you need things like your favorite text editor.

Even if your application is not completely containerized during development, containers can be of use. For example, say you need MongoDB version 4.0.22 available on port 5656. It’s now a one-liner: “docker run -p 5656:27017 mongo:4.0.22” (MongoDB uses 27017 as its default port).

Let’s containerize my Node development environment. This will need some insider knowledge of Node, but here is a simplified explanation if you’re not familiar: libraries are defined in package.json and package-lock.json and installed with npm install. npm is the Node package manager and node is the runtime. To run the application with the packages, package.json defines a script that instructs node to run index.js, the main/entry file; the script is executed with npm start. The application already includes code to watch for changes in the filesystem and restart if any changes are detected.
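
For reference, the relevant part of the project’s package.json looks roughly like this (abbreviated sketch, reconstructed from the start-up log shown later):

{
  "name": "dev-env",
  "version": "1.0.0",
  "scripts": {
    "start": "nodemon index.js"
  }
}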

The project “node-dev-env” is here: https://github.com/docker-hy/material-applications/tree/main/node-dev-env. I already included a development Dockerfile and a helpful docker-compose.yml.

Dockerfile

FROM node:14

WORKDIR /usr/src/app

COPY package* ./

RUN npm install

docker-compose.yml

version: '3.7'

services:
  node-dev-env:
    build: . # Build with the Dockerfile here
    command: npm start # Run npm start as the command
    ports: 
      - 3000:3000 # The app uses port 3000 by default, publish it as 3000
    volumes:
      - ./:/usr/src/app # Let us modify the contents of the container locally
      - node_modules:/usr/src/app/node_modules # A bit of node magic: use the node_modules installed in the image instead of hiding them with the bind mount above
    container_name: node-dev-env # Container name for convenience

volumes: # This is required for the node_modules named volume
  node_modules:

And that’s it. The bind mount makes our source code available inside the container, so the command runs the application we’re developing. Let’s try it!

$ docker-compose up
Creating network "node-dev-env_default" with the default driver
Creating volume "node-dev-env_node_modules" with default driver
Building node-dev-env
Step 1/4 : FROM node:14
...

Attaching to node-dev-env
node-dev-env    | 
node-dev-env    | > dev-env@1.0.0 start /usr/src/app
node-dev-env    | > nodemon index.js
...

node-dev-env    | App listening in port 3000

Great! The initial start up is a bit slow, but subsequent starts are a lot faster now that the image is already built. We can rebuild the whole environment whenever we want with docker-compose up --build.

Let’s see if the application works. Use a browser to access http://localhost:3000; it should do a simple plus calculation based on the query params.

However, the calculation doesn’t make sense! Let’s fix the bug. I bet it’s this line right here: https://github.com/docker-hy/material-applications/blob/main/node-dev-env/index.js#L5

When I change the line on my host machine, the application instantly notices that the files have changed:

$ docker-compose up
Starting node-dev-env ... done
Attaching to node-dev-env
node-dev-env    | 
node-dev-env    | > dev-env@1.0.0 start /usr/src/app
node-dev-env    | > nodemon index.js
node-dev-env    | 
node-dev-env    | [nodemon] 2.0.7
node-dev-env    | [nodemon] to restart at any time, enter `rs`
node-dev-env    | [nodemon] watching path(s): *.*
node-dev-env    | [nodemon] watching extensions: js,mjs,json
node-dev-env    | [nodemon] starting `node index.js`
node-dev-env    | App listening in port 3000
node-dev-env    | [nodemon] restarting due to changes...
node-dev-env    | [nodemon] starting `node index.js`
node-dev-env    | App listening in port 3000

And now a page refresh shows that our code change fixed the issue. The development environment works.

The next exercise can be extremely easy or extremely hard. Feel free to have fun with it.

2.11

Select a project that you already have and start utilizing containers in the development environment.

Explain what you have done. It can be anything, ranging from a supporting docker-compose.yml, to having services containerized, to developing inside a container.

Conclusion

Again we started from the ground up, learning how to translate a non-compose setup into docker-compose.yml and then running with it. Compose also gave us a few handy, completely new features that we didn’t even know we needed, such as networks.

Now we’ve learned how to set up vastly more complex applications, with up to 5 different programs running at the same time, exposing only the needed ports to the outside world (or even only to our own machine).

Are we ready for production yet? Short answer: no. Long answer: it depends on the situation. Good thing we have part 3.

Remember to mark your exercises into the submission application! Instructions on how and what to submit are on the exercises page.

ECTS Credits

Enrolling after each part is required for the ECTS credits. Now that you have completed part 2, use the following link to enroll in this course:

If you wish to stop at this part and not do the following parts, follow the instructions at the bottom of the exercises page.

NOTE!

  • Enrollment for the course through the Open University is possible until Dec 12, 2021.

  • Credits for the course are only available to those students who have successfully enrolled on the course through the Open University and have completed the course according to the instructions.

  • Electronic enrollment is available if you meet one of the following criteria:

  • you have a Finnish personal identity number,

  • you are a student at the University of Helsinki, or

  • you are a student at a HAKA member institution.

If you are not a student in Finland and want to enroll on the course and receive ECTS credits, read the guide under “Registration without a Finnish personal identity code or online banking ID at the University’s Admissions Services”: https://www.helsinki.fi/en/open-university/studying/beginning-your-studies/registration-and-fees