A Crash Course on Docker: Learn to Swim with the Big Fish

Original author: Adnan Rahić. Translated by Alexander Mikhnenko.

This is the quick-start guide you have been looking for.

If you followed software trends last year, you must be tired of hearing the term Docker. You are most likely overwhelmed by the sheer number of developers talking about containers, isolated virtual machines, hypervisors, and other voodoo magic related to DevOps. Today we'll figure it all out. It's time to finally understand what containers as a service (CaaS) are and why they are needed.


  1. "Why do I need it?"
    • An overview of all the key terms.
    • Why we need CaaS and Docker.
  2. Quick start.
    • Install Docker.
    • Create a container.
  3. A real-world scenario.
    • Creating an nginx container to host a static website.
    • Learning how to use build tools to automate Docker commands.

"Why do I need it?"

Not so long ago, I asked myself the same question. Having been a stubborn developer for a long time, I finally sat down and embraced the awesomeness of containers. Here is my take on why you should give them a try.


Docker is software for creating containerized applications. Containers are meant to be small and stateless, carrying no environment configuration beyond what the piece of software inside needs to run.

A container image is a lightweight, standalone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
The official Docker website.
In short, a container is a tiny virtual machine with just enough functionality to run the application placed inside it.

Virtual machine?

The name "virtual machine" (VM) speaks for itself: it is a virtual version of a real machine, mimicking a machine's hardware inside a larger machine. This means you can run many virtual machines on one large server. Have you ever seen the movie "Inception"? A virtual machine is something like that: a machine within a machine. The piece of software that allows a VM to work is called a hypervisor.


Is your brain boiling over from all the new terms? Take a moment; it's worth it. Virtual machines only work thanks to the hypervisor, special software that allows one physical machine to host several different virtual machines. From the outside, it looks as if the VMs run their own programs and use the host's hardware directly. In reality, it is the hypervisor that allocates the host's resources to each virtual machine.

Note: if you have ever tried to install virtualization software (such as VirtualBox) and failed, it was most likely because virtualization was not enabled in your computer's BIOS. This has happened to me more times than I care to remember. *nervous laughter*

If you're a nerd like me, here's an awesome post on what a hypervisor is.

Virtualization 101: What is a Hypervisor?

Answering your questions ...

So what is CaaS really for? We have been using virtual machines for a long time; why have containers suddenly become the good guys? No one said virtual machines are bad. They're just hard to manage.

DevOps is, as a rule, complicated, and it takes a dedicated person working on it full time. Virtual machines take up a lot of disk space and RAM and need constant configuration, not to mention the experience required to manage them properly.

To avoid doing the same work twice, automate it

With Docker, you can forget about routine configuration and environment setup and focus on writing code instead. With Docker Hub, you can grab pre-built images and put them to work in no time.

But the biggest advantage is a homogeneous environment. Instead of installing a list of dependencies to run your application, you only need to install Docker. Docker is cross-platform, so every developer on the team works in the same environment. The same goes for your development, staging, and production servers. That's cool! No more "it works on my machine."

Quick start.

Let's start with the installation. Surprisingly, you need only one piece of software installed on your development machine, and everything will work fine: Docker is all you need.

Install Docker.

Fortunately, the installation process is very simple. Here is how it goes on Ubuntu.

$ sudo apt-get update
$ sudo apt-get install -y docker.io

That's all there is to it. To make sure Docker is running, you can run one more command.

$ sudo systemctl status docker

Docker should return the results.

● docker.service – Docker Application Container Engine
  Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Active: active (running) since Sun 2018-01-14 12:42:17 CET; 4h 46min ago
Main PID: 2156 (dockerd)
   Tasks: 26
  Memory: 63.0M
     CPU: 1min 57.541s
  CGroup: /system.slice/docker.service
          ├─2156 /usr/bin/dockerd -H fd://
          └─2204 docker-containerd --config /var/run/docker/containerd/containerd.toml

If the service is stopped, run a combo of two commands to start Docker and make sure it launches at boot.

$ sudo systemctl start docker && sudo systemctl enable docker

With a default Docker installation, you need to run the docker command as sudo. But if you add your user to the docker group, you can run the command without sudo.

$ sudo usermod -aG docker ${USER}
$ su - ${USER}

Running these commands adds your user to the docker group. To check, run $ id -nG. If docker appears in the list of groups printed to the terminal, everything was done correctly.
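If you want to script that check, here is a minimal pure-shell sketch; it only inspects group membership and is safe to run anywhere (docker is the default group name created by the Docker packages):

```shell
# Check whether the current user already belongs to the docker group.
groups_list=$(id -nG)
case " $groups_list " in
  *" docker "*) group_status="yes" ;;
  *)            group_status="no" ;;
esac
echo "docker group membership: $group_status"
```

If it prints "no", rerun the usermod command above and log in again so the new group takes effect.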

What about Mac and Windows? Fortunately, the installation is just as simple: you download a file that launches an installation wizard. It could not be easier. Grab the installer for Mac or for Windows from the Docker website.

Container deployment

Once Docker is installed and running, we can experiment a bit. The first four commands you need to get to work:

  • create - creates a container from an image;
  • ps - lists running containers; with the optional -a flag, lists all containers;
  • start - starts a created container;
  • attach - attaches the terminal's standard input and output to a running container, literally connecting you to the container as you would to any virtual machine.

Let's start small. Take the Ubuntu image from the Docker Hub and create a container from it.

$ docker create -it ubuntu:16.04 bash

We add the -it flags to give the container an interactive terminal, so that we can connect to it and run bash. By specifying ubuntu:16.04, we pull the Ubuntu image with the 16.04 tag from Docker Hub.

After running the create command, verify that the container is created.

$ docker ps -a

The list should look something like this.

7643dba89904  ubuntu:16.04 "bash"   X min ago  Created         name

The container is created and ready to run. Starting it is simple: run the start command with the container's ID.

$ docker start 7643dba89904

Check again that the container is running, this time without the -a flag.

$ docker ps

If it is running, attach to it.

$ docker attach 7643dba89904

The prompt changes. Why? Because you have just entered the container. Now you can run any bash command you're used to in Ubuntu, as if it were an instance running in the cloud. Try one.

$ ls

Everything works fine, even $ ll. This simple Docker container is all you need: your own little virtual playground where you can do development, testing, or whatever you like! No need for virtual machines or heavy software. To prove my point, install something in this little container; a Node installation will go just fine. If you want to exit the container, type exit. The container will stop, and you can see it in the list by typing $ docker ps -a.

Note: every Docker container runs as root by default, which is why the sudo command does not exist. Every command you execute is automatically run with root privileges.

A real-world scenario.

Time to put this material to work. This is what you will actually use in real life for your projects and production applications.

Stateless containers?

I mentioned above that every container is isolated and does not preserve state. That means once you remove a container, its contents are gone forever.

$ docker rm 7643dba89904

How do you save data in this case?

Have you ever heard of volumes? Volumes let you map directories on the host machine to directories inside the container.

$ docker create -it -v $(pwd):/var/www ubuntu:latest bash

When creating a new container, we add the -v flag to specify which volume to create. This command binds the current working directory on your computer to the /var/www directory inside the container.

After starting the container with $ docker start <container_id>, you can edit the code on the host machine and see the changes inside the container. You can now persist data for all kinds of use cases, from storing images to storing database files, and, of course, for development, where you need live-reload capabilities.
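To convince yourself that the bind mount really works, here is a hedged sketch: it writes a file on the host and reads it back from inside a throwaway container. It is guarded so it is a no-op on machines without Docker, and the temp directory is my own scaffolding, not from the original.

```shell
# Write a file on the host, then read it from inside a container
# through the bind-mounted volume. --rm removes the container on exit.
tmpdir=$(mktemp -d)
echo "hello from the host" > "$tmpdir/hello.txt"
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$tmpdir":/var/www ubuntu:16.04 cat /var/www/hello.txt \
    || echo "could not reach the Docker daemon"
else
  echo "docker not installed; skipping the live demo"
fi
demo_ran=yes
rm -rf "$tmpdir"   # clean up the host-side scratch directory
```

When Docker is available, the cat inside the container prints the exact line written on the host, which is the whole point of the -v flag.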

Note: you can run the create and start commands in one go with the run command.

$ docker run -it -d ubuntu:16.04 bash

Note: the only addition is the -d flag, which tells the container to run detached, in the background.

Why am I talking so much about volumes?

We can create a simple nginx web server to host a static website in a couple of simple steps.

Create a new directory and name it whatever you want; I'll name mine myapp for convenience. All you need to do is create a simple index.html file in the myapp directory and paste this into it.

<!-- index.html -->
   <link href="" rel="stylesheet" integrity="sha256-MfvZlkHCEqatNoGiOXveE8FIwMzZg4W85qfrfIFBfYc= sha512-dTfge/zgoMYpP7QbHy4gWMEGsbsdZeCXz7irItjcC3sPUFtf0kuFbDz/ixG7ArTxmDjLXDmezHubeNikyKGVyQ==" crossorigin="anonymous">
   <title>Docker Quick Start</title>
   <div class="container">
     <h1>Hello Docker</h1>
     <p>This means the nginx server is working.</p>

We now have a plain web page with a title and some text. What remains is to start an nginx container.

$ docker run --name webserver -v $(pwd):/usr/share/nginx/html -d -p 8080:80 nginx

We pull the nginx image from Docker Hub to get an instantly configured nginx. The volume configuration is similar to what we did above, except that we point to the default directory where nginx serves HTML files. What's new is the --name parameter, which we set to webserver, and -p 8080:80, which maps the container's port 80 to port 8080 on the host machine. Remember to run the command inside the myapp directory.
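The HOSTPORT:CONTAINERPORT format of the -p flag is easy to misread, so here is a tiny pure-shell illustration that splits the mapping the same way Docker interprets it:

```shell
# -p 8080:80 means: host port (before the colon) -> container port (after it).
mapping="8080:80"
host_port=${mapping%%:*}       # 8080, the port you open in the browser
container_port=${mapping##*:}  # 80, the port nginx listens on inside
echo "host $host_port -> container $container_port"
```

So requests to localhost:8080 on your machine are forwarded to port 80 inside the container, where nginx is listening.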

Check that the container is working with $ docker ps, then open a browser window and go to http://localhost:8080.

We have an nginx web server up in just a couple of commands. Edit index.html as you like, reload the page, and you'll see the content change.
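If you prefer the command line to the browser, here is a hedged smoke test; it assumes curl is available and is a no-op without Docker:

```shell
# Ask the local nginx container for the page and look for the heading text.
if command -v docker >/dev/null 2>&1 && command -v curl >/dev/null 2>&1; then
  if curl -s http://localhost:8080 | grep -q "Hello Docker"; then
    echo "nginx answered with our page"
  else
    echo "no answer on port 8080; is the webserver container running?"
  fi
else
  echo "docker or curl not installed; skipping the smoke test"
fi
smoke_checked=yes
```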

Note. You can stop a running container with the stop command.

$ docker stop <container_id>

How to make life even easier?

There is a saying: if you need to do something twice, automate it. Fortunately, Docker has taken care of this. Next to the index.html file, add a Docker file. Its name is simply Dockerfile, without any extension.

# Dockerfile
FROM nginx:alpine
VOLUME /usr/share/nginx/html
EXPOSE 80

A Dockerfile is the build configuration for Docker images. The focus is on images! Here we say that we want to take the nginx:alpine image as the base of our own image, create the volume, and expose port 80.
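As a variation, instead of mounting a volume you can bake the page straight into the image with COPY. This is only a sketch of my own, not from the original article; it assumes index.html sits next to the Dockerfile:

```dockerfile
# Dockerfile (alternative sketch: the site is copied into the image,
# so the container no longer depends on a host directory)
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EXPOSE 80
```

An image built this way is self-contained: you can run it without any -v flag, which is handy when shipping the image to a server that does not have your source directory.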

To create the image, we have the build command.

$ docker build . -t webserver:v1

The . indicates the build context where the Dockerfile is located, and -t sets the tag for the image. The image will be known as webserver:v1.

With this command we did not pull an image from Docker Hub; instead we created our own. To see all your images, use the images command.

$ docker images

Run the created image.

$ docker run -v $(pwd):/usr/share/nginx/html -d -p 8080:80 webserver:v1

The power of the Dockerfile lies in the customization you can give your containers. You can pre-build images to your liking, and if you don't like repetitive tasks, take it one step further and install docker-compose.


Docker-compose lets you create and run a container with a single command. More importantly, you can set up an entire cluster of containers and configure them with docker-compose.

Go to the installation page and install docker-compose on your computer.

Install Docker Compose
Back in the terminal, run $ docker-compose --version to verify the installation. Now let's compose something.
Alongside the Dockerfile, add another file called docker-compose.yml and paste in this snippet.

# docker-compose.yml
version: '2'
services:
  webserver:
    build: .
    ports:
      - "8080:80"
    volumes:
      - .:/usr/share/nginx/html

Be careful with the indentation, otherwise docker-compose.yml will not work properly. All that remains is to run it.

$ docker-compose up (-d)

Note: the -d argument makes docker-compose up run in detached state. You can then run $ docker-compose ps to see what is currently running, or stop the containers with $ docker-compose stop.

Docker will build the image from the Dockerfile in the current directory (.), map the ports as we did above, and mount the volumes. See what's happening? It's the same thing we did with the build and run commands, except that now we execute a single command: docker-compose up.
Go back to the browser and you'll see that everything works just as before. The only difference is that there is no more tedious typing of commands into the terminal. We replaced them with two configuration files, Dockerfile and docker-compose.yml, and both can be added to your Git repository. Why is that important? Because they will always work correctly in production, as expected. The exact same container setup will be deployed on the production server!
To wrap up this section, go back to the console and list all the containers again.

$ docker ps -a

If you want to remove a container, run the rm command I mentioned above. To delete images, use the rmi command.

$ docker rmi <image_id>

Try not to leave container leftovers lying around; remove them when you no longer need them.
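Docker also ships prune commands that do this housekeeping in bulk. A guarded sketch, a no-op where Docker is not installed:

```shell
# Remove all stopped containers and dangling images in one go.
# -f skips the interactive confirmation prompt.
if command -v docker >/dev/null 2>&1; then
  docker container prune -f || true   # removes all stopped containers
  docker image prune -f || true       # removes dangling (untagged) images
else
  echo "docker not installed; nothing to prune"
fi
cleanup_done=yes
```

The `|| true` guards simply tolerate a stopped Docker daemon; drop them if you want failures to be loud.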

Wider perspective?

To be fair, Docker is not the only technology for creating containers, so I should mention the less popular alternatives. Docker is simply the most common containerization option, but rkt, for instance, seems to work just fine too.

Digging deeper, I must mention container orchestration. So far we have only talked about the tip of the iceberg. Docker-compose is a tool for networking containers together, but when you need to manage large numbers of containers and guarantee maximum uptime, orchestration comes into play.

Managing a large container-based cluster is no trivial task. As the number of containers grows, we need a way to automate the various DevOps tasks we would normally do by hand. Orchestration helps with provisioning hosts, creating or deleting containers when scaling, recreating failed containers, networking containers, and more. This is where the following tools come to the rescue: Kubernetes from Google and Docker's own Swarm Mode.


If I haven't convinced you of the enormous benefits of using CaaS and the simplicity of Docker, I would still highly recommend picking one of your existing applications and moving it into a Docker container!

A Docker container is a tiny virtual machine where you can do whatever you like, from development, setup, and testing to hosting production applications.

Docker's uniformity is like magic for production environments. It simplifies application deployment and server management, because now you know for sure: what works locally will work in the cloud. That's what I call peace of mind. Nobody will ever again utter the infamous sentence we have all heard far too many times.

Well, it works on my machine…

Source: A crash course on Docker - Learn to swim with the big fish