Docker Overlay Driver and Overlay Networking

29/12/2020
Docker sets up three networks by default, each backed by one of its built-in drivers and (mostly) carrying the same name as that driver. For example, if you run docker network ls you will see a network named bridge, which uses the bridge networking driver. This is the default network that every container connects to unless specified otherwise.
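On a fresh installation, docker network ls typically shows something like this (the network IDs here are placeholders):

NETWORK ID     NAME      DRIVER    SCOPE
1a2b3c4d5e6f   bridge    bridge    local
2b3c4d5e6f7a   host      host      local
3c4d5e6f7a8b   none      null      local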

However, other drivers are available as well, such as macvlan and overlay, the latter being the topic of this post. Let’s have a closer look at what the overlay driver helps us accomplish, how we can create an overlay network for ourselves, and how to attach containers to it.

What is the overlay driver?

The overlay driver is designed to facilitate communication between Docker containers that are otherwise hidden from each other in entirely different networks. These networks could be private ones, or even public infrastructure in the cloud. The essential point is this: if there are two hosts, each running Docker, the overlay network creates a subnet that is overlaid on top of these two hosts, and every container connected to this overlay network can communicate with every other container using its own IP address, subnet and default gateway, as though they were all part of the same network.

Picture two VMs running Docker, with containers attached to an overlay network. The overlay network is “overlaid” on top of the VMs, and the containers get IP addresses like 10.0.0.2, 10.0.0.3 and so on on this network, regardless of which VM is running them or of the VMs’ own network configuration.

Prerequisites

Two Linux hosts with Docker installed and running on each of them. You can use two different VMs running locally, or a couple of VPSes with static IPs.

Setting up Docker Swarm

The kind of setup described above is not meant for Docker running on a single host. We need a Docker Swarm, which is where overlay networks are truly meant to work. We won’t go into much detail about Docker Swarm here, because it is the overlay network that we want to discuss the most.

I have two VPSes running on DigitalOcean with public IP addresses; one of them is going to be the Docker Swarm manager and the other is going to be a worker node. This is the basic model for distributed systems like Docker Swarm.

On the Manager node, let’s initialize Docker Swarm:

root@manager:~# docker swarm init

You may have to specify which IP address to use, in case multiple IP addresses are assigned to a single network interface. If the previous command gives an error indicating that multiple IPs are being used, use the following:

root@manager:~# docker swarm init --advertise-addr IP_ADDRESS

It is important to note that IP_ADDRESS above is the IP of your Swarm manager host. In my case, its value is going to be 165.227.170.190.
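If the initialization succeeds, the output looks roughly like this (the node ID and token below are placeholders):

Swarm initialized: current node (NODE_ID) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-<token> 165.227.170.190:2377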

This would generate an authentication token and you can copy and paste that command in your worker node’s terminal to make it a member of your Docker Swarm:

root@workernode:~# docker swarm join --token SWMTKN-1-2nzu4e7hesie4xqhsuy1ip1dn8dg70b9iqs0v-tm5fovjh50cmk2rmfrdqup4vaujxnrpj4mmtn9 165.227.170.190:2377

Your token will differ wildly from this one, as it should. So copy the command generated after your own docker swarm init run, NOT the one shown above.

Run the following command on your Docker manager to verify that the worker has actually been added:

root@manager:~# docker node ls

The output would be something similar to this:
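An illustrative listing (the node IDs are placeholders; the asterisk marks the node you are currently logged into):

ID                            HOSTNAME     STATUS    AVAILABILITY   MANAGER STATUS
qwertyuiopasdfghjklzxcvb *    manager      Ready     Active         Leader
mnbvcxzlkjhgfdsapoiuytre      workernode   Ready     Active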

Creating the Overlay Network and Adding Containers

Now we can use Docker’s built-in overlay driver to create a network. Let’s call this network my-overlay; you can call it whatever seems fitting to you.

root@manager:~# docker network create --driver overlay my-overlay

While you might want to attach containers directly to this network, that isn’t allowed by default, since it is typically services (another Docker Swarm entity), not individual containers, that interface with this network. Containers are what make up services, but that’s a story for another day.
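As an aside, if you do need standalone containers on an overlay network, Docker lets you create the network with the --attachable flag. A minimal sketch (my-attachable-overlay is just an example name):

root@manager:~# docker network create --driver overlay --attachable my-attachable-overlay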

Check the list of Docker networks by running docker network ls and you should see an entry for my-overlay in there, with its scope set to swarm.
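The relevant part of the listing should look roughly like this (IDs are placeholders; Swarm also creates its own ingress and docker_gwbridge networks):

NETWORK ID     NAME              DRIVER    SCOPE
4d5e6f7a8b9c   docker_gwbridge   bridge    local
5e6f7a8b9c0d   ingress           overlay   swarm
6f7a8b9c0d1e   my-overlay        overlay   swarm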

To attach containers as part of a service, let’s run this command:

root@manager:~# docker service create --name myservice --network my-overlay \
--replicas 2 alpine sleep 1d
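If everything is in order, Docker prints the new service ID and waits for the tasks to converge; the output looks roughly like this (the service ID is a placeholder):

pd91lmd3sx5cnmkjfqpwsow2a
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged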

This will create 2 replicas of the Alpine Linux container, which is a very lightweight Linux image. Let’s see how these containers are distributed amongst the two nodes that we have.

root@manager:~# docker service ps myservice

The output will show where each of the containers in this service is running:

ID             NAME          IMAGE           NODE
mlnm3xbv1m3x   myservice.1   alpine:latest   manager
ms9utjyqmqa7   myservice.2   alpine:latest   workernode

You will notice that one container is running on the manager and the other is running on the worker node. This is the idea behind a distributed system: even if one node dies, the additional load is transferred to the other one.

Verifying the network IPs

We can run the following command on both the manager and the worker node:

root@manager:~# docker network inspect my-overlay
root@workernode:~# docker network inspect my-overlay

You will get a long JSON response in either case. Look for the “Containers” section in each case. This was the output on the manager node, in my specific case:
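An abridged, illustrative snippet of that section (the container ID is a placeholder, and fields such as EndpointID and MacAddress are omitted); the IPv4Address field is the part we care about:

"Containers": {
    "7f3c2a9d1b4e...": {
        "Name": "myservice.1.mlnm3xbv1m3x...",
        "IPv4Address": "10.0.0.11/24",
        "IPv6Address": ""
    }
}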

The IP address is 10.0.0.11 for the container running on the manager node.

The IP address is 10.0.0.12 for the second replica running on the worker node.

Let’s see if we can ping the first container (10.0.0.11) from the second one (10.0.0.12). Get the container ID of the second one, which is running on workernode:

root@workernode:~# docker ps
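The listing should contain a single Alpine container belonging to the service; an illustrative output (the container ID and timestamps are placeholders, and the name carries the service and task name):

CONTAINER ID   IMAGE           COMMAND      CREATED          STATUS          PORTS   NAMES
d2b8a1c3e4f5   alpine:latest   "sleep 1d"   10 minutes ago   Up 10 minutes           myservice.2.ms9utjyqmqa7...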

Copy this ID. Let’s call it CONTAINER2 for now.

Drop into the shell of this second container, by running:

root@workernode:~# docker exec -it CONTAINER2 sh

Just replace “CONTAINER2” with the proper ID obtained in the previous step. You will also notice that the prompt has changed from “root@…” to a plain “#”.

In this shell, ping the other container, which you know is running on a different host, in a different physical network.

# ping 10.0.0.11
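If the overlay network is working, the replies come back from the other host as if it were on the local subnet; an illustrative result (the round-trip times will of course vary):

PING 10.0.0.11 (10.0.0.11): 56 data bytes
64 bytes from 10.0.0.11: seq=0 ttl=64 time=0.813 ms
64 bytes from 10.0.0.11: seq=1 ttl=64 time=0.679 ms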

Success! We can now create an abstract network just for our Docker containers which could potentially span the entire globe. That’s Docker Overlay for you.
