Routing Traffic to Docker Services
By the end of this exercise, you should be able to:
- Anticipate how swarm will load balance traffic across a service with more than one replica
- Publish a port on every swarm member that forwards all incoming traffic to the virtual IP of a swarm service
Observing Load Balancing
Start by deploying a simple service which spawns containers that echo back their hostname when curl'ed:

```
[centos@node-0 ~]$ docker service create --name who-am-I \
    --publish 8000:8000 \
    --replicas 3 \
    training/whoami:latest
```

Run `curl -4 localhost:8000` and observe the output. You should see something similar to the following:

```
[centos@node-0 ~]$ curl -4 localhost:8000
I'm a7e5a21e6e26
```

Take note of the response. In this example, our value is `a7e5a21e6e26`. The `whoami` containers uniquely identify themselves by returning their respective hostnames, so each of our `whoami` instances should return a different value.

Run `curl -4 localhost:8000` again. What can you observe? The value changes each time: the routing mesh has sent our second request to a different container.

Repeat the command two more times. What can you observe? You should see one new value, and then on the fourth request the response should revert to the value of the first container (`a7e5a21e6e26` in this example).

Scale the number of tasks for our `who-am-I` service to 6:

```
[centos@node-0 ~]$ docker service update who-am-I --replicas=6
```

Now run `curl -4 localhost:8000` multiple times again, using a loop like this:

```
[centos@node-0 ~]$ for n in {1..10}; do curl localhost:8000 -4; done
I'm 263fc24d0789
I'm 57ca6c0c0eb1
I'm c2ee8032c828
I'm c20c1412f4ff
I'm e6a88a30481a
I'm 86e262733b1e
I'm 263fc24d0789
I'm 57ca6c0c0eb1
I'm c2ee8032c828
I'm c20c1412f4ff
```

You should be able to observe some new values. Note how the values repeat after the sixth `curl` command.
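The wrap-around you observed (the fourth request returning the first container's value) is simple round-robin selection. A minimal sketch of that idea, using the example hostnames from above as assumed values rather than real container IDs:

```shell
#!/usr/bin/env bash
# Round-robin sketch: requests cycle through the replicas and wrap around.
# Hostnames are the assumed example values from the exercise output above.
replicas=(a7e5a21e6e26 57ca6c0c0eb1 c2ee8032c828)

for n in 0 1 2 3; do
  # Modulo over the replica count makes request 3 wrap back to replica 0.
  echo "I'm ${replicas[$((n % ${#replicas[@]}))]}"
done
```

The fourth iteration prints `I'm a7e5a21e6e26` again, mirroring how the routing mesh cycled back to the first container in the exercise.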
Using the Routing Mesh
Run an nginx service and expose the service's port 80 on port 8080:

```
[centos@node-0 ~]$ docker service create --name nginx --publish 8080:80 nginx
```

Check which node your nginx service task is scheduled on:

```
[centos@node-0 ~]$ docker service ps nginx
```

Open a web browser and hit the IP address of that node at port 8080. You should see the NGINX welcome page. Try the same thing with the IP address of any other node in your cluster (again using port 8080). No matter which swarm node IP you hit, the routing mesh forwards the request to nginx.
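The reason every node answers on port 8080 is that swarm assigns the service a single virtual IP (VIP) on the ingress network, and each node forwards its published port to that VIP. A conceptual sketch only (these are not real Docker commands, and the VIP shown is hypothetical; the real one appears under `VirtualIPs` in `docker service inspect nginx`):

```shell
#!/usr/bin/env bash
# Conceptual sketch: every node's published port forwards to the same
# service VIP, which then balances across the nginx tasks.
service_vip="10.255.0.5"   # hypothetical VIP on the ingress network

for node in node-0 node-1 node-2; do
  echo "$node:8080 -> $service_vip:80"
done
```

This is why the node you curl does not need to be running an nginx container itself: its only job is to forward the request to the service's VIP.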
Cleanup
Remove all existing services, in preparation for future exercises:
```
[centos@node-0 ~]$ docker service rm $(docker service ls -q)
```
Conclusion
In these examples, you saw that requests to an exposed service are automatically load balanced across all tasks providing that service. Furthermore, exposed services are reachable on every node in the swarm, whether or not that node is running a container for the service.