Routing Traffic to Docker Services

By the end of this exercise, you should be able to:

  • Anticipate how swarm will load balance traffic across a service with more than one replica
  • Publish a port on every swarm member that forwards all incoming traffic to the virtual IP of a swarm service

Observing Load Balancing

  1. Start by deploying a simple service whose containers echo back their hostname when curled:

    [centos@node-0 ~]$ docker service create --name who-am-I \
        --publish 8000:8000 \
        --replicas 3 training/whoami:latest
    
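    If you'd like to see the virtual IP that swarm load balances across, you can inspect the service's endpoint (a quick check using the Docker CLI's Go-template formatting):

    [centos@node-0 ~]$ docker service inspect \
        --format '{{json .Endpoint.VirtualIPs}}' who-am-I
    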
  2. Run curl -4 localhost:8000 and observe the output. You should see something similar to the following:

    [centos@node-0 ~]$ curl -4 localhost:8000
    I'm a7e5a21e6e26
    

    Take note of the response; in this example, the value is a7e5a21e6e26. Each whoami container identifies itself by returning its hostname, so each of our whoami replicas should return a different value.

  3. Run curl -4 localhost:8000 again. What do you observe? The value is different this time: the routing mesh has sent our second request to a different container.

  4. Repeat the command two more times. What do you observe? You should see one more new value, and then on the fourth request the response should cycle back to the first container's value (a7e5a21e6e26 in this example). By default, requests to the service's virtual IP are balanced round-robin across its tasks.

  5. Scale the number of tasks for our who-am-I service to 6:

    [centos@node-0 ~]$ docker service update who-am-I --replicas=6
    
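    Before curling again, you can confirm that all six tasks have been scheduled and are running:

    [centos@node-0 ~]$ docker service ps who-am-I
    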
  6. Now run curl -4 localhost:8000 several more times, using a loop like this:

    [centos@node-0 ~]$ for n in {1..10}; do curl localhost:8000 -4; done
    
    I'm 263fc24d0789
    I'm 57ca6c0c0eb1
    I'm c2ee8032c828
    I'm c20c1412f4ff
    I'm e6a88a30481a
    I'm 86e262733b1e
    I'm 263fc24d0789
    I'm 57ca6c0c0eb1
    I'm c2ee8032c828
    I'm c20c1412f4ff
    

    You should observe some new values, and the sequence repeats after the sixth curl: one response per replica, in round-robin order.
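    To make the round-robin pattern easier to see, you can tally the distinct responses with standard shell tools; with six replicas, twelve requests should hit each container exactly twice:

    [centos@node-0 ~]$ for n in {1..12}; do curl -s -4 localhost:8000; done \
        | sort | uniq -c
    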

Using the Routing Mesh

  1. Create an nginx service, publishing container port 80 on port 8080:

    [centos@node-0 ~]$ docker service create --name nginx --publish 8080:80 nginx
    
  2. Check which node your nginx service task is scheduled on:

    [centos@node-0 ~]$ docker service ps nginx
    
  3. Open a web browser and hit the IP address of that node at port 8080. You should see the NGINX welcome page. Try the same thing with the IP address of any other node in your cluster (using port 8080). No matter which swarm node IP you hit, the request gets forwarded to nginx by the routing mesh.
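    Rather than testing each node by hand in a browser, you can curl every node's IP in a loop from the manager. This sketch assumes the addresses reported by docker node inspect are reachable from node-0; each response should contain the NGINX welcome page's title:

    [centos@node-0 ~]$ for ip in $(docker node ls -q \
        | xargs docker node inspect --format '{{.Status.Addr}}'); do \
        curl -s -4 $ip:8080 | grep '<title>'; done
    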

Cleanup

  1. Remove all existing services, in preparation for future exercises:

    [centos@node-0 ~]$ docker service rm $(docker service ls -q)
    
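    You can confirm that no services remain:

    [centos@node-0 ~]$ docker service ls
    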

Conclusion

In these examples, you saw that requests to a published service port are automatically load balanced across all tasks providing that service. Furthermore, published services are reachable via the routing mesh on every node in the swarm, whether or not that node is running a task for the service.