Starting a Service

By the end of this exercise, you should be able to:

  • Schedule a Docker service across a swarm
  • Predict and understand the scoping behavior of Docker overlay networks
  • Scale a service on a swarm up or down
  • Force Swarm to spread a workload across user-defined divisions of a datacenter

Creating an Overlay Network and Service

  1. Create a multi-host overlay network to connect your service to:

    [centos@node-0 ~]$ docker network create --driver overlay my_overlay
    
  2. Verify that the network subnet was taken from the address pool defined when creating your swarm:

    [centos@node-0 ~]$ docker network inspect my_overlay
    
    ...
    "Subnet": "10.85.0.0/25",
    "Gateway": "10.85.0.1"
    ...
    

    The overlay network has been assigned a subnet from the address pool we specified when creating our swarm.
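
    If you just want the subnet value, docker network inspect accepts a Go template via --format; the path below assumes a single IPAM configuration entry:

    [centos@node-0 ~]$ docker network inspect my_overlay \
        --format '{{ (index .IPAM.Config 0).Subnet }}'
    
    10.85.0.0/25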

  3. Create a service featuring an alpine container pinging Google resolvers, plugged into your overlay network:

    [centos@node-0 ~]$ docker service create --name pinger \
        --network my_overlay alpine ping 8.8.8.8
    

    Note that the syntax closely resembles docker container run: an image (alpine) is specified, followed by the command to run as PID 1 in that container (ping 8.8.8.8).

  4. Get some information about the currently running services:

    [centos@node-0 ~]$ docker service ls
    
  5. Check which node the container was created on:

    [centos@node-0 ~]$ docker service ps pinger
    
  6. SSH into the node you found in the last step (call this node-x), find the container ID with docker container ls, and check its logs with docker container logs <container ID>. The results of the ongoing ping should be visible.
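
    The last two sub-steps can be combined into a sketch like the following, assuming only your service's containers match the name filter (service task containers are named pinger.<slot>.<task ID>):

    [centos@node-x ~]$ CID=$(docker container ls --filter name=pinger --quiet | head -n1)
    [centos@node-x ~]$ docker container logs --tail 5 $CID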

  7. Inspect the my_overlay network on the node running your pinger container:

    [centos@node-x ~]$ docker network inspect my_overlay
    

    You should be able to see the container connected to this network, as well as a list of swarm nodes connected to this network under the Peers key. Also notice the correspondence between the container IPs and the subnet assigned to the network under the IPAM key: this is the subnet from which container IPs on this network are drawn.
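
    To pull out just the peer list, you can again use --format; note this assumes the node has a container attached to my_overlay, since the Peers key is absent otherwise:

    [centos@node-x ~]$ docker network inspect my_overlay --format '{{ json .Peers }}'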

  8. Connect to your worker node, node-3, and list your networks:

    [centos@node-3 ~]$ docker network ls
    

    If the container for your service is not running here, you won't see the my_overlay network, since overlay networks are only extended to nodes running containers attached to them. If, on the other hand, your container was scheduled on node-3, you'll see my_overlay as expected.

  9. Connect to any manager node (node-0, node-1 or node-2) and list the networks again. This time you will be able to see the network whether or not this manager has a container running on it for your pinger service; all managers maintain knowledge of all overlay networks.

  10. On the same manager, inspect the my_overlay network again. If this manager does happen to have a container for the service scheduled on it, you'll be able to see the Peers list like above; if there is no container scheduled for the service on this node, the Peers list will be absent. Peers are maintained by Swarm's gossip control plane, which is scoped to only include nodes with running containers attached to the same overlay network.

Scaling a Service

  1. Back on a manager node, scale up the number of concurrent tasks that our alpine service is running:

    [centos@node-0 ~]$ docker service update pinger --replicas=8
    
    pinger
    overall progress: 8 out of 8 tasks 
    1/8: running   [==================================================>] 
    2/8: running   [==================================================>] 
    3/8: running   [==================================================>] 
    4/8: running   [==================================================>] 
    5/8: running   [==================================================>] 
    6/8: running   [==================================================>] 
    7/8: running   [==================================================>] 
    8/8: running   [==================================================>] 
    verify: Service converged
    
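    As an aside, docker service scale is shorthand for the same operation; this command is equivalent to the update above:

    [centos@node-0 ~]$ docker service scale pinger=8
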
  2. Now run docker service ps pinger to inspect the service. How were tasks distributed across your swarm?
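
    One quick way to summarize the distribution is to count running tasks per node with --format:

    [centos@node-0 ~]$ docker service ps pinger \
        --filter desired-state=running --format '{{ .Node }}' | sort | uniq -c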

  3. Use docker network inspect my_overlay again on a node that has a pinger container running. More nodes appear connected to this network under the Peers key, since all these nodes started gossiping amongst themselves when they attached containers to the my_overlay network.

Inspecting Service Logs

  1. In a previous step, you looked at the container logs for an individual task in your service; manager nodes can assemble all logs for all tasks of a given service by doing:

    [centos@node-0 ~]$ docker service logs pinger
    

    The ping logs for all 8 pinging containers will be displayed.

  2. If instead you'd like to see the logs of a single task, on a manager node run docker service ps pinger, choose any task ID, and run docker service logs <task ID>. The logs of the individual task are returned; compare this to what you did above to fetch the same information with docker container logs.
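
    Putting that together, here is a one-liner that fetches the logs of the first task listed (an arbitrary choice, for illustration):

    [centos@node-0 ~]$ docker service logs $(docker service ps pinger -q | head -n1)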

Scheduling Topology-Aware Services

By default, the Swarm scheduler tries to schedule an equal number of containers on every node, but in practice it is wise to account for datacenter segmentation: spreading tasks across datacenters or availability zones keeps the service available even if one such segment goes down.

  1. Add a label datacenter with value east to two nodes of your swarm:

    [centos@node-0 ~]$ docker node update --label-add datacenter=east node-0
    [centos@node-0 ~]$ docker node update --label-add datacenter=east node-1
    
  2. Add a label datacenter with value west to the other two nodes:

    [centos@node-0 ~]$ docker node update --label-add datacenter=west node-2
    [centos@node-0 ~]$ docker node update --label-add datacenter=west node-3
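
    To confirm the labels took effect, you can inspect all four nodes at once; this sketch prints each node's hostname alongside its datacenter label:

    [centos@node-0 ~]$ docker node inspect node-0 node-1 node-2 node-3 \
        --format '{{ .Description.Hostname }}: {{ index .Spec.Labels "datacenter" }}'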
    
  3. Create a service using the --placement-pref flag to spread across node labels:

    [centos@node-0 ~]$ docker service create --name my_proxy \
        --replicas=2 --publish 8000:80 \
        --placement-pref spread=node.labels.datacenter \
        nginx
    

    There should be an nginx container on nodes with every possible value of the node.labels.datacenter label: one on a datacenter=east node, and one on a datacenter=west node.

  4. Use docker service ps my_proxy as above to check that replicas got spread across the datacenter labels.
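
    A compact way to see the task-to-node mapping, assuming both replicas are running:

    [centos@node-0 ~]$ docker service ps my_proxy --format '{{ .Name }} -> {{ .Node }}'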

Updating Service Configuration

  1. If a container doesn't need to write to its filesystem, it should be run in read-only mode as a security best practice. Update your service to use read-only containers:

    [centos@node-0 ~]$ docker service update pinger --read-only
    
    pinger
    overall progress: 2 out of 8 tasks 
    1/8: running   [==================================================>] 
    2/8: running   [==================================================>] 
    3/8: ready     [======================================>            ] 
    4/8:   
    5/8:   
    6/8:   
    7/8:   
    8/8:
    

    Over the next few seconds, you should see tasks for the pinger service shutting down and restarting; this is the swarm manager replacing old containers, which no longer match the desired state (a read-only filesystem), with new containers that do.

    Once all containers for the pinger service have been restarted, try connecting to one of them and creating a file, to convince yourself the update worked as expected.
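
    One way to perform that check, sketched under the assumption that a pinger container is running on the node you're connected to:

    [centos@node-x ~]$ CID=$(docker container ls --filter name=pinger --quiet | head -n1)
    [centos@node-x ~]$ docker exec $CID touch /tmp/probe

    The write should fail with an error like "Read-only file system", confirming the containers' filesystems are now read-only.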

Cleanup

  1. Remove all existing services, in preparation for future exercises:

    [centos@node-0 ~]$ docker service rm $(docker service ls -q)
    

Conclusion

In this exercise, we saw the basics of creating, scheduling and updating services. A common mistake people make is thinking that a service is just the containers scheduled by the service; in fact, a Docker service is the definition of desired state for those containers. Changing a service definition does not in general change containers directly; it causes them to get rescheduled by Swarm in order to match their new desired state.