Starting a Service

By the end of this exercise, you should be able to:

  • Schedule a docker service across a swarm
  • Predict and understand the scoping behavior of docker overlay networks
  • Scale a service on swarm up or down
  • Force swarm to spread workload out across user-defined divisions in a datacenter

Creating an Overlay Network and Service

  1. Create a multi-host overlay network to connect your service to:

    PS: node-0 Administrator> docker network create --driver overlay my_overlay
    
  2. Verify that the network subnet was taken from the address pool defined when creating your swarm:

    PS: node-0 Administrator> docker network inspect my_overlay
    
    ...
    "Subnet": "10.85.0.0/25",
    "Gateway": "10.85.0.1"
    ...
    

    The overlay network has been assigned a subnet from the address pool we specified when creating our swarm.
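
    As a reminder, that pool is set at swarm-creation time. A hedged example of how such a pool could have been defined (the CIDR and the placeholder IP are illustrative; the --default-addr-pool and --default-addr-pool-mask-length flags are real docker swarm init options):

    PS: node-0 Administrator> docker swarm init --advertise-addr <node-0 IP> `
        --default-addr-pool 10.85.0.0/16 `
        --default-addr-pool-mask-length 25

    With these settings, each new overlay network is carved out of 10.85.0.0/16 as a /25 subnet, which matches the inspect output above.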

  3. Create a service featuring a microsoft/nanoserver container pinging Google resolvers:

    PS: node-0 Administrator> docker service create --name pinger `
        --network my_overlay `
        microsoft/nanoserver ping 8.8.8.8 -t
    

    Note that the syntax is much like docker container run: an image (microsoft/nanoserver) is specified, followed by the main process to run in each container (ping 8.8.8.8 -t).

  4. Get some information about the currently running services:

    PS: node-0 Administrator> docker service ls
    
  5. Check which node the container was created on:

    PS: node-0 Administrator> docker service ps pinger
    
  6. Connect to the node you found in the last step (call this node-x), find the container ID with docker container ls, and check its logs with docker container logs <container ID>. The results of the ongoing ping should be visible.
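
    Concretely, the sequence on node-x looks like this (substitute the actual container ID from the docker container ls output):

    PS: node-x Administrator> docker container ls
    PS: node-x Administrator> docker container logs <container ID>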

  7. Inspect the my_overlay network on the node running your pinger container:

    PS: node-x Administrator> docker network inspect my_overlay
    

    You should see the container connected to this network, and, under the Peers key, a list of swarm nodes connected to this network. Also notice the correspondence between the container IPs and the subnet assigned to the network under the IPAM key; this is the subnet from which container IPs on this network are drawn.
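
    An abridged, illustrative excerpt of the relevant keys (IDs, names, and addresses will differ on your swarm):

    ...
    "IPAM": {
        "Config": [
            { "Subnet": "10.85.0.0/25", "Gateway": "10.85.0.1" }
        ]
    },
    "Containers": {
        "<container ID>": { "IPv4Address": "10.85.0.3/25", ... }
    },
    "Peers": [
        { "Name": "<node hostname>", "IP": "<node IP>" }
    ]
    ...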

  8. Connect to your worker node, node-3, and list your networks:

    PS: node-3 Administrator> docker network ls
    

    If the container for your service is not running here, you won't see the my_overlay network, since overlays are only instantiated on nodes running containers attached to them. On the other hand, if your container did get scheduled on node-3, you'll see my_overlay as expected.

  9. Connect to any manager node (node-0, node-1 or node-2) and list the networks again. This time you will see the network whether or not a container for your pinger service is running on this manager; all managers maintain knowledge of all overlay networks.

  10. On the same manager, inspect the my_overlay network again. If this manager does happen to have a container for the service scheduled on it, you'll see the Peers list as above; if not, the Peers list will be absent. Peers are maintained by Swarm's gossip control plane, which is scoped to include only nodes running containers attached to the same overlay network.

Scaling a Service

  1. Back on manager node-0, scale up the number of concurrent tasks for your pinger service:

    PS: node-0 Administrator> docker service update pinger --replicas=8
    
  2. Now run docker service ps pinger to inspect the service. Are all the containers running right away? How were they distributed across your swarm?
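
    For reference, the command follows the same pattern as before:

    PS: node-0 Administrator> docker service ps pinger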

Inspecting Service Logs

  1. In a previous step, you looked at the container logs for an individual task in your service; manager nodes can assemble all logs for all tasks of a given service by doing:

    PS: node-0 Administrator> docker service logs pinger
    

    The ping logs for all 8 pinging containers will be displayed.

  2. If instead you'd like to see the logs of a single task, on a manager node run docker service ps pinger, choose any task ID, and run docker service logs <task ID>. The logs of the individual task are returned; compare this to what you did above to fetch the same information with docker container logs.
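
    Concretely (the task ID is a placeholder; pick any ID from the first column of the docker service ps output):

    PS: node-0 Administrator> docker service ps pinger
    PS: node-0 Administrator> docker service logs <task ID>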

Scheduling Topology-Aware Services

By default, the Swarm scheduler will spread containers across nodes based on availability, but in practice it is wise to consider datacenter segmentation; spreading tasks across datacenters or availability zones keeps the service available even when one such segment goes down.

  1. Add a label datacenter with value east to two nodes of your swarm:

    PS: node-0 Administrator> docker node update --label-add datacenter=east node-0
    PS: node-0 Administrator> docker node update --label-add datacenter=east node-1
    
  2. Add a label datacenter with value west to the other two nodes:

    PS: node-0 Administrator> docker node update --label-add datacenter=west node-2
    PS: node-0 Administrator> docker node update --label-add datacenter=west node-3
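
    To confirm a label was applied, you can inspect a node; the --format string below uses Docker's standard Go-template syntax, and the output shown is illustrative:

    PS: node-0 Administrator> docker node inspect node-0 --format '{{ json .Spec.Labels }}'

    {"datacenter":"east"}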
    
  3. Create a service using the --placement-pref flag to spread across node labels:

    PS: node-0 Administrator> docker service create --name iis --replicas=2 `
        --placement-pref spread=node.labels.datacenter `
        microsoft/iis
    

    There should be microsoft/iis containers present on nodes with each value of the node.labels.datacenter label (east and west).

  4. Use docker service ps iis as above to check that replicas got spread across the datacenter labels.

Updating Service Configuration

  1. Let's add an environment variable to all of our containers scheduled for our pinger service:

    PS: node-0 Administrator> docker service update pinger --env-add DEMO=test
    

    You'll see a series of progress bars like this:

    overall progress: 1 out of 8 tasks
    1/8: running   [==================================================>]
    2/8: starting  [============================================>      ]
    3/8:
    4/8:
    5/8:
    6/8:
    7/8:
    8/8:
    

    Service tasks are shut down one at a time, and new tasks carrying your new environment variable are spun up in their place, as Swarm's reconciliation loop reschedules containers to match the updated desired state defined by your service.
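
    The pace of this rollout is itself part of the service definition. For example (the values here are illustrative; --update-parallelism and --update-delay are real docker service update flags), you could replace two tasks at a time with a pause between batches:

    PS: node-0 Administrator> docker service update pinger `
        --update-parallelism 2 `
        --update-delay 5s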

  2. Connect to the node running any container belonging to the pinger service, and check that the environment variable was set as expected:

    PS: node-0 Administrator> docker container exec -it <container ID> powershell
    PS C:\> $env:DEMO
    
    test
    

Cleanup

  1. Remove all existing services, in preparation for future exercises:

    PS: node-0 Administrator> docker service rm $(docker service ls -q)
    

Conclusion

In this exercise, we saw the basics of creating, scheduling and updating services. A common mistake is to think that a service is just the containers it schedules; in fact, a Docker service is the definition of desired state for those containers. Changing a service definition does not, in general, change containers directly; it causes Swarm to reschedule them to match the new desired state.