Starting a Service
By the end of this exercise, you should be able to:
- Schedule a docker service across a swarm
- Predict and understand the scoping behavior of docker overlay networks
- Scale a service on swarm up or down
- Force swarm to spread workload out across user-defined divisions in a datacenter
Creating an Overlay Network and Service
Create a multi-host overlay network to connect your service to:
[centos@node-0 ~]$ docker network create --driver overlay my_overlay

Verify that the network subnet was taken from the address pool defined when creating your swarm:
[centos@node-0 ~]$ docker network inspect my_overlay

...
"Subnet": "10.85.0.0/25",
"Gateway": "10.85.0.1"
...

The overlay network has been assigned a subnet from the address pool we specified when creating our swarm.
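For reference, such an address pool is set when the swarm is first initialized. The exact pool below is an assumption chosen to be consistent with the /25 subnet seen above; your environment may have used different values:

```shell
# Hypothetical swarm initialization matching the subnet above:
# --default-addr-pool sets the CIDR pool overlay subnets are drawn from,
# --default-addr-pool-mask-length sets the size of each carved-out subnet.
[centos@node-0 ~]$ docker swarm init \
    --default-addr-pool 10.85.0.0/16 \
    --default-addr-pool-mask-length 25
```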
Create a service featuring an alpine container pinging Google resolvers, plugged into your overlay network:

[centos@node-0 ~]$ docker service create --name pinger \
    --network my_overlay alpine ping 8.8.8.8

Note the syntax is a lot like docker container run; an image (alpine) is specified, followed by the PID 1 process for that container (ping 8.8.8.8).

Get some information about the currently running services:
[centos@node-0 ~]$ docker service ls

Check which node the container was created on:
[centos@node-0 ~]$ docker service ps pinger

SSH into the node you found in the last step (call this node-x), find the container ID with docker container ls, and check its logs with docker container logs <container ID>. The results of the ongoing ping should be visible.

Inspect the my_overlay network on the node running your pinger container:

[centos@node-x ~]$ docker network inspect my_overlay

You should be able to see the container connected to this network, and a list of swarm nodes connected to this network under the Peers key. Also notice the correspondence between the container IPs and the subnet assigned to the network under the IPAM key - this is the subnet from which container IPs on this network are drawn.

Connect to your worker node, node-3, and list your networks:

[centos@node-3 ~]$ docker network ls

If the container for your service is not running here, you won't see the my_overlay network, since overlays only operate on nodes running containers attached to the overlay. On the other hand, if your container did get scheduled on node-3, you'll see my_overlay as expected.

Connect to any manager node (node-0, node-1 or node-2) and list the networks again. This time you will be able to see the network whether or not this manager has a container running on it for your pinger service; all managers maintain knowledge of all overlay networks.

On the same manager, inspect the my_overlay network again. If this manager does happen to have a container for the service scheduled on it, you'll be able to see the Peers list like above; if there is no container scheduled for the service on this node, the Peers list will be absent. Peers are maintained by Swarm's gossip control plane, which is scoped to only include nodes with running containers attached to the same overlay network.
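To pick the peer list out of the full inspect output, you can apply a Go template with --format; this assumes Peers appears as a top-level key in the inspect output, which is only the case on nodes with an attached container:

```shell
# Print only the gossip peers for my_overlay. On a node with an attached
# container this emits a JSON list of peers; elsewhere the Peers key is
# absent and the template prints "null".
[centos@node-x ~]$ docker network inspect my_overlay \
    --format '{{json .Peers}}'
```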
Scaling a Service
Back on a manager node, scale up the number of concurrent tasks that our alpine service is running:

[centos@node-0 ~]$ docker service update pinger --replicas=8
pinger
overall progress: 8 out of 8 tasks
1/8: running   [==================================================>]
2/8: running   [==================================================>]
3/8: running   [==================================================>]
4/8: running   [==================================================>]
5/8: running   [==================================================>]
6/8: running   [==================================================>]
7/8: running   [==================================================>]
8/8: running   [==================================================>]
verify: Service converged

Now run docker service ps pinger to inspect the service. How were tasks distributed across your swarm?

Use docker network inspect my_overlay again on a node that has a pinger container running. More nodes appear connected to this network under the Peers key, since all these nodes started gossiping amongst themselves when they attached containers to the my_overlay network.
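Scaling down works the same way; docker service scale is a shorthand for the same replica update, and it can adjust several services at once:

```shell
# Equivalent shorthand for changing the replica count; scaling back
# down to 4 tasks causes swarm to shut down 4 of the 8 containers.
[centos@node-0 ~]$ docker service scale pinger=4
```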
Inspecting Service Logs
In a previous step, you looked at the container logs for an individual task in your service; manager nodes can assemble all logs for all tasks of a given service by doing:
[centos@node-0 ~]$ docker service logs pinger

The ping logs for all 8 pinging containers will be displayed.
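For a long-running service like this one, the full log quickly becomes unwieldy; docker service logs accepts the same --follow and --tail flags as docker container logs:

```shell
# Stream new log lines from all tasks as they arrive, starting from
# the last 5 lines of each task's log.
[centos@node-0 ~]$ docker service logs --follow --tail 5 pinger
```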
If instead you'd like to see the logs of a single task, on a manager node run docker service ps pinger, choose any task ID, and run docker service logs <task ID>. The logs of the individual task are returned; compare this to what you did above to fetch the same information with docker container logs.
Scheduling Topology-Aware Services
By default, the Swarm scheduler will try to schedule an equal number of containers on all nodes, but in practice it is wise to consider datacenter segmentation; spreading tasks across datacenters or availability zones keeps the service available even when one such segment goes down.
Add a label datacenter with value east to two nodes of your swarm:

[centos@node-0 ~]$ docker node update --label-add datacenter=east node-0
[centos@node-0 ~]$ docker node update --label-add datacenter=east node-1

Add a label datacenter with value west to the other two nodes:

[centos@node-0 ~]$ docker node update --label-add datacenter=west node-2
[centos@node-0 ~]$ docker node update --label-add datacenter=west node-3

Create a service using the --placement-pref flag to spread across node labels:

[centos@node-0 ~]$ docker service create --name my_proxy \
    --replicas=2 --publish 8000:80 \
    --placement-pref spread=node.labels.datacenter \
    nginx

There should be nginx containers present on nodes with every possible value of the node.labels.datacenter label: one on the datacenter=east nodes, and one on the datacenter=west nodes.

Use docker service ps my_proxy as above to check that replicas got spread across the datacenter labels.
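A placement preference spreads tasks but never blocks scheduling; if a segment goes down, its tasks are rescheduled elsewhere. When you need a hard rule instead, use --constraint. The service below is a hypothetical example, not part of the exercise:

```shell
# Hard placement rule: tasks run only on nodes labeled datacenter=east,
# and remain pending if no matching node is available.
[centos@node-0 ~]$ docker service create --name east_only \
    --replicas=2 \
    --constraint node.labels.datacenter==east \
    nginx
```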
Updating Service Configuration
If a container doesn't need to write to its filesystem, it should always be run in read-only mode, for security purposes. Update your service to use read-only containers:
[centos@node-0 ~]$ docker service update pinger --read-only
pinger
overall progress: 2 out of 8 tasks
1/8: running   [==================================================>]
2/8: running   [==================================================>]
3/8: ready     [======================================>            ]
4/8:
5/8:
6/8:
7/8:
8/8:

Over the next few seconds, you should see tasks for the pinger service shutting down and restarting; this is the swarm manager replacing old containers, which no longer match their desired state (using a read-only filesystem), with new containers that match the new configuration.
Once all containers for the pinger service have been restarted, try connecting to one of them and creating a file to convince yourself the update worked as expected.
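One way to check is to exec into one of the new containers and attempt a write; the container ID will differ in your environment:

```shell
# On a node running a pinger task, find a container ID for the service,
# then try to create a file; the read-only root filesystem should
# refuse the write with a "Read-only file system" error.
[centos@node-x ~]$ docker container ls --filter name=pinger -q
[centos@node-x ~]$ docker exec <container ID> touch /test
```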
Cleanup
Remove all existing services, in preparation for future exercises:
[centos@node-0 ~]$ docker service rm $(docker service ls -q)
Conclusion
In this exercise, we saw the basics of creating, scheduling and updating services. A common mistake people make is thinking that a service is just the containers scheduled by the service; in fact, a Docker service is the definition of desired state for those containers. Changing a service definition does not in general change containers directly; it causes them to get rescheduled by Swarm in order to match their new desired state.