Starting a Service
By the end of this exercise, you should be able to:
- Schedule a docker service across a swarm
- Predict and understand the scoping behavior of docker overlay networks
- Scale a service on swarm up or down
- Force swarm to spread workload out across user-defined divisions in a datacenter
Creating an Overlay Network and Service
Create a multi-host overlay network to connect your service to:
```powershell
PS: node-0 Administrator> docker network create --driver overlay my_overlay
```

Verify that the network subnet was taken from the address pool defined when creating your swarm:

```powershell
PS: node-0 Administrator> docker network inspect my_overlay

...
"Subnet": "10.85.0.0/25",
"Gateway": "10.85.0.1"
...
```

The overlay network has been assigned a subnet from the address pool we specified when creating our swarm.
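The subnet assignment can be sketched with Python's `ipaddress` module. The pool and mask length below are assumptions inferred from the inspect output (a swarm created with `--default-addr-pool 10.85.0.0/16 --default-addr-pool-mask-length 25`); adjust them to match your own swarm:

```python
import ipaddress

# Sketch of how Swarm's IPAM carves overlay subnets out of the address
# pool. Pool and prefix length are assumptions, not read from a swarm.
pool = ipaddress.ip_network("10.85.0.0/16")
subnets = pool.subnets(new_prefix=25)

first = next(subnets)
print(first)                    # 10.85.0.0/25 - matches the inspect output
print(first.num_addresses - 2)  # 126 usable container IPs per overlay
```

Each additional overlay network would draw the next free /25 from the same pool (10.85.0.128/25, 10.85.1.0/25, and so on).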
Create a service featuring a `microsoft/nanoserver` container pinging Google's resolvers:

```powershell
PS: node-0 Administrator> docker service create --name pinger `
    --network my_overlay `
    microsoft/nanoserver ping 8.8.8.8 -t
```

Note the syntax is a lot like `docker container run`: an image (`microsoft/nanoserver`) is specified, followed by the main process for that container (`ping 8.8.8.8 -t`).

Get some information about the currently running services:

```powershell
PS: node-0 Administrator> docker service ls
```

Check which node the container was created on:

```powershell
PS: node-0 Administrator> docker service ps pinger
```

Connect to the node you found in the last step (call it node-x), find the container ID with `docker container ls`, and check its logs with `docker container logs <container ID>`. The results of the ongoing ping should be visible.

Inspect the `my_overlay` network on the node running your pinger container:

```powershell
PS: node-x Administrator> docker network inspect my_overlay
```

You should be able to see the container connected to this network, and a list of swarm nodes connected to this network under the `Peers` key. Also notice the correspondence between the container IPs and the subnet assigned to the network under the `IPAM` key; this is the subnet from which container IPs on this network are drawn.

Connect to your worker node, node-3, and list your networks:

```powershell
PS: node-3 Administrator> docker network ls
```

If the container for your service is not running here, you won't see the `my_overlay` network, since overlays are only instantiated on nodes running containers attached to them. On the other hand, if your container did get scheduled on node-3, you'll see `my_overlay` as expected.

Connect to any manager node (node-0, node-1 or node-2) and list the networks again. This time you will be able to see the network whether or not this manager has a `pinger` container running on it; all managers maintain knowledge of all overlay networks.

On the same manager, inspect the `my_overlay` network again. If this manager happens to have a container for the service scheduled on it, you'll see the `Peers` list as above; if not, the `Peers` list will be absent. `Peers` are maintained by Swarm's gossip control plane, which is scoped to include only nodes with running containers attached to the same overlay network.
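These scoping rules can be summarized in a toy model. The node and task layout below is hypothetical, not read from your swarm:

```python
# Toy model of overlay scoping: a node appears in Peers (and sees the
# network at all, if it's a worker) only while it runs a task attached
# to the overlay; managers always know the network exists.
# Layout is illustrative only.
tasks_on = {"node-1": ["pinger.1"], "node-3": ["pinger.2"]}
managers = {"node-0", "node-1", "node-2"}
all_nodes = {"node-0", "node-1", "node-2", "node-3"}

peers = {n for n in all_nodes if tasks_on.get(n)}
sees_network = peers | managers   # workers with no attached task see nothing

print(sorted(peers))         # ['node-1', 'node-3']
print(sorted(sees_network))  # ['node-0', 'node-1', 'node-2', 'node-3']
```

In this hypothetical layout, a taskless worker like node-0 would only see the overlay once a `pinger` task landed on it.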
Scaling a Service
Back on manager node-0, scale up the number of concurrent tasks that our `microsoft/nanoserver` service is running:

```powershell
PS: node-0 Administrator> docker service update pinger --replicas=8
```

Now run `docker service ps pinger` to inspect the service. Are all the containers running right away? How were they distributed across your swarm?
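To reason about the distribution you observed, here is a minimal sketch of the default spread strategy, which places each new task on the node currently running the fewest tasks; real scheduling also weighs constraints, resources, and node availability, and the node names are illustrative:

```python
# Minimal sketch of Swarm's default spread placement: each new task
# goes to the least-loaded node. This is a toy model, not the real
# scheduler, which also considers constraints and resources.
def spread(replicas, nodes):
    load = {n: 0 for n in nodes}
    for _ in range(replicas):
        target = min(load, key=load.get)  # least-loaded node wins the task
        load[target] += 1
    return load

print(spread(8, ["node-0", "node-1", "node-2", "node-3"]))
# every node ends up with 2 of the 8 pinger tasks
```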
Inspecting Service Logs
In a previous step, you looked at the container logs for an individual task in your service; manager nodes can assemble all logs for all tasks of a given service by doing:
```powershell
PS: node-0 Administrator> docker service logs pinger
```

The ping logs for all 8 pinging containers will be displayed.

If instead you'd like to see the logs of a single task, on a manager node run `docker service ps pinger`, choose any task ID, and run `docker service logs <task ID>`. The logs of the individual task are returned; compare this to what you did above to fetch the same information with `docker container logs`.
Scheduling Topology-Aware Services
By default, the Swarm scheduler will spread containers across nodes based on availability, but in practice it is wise to consider datacenter segmentation; spreading tasks across datacenters or availability zones keeps the service available even when one such segment goes down.
Add a label `datacenter` with value `east` to two nodes of your swarm:

```powershell
PS: node-0 Administrator> docker node update --label-add datacenter=east node-0
PS: node-0 Administrator> docker node update --label-add datacenter=east node-1
```

Add a label `datacenter` with value `west` to the other two nodes:

```powershell
PS: node-0 Administrator> docker node update --label-add datacenter=west node-2
PS: node-0 Administrator> docker node update --label-add datacenter=west node-3
```

Create a service using the `--placement-pref` flag to spread across node labels:

```powershell
PS: node-0 Administrator> docker service create --name iis --replicas=2 `
    --placement-pref spread=node.labels.datacenter `
    microsoft/iis
```

There should be `microsoft/iis` containers present on nodes with every possible value of the `node.labels.datacenter` label.

Use `docker service ps iis` as above to check that replicas got spread across the datacenter labels.
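The placement preference can be modeled as a toy round-robin over the distinct label values; the label mapping mirrors the `docker node update` commands above, but the balancing logic here is a simplification of what Swarm actually does:

```python
from collections import Counter

# Toy model of --placement-pref spread=node.labels.datacenter: replicas
# are balanced across distinct label values first, then spread within
# each group. Simplified sketch, not the real scheduler.
labels = {"node-0": "east", "node-1": "east",
          "node-2": "west", "node-3": "west"}

def spread_by_label(replicas, labels):
    values = sorted(set(labels.values()))   # ['east', 'west']
    counts = Counter()
    for i in range(replicas):
        counts[values[i % len(values)]] += 1
    return counts

print(spread_by_label(2, labels))  # one replica per datacenter
```

With `--replicas=2` and two label values, each datacenter gets exactly one `iis` task, which is why losing one datacenter leaves the service up.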
Updating Service Configuration
Let's add an environment variable to all of the containers scheduled for our `pinger` service:

```powershell
PS: node-0 Administrator> docker service update pinger --env-add DEMO=test
```

You'll see a series of progress bars like this:

```
overall progress: 1 out of 8 tasks
1/8: running   [==================================================>]
2/8: starting  [============================================>      ]
3/8:
4/8:
5/8:
6/8:
7/8:
8/8:
```

Service tasks are getting shut down one at a time, and new tasks with your new environment variable are spun up in their place, as Swarm's reconciliation loop reschedules containers to match the new, updated desired state defined by your service.

Connect to any container belonging to the `pinger` service, and check that the environment variable got set as expected:

```powershell
PS: node-0 Administrator> docker container exec -it <container ID> powershell
PS C:\> $env:DEMO
test
```
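The rolling replacement you just watched can be modeled as a reconciliation loop: compare each task's spec against the desired spec and replace any mismatch. This is a toy sketch of the idea, not Docker's implementation:

```python
# Toy reconciliation loop: replace every task whose spec differs from
# the desired spec, mirroring the one-at-a-time progress bars above.
def reconcile(tasks, desired):
    replaced = 0
    for i, spec in enumerate(tasks):
        if spec != desired:
            tasks[i] = desired    # old task shut down, new one started
            replaced += 1
    return replaced

tasks = [{"image": "nanoserver", "env": {}} for _ in range(8)]
desired = {"image": "nanoserver", "env": {"DEMO": "test"}}

print(reconcile(tasks, desired))  # 8 - every task needed replacing
print(reconcile(tasks, desired))  # 0 - state now matches desired state
```

A second pass replaces nothing, which is the key property: the loop runs until observed state converges on desired state.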
Cleanup
Remove all existing services, in preparation for future exercises:
```powershell
PS: node-0 Administrator> docker service rm $(docker service ls -q)
```
Conclusion
In this exercise, we saw the basics of creating, scheduling and updating services. A common mistake people make is thinking that a service is just the containers scheduled by the service; in fact, a Docker service is the definition of desired state for those containers. Changing a service definition does not in general change containers directly; it causes them to get rescheduled by Swarm in order to match their new desired state.