# Introduction to Container Networking
By the end of this exercise, you should be able to:

- Attach containers to Docker's default `nat` network
- Resolve containers by DNS entry
## Inspecting the Default Nat Network
First, let's investigate the NAT network that Docker provides by default. Start by getting some information:

```powershell
PS: node-1 Administrator> Get-NetNat
```

which should give something similar to this:

```
Name                             : H3acfc61d-0d8e-438c-8857-9c2b742707bf
ExternalIPInterfaceAddressPrefix :
InternalIPInterfaceAddressPrefix : 172.20.128.1/20
IcmpQueryTimeout                 : 30
TcpEstablishedConnectionTimeout  : 1800
TcpTransientConnectionTimeout    : 120
TcpFilteringBehavior             : AddressDependentFiltering
UdpFilteringBehavior             : AddressDependentFiltering
UdpIdleSessionTimeout            : 120
UdpInboundRefresh                : False
Store                            : Local
Active                           : True
```

Note the `InternalIPInterfaceAddressPrefix` value, which tells you the subnet managed by this network; here it is `172.20.128.1/20`.

Now let's use the Docker CLI to inspect the network. The `docker network inspect` command yields information about the specified network, including which containers are connected to it; the default network is always called `nat`, so run:

```powershell
PS: node-1 Administrator> docker network inspect nat
```

This results in:
```json
[
    {
        "Name": "nat",
        "Id": "28164bac70060efe504440aed3845180ad2262349852a920accd697bba33b967",
        "Created": "2017-08-15T12:36:40.8065675Z",
        "Scope": "local",
        "Driver": "nat",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "windows",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.128.0/20",
                    "Gateway": "172.20.128.1"
                }
            ]
        },
...
```

In the `IPAM` section we see some of the same information as above, namely the subnet and its gateway.
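If you'd rather pull those values out programmatically than read the full JSON dump, one option is a small Python sketch (illustrative, not part of the lab) that parses output captured from `docker network inspect nat`:

```python
import ipaddress
import json

# A trimmed sample of `docker network inspect nat` output, assumed to have
# been captured beforehand (e.g. docker network inspect nat > nat.json).
inspect_output = """
[
    {
        "Name": "nat",
        "Driver": "nat",
        "IPAM": {
            "Driver": "windows",
            "Options": null,
            "Config": [
                {"Subnet": "172.20.128.0/20", "Gateway": "172.20.128.1"}
            ]
        }
    }
]
"""

# Inspect output is a JSON array of networks; take the first (only) one.
config = json.loads(inspect_output)[0]["IPAM"]["Config"][0]
subnet = ipaddress.ip_network(config["Subnet"])

print("Subnet: ", config["Subnet"])               # Subnet:  172.20.128.0/20
print("Gateway:", config["Gateway"])              # Gateway: 172.20.128.1
print("Addresses in subnet:", subnet.num_addresses)  # Addresses in subnet: 4096
```

The `/20` prefix gives the network 4096 addresses (`172.20.128.0` through `172.20.143.255`), which is the pool Docker draws from when assigning container IPs.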
## Connecting Containers to Default Nat
Start some named containers:
```powershell
PS: node-1 Administrator> docker container run --name=u1 -dt microsoft/nanoserver
PS: node-1 Administrator> docker container run --name=u2 -dt microsoft/nanoserver
```

Inspect the `nat` network again:

```powershell
PS: node-1 Administrator> docker network inspect nat
```

You should see two new entries in the `Containers` section of the result, one for each container:

```json
...
"Containers": {
    "45e8576b05077e69e3786e85106b392d2d3a20743e10740c1d298cfb258b6922": {
        "Name": "u1",
        "EndpointID": "8e938af24a907dc8c6aaad9dca1fb040fdab76c929dec4418ea60aeb1443522e",
        "MacAddress": "00:15:5d:e6:0a:ec",
        "IPv4Address": "172.20.131.137/16",
        "IPv6Address": ""
    },
    "b7e49f5566332d6e4e21b56dacb6ae6e051fb198b30ae05bf1bd1884023d3e20": {
        "Name": "u2",
        "EndpointID": "266f0c03b56dfceafeafd890828506b0cfcd503e0eb13d32c2458479015fd0c5",
        "MacAddress": "00:15:5d:e6:07:06",
        "IPv4Address": "172.20.135.21/16",
        "IPv6Address": ""
    },
    ...
}
...
```

We can see that each container gets a `MacAddress` and an `IPv4Address`. The `nat` network provides layer 2 connectivity, transferring network packets between MAC addresses.

Connect to container `u2` using `docker container exec -it u2 powershell`.

From inside `u2`, try pinging container `u1` by the IP address you found in the previous step; then try pinging `u1` by container name: `ping u1`. Notice that the lookup works with both the IP and the container name.

Clean up these containers:

```powershell
PS: node-1 Administrator> docker container rm -f u1 u2
```
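As a sanity check, the addresses Docker handed out (`172.20.131.137` and `172.20.135.21`) do fall inside the `172.20.128.0/20` subnet reported earlier. A quick illustrative Python check (not part of the lab):

```python
import ipaddress

# Subnet from `docker network inspect nat` and the container addresses
# from the Containers section above (the mask suffix is dropped here).
subnet = ipaddress.ip_network("172.20.128.0/20")
container_ips = {"u1": "172.20.131.137", "u2": "172.20.135.21"}

for name, ip in container_ips.items():
    # Membership test: is this host address within the NAT subnet?
    assert ipaddress.ip_address(ip) in subnet
    print(f"{name} ({ip}) is inside {subnet}")
```

Because both containers sit in the same subnet behind the same NAT, packets between them never leave the host.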
## Conclusion
In this exercise, you explored the most basic example of container networking: two containers communicating on the same host via network address translation and a layer 2 in-software switch in the form of a Hyper-V switch. In addition to this basic routing technology, you saw how Docker leverages DNS lookup by container name to make container networking portable; by allowing us to reach another container purely by name, without doing any other service discovery, it becomes simple to design application logic meant to communicate container-to-container. At no point did our application logic need to discover anything directly about the networking infrastructure it was running on.
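To make that last point concrete, application code on a Docker network only ever needs the peer's container name. A minimal sketch (the service name `db` is hypothetical, and we assume Docker's embedded DNS resolves it inside a container on the same network):

```python
import socket

def connect_to_service(hostname: str, port: int) -> socket.socket:
    # Inside a container, Docker's DNS resolves the peer's container name
    # to its current IP, so no address ever needs to be hard-coded.
    return socket.create_connection((hostname, port), timeout=5)

# e.g. conn = connect_to_service("db", 5432)  # "db" is a hypothetical peer container
```

If a container is recreated and receives a new IP, this code is unaffected: the name lookup happens at connection time.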