Instructor Demo: Kubernetes Basics
In this demo, we'll illustrate:
- Setting up a Kubernetes cluster with one master and two nodes
- Scheduling a pod, including the effect of taints on scheduling
- Namespaces shared by containers in a pod
Initializing Kubernetes
Everyone should follow along with this section to install Kubernetes. On
node-0, initialize the cluster with kubeadm:

```
[centos@node-0 ~]$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
```

If successful, the output will end with a join command:

```
...
You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.29.54:6443 --token wdytg5.q1w1f4dau7u6wk11 --discovery-token-ca-cert-hash sha256:a3b222227e5b064d498321d1838ee271355aae810f7ef1c984f4304e68143c81
```

To start using your cluster, you need to run:
```
[centos@node-0 ~]$ mkdir -p $HOME/.kube
[centos@node-0 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[centos@node-0 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

List all the nodes in your cluster:

```
[centos@node-0 ~]$ kubectl get nodes
```

which should output something like:
```
NAME      STATUS     ROLES     AGE       VERSION
node-0    NotReady   master    2h        v1.11.1
```

The NotReady status indicates that we must install a network for our cluster.

Let's install the Calico network driver:

```
[centos@node-0 ~]$ kubectl apply -f https://bit.ly/2v9yaaV
```

After a moment, if we list our nodes again, ours should be ready:
```
[centos@node-0 ~]$ kubectl get nodes -w
NAME      STATUS     ROLES     AGE       VERSION
node-0    NotReady   master    1m        v1.11.1
node-0    NotReady   master    1m        v1.11.1
node-0    NotReady   master    1m        v1.11.1
node-0    Ready      master    2m        v1.11.1
node-0    Ready      master    2m        v1.11.1
```
Exploring Kubernetes Scheduling
Let's create a demo-pod.yaml file on node-0 after enabling Kubernetes on this
single node:

```
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
  - name: mydemo
    image: centos:7
    command: ["ping", "8.8.8.8"]
```

Deploy the pod:

```
[centos@node-0 ~]$ kubectl create -f demo-pod.yaml
```

Check to see if the pod is running:
```
[centos@node-0 ~]$ kubectl get pod demo-pod
NAME       READY     STATUS    RESTARTS   AGE
demo-pod   0/2       Pending   0          7s
```

The status is stuck in Pending. Why is that?
Let's attempt to troubleshoot by obtaining some information about the pod:
```
[centos@node-0 ~]$ kubectl describe pod demo-pod
```

In the bottom section titled Events:, we should see something like this:

```
...
Events:
  Type     Reason            ...  Message
  ----     ------            ...  -------
  Warning  FailedScheduling  ...  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
```

Note how it states that the one node in your cluster has a taint, which is
Kubernetes's way of saying there's a reason you might not want to schedule
pods there.
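As an aside: if you did want this particular pod to be schedulable on a tainted node anyway, you could declare a toleration for that taint in its spec. The following fragment is illustrative only, not part of this demo (the key shown matches the master taint we're about to inspect):

```
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
```

A pod tolerating a taint is merely *allowed* on the tainted node; the scheduler is not obliged to place it there.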
Get some state and config information about your single Kubernetes node:

```
[centos@node-0 ~]$ kubectl describe nodes
```

If we scroll a little, we should see a field titled Taints, and it should say
something like:

```
Taints:             node-role.kubernetes.io/master:NoSchedule
```

By default, Kubernetes masters carry a taint that disallows scheduling pods on
them. While this can be overridden, it is best practice not to allow pods to
be scheduled on master nodes, in order to ensure the stability of your
cluster.
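For reference only (we'll add worker nodes instead in this demo): on a single-node cluster where you do want workloads on the master, the taint can be deleted with kubectl taint, where the trailing minus means "remove this taint":

```
[centos@node-0 ~]$ kubectl taint nodes node-0 node-role.kubernetes.io/master:NoSchedule-
```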
Execute the join command you found above when initializing Kubernetes on
node-1 and node-2 (you'll need to add sudo to the start), and then check the
status back on node-0:

```
[centos@node-1 ~]$ sudo kubeadm join...
[centos@node-2 ~]$ sudo kubeadm join...
[centos@node-0 ~]$ kubectl get nodes
```

After a few moments, there should be three nodes listed, all with the Ready
status.

Let's see what system pods are running on our cluster:
```
[centos@node-0 ~]$ kubectl get pods -n kube-system
```

which results in something similar to this:

```
NAME                                       READY     STATUS    RESTARTS   AGE
calico-etcd-pfhj4                          1/1       Running   1          5h
calico-kube-controllers-559c657d6d-ztk8c   1/1       Running   1          5h
calico-node-89k9v                          2/2       Running   0          4h
calico-node-brqxz                          2/2       Running   2          5h
calico-node-zsmh2                          2/2       Running   1          41s
coredns-78fcdf6894-gtj87                   1/1       Running   1          5h
coredns-78fcdf6894-nz2kw                   1/1       Running   1          5h
etcd-node-0                                1/1       Running   1          5h
kube-apiserver-node-0                      1/1       Running   1          5h
kube-controller-manager-node-0             1/1       Running   1          5h
kube-proxy-qxfzt                           1/1       Running   0          41s
kube-proxy-vgrtm                           1/1       Running   0          4h
kube-proxy-ws2z5                           1/1       Running   0          5h
kube-scheduler-node-0                      1/1       Running   1          5h
```

We can see the pods running on the master: etcd, the API server, the
controller manager and the scheduler, as well as the Calico and DNS
infrastructure pods deployed when we installed Calico.
Finally, let's check the status of our demo pod now:
```
[centos@node-0 ~]$ kubectl get pod demo-pod
```

Everything should be working correctly, with 2/2 containers in the pod
running, now that there are untainted nodes for the pod to be scheduled on.
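To see which node the scheduler actually placed the pod on, ask for the wide output (the exact columns vary a little between kubectl versions):

```
[centos@node-0 ~]$ kubectl get pod demo-pod -o wide
```

The NODE column should show node-1 or node-2, never the tainted master.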
Exploring Containers in a Pod
Let's interact with the centos container running in demo-pod by getting a shell in it:
```
[centos@node-0 ~]$ kubectl exec -it -c mydemo demo-pod -- /bin/bash
```

Try listing the processes in this container:
```
[root@demo-pod /]# ps -aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  24860  1992 ?        Ss   14:48   0:00 ping 8.8.8.8
root         5  0.0  0.0  11832  3036 pts/0    Ss   14:48   0:00 /bin/bash
root        20  0.0  0.0  51720  3508 pts/0    R+   14:48   0:00 ps -aux
```

We can see the ping process we containerized in our yaml file running as PID 1
inside this container, just like we saw for plain containers.
Try reaching Nginx:
```
[root@demo-pod /]# curl localhost:80
```

You should see the HTML for the default nginx landing page. Notice the
difference here from a regular container: we were able to reach nginx from our
centos container on a port on localhost. The nginx and centos containers share
a network namespace, and therefore all their ports, since they are part of the
same pod.
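Incidentally, the shared-data volume declared in demo-pod.yaml is never mounted, so while the containers share a network namespace, they don't share any files. If you wanted them to, you could add volumeMounts to each container entry. A sketch of how the containers section might look (the mount paths here are illustrative, not part of the demo):

```
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: mydemo
    image: centos:7
    command: ["ping", "8.8.8.8"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared
```

With this in place, a file written to /shared in the centos container would appear under nginx's web root, since both mounts point at the same emptyDir.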
Conclusion
In this demo, we saw two scheduling innovations Kubernetes offers: taints, which provide 'anti-affinity', or reasons not to schedule a pod on a given node; and pods, which are groups of containers that are always scheduled on the same node, and share network, IPC and hostname namespaces. These are both examples of Kubernetes's highly expressive scheduling, and are both difficult to reproduce with the simpler scheduling offered by Swarm.