Multinode Kubernetes Cluster 190213174918
CHAPTER 1
Kubernetes Installation
It is expected that you will install Kubernetes on 3 VMs / hosts to get a multinode installation. The
installation steps are taken from these two URLs:
• https://kubernetes.io/docs/setup/independent/install-kubeadm/
• https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
$ ssh root@node1
$ KUBERNETES_VERSION="1.10.3"
$ CNI_URL="https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml"
$ POD_NETWORK_CIDR="10.244.0.0/16"
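The master bootstrap itself is not reproduced here. A minimal sketch, assuming docker and kubeadm are already installed on node1 and reusing the variables above:
$ kubeadm init --pod-network-cidr=$POD_NETWORK_CIDR --kubernetes-version=v${KUBERNETES_VERSION}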
Install CNI:
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl apply -f $CNI_URL
Your Kubernetes Master node should be ready now. You can check it, for example, with:
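A hedged sketch using the admin kubeconfig exported above - the node should report Ready and the kube-system pods should be Running:
$ kubectl get nodes
$ kubectl get pods --all-namespaces -o wide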
$ ssh root@node2
$ ssh root@node3
$ KUBERNETES_VERSION="1.10.3"
All the worker nodes are prepared now - let’s connect them to the master node. SSH to the master node again
and generate the “joining” command:
$ ssh -t root@node1 "kubeadm token create --print-join-command"
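The printed “kubeadm join” command then has to be executed on each worker node. A sketch - the token and hash placeholders below are illustrative, not real values:
$ ssh root@node2 "kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
$ ssh root@node3 "kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"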
SSH back to the master node and check the cluster status - all the nodes should appear there in
“Ready” status after a while:
$ ssh root@node1
$ # Check nodes
$ kubectl get nodes
Enable routing from the local machine (host) to the Kubernetes pods/services by adding routes
(10.244.0.0/16, 10.96.0.0/12) via $NODE1_IP:
$ sudo bash -c "ip route | grep -q 10.244.0.0/16 && ip route del 10.244.0.0/16; ip route add 10.244.0.0/16 via $NODE1_IP"
$ sudo bash -c "ip route | grep -q 10.96.0.0/12 && ip route del 10.96.0.0/12; ip route add 10.96.0.0/12 via $NODE1_IP"
Kubernetes Basics
$ mkdir files
Helm Installation
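The installation commands are not included here. A minimal sketch for the Helm 2 era (matching Kubernetes 1.10), assuming the helm client binary is already installed - create a service account for Tiller and initialize it:
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --wait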
Install Traefik - Træfik is a modern HTTP reverse proxy and load balancer
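A hedged sketch of installing Traefik from the (then) stable chart repository; the release name and namespace are assumptions, not the document’s original values:
$ helm install stable/traefik --name traefik --namespace kube-system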
Install rook - File, Block, and Object Storage Services for your Cloud-Native Environment
Create a shared file system which can be mounted read-write from multiple pods
$ sleep 150
Check the Ceph monitor, OSD, pool, and placement group stats
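One way to check these stats is through the Rook toolbox pod - an assumption here; the toolbox has to be deployed and is typically labeled app=rook-ceph-tools:
$ TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
$ kubectl -n rook-ceph exec -it $TOOLS_POD -- ceph status
$ kubectl -n rook-ceph exec -it $TOOLS_POD -- ceph osd status
$ kubectl -n rook-ceph exec -it $TOOLS_POD -- ceph df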
$ kubectl get pool --namespace=rook-ceph replicapool -o yaml | sed "s/size: 1/size: 3/" | kubectl replace -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-ceph-test-pv-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: rook-ceph-test
  labels:
    app: rook-ceph-test
spec:
  template:
    metadata:
      labels:
        app: rook-ceph-test
    spec:
      containers:
      - name: rook-ceph-test
        image: busybox
        command: [ 'dd', 'if=/dev/zero', 'of=/data/zero_file', 'bs=1M', 'count=100' ]
        volumeMounts:
        - name: rook-ceph-test
          mountPath: "/data"
      restartPolicy: Never
      volumes:
      - name: rook-ceph-test
        persistentVolumeClaim:
          claimName: rook-ceph-test-pv-claim
EOF
hosts[0]=alertmanager.domain.com,alertmanager.storageSpec.volumeClaimTemplate.spec.storageClassName=rook-block,alertmanager.storageSpec.volumeClaimTemplate.spec.accessModes[0]=ReadWriteOnce,alertmanager.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=20Gi,grafana.adminPassword=admin123,grafana.ingress.enabled=true,grafana.ingress.hosts[0]=grafana.domain.com,prometheus.ingress.enabled=true,prometheus.ingress.hosts[0]=prometheus.domain.com,prometheus.storageSpec.volumeClaimTemplate.spec.storageClassName=rook-block,prometheus.storageSpec.volumeClaimTemplate.spec.accessModes[0]=ReadWriteOnce,prometheus.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=20Gi
Install Heapster - Compute Resource Usage Analysis and Monitoring of Container Clusters
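A possible installation via the (then) stable Helm chart - the release name and namespace are assumptions:
$ helm install stable/heapster --name heapster --namespace kube-system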
Pods
Check ‘kuard-pod.yaml’ manifest which will run the kuard application once it is imported into Kubernetes
Start pod from the pod manifest via Kubernetes API (see the ‘ContainerCreating’ status)
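A sketch of the creation and status check, assuming the manifest sits in the current directory:
$ kubectl apply -f kuard-pod.yaml
$ kubectl get pods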
Configure secure port-forwarding to access the pod’s exposed port via the Kubernetes API. Access the
pod by opening the web browser at http://127.0.0.1:8080 and http://127.0.0.1:8080/fs/{etc,var,home}
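A sketch of the port-forward (the same command is used again in the health-check chapter below):
$ kubectl port-forward kuard 8080:8080 &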
Get the logs from the pod (-f to tail; --previous will get logs from a previous instance of the container)
Run commands in your container with exec (-it for an interactive session). Check that you are inside the container
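Hedged examples for the two steps above, assuming the pod is named kuard and its image ships a shell:
$ kubectl logs kuard
$ kubectl logs kuard --previous      # only useful after the container has restarted at least once
$ kubectl exec -it kuard -- sh -c "hostname"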
Check pods - the kuard should disappear from the ‘pod list’
CHAPTER 5
Health Checks
Check ‘kuard-pod-health.yaml’ manifest which will start kuard and configure an HTTP health check
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  # If only one check succeeds, then the pod will again be considered ready.
  successThreshold: 1
livenessProbe:
  httpGet:
    path: /healthy
    port: 8080
  # Start probe 5 seconds after all the containers in the Pod are created
  initialDelaySeconds: 5
  # The response must arrive within 1 second and the HTTP status code must be between 200 and 400
  timeoutSeconds: 1
EOF
Create a Pod using this manifest and then port-forward to that pod
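A sketch of the creation step, assuming the manifest is saved as ‘kuard-pod-health.yaml’:
$ kubectl apply -f kuard-pod-health.yaml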
Point your browser to http://127.0.0.1:8080, then click the ‘Liveness Probe’ tab and then the ‘fail’ link - it will
cause the health checks to fail
$ kubectl port-forward kuard 8080:8080 &
Delete pod
Create a service (only routable inside the cluster). The service is assigned a Cluster IP (a DNS record is
created automatically) which load-balances across all of the pods identified by the selector
$ kubectl expose deployment app1-prod
Create app2-prod
Create service
Check if the DNS record was properly created for the Cluster IP: app2-prod (name of the service), myns
(namespace that this service is in), svc (service), cluster.local. (base domain name for the cluster)
$ kubectl run nslookup --rm -it --restart=Never --image=busybox -- nslookup app2-prod
$ kubectl run nslookup --rm -it --restart=Never --image=busybox -- nslookup app2-prod.myns
Create app2-staging
Show deployments
Change labels
Remove label
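Hedged examples for the three steps above; the deployment name app1-prod comes from the earlier service example, and the ‘env’ label is purely illustrative:
$ kubectl get deployments --show-labels
$ kubectl label deployment app1-prod env=prod                    # add a label
$ kubectl label deployment app1-prod env=staging --overwrite     # change an existing label
$ kubectl label deployment app1-prod env-                        # remove the label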
ReplicaSet
selector:
  matchLabels:
    app: kuard
    version: "2"
template:
  metadata:
    labels:
      app: kuard
      version: "2"
  spec:
    containers:
    - name: kuard
      image: "gcr.io/kuar-demo/kuard-amd64:2"
EOF
Create ReplicaSet
Check pods
Scale up ReplicaSet
Delete ReplicaSet
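Hedged sketches for the two steps above, assuming the ReplicaSet created from the manifest is named kuard:
$ kubectl scale replicaset kuard --replicas=4
$ kubectl delete replicaset kuard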
CHAPTER 8
DaemonSet
Check ‘nginx-fast-storage.yaml’ which will provision nginx to ssd-labeled nodes only. By default, a
DaemonSet creates a copy of a Pod on every node
template:
  metadata:
    labels:
      app: nginx
      ssd: "true"
  spec:
    nodeSelector:
      ssd: "true"
    containers:
    - name: nginx
      image: nginx:1.10.0
EOF
Add the label ssd=true to node3 - nginx should be deployed there automatically
Check the nodes where nginx was deployed (it should also be on node3 with the ssd=true label)
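A sketch of the two steps; both are standard kubectl commands:
$ kubectl label nodes node3 ssd=true
$ kubectl get pods -o wide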
Jobs
One-shot Jobs provide a way to run a single Pod once until successful termination. The Pod is restarted in case of failure
Delete job
spec:
  containers:
  - name: kuard
    image: gcr.io/kuar-demo/kuard-amd64:1
    imagePullPolicy: Always
    args:
    - "--keygen-enable"
Get pod name of a job called ‘oneshot’ and check the logs
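A hedged way to do this, relying on the job-name label that Kubernetes adds to pods created by a Job:
$ ONESHOT_POD=$(kubectl get pods -l job-name=oneshot -o jsonpath='{.items[0].metadata.name}')
$ kubectl logs $ONESHOT_POD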
Show the one-shot Job configuration file. Note the keygen-exit-code parameter - a nonzero exit code after
generating three keys
spec:
  containers:
  - name: kuard
    image: gcr.io/kuar-demo/kuard-amd64:1
    imagePullPolicy: Always
    args:
    - "--keygen-enable"
    - "--keygen-exit-on-complete"
    - "--keygen-exit-code=1"
    - "--keygen-num-to-gen=3"
  restartPolicy: OnFailure
EOF
Show the Parallel Job configuration file - (5x10) keys generated in 5 parallel containers
metadata:
  labels:
    chapter: jobs
spec:
  containers:
  - name: kuard
    image: gcr.io/kuar-demo/kuard-amd64:1
    imagePullPolicy: Always
    args:
    - "--keygen-enable"
    - "--keygen-exit-on-complete"
    - "--keygen-num-to-gen=5"
  restartPolicy: OnFailure
EOF
selector:
  matchLabels:
    app: work-queue
    component: queue
    chapter: jobs
template:
  metadata:
    labels:
      app: work-queue
      component: queue
      chapter: jobs
  spec:
    containers:
    - name: queue
      image: "gcr.io/kuar-demo/kuard-amd64:1"
      imagePullPolicy: Always
EOF
Expose the work queue - this helps consumers and producers locate the work queue via DNS
The queue should not be empty - check it by looking at the ‘MemQ Server’ tab in the web interface (http://127.0.0.1:8080/-/memq)
$ curl --silent 127.0.0.1:8080/memq/server/stats | jq
Show the consumer job config file, which allows up to five pods to start in parallel. Once the first pod exits with a zero
exit code, the Job will not start any new pods (none of the workers should exit until the work is done)
- "--keygen-enable"
- "--keygen-exit-on-complete"
- "--keygen-memq-server=http://queue:8080/memq/server"
- "--keygen-memq-queue=keygen"
restartPolicy: OnFailure
EOF
Five pods should be created and run until the work queue is empty. Open the web browser to watch the
queue status change (http://127.0.0.1:8080/-/memq)
$ kubectl get pods -o wide
Check the queue status - especially the ‘dequeued’ and ‘depth’ fields
Stop port-forwarding
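Since the port-forward was started in the background with ‘&’, one hedged way to stop it:
$ pkill -f "kubectl port-forward"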
CHAPTER 10
ConfigMaps
Show file with key/value pairs which will be available to the pod
parameter2 = value2
EOF
Create a ConfigMap with that file (environment variables are specified with a special valueFrom member)
Show ConfigMaps
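Hedged sketches for the two steps above; the ConfigMap name my-config and the file name my-config.txt are taken from the commands further below, while the literal values are illustrative:
$ kubectl create configmap my-config --from-file=my-config.txt --from-literal=extra-param=extra-value --from-literal=another-param=another-value
$ kubectl get configmaps
$ kubectl describe configmap my-config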
$ kubectl exec kuard-config -- sh -xc "echo EXTRA_PARAM: \$EXTRA_PARAM; echo ANOTHER_PARAM: \$ANOTHER_PARAM && cat /config/my-config.txt"
Go to http://localhost:8080 and click on the ‘Server Env’ tab, then ‘File system browser’ tab (/config) and
look for ANOTHER_PARAM and EXTRA_PARAM values
$ kubectl port-forward kuard-config 8080:8080 &
Remove pod
Secrets
Download certificates
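A hypothetical sketch of turning the downloaded certificate into a secret; the secret name kuard-tls and the file names kuard.crt/kuard.key are assumptions, not taken from this document:
$ kubectl create secret generic kuard-tls --from-file=kuard.crt --from-file=kuard.key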
Show secrets
Update secrets - generate the YAML and then edit the secret with ‘kubectl edit’ (analogous to ‘kubectl edit configmap my-config’ for ConfigMaps)
Set port-forwarding. Go to https://localhost:8080, check the certificate and click on “File system browser” tab (/tls)
Delete pod
Deployments
List deployments
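A sketch of the listing; this is the standard command:
$ kubectl get deployments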
Change the deployment image (version 1.7.9 -> 1.8) - you can also make the change by running ‘kubectl edit
deployment nginx-deployment’...
$ kubectl set image deployment nginx-deployment nginx=nginx:1.8
See the deployment history (first there was version nginx:1.7.9, then nginx:1.8)
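A sketch of inspecting the history; the original steps may differ, but ‘kubectl rollout history’ is the standard command:
$ kubectl rollout history deployment nginx-deployment
$ kubectl rollout history deployment nginx-deployment --revision=2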
Endpoints
type: ExternalName
externalName: database.company.com
EOF
Create DNS name (CNAME) that points to the specific server running the database
Show services
Remove service
Self-Healing
Get first nginx pod and delete it - one of the nginx pods should be in ‘Terminating’ status
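A hedged sketch, assuming the deployment’s pods carry the usual app=nginx label:
$ NGINX_POD=$(kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')
$ kubectl delete pod $NGINX_POD
$ kubectl get pods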
Get deployment details and check the events for recent changes
Get pod details - everything looks fine - you need to wait 5 minutes
The pod will not be evicted until the node has been unreachable for 5 minutes (see Tolerations in ‘kubectl describe pod’). This prevents
Kubernetes from spinning up new containers when it is not necessary
$ sleep 300
$ vagrant up node2
$ sleep 70
Persistent Storage
$ ssh $SSH_ARGS vagrant@node1 "sudo sh -xc \" apt-get update -qq; DEBIAN_FRONTEND=noninteractive apt-get install -y nfs-kernel-server > /dev/null; mkdir /nfs; chown nobody:nogroup /nfs; echo /nfs *\(rw,sync,no_subtree_check\) >> /etc/exports; systemctl restart nfs-kernel-server \""
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      volume: nfs-volume
EOF
Create replicaset
You can see that /tmp is mounted into both pods and contains the same file ‘date’
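The two commands below use $NFS_TEST_POD1 and $NFS_TEST_POD2; a hypothetical way to fill them, assuming the two ReplicaSet pods are the only pods in the current namespace:
$ NFS_TEST_POD1=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
$ NFS_TEST_POD2=$(kubectl get pods -o jsonpath='{.items[1].metadata.name}')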
$ kubectl exec -it $NFS_TEST_POD1 -- sh -xc "hostname; echo $NFS_TEST_POD1 >> /tmp/date"
$ kubectl exec -it $NFS_TEST_POD2 -- sh -xc "hostname; echo $NFS_TEST_POD2 >> /tmp/date"
Show files on the NFS server - there should be a ‘/nfs/date’ file with 2 dates
$ ssh $SSH_ARGS vagrant@node1 "set -x; ls -al /nfs -ls; ls -n /nfs; cat /nfs/date"
Node replacement
$ sleep 40
$ vagrant up node3
$ ssh $SSH_ARGS vagrant@node3 "sudo sh -xc \" apt-get update -qq; DEBIAN_FRONTEND=noninteractive apt-get install -y apt-transport-https curl > /dev/null; curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -; echo deb https://apt.kubernetes.io/
$ ssh $SSH_ARGS vagrant@node3 "sudo sh -xc \" apt-get update -qq; DEBIAN_FRONTEND=noninteractive apt-get install -y docker.io kubelet=${KUBERNETES_VERSION}-00 kubeadm=${KUBERNETES_VERSION}-00 kubectl=${KUBERNETES_VERSION}-00 > /dev/null \""
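After the packages are installed, node3 still has to join the cluster again; a sketch reusing the join-command generation shown earlier (the second line is a placeholder for whatever the first one prints):
$ ssh -t root@node1 "kubeadm token create --print-join-command"
$ ssh root@node3 "<printed kubeadm join command>"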
Notes
Show all
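A hedged interpretation of ‘show all’:
$ kubectl get all --all-namespaces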