Lab: Kubernetes Metrics Server
Introduction
Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. Metrics Server collects resource metrics from Kubelets and exposes them in the Kubernetes apiserver through the Metrics API for use by the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler. The Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines. Metrics Server is not meant for non-autoscaling purposes.
• Installing metrics-server
• Configuring metrics-server
• Using metrics-server
Note: Ensure you have a running cluster deployed.
1. Ensure that you have logged in as the root user with the password linux on the kube-master node.
1.1 Let us clone the git repository that contains the manifests required for this exercise, by executing the below command.
# git clone https://github.com/EyesOnCloud/k8s-metrics.git
Output:
1.2 Let us view the metrics-server manifest that we will be installing, by executing the below command.
# cat -n ~/k8s-metrics/metrics-server.yaml
1.3 Let us verify if we are able to view the metrics, by executing the below command.
Note: We will not be able to view the metrics, as we haven’t installed metrics-server yet.
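The command for this step is missing from this copy of the lab; a likely candidate, assuming the usual metrics-server workflow, is the kubectl top subcommand shown below. At this point it should fail, because the Metrics API is not yet available in the cluster.
# kubectl top nodes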
1.4 Let us apply the manifest and install metrics-server, by executing the below command.
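The install command itself is missing here; assuming the manifest cloned in step 1.1 is applied as-is, it would be:
# kubectl apply -f ~/k8s-metrics/metrics-server.yaml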
Output:
1.5 Let us verify if the metrics-server pod is available, by executing the below command.
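The command is missing here; a likely form, assuming metrics-server is deployed into the kube-system namespace as in the upstream manifest, is:
# kubectl -n kube-system get pods | grep metrics-server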
Output:
1.7 Let us verify the metrics of the pods, by executing the below command.
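The command is missing here; it is most likely the kubectl top subcommand for pods, for example against the kube-system namespace that step 1.9 also uses:
# kubectl -n kube-system top pods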
Output:
1.9 Let us sort the pods based on CPU usage, by executing the below command.
# kubectl -n kube-system top pods --sort-by=cpu
Output:
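For illustration (not part of the original step), the same flag also accepts memory as the sort key, which is handy for spotting memory-heavy pods:
# kubectl -n kube-system top pods --sort-by=memory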
# cat -n ~/k8s-metrics/multi-pod.yaml
Output:
Output:
Output:
Output:
Output:
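The remaining commands of this exercise were lost in extraction. A plausible continuation, assuming the multi-pod manifest above is meant to create a few pods whose resource usage is then inspected through the Metrics API (the exact steps are an assumption, not taken from the original lab), would be:
# kubectl apply -f ~/k8s-metrics/multi-pod.yaml
# kubectl get pods
# kubectl top pods
# kubectl top pods --containers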