Overview - Kubernetes
Overview
Kubernetes is a portable, extensible, open source platform for managing
containerized workloads and services, that facilitates both declarative
configuration and automation. It has a large, rapidly growing ecosystem.
Kubernetes services, support, and tools are widely available.
1: Objects In Kubernetes
1.1: Kubernetes Object Management
1.2: Object Names and IDs
1.3: Labels and Selectors
1.4: Namespaces
1.5: Annotations
1.6: Field Selectors
1.7: Finalizers
1.8: Owners and Dependents
1.9: Recommended Labels
2: Kubernetes Components
3: The Kubernetes API
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an
abbreviation results from counting the eight letters between the "K" and the "s". Google
open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's
experience running production workloads at scale with best-of-breed ideas and practices
from the community.
Traditional deployment era: Early on, organizations ran applications on physical servers.
There was no way to define resource boundaries for applications in a physical server, and this
caused resource allocation issues. For example, if multiple applications run on a physical
server, there can be instances where one application would take up most of the resources,
and as a result, the other applications would underperform. A solution for this would be to
run each application on a different physical server. But this did not scale as resources were
underutilized, and it was expensive for organizations to maintain many physical servers.
https://kubernetes.io/docs/concepts/overview/_print/ 1/39
6/6/23, 3:45 PM Overview | Kubernetes
Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run
multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows
applications to be isolated between VMs and provides a level of security as the information of
one application cannot be freely accessed by another application.
Virtualization allows better utilization of resources in a physical server and allows better
scalability because an application can be added or updated easily, reduces hardware costs,
and much more. With virtualization you can present a set of physical resources as a cluster of
disposable virtual machines.
Each VM is a full machine running all the components, including its own operating system, on
top of the virtualized hardware.
Container deployment era: Containers are similar to VMs, but they have relaxed isolation
properties to share the Operating System (OS) among the applications. Therefore, containers
are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU,
memory, process space, and more. As they are decoupled from the underlying infrastructure,
they are portable across clouds and OS distributions.
Containers have become popular because they provide extra benefits, such as:
Agile application creation and deployment: increased ease and efficiency of container
image creation compared to VM image use.
Continuous development, integration, and deployment: provides for reliable and
frequent container image build and deployment with quick and efficient rollbacks (due
to image immutability).
Dev and Ops separation of concerns: create application container images at
build/release time rather than deployment time, thereby decoupling applications from
infrastructure.
Observability: not only surfaces OS-level information and metrics, but also application
health and other signals.
Environmental consistency across development, testing, and production: runs the same
on a laptop as it does in the cloud.
Cloud and OS distribution portability: runs on Ubuntu, RHEL, CoreOS, on-premises, on
major public clouds, and anywhere else.
Application-centric management: raises the level of abstraction from running an OS on
virtual hardware to running an application on an OS using logical resources.
Loosely coupled, distributed, elastic, liberated micro-services: applications are broken
into smaller, independent pieces and can be deployed and managed dynamically – not a
monolithic stack running on one big single-purpose machine.
Resource isolation: predictable application performance.
Resource utilization: high efficiency and density.
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to
run distributed systems resiliently. It takes care of scaling and failover for your application,
provides deployment patterns, and more. For example: Kubernetes can easily manage a
canary deployment for your system.
Service discovery and load balancing: Kubernetes can expose a container using the
DNS name or using its own IP address. If traffic to a container is high, Kubernetes is
able to load balance and distribute the network traffic so that the deployment is stable.
Storage orchestration: Kubernetes allows you to automatically mount a storage system
of your choice, such as local storage, public cloud providers, and more.
Automated rollouts and rollbacks: You can describe the desired state for your
deployed containers using Kubernetes, and it can change the actual state to the desired
state at a controlled rate. For example, you can automate Kubernetes to create new
containers for your deployment, remove existing containers, and adopt all their
resources into the new container.
Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use
to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each
container needs. Kubernetes can fit containers onto your nodes to make the best use of
your resources.
Self-healing: Kubernetes restarts containers that fail, replaces containers, kills
containers that don't respond to your user-defined health check, and doesn't advertise
them to clients until they are ready to serve.
Secret and configuration management: Kubernetes lets you store and manage
sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy
and update secrets and application configuration without rebuilding your container
images, and without exposing secrets in your stack configuration.
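As a small sketch of the last point, a Secret can be created and updated independently of container images (the secret name and key here are illustrative, not from the original text):

```shell
# Create a Secret from a literal value; no image rebuild is needed
# to change it later with 'kubectl create secret ... --dry-run=client -o yaml | kubectl apply -f -'
kubectl create secret generic app-secret --from-literal=API_KEY=not-a-real-key
```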
What Kubernetes is not

Kubernetes:
Does not limit the types of applications supported. Kubernetes aims to support an
extremely diverse variety of workloads, including stateless, stateful, and data-processing
workloads. If an application can run in a container, it should run great on Kubernetes.
Does not deploy source code and does not build your application. Continuous
Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization
cultures and preferences as well as technical requirements.
Does not provide application-level services, such as middleware (for example, message
buses), data-processing frameworks (for example, Spark), databases (for example,
MySQL), caches, nor cluster storage systems (for example, Ceph) as built-in services.
Such components can run on Kubernetes, and/or can be accessed by applications
running on Kubernetes through portable mechanisms, such as the Open Service Broker.
Does not dictate logging, monitoring, or alerting solutions. It provides some integrations
as proof of concept, and mechanisms to collect and export metrics.
Does not provide nor mandate a configuration language/system (for example, Jsonnet).
It provides a declarative API that may be targeted by arbitrary forms of declarative
specifications.
Does not provide nor adopt any comprehensive machine configuration, maintenance,
management, or self-healing systems.
Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the
need for orchestration. The technical definition of orchestration is execution of a defined
workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of
independent, composable control processes that continuously drive the current state
towards the provided desired state. It shouldn't matter how you get from A to C.
Centralized control is also not required. This results in a system that is easier to use and
more powerful, robust, resilient, and extensible.
What's next
Take a look at the Kubernetes Components
Take a look at The Kubernetes API
Take a look at the Cluster Architecture
Ready to Get Started?
1 - Objects In Kubernetes
Kubernetes objects are persistent entities in the Kubernetes system.
Kubernetes uses these entities to represent the state of your cluster. Learn
about the Kubernetes object model and how to work with these objects.
This page explains how Kubernetes objects are represented in the Kubernetes API, and how
you can express them in .yaml format.
A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system
will constantly work to ensure that object exists. By creating an object, you're effectively telling
the Kubernetes system what you want your cluster's workload to look like; this is your
cluster's desired state.
Almost every Kubernetes object includes two nested object fields that govern the object's
configuration: the object spec and the object status. For objects that have a spec, you have
to set this when you create the object, providing a description of the characteristics you
want the resource to have: its desired state.

The status describes the current state of the object, supplied and updated by the Kubernetes
system and its components. The Kubernetes control plane continually and actively manages
every object's actual state to match the desired state you supplied.
For more information on the object spec, status, and metadata, see the Kubernetes API
Conventions.
Here's an example .yaml file that shows the required fields and object spec for a Kubernetes
Deployment:
application/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
One way to create a Deployment using a .yaml file like the one above is to use the kubectl
apply command in the kubectl command-line interface, passing the .yaml file as an
argument. Here's an example:
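Assuming the manifest above is saved locally as application/deployment.yaml:

```shell
# Apply the manifest; kubectl prints the result shown below
kubectl apply -f application/deployment.yaml
```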
deployment.apps/nginx-deployment created
Required fields
In the .yaml file for the Kubernetes object you want to create, you'll need to set values for
the following fields:
apiVersion - Which version of the Kubernetes API you're using to create this object
kind - What kind of object you want to create
metadata - Data that helps uniquely identify the object, including a name string, UID ,
and optional namespace
spec - What state you desire for the object
The precise format of the object spec is different for every Kubernetes object, and contains
nested fields specific to that object. The Kubernetes API Reference can help you find the spec
format for all of the objects you can create using Kubernetes.
For example, see the spec field for the Pod API reference. For each Pod, the .spec field
specifies the pod and its desired state (such as the container image name for each container
within that pod). Another example of an object specification is the spec field for the
StatefulSet API. For StatefulSet, the .spec field specifies the StatefulSet and its desired state.
Within the .spec of a StatefulSet is a template for Pod objects. That template describes Pods
that the StatefulSet controller will create in order to satisfy the StatefulSet specification.
Different kinds of object can also have different .status ; again, the API reference pages
detail the structure of that .status field, and its content for each different type of object.
The kubectl tool uses the --validate flag to set the level of field validation. It accepts the
values ignore , warn , and strict while also accepting the values true (equivalent to
strict ) and false (equivalent to ignore ). The default validation setting for kubectl is
--validate=true .

Strict
Strict field validation: validation is performed, and the request fails if any errors are
found.
Warn
Field validation is performed, but errors are exposed as warnings rather than failing the
request.
Ignore
No field validation is performed.
When kubectl cannot connect to an API server that supports field validation it will fall back
to using client-side validation. Kubernetes 1.27 and later versions always offer field validation;
older Kubernetes releases might not. If your cluster is older than v1.27, check the
documentation for your version of Kubernetes.
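As a sketch, the validation level can be set explicitly on a request (the file name deployment.yaml is an assumed example):

```shell
# Report unknown or duplicate fields as warnings instead of failing the request
kubectl apply --validate=warn -f deployment.yaml
```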
What's next
If you're new to Kubernetes, read more about the following:
To learn about objects in Kubernetes in more depth, read other pages in this section:
1.1 - Kubernetes Object Management

The kubectl command-line tool supports several different ways to create and manage
Kubernetes objects.

Management techniques
Warning: A Kubernetes object should be managed using only one technique. Mixing and
matching techniques for the same object results in undefined behavior.
Imperative commands
When using imperative commands, a user operates directly on live objects in a cluster. The
user provides operations to the kubectl command as arguments or flags.
This is the recommended way to get started or to run a one-off task in a cluster. Because this
technique operates directly on live objects, it provides no history of previous configurations.
Examples
Run an instance of the nginx container by creating a Deployment object:
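For example:

```shell
# Imperatively create a Deployment running the nginx image
kubectl create deployment nginx --image nginx
```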
Trade-offs
Advantages compared to object configuration:
Warning: The imperative replace command replaces the existing spec with the newly
provided one, dropping all changes to the object missing from the configuration file. This
approach should not be used with resource types whose specs are updated
independently of the configuration file. Services of type LoadBalancer, for example, have
their externalIPs field updated independently from the configuration by the cluster.
Examples
Create the objects defined in a configuration file:
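For example (the file name nginx.yaml is illustrative):

```shell
# Create the objects from a configuration file
kubectl create -f nginx.yaml
```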
Update the objects defined in a configuration file by overwriting the live configuration:
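For example (the file name nginx.yaml is illustrative):

```shell
# Overwrite the live configuration with the file's contents
kubectl replace -f nginx.yaml
```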
Trade-offs
Advantages compared to imperative commands:
Note: Declarative object configuration retains changes made by other writers, even if the
changes are not merged back to the object configuration file. This is possible by using the
patch API operation to write only observed differences, instead of using the replace API
operation to replace the entire object configuration.
Examples
Process all object configuration files in the configs directory, and create or patch the live
objects. You can first diff to see what changes are going to be made, and then apply:
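For example:

```shell
# Preview what would change, then apply the whole directory
kubectl diff -f configs/
kubectl apply -f configs/
```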
Trade-offs
Advantages compared to imperative object configuration:
Changes made directly to live objects are retained, even if they are not merged back into
the configuration files.
Declarative object configuration has better support for operating on directories and
automatically detecting operation types (create, patch, delete) per-object.
Disadvantages compared to imperative object configuration:

Declarative object configuration is harder to debug and understand results when they
are unexpected.
Partial updates using diffs create complex merge and patch operations.
What's next
Managing Kubernetes Objects Using Imperative Commands
Imperative Management of Kubernetes Objects Using Configuration Files
Declarative Management of Kubernetes Objects Using Configuration Files
Declarative Management of Kubernetes Objects Using Kustomize
Kubectl Command Reference
Kubectl Book
Kubernetes API Reference
1.2 - Object Names and IDs

Each object in your cluster has a Name that is unique for that type of resource. Every
Kubernetes object also has a UID that is unique across your whole cluster.

For example, you can only have one Pod named myapp-1234 within the same namespace, but
you can have one Pod and one Deployment that are each named myapp-1234 .
Names
A client-provided string that refers to an object in a resource URL, such as
/api/v1/pods/some-name .
Only one object of a given kind can have a given name at a time. However, if you delete the
object, you can make a new object with the same name.
Names must be unique across all API versions of the same resource. API resources are
distinguished by their API group, resource type, namespace (for namespaced
resources), and name. In other words, API version is irrelevant in this context.
Note: In cases when objects represent a physical entity, like a Node representing a
physical host, when the host is re-created under the same name without deleting and re-
creating the Node, Kubernetes treats the new host as the old one, which may lead to
inconsistencies.
Below are four types of commonly used name constraints for resources.

DNS Subdomain Names
Most resource types require a name that can be used as a DNS subdomain name as defined
in RFC 1123. This means the name must contain no more than 253 characters, contain only
lowercase alphanumeric characters, '-' or '.', and start and end with an alphanumeric
character.

RFC 1123 Label Names
Some resource types require their names to follow the DNS label standard as defined in
RFC 1123. This means the name must contain at most 63 characters, contain only lowercase
alphanumeric characters or '-', and start and end with an alphanumeric character.

RFC 1035 Label Names
Some resource types require their names to follow the DNS label standard as defined in
RFC 1035. This means the name must contain at most 63 characters, contain only lowercase
alphanumeric characters or '-', start with a lowercase alphabetic character, and end with
an alphanumeric character.

Path Segment Names
Some resource types require their names to be able to be safely encoded as a path
segment. In other words, the name may not be "." or ".." and the name may not contain
'/' or '%'.
Here's an example manifest for a Pod named nginx-demo :

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
UIDs
A Kubernetes system-generated string to uniquely identify objects.
Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID. It is
intended to distinguish between historical occurrences of similar entities.
Kubernetes UIDs are universally unique identifiers (also known as UUIDs). UUIDs are
standardized as ISO/IEC 9834-8 and as ITU-T X.667.
What's next
Read about labels and annotations in Kubernetes.
See the Identifiers and Names in Kubernetes design document.
"metadata": {
"labels": {
"key1" : "value1",
"key2" : "value2"
}
}
Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-
identifying information should be recorded using annotations.
Motivation
Labels enable users to map their own organizational structures onto system objects in a
loosely coupled fashion, without requiring clients to store these mappings.
Service deployments and batch processing pipelines are often multi-dimensional entities (e.g.,
multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-
services per tier). Management often requires cross-cutting operations, which breaks
encapsulation of strictly hierarchical representations, especially rigid hierarchies determined
by the infrastructure rather than by users.
Example labels:

"release" : "stable", "release" : "canary"
"environment" : "dev", "environment" : "qa", "environment" : "production"
"tier" : "frontend", "tier" : "backend", "tier" : "cache"
"partition" : "customerA", "partition" : "customerB"
"track" : "daily", "track" : "weekly"

These are examples of commonly used labels; you are free to develop your own conventions.
Keep in mind that the label key must be unique for a given object.
If the prefix is omitted, the label Key is presumed to be private to the user. Automated system
components (e.g. kube-scheduler , kube-controller-manager , kube-apiserver , kubectl , or
other third-party automation) which add labels to end-user objects must specify a prefix.
The kubernetes.io/ and k8s.io/ prefixes are reserved for Kubernetes core components.
For example, here's a manifest for a Pod that has two labels environment: production and
app: nginx :
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: production
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
Label selectors
Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects
to carry the same label(s).
Via a label selector, the client/user can identify a set of objects. The label selector is the core
grouping primitive in Kubernetes.
The API currently supports two types of selectors: equality-based and set-based. A label
selector can be made of multiple requirements which are comma-separated. In the case of
multiple requirements, all must be satisfied so the comma separator acts as a logical AND
( && ) operator.
The semantics of empty or non-specified selectors are dependent on the context, and API
types that use selectors should document the validity and meaning of them.
Note: For some API types, such as ReplicaSets, the label selectors of two instances must
not overlap within a namespace, or the controller can see that as conflicting instructions
and fail to determine how many replicas should be present.
Caution: For both equality-based and set-based conditions there is no logical OR (||)
operator. Ensure your filter statements are structured accordingly.
Equality-based requirement
Equality- or inequality-based requirements allow filtering by label keys and values. Matching
objects must satisfy all of the specified label constraints, though they may have additional
labels as well. Three kinds of operators are admitted = , == , != . The first two represent
equality (and are synonyms), while the latter represents inequality. For example:
environment = production
tier != frontend
The former selects all resources with key equal to environment and value equal to
production . The latter selects all resources with key equal to tier and value distinct from
frontend , and all resources with no labels with the tier key. One could filter for resources
in production excluding frontend using the comma operator:
environment=production,tier!=frontend
One usage scenario for equality-based label requirement is for Pods to specify node selection
criteria. For example, the sample Pod below selects nodes with the label
" accelerator=nvidia-tesla-p100 ".
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "registry.k8s.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    accelerator: nvidia-tesla-p100
Set-based requirement
Set-based label requirements allow filtering keys according to a set of values. Three kinds of
operators are supported: in , notin and exists (only the key identifier). For example:
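The four example requirements discussed below look like:

```
environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
```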
The first example selects all resources with key equal to environment and value equal to
production or qa .
The second example selects all resources with key equal to tier and values other than
frontend and backend , and all resources with no labels with the tier key.
The third example selects all resources including a label with key partition ; no values
are checked.
The fourth example selects all resources without a label with key partition ; no values
are checked.
Similarly the comma separator acts as an AND operator. So filtering resources with a
partition key (no matter the value) and with environment different than qa can be
achieved using partition,environment notin (qa) . The set-based label selector is a general
form of equality since environment=production is equivalent to environment in
(production) ; similarly for != and notin .
API
LIST and WATCH filtering
LIST and WATCH operations may specify label selectors to filter the sets of objects returned
using a query parameter. Both requirements are permitted (presented here as they would
appear in a URL query string):
equality-based requirements:
?labelSelector=environment%3Dproduction,tier%3Dfrontend
set-based requirements:
?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29
Both label selector styles can be used to list or watch resources via a REST client. For example,
targeting apiserver with kubectl and using equality-based one may write:
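For example:

```shell
# List pods matching both equality-based requirements (comma acts as AND)
kubectl get pods -l environment=production,tier=frontend
```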
As already mentioned set-based requirements are more expressive. For instance, they can
implement the OR operator on values:
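For example:

```shell
# OR on values: select pods whose environment is production or qa
kubectl get pods -l 'environment in (production, qa)'

# Restrict negative matching: the environment key must exist, but not equal frontend
kubectl get pods -l 'environment,environment notin (frontend)'
```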
Label selectors for objects such as Services and ReplicationControllers are defined in json
or yaml files using maps, and only equality-based requirement selectors are supported:
"selector": {
"component" : "redis",
}
or
selector:
  component: redis
selector:
  matchLabels:
    component: redis
  matchExpressions:
    - { key: tier, operator: In, values: [cache] }
    - { key: environment, operator: NotIn, values: [dev] }
What's next
Learn how to add a label to a node
Find Well-known labels, Annotations and Taints
See Recommended labels
Enforce Pod Security Standards with Namespace Labels
Use Labels effectively to manage deployments.
Read a blog on Writing a Controller for Pod Labels
1.4 - Namespaces
In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a
single cluster. Names of resources need to be unique within a namespace, but not across
namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g.
Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes,
PersistentVolumes, etc.).
Namespaces provide a scope for names. Names of resources need to be unique within a
namespace, but not across namespaces. Namespaces cannot be nested inside one another
and each Kubernetes resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via resource
quota).
It is not necessary to use multiple namespaces to separate slightly different resources, such
as different versions of the same software: use labels to distinguish resources within the
same namespace.
Note: For a production cluster, consider not using the default namespace. Instead, make
other namespaces and use those.
Initial namespaces
Kubernetes starts with four initial namespaces:
default
Kubernetes includes this namespace so that you can start using your new cluster without
first creating a namespace.
kube-node-lease
This namespace holds Lease objects associated with each node. Node leases allow the
kubelet to send heartbeats so that the control plane can detect node failure.
kube-public
This namespace is readable by all clients (including those not authenticated). This
namespace is mostly reserved for cluster usage, in case that some resources should be
visible and readable publicly throughout the whole cluster. The public aspect of this
namespace is only a convention, not a requirement.
kube-system
The namespace for objects created by the Kubernetes system.
Note: Avoid creating namespaces with the prefix kube-, since it is reserved for
Kubernetes system namespaces.
Viewing namespaces
You can list the current namespaces in a cluster using:
For example:
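The listing command, with representative output (the AGE values are illustrative):

```shell
kubectl get namespace
```

```
NAME              STATUS   AGE
default           Active   1d
kube-node-lease   Active   1d
kube-public       Active   1d
kube-system       Active   1d
```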
Namespaces and DNS

When you create a Service, it creates a corresponding DNS entry. This entry is of the form
<service-name>.<namespace-name>.svc.cluster.local, which means that if a container only
uses <service-name>, it will resolve to the service which is local to the namespace. As a
result, all namespace names must be valid RFC 1123 DNS labels.
Warning:
By creating namespaces with the same name as public top-level domains, Services in
these namespaces can have short DNS names that overlap with public DNS records.
Workloads from any namespace performing a DNS lookup without a trailing dot will be
redirected to those services, taking precedence over public DNS.
To mitigate this, limit privileges for creating namespaces to trusted users. If required, you
could additionally configure third-party security controls, such as admission webhooks, to
block creating any namespace with the name of public TLDs.
Not all objects are in a namespace. To see which Kubernetes resources are and aren't in a
namespace:

# In a namespace
kubectl api-resources --namespaced=true

# Not in a namespace
kubectl api-resources --namespaced=false
Automatic labelling
FEATURE STATE: Kubernetes 1.22 [stable]
The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all
namespaces. The value of the label is the namespace name.
What's next
Learn more about creating a new namespace.
Learn more about deleting a namespace.
1.5 - Annotations
You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects.
Clients such as tools and libraries can retrieve this metadata.
"metadata": {
"annotations": {
"key1" : "value1",
"key2" : "value2"
}
}
Note: The keys and the values in the map must be strings. In other words, you cannot use
numeric, boolean, list or other types for either the keys or the values.
Here are some examples of information that could be recorded in annotations:

Build, release, or image information like timestamps, release IDs, git branch, PR
numbers, image hashes, and registry address.
Client library or tool information that can be used for debugging purposes: for example,
name, version, and build information.
User or tool/system provenance information, such as URLs of related objects from other
ecosystem components.
Phone or pager numbers of persons responsible, or directory entries that specify where
that information can be found, such as a team web site.
Directives from the end-user to the implementations to modify behavior or engage non-
standard features.
Instead of using annotations, you could store this type of information in an external database
or directory, but that would make it much harder to produce shared client libraries and tools
for deployment, management, introspection, and the like.
If the prefix is omitted, the annotation Key is presumed to be private to the user. Automated
system components (e.g. kube-scheduler , kube-controller-manager , kube-apiserver ,
kubectl , or other third-party automation) which add annotations to end-user objects must
specify a prefix.
The kubernetes.io/ and k8s.io/ prefixes are reserved for Kubernetes core components.
For example, here's a manifest for a Pod that has the annotation
imageregistry: https://hub.docker.com/ :

apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
What's next
Learn more about Labels and Selectors.
1.6 - Field Selectors

Field selectors let you select Kubernetes objects based on the value of one or more
resource fields. Here are some examples of field selector queries:

metadata.name=my-service
metadata.namespace!=default
status.phase=Pending
This kubectl command selects all Pods for which the value of the status.phase field is
Running :
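For example:

```shell
# Select only pods whose status.phase field equals Running
kubectl get pods --field-selector status.phase=Running
```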
Note: Field selectors are essentially resource filters. By default, no selectors/filters are
applied, meaning that all resources of the specified type are selected. This makes the
kubectl queries kubectl get pods and kubectl get pods --field-selector ""
equivalent.
Supported fields
Supported field selectors vary by Kubernetes resource type. All resource types support the
metadata.name and metadata.namespace fields. Using unsupported field selectors produces
an error. For example:
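The following query uses a field selector (foo.bar) that the resource does not support, producing the error shown below:

```shell
kubectl get ingress --field-selector foo.bar=baz
```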
Error from server (BadRequest): Unable to find "ingresses" that match label selec
Supported operators
You can use the = , == , and != operators with field selectors ( = and == mean the same
thing). This kubectl command, for example, selects all Kubernetes Services that aren't in the
default namespace:
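For example:

```shell
# Select Services in every namespace except default
kubectl get services --all-namespaces --field-selector metadata.namespace!=default
```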
Chained selectors
As with label and other selectors, field selectors can be chained together as a comma-
separated list. This kubectl command selects all Pods for which the status.phase does not
equal Running and the spec.restartPolicy field equals Always :
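For example:

```shell
# Chained field selectors act as a logical AND
kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always
```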
1.7 - Finalizers
Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met
before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up
resources the deleted object owned.
When you tell Kubernetes to delete an object that has finalizers specified for it, the
Kubernetes API marks the object for deletion by populating .metadata.deletionTimestamp ,
and returns a 202 status code (HTTP "Accepted"). The target object remains in a terminating
state while the control plane, or other components, take the actions defined by the finalizers.
After these actions are complete, the controller removes the relevant finalizers from the
target object. When the metadata.finalizers field is empty, Kubernetes considers the
deletion complete and deletes the object.
You can use finalizers to control garbage collection of objects by alerting controllers to perform specific cleanup tasks before deleting the target resource. For example, you can define a finalizer to clean up related resources or infrastructure before the controller deletes the target object.
Finalizers don't usually specify the code to execute. Instead, they are typically lists of keys on a
specific resource similar to annotations. Kubernetes specifies some finalizers automatically,
but you can also specify your own.
How finalizers work
When you create a resource using a manifest file, you can specify finalizers in the metadata.finalizers field. When you attempt to delete the resource, the API server handling the delete request notices the values in the finalizers field and does the following:
Modifies the object to add a metadata.deletionTimestamp field with the time you started the deletion.
Prevents the object from being removed until its metadata.finalizers field is empty.
Returns a 202 status code (HTTP "Accepted").
The controller managing that finalizer notices the update to the object setting the
metadata.deletionTimestamp , indicating deletion of the object has been requested. The
controller then attempts to satisfy the requirements of the finalizers specified for that
resource. Each time a finalizer condition is satisfied, the controller removes that key from the
resource's finalizers field. When the finalizers field is emptied, an object with a
deletionTimestamp field set is automatically deleted. You can also use finalizers to prevent
deletion of unmanaged resources.
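The lifecycle above can be sketched with a toy in-memory store. This is an illustration, not the real API server: the `Store` class, its method names, and the `example.com/cleanup` finalizer key are all hypothetical, but the flow matches the one described (delete only sets deletionTimestamp; the object is removed when the last finalizer key is gone).

```python
# Illustrative sketch of the finalizer deletion flow.
import datetime

class Store:
    def __init__(self):
        self.objects = {}

    def create(self, name, finalizers=()):
        self.objects[name] = {"metadata": {
            "name": name,
            "finalizers": list(finalizers),
            "deletionTimestamp": None,
        }}

    def delete(self, name):
        meta = self.objects[name]["metadata"]
        meta["deletionTimestamp"] = datetime.datetime.now(
            datetime.timezone.utc).isoformat()
        self._maybe_remove(name)
        return 202  # HTTP "Accepted": deletion requested, not yet complete

    def remove_finalizer(self, name, key):
        # A controller calls this after its cleanup condition is satisfied.
        meta = self.objects[name]["metadata"]
        meta["finalizers"].remove(key)
        self._maybe_remove(name)

    def _maybe_remove(self, name):
        meta = self.objects[name]["metadata"]
        if meta["deletionTimestamp"] and not meta["finalizers"]:
            del self.objects[name]

store = Store()
store.create("my-volume", finalizers=["example.com/cleanup"])
store.delete("my-volume")
print("my-volume" in store.objects)   # True: blocked by the finalizer
store.remove_finalizer("my-volume", "example.com/cleanup")
print("my-volume" in store.objects)   # False: deletion completed
```

The sketch shows why removing a finalizer by hand "unsticks" deletion: nothing else gates removal once the list is empty, whether or not the cleanup actually happened.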
Consider a Job that creates one or more Pods. The Job controller adds owner references to those Pods, pointing at the Job that created them. If you delete the Job while these Pods are running, Kubernetes uses the owner references (not labels) to determine which Pods in the cluster need cleanup.
In some situations, finalizers can block the deletion of dependent objects, which can cause
the targeted owner object to remain for longer than expected without being fully deleted. In
these situations, you should check finalizers and owner references on the target owner and
dependent objects to troubleshoot the cause.
Note: In cases where objects are stuck in a deleting state, avoid manually removing
finalizers to allow deletion to continue. Finalizers are usually added to resources for a
reason, so forcefully removing them can lead to issues in your cluster. This should only be
done when the purpose of the finalizer is understood and is accomplished in another way
(for example, manually cleaning up some dependent object).
What's next
Read Using Finalizers to Control Deletion on the Kubernetes blog.
1.8 - Owners and Dependents
In Kubernetes, some objects are owners of other objects. For example, a ReplicaSet is the owner of a set of Pods. These owned objects are dependents of their owner.
Ownership is different from the labels and selectors mechanism that some resources also
use. For example, consider a Service that creates EndpointSlice objects. The Service uses
labels to allow the control plane to determine which EndpointSlice objects are used for that
Service. In addition to the labels, each EndpointSlice that is managed on behalf of a Service
has an owner reference. Owner references help different parts of Kubernetes avoid
interfering with objects they don’t control.
A Kubernetes admission controller controls user access to change the ownerReferences.blockOwnerDeletion field for dependent resources, based on the delete permissions of the owner. This control prevents unauthorized users from delaying owner object deletion.
Note:
Cross-namespace owner references are disallowed by design. Namespaced dependents
can specify cluster-scoped or namespaced owners. A namespaced owner must exist in
the same namespace as the dependent. If it does not, the owner reference is treated as
absent, and the dependent is subject to deletion once all owners are verified absent.
Kubernetes also adds finalizers to an owner resource when you use either foreground or
orphan cascading deletion. In foreground deletion, it adds the foreground finalizer so that
the controller must delete dependent resources that also have
ownerReferences.blockOwnerDeletion=true before it deletes the owner. If you specify an
orphan deletion policy, Kubernetes adds the orphan finalizer so that the controller ignores
dependent resources after it deletes the owner object.
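The owner-reference matching that cascading deletion relies on can be sketched as follows. This is a hypothetical helper, not the real garbage collector: given a deleted owner's uid, it finds the dependents whose ownerReferences point at that owner.

```python
# Illustrative sketch: finding dependents of an owner via ownerReferences.

def dependents_of(objects, owner_uid):
    """Return the objects that list owner_uid in their ownerReferences."""
    return [o for o in objects
            if any(ref["uid"] == owner_uid
                   for ref in o["metadata"].get("ownerReferences", []))]

job = {"metadata": {"name": "pi", "uid": "job-123"}}
pods = [
    {"metadata": {"name": "pi-1",
                  "ownerReferences": [{"kind": "Job", "name": "pi",
                                       "uid": "job-123"}]}},
    {"metadata": {"name": "other", "ownerReferences": []}},
]

print([p["metadata"]["name"] for p in dependents_of(pods, job["metadata"]["uid"])])  # ['pi-1']
```

Matching on uid rather than name is what lets owner references stay unambiguous even if a new object is later created with the same name.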
What's next
Learn more about Kubernetes finalizers.
Learn about garbage collection.
Read the API reference for object metadata.
1.9 - Recommended Labels
You can visualize and manage Kubernetes objects with more tools than kubectl and the dashboard. A common set of labels allows tools to work interoperably, describing objects in a common manner that all tools can understand.
In addition to supporting tooling, the recommended labels describe applications in a way that
can be queried.
The metadata is organized around the concept of an application. Kubernetes is not a platform
as a service (PaaS) and doesn't have or enforce a formal notion of an application. Instead,
applications are informal and described with metadata. The definition of what an application
contains is loose.
Note: These are recommended labels. They make it easier to manage applications but
aren't required for any core tooling.
Shared labels and annotations share a common prefix: app.kubernetes.io . Labels without a
prefix are private to users. The shared prefix ensures that shared labels do not interfere with
custom user labels.
Labels
In order to take full advantage of using these labels, they should be applied on every resource
object.
# This is an excerpt
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
    app.kubernetes.io/managed-by: helm
The name of an application and the instance name are recorded separately. For example, WordPress has an app.kubernetes.io/name of wordpress, while its instance name, represented as app.kubernetes.io/instance, has a value of wordpress-abcxzy. This enables the application and the instance of the application to be identified separately. Every instance of an application must have a unique name.
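The benefit of keeping name and instance in separate labels can be sketched with a small lookup. This is a hypothetical helper, not a Kubernetes API; the label values mirror the WordPress example above, with a second made-up instance added for contrast.

```python
# Illustrative sketch: selecting by application name vs. by instance.

NAME = "app.kubernetes.io/name"
INSTANCE = "app.kubernetes.io/instance"

def select(objects, required):
    """Return the objects whose labels match every required key/value."""
    return [o for o in objects
            if all(o["labels"].get(k) == v for k, v in required.items())]

objects = [
    {"labels": {NAME: "wordpress", INSTANCE: "wordpress-abcxzy"}},
    {"labels": {NAME: "wordpress", INSTANCE: "wordpress-qrstuv"}},
    {"labels": {NAME: "mysql", INSTANCE: "mysql-abcxzy"}},
]

# All WordPress objects, across every instance:
print(len(select(objects, {NAME: "wordpress"})))  # 2
# Exactly one instance:
print(len(select(objects, {NAME: "wordpress",
                           INSTANCE: "wordpress-abcxzy"})))  # 1
```

With a single combined label, the first query (every instance of an application) would require string parsing instead of an exact match.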
Examples
To illustrate different ways to use these labels, the following examples have varying complexity.
Consider the case of a simple stateless service deployed using Deployment and Service objects. The Deployment is used to oversee the Pods running the application itself.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: myservice
    app.kubernetes.io/instance: myservice-abcxzy
...
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: myservice
    app.kubernetes.io/instance: myservice-abcxzy
...
Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippet illustrates the start of the Deployment for the web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
...
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
...
MySQL is exposed as a StatefulSet with metadata for both it and the larger application it
belongs to:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
...
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: mysql
    app.kubernetes.io/instance: mysql-abcxzy
    app.kubernetes.io/version: "5.7.21"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: wordpress
...
With the MySQL StatefulSet and Service, you'll notice that information about both MySQL and WordPress, the broader application, is included.
2 - Kubernetes Components
A Kubernetes cluster consists of the components that are a part of the control
plane and a set of machines called nodes.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run
containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The
control plane manages the worker nodes and the Pods in the cluster. In production
environments, the control plane usually runs across multiple computers and a cluster usually
runs multiple nodes, providing fault-tolerance and high availability.
This document outlines the various components you need to have for a complete and
working Kubernetes cluster.
[Diagram: The components of a Kubernetes cluster — a control plane consisting of the API server (api), etcd (persistence store), scheduler (sched), controller manager (c-m), and an optional cloud controller manager (c-c-m), plus worker Nodes each running a kubelet.]
Control plane components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See Creating Highly Available clusters with kubeadm for an example control plane setup that runs across multiple machines.
kube-apiserver
The API server is a component of the Kubernetes control plane that exposes the Kubernetes
API. The API server is the front end for the Kubernetes control plane.
etcd
Consistent and highly-available key value store used as Kubernetes' backing store for all
cluster data.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan
for the data.
You can find in-depth information about etcd in the official documentation.
kube-scheduler
Control plane component that watches for newly created Pods with no assigned node, and
selects a node for them to run on.
Factors taken into account for scheduling decisions include: individual and collective resource
requirements, hardware/software/policy constraints, affinity and anti-affinity specifications,
data locality, inter-workload interference, and deadlines.
kube-controller-manager
Control plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce complexity, they are all
compiled into a single binary and run in a single process.
There are many different types of controllers. Some examples of them are:
Node controller: Responsible for noticing and responding when nodes go down.
Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
ServiceAccount controller: Creates default ServiceAccounts for new namespaces.
cloud-controller-manager
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud
controller manager lets you link your cluster into your cloud provider's API, and separates out
the components that interact with that cloud platform from components that only interact
with your cluster.
The cloud-controller-manager only runs controllers that are specific to your cloud provider. If
you are running Kubernetes on your own premises, or in a learning environment inside your
own PC, the cluster does not have a cloud controller manager.
The following controllers can have cloud provider dependencies:
Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
Route controller: For setting up routes in the underlying cloud infrastructure
Service controller: For creating, updating and deleting cloud provider load balancers
Node Components
Node components run on every node, maintaining running pods and providing the
Kubernetes runtime environment.
kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a
Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and
ensures that the containers described in those PodSpecs are running and healthy. The
kubelet doesn't manage containers which were not created by Kubernetes.
kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of
the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network
communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available.
Otherwise, kube-proxy forwards the traffic itself.
Container runtime
The container runtime is the software that is responsible for running containers.
Kubernetes supports container runtimes such as containerd, CRI-O, and any other
implementation of the Kubernetes CRI (Container Runtime Interface).
Addons
Addons use Kubernetes resources (DaemonSet, Deployment, etc.) to implement cluster features. Because these provide cluster-level features, namespaced resources for addons belong within the kube-system namespace.
Selected addons are described below; for an extended list of available addons, please see
Addons.
DNS
While the other addons are not strictly required, all Kubernetes clusters should have cluster
DNS, as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which
serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
Web UI (Dashboard)
Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to
manage and troubleshoot applications running in the cluster, as well as the cluster itself.
Cluster-level Logging
A cluster-level logging mechanism is responsible for saving container logs to a central log
store with search/browsing interface.
Network Plugins
Network plugins are software components that implement the container network interface
(CNI) specification. They are responsible for allocating IP addresses to pods and enabling
them to communicate with each other within the cluster.
What's next
Learn more about Nodes, Controllers, the kube-scheduler, and etcd.
3 - The Kubernetes API
The core of Kubernetes' control plane is the API server. The API server exposes an HTTP API
that lets end users, different parts of your cluster, and external components communicate
with one another.
The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes (for
example: Pods, Namespaces, ConfigMaps, and Events).
Most operations can be performed through the kubectl command-line interface or other
command-line tools, such as kubeadm, which in turn use the API. However, you can also
access the API directly using REST calls.
Consider using one of the client libraries if you are writing an application using the
Kubernetes API.
OpenAPI specification
Complete API details are documented using OpenAPI.
OpenAPI V2
The Kubernetes API server serves an aggregated OpenAPI v2 spec via the /openapi/v2
endpoint. You can request the response format using request headers as follows:
Requests are served as application/json by default; a wildcard Accept header ( * ) is likewise served application/json.
OpenAPI V3
FEATURE STATE: Kubernetes v1.27 [stable]
A discovery endpoint, /openapi/v3, provides a list of all available group/versions. This endpoint only returns JSON, in the following format:
{
    "paths": {
        ...,
        "api/v1": {
            "serverRelativeURL": "/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1"
        },
        "apis/admissionregistration.k8s.io/v1": {
            "serverRelativeURL": "/openapi/v3/apis/admissionregistration.k8s.io/v"
        },
        ....
    }
}
The relative URLs point to immutable OpenAPI descriptions, in order to improve client-side caching. The API server also sets the proper HTTP caching headers for that purpose (Expires to 1 year in the future, and Cache-Control: immutable). When an obsolete URL is used, the API server returns a redirect to the newest URL.
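The caching strategy this enables can be sketched on the client side. This is a hypothetical client cache, not part of any official Kubernetes client library: because each hashed URL is immutable, a response can be cached by URL indefinitely, and a refetch happens only when discovery starts advertising a new hash (the hash value below is made up).

```python
# Illustrative sketch: caching immutable, hash-addressed OpenAPI v3 specs.

class OpenAPICache:
    def __init__(self, fetch):
        self._fetch = fetch      # callable: url -> spec document
        self._by_url = {}

    def get(self, url):
        # A new ?hash= produces a new URL, hence a new cache key, so
        # entries never need invalidation.
        if url not in self._by_url:
            self._by_url[url] = self._fetch(url)
        return self._by_url[url]

calls = []
def fake_fetch(url):
    calls.append(url)
    return {"openapi": "3.0", "from": url}

cache = OpenAPICache(fake_fetch)
url = "/openapi/v3/api/v1?hash=abc123"
cache.get(url)
cache.get(url)            # second lookup is served from the cache
print(len(calls))         # 1
```

This is the same idea the Expires and Cache-Control: immutable headers express to ordinary HTTP caches.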
The Kubernetes API server publishes an OpenAPI v3 spec per Kubernetes group version at the
/openapi/v3/apis/<group>/<version>?hash=<hash> endpoint.
As with OpenAPI v2, application/json is the default response format, and a wildcard Accept header ( * ) is served application/json.
Persistence
Kubernetes stores the serialized state of objects by writing them into etcd.
API Discovery
A list of all group versions supported by a cluster is published at the /api and /apis
endpoints. Each group version also advertises the list of resources supported via
/apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1 ).
These endpoints are used by kubectl to fetch the list of resources supported by a cluster.
Aggregated Discovery
FEATURE STATE: Kubernetes v1.27 [beta]
Kubernetes offers beta support for aggregated discovery, publishing all resources supported by a cluster through two endpoints (/api and /apis) instead of one for every group version. Requesting these endpoints drastically reduces the number of requests sent to fetch the discovery data for the average Kubernetes cluster. This may be accessed by requesting the respective endpoints with an Accept header indicating the aggregated discovery resource:
Accept: application/json;v=v2beta1;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList .
API groups and versioning
Versioning is done at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-life and/or experimental APIs.
To make it easier to evolve and to extend its API, Kubernetes implements API groups that can
be enabled or disabled.
API resources are distinguished by their API group, resource type, namespace (for
namespaced resources), and name. The API server handles the conversion between API
versions transparently: all the different versions are actually representations of the same
persisted data. The API server may serve the same underlying data through multiple API
versions.
For example, suppose there are two API versions, v1 and v1beta1 , for the same resource. If
you originally created an object using the v1beta1 version of its API, you can later read,
update, or delete that object using either the v1beta1 or the v1 API version, until the
v1beta1 version is deprecated and removed. At that point you can continue accessing and
modifying the object using the v1 API.
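The "one persisted object, many API versions" idea can be sketched with explicit conversion functions. This is an illustration only: the resource, its field names (replicas vs. replicaCount), and the conversion helpers are invented, but they show how a single stored representation can be read and written through both an old and a new version.

```python
# Illustrative sketch: serving one stored object via two API versions.

def to_v1(stored):
    return {"apiVersion": "v1", "replicaCount": stored["replicas"]}

def to_v1beta1(stored):
    return {"apiVersion": "v1beta1", "replicas": stored["replicas"]}

def from_v1(obj):
    # Convert a v1 request body back to the stored representation.
    return {"replicas": obj["replicaCount"]}

stored = {"replicas": 3}                  # single persisted representation
print(to_v1(stored)["replicaCount"])      # 3
print(to_v1beta1(stored)["replicas"])     # 3

# An update made via the v1 API is immediately visible via v1beta1:
stored = from_v1({"apiVersion": "v1", "replicaCount": 5})
print(to_v1beta1(stored)["replicas"])     # 5
```

Because every version converts to and from the same stored form, clients on different versions always observe the same underlying data.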
API changes
Any system that is successful needs to grow and change as new use cases emerge or existing
ones change. Therefore, Kubernetes has designed the Kubernetes API to continuously change
and grow. The Kubernetes project aims to not break compatibility with existing clients, and to
maintain that compatibility for a length of time so that other projects have an opportunity to
adapt.
In general, new API resources and new resource fields can be added frequently.
Elimination of resources or fields requires following the API deprecation policy.
Kubernetes makes a strong commitment to maintain compatibility for official Kubernetes APIs
once they reach general availability (GA), typically at API version v1 . Additionally, Kubernetes
maintains compatibility with data persisted via beta API versions of official Kubernetes APIs,
and ensures that data can be converted and accessed via GA API versions when the feature
goes stable.
If you adopt a beta API version, you will need to transition to a subsequent beta or stable API
version once the API graduates. The best time to do this is while the beta API is in its
deprecation period, since objects are simultaneously accessible via both API versions. Once
the beta API completes its deprecation period and is no longer served, the replacement API
version must be used.
Note: Although Kubernetes also aims to maintain compatibility for alpha API versions, in some circumstances this is not possible. If you use any alpha API versions, check the release notes for Kubernetes when upgrading your cluster, in case the API did change in incompatible ways that require deleting all existing alpha objects prior to upgrade.
Refer to API versions reference for more details on the API version level definitions.
API Extension
The Kubernetes API can be extended in one of two ways:
1. Custom resources let you declaratively define how the API server should provide your
chosen resource API.
2. You can also extend the Kubernetes API by implementing an aggregation layer.
What's next
Learn how to extend the Kubernetes API by adding your own CustomResourceDefinition.
Controlling Access To The Kubernetes API describes how the cluster manages
authentication and authorization for API access.
Learn about API endpoints, resource types and samples by reading API Reference.
Learn about what constitutes a compatible change, and how to change the API, from API
changes.