This Helm chart installs the OpenTelemetry Collector in a Kubernetes cluster.
Prerequisites:
- Kubernetes 1.23+
- Helm 3.9+
Add the OpenTelemetry Helm repository:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
To install the chart with the release name my-opentelemetry-collector, run the following command:
helm install my-opentelemetry-collector open-telemetry/opentelemetry-collector
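The examples in the rest of this document are Helm values files. To apply one at install time, pass it with the --values flag; my-values.yaml below is just a placeholder file name:

helm install my-opentelemetry-collector open-telemetry/opentelemetry-collector --values my-values.yaml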
For upgrade notes, see UPGRADING.md.
The OpenTelemetry Collector recommends binding receivers' servers to addresses that limit connections to authorized users. For this reason, the chart binds all of the Collector's endpoints to the pod's IP by default.
More info is available in the Security Best Practices documentation.
Some care must be taken when using hostNetwork: true, as the OpenTelemetry Collector will then listen on all addresses in the host network namespace.
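If a receiver must be reachable beyond the pod's IP, its endpoint can be overridden through the config section. The snippet below is only a sketch of that override mechanism; it binds the OTLP gRPC receiver to all interfaces, which weakens the default protection described above, so use it with care:

config:
  receivers:
    otlp:
      protocols:
        grpc:
          # listen on all interfaces instead of only the pod IP
          endpoint: 0.0.0.0:4317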
By default, this chart deploys the OpenTelemetry Collector as a daemonset with three pipelines (logs, metrics, and traces) and the logging exporter enabled. Besides daemonset (agent) mode, it can also be installed as a deployment.
Example: Install the collector as a deployment, and do not run it as an agent:
mode: deployment
By default, the collector has the following receivers enabled (a sketch of the resulting pipeline wiring follows the list):
- metrics: OTLP and Prometheus. The Prometheus receiver is configured only to scrape the collector's own metrics.
- traces: OTLP, Zipkin, and Jaeger (Thrift and gRPC).
- logs: OTLP (to enable container logs, see Configuration for Kubernetes container logs).
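Roughly, the default wiring of receivers to pipelines looks like the sketch below. This is only an illustration of the mapping described above, not the chart's literal rendered configuration; processors and exporters are omitted:

# illustration only; processors and exporters omitted
service:
  pipelines:
    metrics:
      receivers:
        - otlp
        - prometheus
    traces:
      receivers:
        - otlp
        - jaeger
        - zipkin
    logs:
      receivers:
        - otlp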
There are two ways to configure collector pipelines: overriding the config section directly and enabling presets. They can also be used together.
Default components can be removed with null. When changing a pipeline, you must explicitly list all the components that are in the pipeline, including any default components.
Example: Disable metrics and logging pipelines and non-otlp receivers:
config:
  receivers:
    jaeger: null
    prometheus: null
    zipkin: null
  service:
    pipelines:
      traces:
        receivers:
          - otlp
      metrics: null
      logs: null
Example: Add host metrics receiver:
mode: daemonset
presets:
  hostMetrics:
    enabled: true
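The preset's effect is to add a hostmetrics receiver to the metrics pipeline. A manual configuration achieving a similar result would look roughly like the sketch below; the scrapers and interval shown are illustrative rather than the preset's exact defaults, and any host filesystem mounts the scrapers need would also have to be configured by hand:

mode: daemonset
config:
  receivers:
    hostmetrics:
      collection_interval: 10s   # illustrative interval
      scrapers:
        cpu:
        memory:
        disk:
        filesystem:
        network:
  service:
    pipelines:
      metrics:
        receivers:
          # default receivers must be re-listed explicitly
          - otlp
          - prometheus
          - hostmetrics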
The collector can be used to collect logs sent to standard output by Kubernetes containers. This feature is disabled by default. It has the following requirements:
- It needs the agent collector to be deployed.
- It requires the contrib version of the collector image.
To enable this feature, set the presets.logsCollection.enabled property to true.
Here is an example values.yaml:
mode: daemonset
presets:
  logsCollection:
    enabled: true
    includeCollectorLogs: true
This feature works by adding a filelog receiver to the logs pipeline. The receiver is preconfigured to read the files where the Kubernetes container runtime writes all containers' console output.
The container logs pipeline uses the logging console exporter by default. Paired with the default filelog receiver that receives all containers' console output, it is easy to accidentally feed the exported logs back into the receiver.
Also note that using the --log-level=debug option for the logging exporter causes it to output multiple lines per received log, which, when looped, would amplify the logs exponentially.
To prevent the looping, the default configuration of the receiver excludes logs from the collector's containers.
If you want to include the collector's logs, make sure to replace the logging exporter with an exporter that does not send logs to the collector's standard output.
Here's an example values.yaml file that replaces the default logging exporter on the logs pipeline with an otlphttp exporter that sends the container logs to the https://example.com:55681 endpoint. It also clears the filelog receiver's exclude property, so that the collector's own logs are included in the pipeline.
mode: daemonset
presets:
  logsCollection:
    enabled: true
    includeCollectorLogs: true
config:
  exporters:
    otlphttp:
      endpoint: https://example.com:55681
  service:
    pipelines:
      logs:
        exporters:
          - otlphttp
The collector can be configured to add Kubernetes metadata to logs, metrics and traces.
This feature is disabled by default. It has the following requirements:
- It requires the k8sattributes processor to be included in the collector, such as the contrib version of the collector image.
To enable this feature, set the presets.kubernetesAttributes.enabled property to true.
Here is an example values.yaml:
mode: daemonset
presets:
  kubernetesAttributes:
    enabled: true
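The preset's effect is to add a k8sattributes processor to the enabled pipelines, so that telemetry is enriched with resource attributes such as k8s.namespace.name and k8s.pod.name. A hand-written processor definition would look roughly like the sketch below; the metadata list is illustrative, not the preset's exact defaults:

config:
  processors:
    k8sattributes:
      extract:
        metadata:
          # illustrative set of attributes to attach
          - k8s.namespace.name
          - k8s.pod.name
          - k8s.node.name

Wiring such a processor into service.pipelines by hand also requires re-listing every default component of each pipeline, which is exactly the boilerplate the preset avoids.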
The collector can be configured to collect cluster-level metrics from the Kubernetes API server. A single instance of this receiver can be used to monitor a cluster.
This feature is disabled by default. It has the following requirements:
- It requires the k8sclusterreceiver to be included in the collector, such as the contrib version of the collector image.
- It requires statefulset or deployment mode with a single replica.
To enable this feature, set the presets.clusterMetrics.enabled property to true.
Here is an example values.yaml:
mode: deployment
replicaCount: 1
presets:
  clusterMetrics:
    enabled: true
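Under the hood, the preset adds a k8s_cluster receiver to the metrics pipeline and provides the cluster-level RBAC the receiver needs. A rough manual equivalent of just the receiver definition is sketched below (the interval is illustrative):

mode: deployment
replicaCount: 1
config:
  receivers:
    k8s_cluster:
      collection_interval: 10s   # illustrative interval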
The collector can be configured to collect Kubelet metrics.
This feature is disabled by default. It has the following requirements:
- It requires the kubeletstats receiver to be included in the collector, such as the contrib version of the collector image.
To enable this feature, set the presets.kubeletMetrics.enabled property to true.
Here is an example values.yaml:
mode: daemonset
presets:
  kubeletMetrics:
    enabled: true
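The preset's effect is to add a kubeletstats receiver to the metrics pipeline so that each agent pod scrapes the kubelet on its own node. A rough manual equivalent of the receiver definition is sketched below; the endpoint, interval, and environment variable name are illustrative assumptions rather than the preset's exact defaults:

mode: daemonset
config:
  receivers:
    kubeletstats:
      collection_interval: 20s        # illustrative interval
      auth_type: serviceAccount
      # assumes an env var with the node name is injected into the pod
      endpoint: "${env:K8S_NODE_NAME}:10250"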
At this time, Prometheus CRDs are supported but other CRDs are not.
The values.yaml file contains information about all other configuration options for this chart.
For more examples, see Examples.