update docs w.r.t 2.10 changes
Fixes linkerd/linkerd2#5867

This PR updates the docs to reflect the changes introduced in 2.10.

Signed-off-by: Tarun Pothulapati <[email protected]>
Pothulapati committed Mar 4, 2021
1 parent abf3b7d commit 1fe2a2e
Showing 9 changed files with 36 additions and 47 deletions.
5 changes: 3 additions & 2 deletions linkerd.io/content/2/tasks/adding-your-service.md
@@ -44,13 +44,14 @@ in the correct places, then applies it to the cluster.
## Verifying the data plane pods have been injected

Once your services have been added to the mesh, you will be able to query
-Linkerd for traffic metrics about them, e.g. by using [`linkerd
+Linkerd for traffic metrics about them, e.g. by using [`linkerd viz
stat`](/2/reference/cli/stat/):

```bash
-linkerd stat deployments -n MYNAMESPACE
+linkerd viz stat deployments -n MYNAMESPACE
```

+The above command requires the `viz` extension to be installed.
Note that it may take several seconds for these metrics to appear once the data
plane proxies have been injected.
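
For reference, in 2.10 the `viz` extension is installed separately from the core control plane; a minimal sketch, assuming Linkerd itself is already running on the cluster:

```bash
# Install the viz extension (metrics stack, dashboard, tap) and
# verify that its components come up healthy.
linkerd viz install | kubectl apply -f -
linkerd viz check
```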

22 changes: 11 additions & 11 deletions linkerd.io/content/2/tasks/books.md
@@ -105,7 +105,7 @@ Let's use Linkerd to discover the root cause of this app's failures. To check
out the Linkerd dashboard, run:

```bash
-linkerd dashboard &
+linkerd viz dashboard &
```

{{< fig src="/images/books/dashboard.png" title="Dashboard" >}}
@@ -251,12 +251,12 @@ curl -sL https://run.linkerd.io/booksapp/books.swagger \
| kubectl -n booksapp apply -f -
```

-Verifying that this all works is easy when you use `linkerd tap`. Each live
+Verifying that this all works is easy when you use `linkerd viz tap`. Each live
request will show up with what `:authority` or `Host` header is being seen as
well as the `:path` and `rt_route` being used. Run:

```bash
-linkerd -n booksapp tap deploy/webapp -o wide | grep req
+linkerd -n booksapp viz tap deploy/webapp -o wide | grep req
```

This will watch all the live requests flowing through `webapp` and look
@@ -272,12 +272,12 @@ As you can see:
- `:path` correctly matches
- `rt_route` contains the name of the route
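
As an aside, the `grep` in the tap invocation above can be replaced with tap's own server-side filters; a sketch, assuming the same `booksapp` namespace:

```bash
# Stream only GET requests whose :path starts with /books,
# instead of filtering the full stream client-side.
linkerd -n booksapp viz tap deploy/webapp --method GET --path /books
```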

-These metrics are part of the [`linkerd routes`](/2/reference/cli/routes/)
-command instead of [`linkerd stat`](/2/reference/cli/stat/). To see the metrics
+These metrics are part of the [`linkerd viz routes`](/2/reference/cli/routes/)
+command instead of [`linkerd viz stat`](/2/reference/cli/stat/). To see the metrics
that have accumulated so far, run:

```bash
-linkerd -n booksapp routes svc/webapp
+linkerd -n booksapp viz routes svc/webapp
```

This will output a table of all the routes observed and their golden metrics.
Expand All @@ -288,7 +288,7 @@ Profiles can be used to observe *outgoing* requests as well as *incoming*
requests. To do that, run:

```bash
-linkerd -n booksapp routes deploy/webapp --to svc/books
+linkerd -n booksapp viz routes deploy/webapp --to svc/books
```

This will show all requests and routes that originate in the `webapp` deployment
@@ -317,7 +317,7 @@ In this application, the success rate of requests from the `books` deployment to
the `authors` service is poor. To see these metrics, run:

```bash
-linkerd -n booksapp routes deploy/books --to svc/authors
+linkerd -n booksapp viz routes deploy/books --to svc/authors
```

The output should look like:
@@ -360,7 +360,7 @@ this route automatically. We see a nearly immediate improvement in success rate
by running:

```bash
-linkerd -n booksapp routes deploy/books --to svc/authors -o wide
+linkerd -n booksapp viz routes deploy/books --to svc/authors -o wide
```

This should look like:
@@ -396,7 +396,7 @@ To get started, let's take a look at the current latency for requests from
`webapp` to the `books` service:

```bash
-linkerd -n booksapp routes deploy/webapp --to svc/books
+linkerd -n booksapp viz routes deploy/webapp --to svc/books
```

This should look something like:
@@ -441,7 +441,7 @@ time a REST client would wait for a response.
Run `routes` to see what has changed:

```bash
-linkerd -n booksapp routes deploy/webapp --to svc/books -o wide
+linkerd -n booksapp viz routes deploy/webapp --to svc/books -o wide
```

With timeouts happening now, the metrics will change:
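
For context, the timeout exercised in this section is also a per-route ServiceProfile field; a minimal sketch of the relevant route (the 25ms value mirrors the walkthrough; the route shown is illustrative):

```yaml
spec:
  routes:
  - name: PUT /books/{id}.json
    condition:
      method: PUT
      pathRegex: /books/[^/]*\.json
    timeout: 25ms  # requests slower than 25ms are failed by the proxy
```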
2 changes: 1 addition & 1 deletion linkerd.io/content/2/tasks/canary-release.md
@@ -251,7 +251,7 @@ metrics show the backends receiving traffic in real time and measure the success
rate, latencies and throughput. From the CLI, you can watch this by running:

```bash
-watch linkerd -n test stat deploy --from deploy/load
+watch linkerd -n test viz stat deploy --from deploy/load
```

For something a little more visual, you can use the dashboard. Start it by
2 changes: 1 addition & 1 deletion linkerd.io/content/2/tasks/configuring-retries.md
@@ -58,7 +58,7 @@ spec:

## Monitoring Retries

-Retries can be monitored by using the `linkerd routes` command with the `--to`
+Retries can be monitored by using the `linkerd viz routes` command with the `--to`
flag and the `-o wide` flag. Since retries are performed on the client-side,
we need to use the `--to` flag to see metrics for requests that one resource
is sending to another (from the server's point of view, retries are just
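
For context, the retry behavior discussed above is declared per route in a ServiceProfile; a minimal sketch, assuming the booksapp demo's `authors` service (the route name and regex are illustrative):

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: authors.booksapp.svc.cluster.local
  namespace: booksapp
spec:
  routes:
  - name: HEAD /authors/{id}.json
    condition:
      method: HEAD
      pathRegex: /authors/[^/]*\.json
    isRetryable: true  # the proxy may retry failed requests on this route
```

With a profile like this in place, `linkerd -n booksapp viz routes deploy/books --to svc/authors -o wide` reports both EFFECTIVE_SUCCESS (after retries) and ACTUAL_SUCCESS (before retries).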
4 changes: 2 additions & 2 deletions linkerd.io/content/2/tasks/customize-install.md
@@ -116,10 +116,10 @@ file named `grafana.yaml` and add your modifications:
kind: ConfigMap
apiVersion: v1
metadata:
-  name: linkerd-grafana-config
+  name: grafana-config
data:
  grafana.ini: |-
-    instance_name = linkerd-grafana
+    instance_name = grafana
    [server]
    root_url = %(protocol)s://%(domain)s:/grafana/
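
For reference, this page's customization flow drives Kustomize; a sketch of the accompanying `kustomization.yaml`, assuming the rendered install manifest was saved as `linkerd.yaml`:

```yaml
# Merge the Grafana ConfigMap patch above into the install manifest.
resources:
- linkerd.yaml
patchesStrategicMerge:
- grafana.yaml
```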
2 changes: 1 addition & 1 deletion linkerd.io/content/2/tasks/debugging-your-service.md
@@ -14,7 +14,7 @@ in the [Getting Started](/2/getting-started/) guide and have Linkerd and the
demo application running in a Kubernetes cluster. If you've not done that yet,
go get started and come back when you're done!

-If you glance at the Linkerd dashboard (by running the `linkerd dashboard`
+If you glance at the Linkerd dashboard (by running the `linkerd viz dashboard`
command), you should see all the resources in the `emojivoto` namespace,
including the deployments. Each deployment running Linkerd shows success rate,
requests per second and latency percentiles.
30 changes: 9 additions & 21 deletions linkerd.io/content/2/tasks/external-prometheus.md
@@ -10,7 +10,7 @@ sense for various reasons.
{{< note >}}
Note that this approach requires you to manually add and maintain additional
scrape configuration in your Prometheus configuration.
-If you prefer to use the default Linkerd Prometheus add-on,
+If you prefer to use the default Linkerd Prometheus,
you can export the metrics to your existing monitoring infrastructure
following the instructions at <https://linkerd.io/2/tasks/exporting-metrics/>
{{< /note >}}
@@ -30,37 +30,30 @@ The following scrape configuration has to be applied to the external
Prometheus instance.

{{< note >}}
-The below scrape configuration is a [subset of `linkerd-prometheus` scrape configuration](https://github.com/linkerd/linkerd2/blob/a1be60aea183efe12adba8c97fadcdb95cdcbd36/charts/add-ons/prometheus/templates/prometheus.yaml#L69-L147).
+The below scrape configuration is a [subset of `linkerd-prometheus` scrape configuration](https://github.com/linkerd/linkerd2/blob/bc5bdeb834f571d92937fe5c2ead6bf88e37823a/viz/charts/linkerd-viz/templates/prometheus.yaml#L47-L151).
{{< /note >}}

Before applying, it is important to replace templated values (present in `{{}}`)
with direct values for the below configuration to work.

```yaml
- job_name: 'linkerd-controller'
-
-  scrape_interval: 10s
-  scrape_timeout: 10s
-
  kubernetes_sd_configs:
  - role: pod
    namespaces:
-      names: ['{{.Values.namespace}}']
+      names:
+      - '{{.Values.linkerdNamespace}}'
+      - '{{.Values.namespace}}'
  relabel_configs:
  - source_labels:
-    - __meta_kubernetes_pod_label_linkerd_io_control_plane_component
    - __meta_kubernetes_pod_container_port_name
    action: keep
-    regex: (.*);admin-http$
+    regex: admin-http
  - source_labels: [__meta_kubernetes_pod_container_name]
    action: replace
    target_label: component

- job_name: 'linkerd-service-mirror'
-
-  scrape_interval: 10s
-  scrape_timeout: 10s
-
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
@@ -74,10 +67,6 @@ with direct values for the below configuration to work.
    target_label: component

- job_name: 'linkerd-proxy'
-
-  scrape_interval: 10s
-  scrape_timeout: 10s
-
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
@@ -86,7 +75,7 @@ with direct values for the below configuration to work.
    - __meta_kubernetes_pod_container_port_name
    - __meta_kubernetes_pod_label_linkerd_io_control_plane_ns
    action: keep
-    regex: ^{{default .Values.proxyContainerName "linkerd-proxy" .Values.proxyContainerName}};linkerd-admin;{{.Values.namespace}}$
+    regex: ^{{default .Values.proxyContainerName "linkerd-proxy" .Values.proxyContainerName}};linkerd-admin;{{.Values.linkerdNamespace}}$
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
```

@@ -133,7 +122,7 @@ with direct values for the below configuration to work.
The running configuration of the builtin prometheus can be used as a reference.
```bash
-kubectl -n linkerd-viz get configmap linkerd-prometheus-config -o yaml
+kubectl -n linkerd-viz get configmap prometheus-config -o yaml
```

## Linkerd-Viz Extension Configuration
@@ -159,8 +148,7 @@ The same has to be passed again by the user during re-installs, upgrades, etc.
When using an external Prometheus and configuring the `prometheusUrl`
field, Linkerd's Prometheus will still be included in installation.
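
For reference, `prometheusUrl` is set when installing the extension; a sketch with an illustrative in-cluster address:

```bash
# Point the viz extension (dashboard, CLI stat/routes/tap) at the
# external Prometheus instead of the bundled one.
linkerd viz install --set prometheusUrl=http://prometheus.monitoring.svc.cluster.local:9090 \
  | kubectl apply -f -
```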

-If you wish to disable this included Prometheus, be sure to include the
+If you wish to disable it, be sure to include the
following configuration as well:

```yaml
prometheus:
  enabled: false
```
4 changes: 2 additions & 2 deletions linkerd.io/content/2/tasks/fault-injection.md
@@ -54,7 +54,7 @@ After a little while, the stats will show 100% success rate. You can verify this
by running:

```bash
-linkerd -n booksapp stat deploy
+linkerd -n booksapp viz stat deploy
```

The output will end up looking a little like:
@@ -166,7 +166,7 @@ what this looks like by running `stat` and filtering explicitly to just the
requests from `webapp`:

```bash
-linkerd -n booksapp routes deploy/webapp --to service/books
+linkerd -n booksapp viz routes deploy/webapp --to service/books
```

Unlike the previous `stat` command which only looks at the requests received by
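
For context, the failures in this exercise are injected by shifting a slice of `books` traffic to an error-returning backend with an SMI TrafficSplit; a minimal sketch (the backend name and weights are illustrative):

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: error-split
  namespace: booksapp
spec:
  service: books
  backends:
  - service: books          # 90% of traffic keeps flowing normally
    weight: 900m
  - service: error-injector # 10% goes to a backend that always errors
    weight: 100m
```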
12 changes: 6 additions & 6 deletions linkerd.io/content/2/tasks/getting-per-route-metrics.md
@@ -11,10 +11,10 @@ associate a specific request to a specific route.
For a tutorial that shows this functionality off, check out the
[books demo](/2/tasks/books/#service-profiles).

-You can view per-route metrics in the CLI by running `linkerd routes`:
+You can view per-route metrics in the CLI by running `linkerd viz routes`:

```bash
-$ linkerd routes svc/webapp
+$ linkerd viz routes svc/webapp
ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
GET / webapp 100.00% 0.6rps 25ms 30ms 30ms
GET /authors/{id} webapp 100.00% 0.6rps 22ms 29ms 30ms
@@ -34,7 +34,7 @@ specified in your service profile will end up there.
It is also possible to look the metrics up by other resource types, such as:

```bash
-$ linkerd routes deploy/webapp
+$ linkerd viz routes deploy/webapp
ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
[DEFAULT] kubernetes 0.00% 0.0rps 0ms 0ms 0ms
GET / webapp 100.00% 0.5rps 27ms 38ms 40ms
@@ -53,7 +53,7 @@ Then, it is possible to filter all the way down to requests going from a
specific resource to other services:

```bash
-$ linkerd routes deploy/webapp --to svc/books
+$ linkerd viz routes deploy/webapp --to svc/books
ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
DELETE /books/{id}.json books 100.00% 0.5rps 18ms 29ms 30ms
GET /books.json books 100.00% 1.1rps 7ms 12ms 18ms
@@ -66,11 +66,11 @@ PUT /books/{id}.json books 41.98% 1.4rps 73ms 97ms
## Troubleshooting

If you're not seeing any metrics, there are two likely culprits. In both cases,
-`linkerd tap` can be used to understand the problem. For the resource that
+`linkerd viz tap` can be used to understand the problem. For the resource that
the service points to, run:

```bash
-linkerd tap deploy/webapp -o wide | grep req
+linkerd viz tap deploy/webapp -o wide | grep req
```

A sample output is:
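
As a related tip, an empty `rt_route` usually means the profile's `pathRegex` does not match live traffic; one way out is to regenerate the profile from observed requests. A sketch, assuming the 2.10 CLI (the 10s duration is illustrative):

```bash
# Watch live traffic to webapp for 10 seconds, emit a ServiceProfile
# whose routes match what was actually observed, then apply it.
linkerd viz profile -n booksapp webapp --tap deploy/webapp --tap-duration 10s \
  | kubectl -n booksapp apply -f -
```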
