k6-operator: add main sections from the repo Readme.md
yorugac committed May 16, 2024
commit 2f1015bca9a22fb4db556c1a5b0d0719425dd83b
@@ -5,6 +5,87 @@ title: Install k6-operator

# Install k6-operator

## Prerequisites

The minimal prerequisite for the k6-operator is a Kubernetes cluster and access to it with [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).

## Deploying the operator

### Bundle deployment

The easiest way to install the operator is with the bundle:
```bash
curl https://raw.githubusercontent.com/grafana/k6-operator/main/bundle.yaml | kubectl apply -f -
```

The bundle includes default manifests for the k6-operator, including the `k6-operator-system` namespace and a k6-operator Deployment with the latest tagged Docker image. Customizations can be made on top of this manifest as needed, for example with `kustomize`.
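As an illustration of such a customization, a minimal `kustomization.yaml` could layer changes on top of the published bundle. This is only a sketch: the label below is made up, and you should adjust the overlay to your own needs:

```yaml
# kustomization.yaml: a hypothetical overlay on top of the published bundle
resources:
  - https://raw.githubusercontent.com/grafana/k6-operator/main/bundle.yaml

# Example customization: add a common label to every resource in the bundle
commonLabels:
  team: load-testing
```

The overlay can then be applied with `kubectl apply -k .` from the folder containing `kustomization.yaml`.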

### Deployment with Helm

Helm releases of k6-operator are published together with other Grafana Helm charts and can be installed with the following commands:

```bash
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install k6-operator grafana/k6-operator
```

You can pass additional configuration with a `values.yaml` file (an example can be found [here](https://github.com/grafana/k6-operator/blob/main/charts/k6-operator/samples/customAnnotationsAndLabels.yaml)):

```bash
helm install k6-operator grafana/k6-operator -f values.yaml
```

A complete list of options available for Helm can be found [here](https://github.com/grafana/k6-operator/blob/main/charts/k6-operator/README.md).

### Makefile deployment

To install the operator with the Makefile, the following additional tooling must be installed:
- [go](https://go.dev/doc/install)
- [kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/)

A more manual, low-level way to install the operator is by running the command below:

```bash
make deploy
```

This method may be more useful for development of the k6-operator, depending on the specifics of your setup.

## Installing the CRD

The k6-operator includes custom resources called `TestRun`, `PrivateLoadZone`, and currently also `K6`. These are installed automatically when you deploy the operator or install the bundle, but if you want to install the CRDs yourself, you can run the command below:

```bash
make install
```

{{% admonition type="warning" %}}

The `K6` CRD has been replaced by the `TestRun` CRD and will be deprecated in the future. Please use the `TestRun` CRD.

{{% /admonition %}}

## Namespaced deployment

By default, the k6-operator watches `TestRun` and `PrivateLoadZone` custom resources in all namespaces. But it is possible to configure the k6-operator to watch only a specific namespace by setting the `WATCH_NAMESPACE` environment variable on the operator's Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k6-operator-controller-manager
  namespace: k6-operator-system
spec:
  template:
    spec:
      containers:
        - name: manager
          image: ghcr.io/grafana/k6-operator:controller-v0.0.14
          env:
            - name: WATCH_NAMESPACE
              value: "some-ns"
          # ...
```

{{< section depth=2 >}}
@@ -0,0 +1,58 @@
---
weight: 300
title: Common options
---

<!-- TODO: consider removing this once the full reference is generated -->

# Common options

The only options that must be defined in the `TestRun` CRD spec are `script` and `parallelism`, but there are many others. Here are some of the most common.

## Parallelism

`parallelism` defines how many instances of k6 runners you want to create. Each instance is assigned an equal execution segment. For instance, if your test script is configured to run 200 VUs and `parallelism` is set to 4, the k6-operator creates four k6 jobs, each running 50 VUs, to achieve the desired VU count.
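The division above can be sketched as follows. This is a simplified illustration, not the operator's actual implementation (the k6-operator uses k6 execution segments under the hood), and `splitVus` is a hypothetical helper:

```javascript
// Simplified sketch of dividing a total VU count across `parallelism` runners.
// Hypothetical helper for illustration only; the k6-operator actually uses
// k6 execution segments to split the load.
function splitVus(totalVus, parallelism) {
  const base = Math.floor(totalVus / parallelism);
  const remainder = totalVus % parallelism;
  // The first `remainder` runners each take one extra VU.
  return Array.from({ length: parallelism }, (_, i) =>
    i < remainder ? base + 1 : base
  );
}

console.log(splitVus(200, 4)); // → [ 50, 50, 50, 50 ]
```

With an uneven split, e.g. 7 VUs over 3 runners, the extra VUs go to the first runners: `[3, 2, 2]`.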

## Separate

`separate: true` indicates that the jobs created need to be distributed across different nodes. This is useful if you're running a
test with a really high VU count and want to make sure the resources of each node won't become a bottleneck.
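The option goes directly into the `TestRun` spec, for example:

```yaml
spec:
  parallelism: 4
  # Schedule each of the 4 runner jobs on a different node
  separate: true
```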

## Service account

If you want to use a custom service account, you'll need to pass it to both the starter and runner objects:

```yaml
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: <test-name>
spec:
  script:
    configMap:
      name: "<configmap>"
  runner:
    serviceAccountName: <service-account>
  starter:
    serviceAccountName: <service-account>
```
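If the service account doesn't exist yet, a minimal manifest for it could look like this (the name is a placeholder, matching the spec above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <service-account>
```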

## Runner

Defines options for the test runner pods. A non-exhaustive list includes:

* passing resource limits and requests
* passing in labels and annotations
* passing in affinity and anti-affinity
* passing in a custom image
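For instance, a `runner` section combining a few of these options might look like the following sketch (the label and resource values are illustrative):

```yaml
spec:
  runner:
    image: <custom-image>
    metadata:
      labels:
        team: load-testing
    resources:
      limits:
        cpu: 200m
        memory: 1000Mi
      requests:
        cpu: 100m
        memory: 500Mi
```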

## Starter

Defines options for the starter pod. A non-exhaustive list includes:

* passing in a custom image
* passing in labels and annotations


{{< section depth=2 >}}


@@ -0,0 +1,221 @@
---
weight: 100
title: Executing k6 scripts with TestRun CRD
---

# Executing k6 scripts with TestRun CRD

## Defining test scripts

There are several ways to configure scripts in the `TestRun` CRD.

### ConfigMap

The main way to configure a script is to create a `ConfigMap` with the script contents:

```bash
kubectl create configmap my-test --from-file /path/to/my/test.js
```

Then specify it in `TestRun`:

```yaml
script:
  configMap:
    name: my-test
    file: test.js
```

{{% admonition type="note" %}}

There is a limit of 1048576 bytes for a single ConfigMap. If you need a larger test file, you'll need to use a `volumeClaim` or a `localFile` instead.

{{% /admonition %}}
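One way to catch this before uploading is a quick size check. This is an illustrative pre-check, not part of the k6-operator tooling; the script written to `/tmp/test.js` is a stand-in for your own file:

```shell
# Illustrative pre-check: verify a script fits within the 1048576-byte
# ConfigMap limit before creating the ConfigMap from it.
printf 'export default function () {}\n' > /tmp/test.js  # stand-in script
limit=1048576
size=$(wc -c < /tmp/test.js)
if [ "$size" -le "$limit" ]; then
  echo "ok: fits in a ConfigMap"
else
  echo "too large: use volumeClaim or localFile instead"
fi
```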

### VolumeClaim

If you have a PVC with the name `stress-test-volumeClaim` containing your script and any other supporting files, you can pass it to the test like this:

```yaml
spec:
  script:
    volumeClaim:
      name: "stress-test-volumeClaim"
      # test.js should exist inside the /test/ folder.
      # All the JS files and directories test.js is importing
      # should be inside the same directory as well.
      file: "test.js"
```

The pods expect to find the script files in the `/test/` folder. If `volumeClaim` fails, that's the first place to check: the latest initializer pod doesn't generate any logs, and when it can't find the file, it terminates with an error. So a missing file may not be obvious, and it makes sense to check manually that the file is present. See this [GitHub issue](https://github.com/grafana/k6-operator/issues/143) for potential improvements.

#### Sample directory structure

```
├── test
│   ├── requests
│   │   ├── stress-test.js
│   ├── test.js
```

In the above example, `test.js` imports a function from `stress-test.js` and these files would look like this:

```js
// test.js
import stressTest from "./requests/stress-test.js";

export const options = {
  vus: 50,
  duration: '10s',
};

export default function () {
  stressTest();
}
```

```js
// stress-test.js
import { sleep, check } from 'k6';
import http from 'k6/http';

export default () => {
  const res = http.get('https://test-api.k6.io');
  check(res, {
    'status is 200': () => res.status === 200,
  });
  sleep(1);
};
```

### LocalFile

If the script is present in the filesystem of a custom runner image, it can be accessed with the `localFile` option:

```yaml
spec:
  parallelism: 4
  script:
    localFile: /test/test.js
  runner:
    image: <custom-image>
```

{{% admonition type="note" %}}

If there is any limitation on using `volumeClaim` in your cluster, you can use the `localFile` option, but using `volumeClaim` is recommended.

{{% /admonition %}}


### Multi-file tests

If your k6 script is split across multiple JS files, you can simply create a ConfigMap with several data entries, like this:

```bash
kubectl create configmap scenarios-test --from-file test.js --from-file utils.js
```

If there are too many files to specify manually, using kubectl with a folder might be an option as well:
```bash
kubectl create configmap scenarios-test --from-file=./test
```
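In either case, the `TestRun` then references the ConfigMap the same way as with a single file, with `file` pointing at the entrypoint:

```yaml
spec:
  script:
    configMap:
      name: scenarios-test
      file: test.js
```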

Alternatively, you can create an archive with k6:
```bash
k6 archive test.js [args]
```

The above command creates an `archive.tar` in your current folder, unless the `-O` option is used to change the name of the output archive. You can then put that archive into a ConfigMap, similarly to a JS script:
```bash
kubectl create configmap scenarios-test --from-file=archive.tar
```

When using an archive, it must be correctly set in your YAML for the `TestRun` deployment:

```yaml
# ...
spec:
  script:
    configMap:
      name: "crocodile-stress-test"
      file: "archive.tar" # <-- change here
```

In other words, the `file` option must be the correct entrypoint for the `k6 run` command.


## Executing tests

Tests are executed by applying the custom resource `TestRun` to a cluster where the k6-operator is running. Additional optional properties of `TestRun` CRD allow you to control some key aspects of a distributed execution. For example:

```yaml
# k6-resource.yml

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-sample
spec:
  parallelism: 4
  script:
    configMap:
      name: k6-test
      file: test.js
  separate: false
  runner:
    image: <custom-image>
    metadata:
      labels:
        cool-label: foo
      annotations:
        cool-annotation: bar
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
      runAsNonRoot: true
    resources:
      limits:
        cpu: 200m
        memory: 1000Mi
      requests:
        cpu: 100m
        memory: 500Mi
  starter:
    image: <custom-image>
    metadata:
      labels:
        cool-label: foo
      annotations:
        cool-annotation: bar
    securityContext:
      runAsUser: 2000
      runAsGroup: 2000
      runAsNonRoot: true
```

The `TestRun` CR is created with this command:

```bash
kubectl apply -f /path/to/your/k6-resource.yml
```

## Cleaning up resources

After a test run completes, you need to clean up the test jobs that were created. You can do this manually by running the following command:

```bash
kubectl delete -f /path/to/your/k6-resource.yml
```

Alternatively, automatic deletion of all resources can be configured with the `cleanup` option:
```yaml
spec:
  cleanup: "post"
```

With the `cleanup` option set, the k6-operator removes the `TestRun` resource and all created resources once the test run is finished.

{{< section depth=2 >}}