# Lumen
Fast HTTP load testing CLI — dynamic templates, threshold-gated CI, and load curves.
Full documentation at lmn.talek.cloud
## Why Lumen
Most load testers answer "how fast is my API?" Lumen also answers "did this release break performance?" — by letting you define pass/fail thresholds and wiring the exit code into CI.
```sh
lmn run -H https://api.example.com/orders \
  --header "Authorization: Bearer ${API_TOKEN}" \
  -f lmn.yaml
# exits 0 if thresholds pass, 2 if they fail
```
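Because the exit code is non-zero on failure, any CI system can gate on it directly. A minimal sketch as a GitHub Actions step (the workflow layout, staging URL, and secret name here are illustrative assumptions, not from the Lumen docs):

```yaml
# .github/workflows/perf.yml (illustrative)
- name: Load test gate
  run: |
    cargo install lmn
    lmn run -H https://staging.example.com/orders \
      --header "Authorization: Bearer ${API_TOKEN}" \
      -f lmn.yaml
  env:
    API_TOKEN: ${{ secrets.API_TOKEN }}
# The step fails, blocking the pipeline, whenever lmn exits 2.
```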
```yaml
# lmn.yaml
execution:
  request_count: 1000
  concurrency: 50

thresholds:
  - metric: error_rate
    operator: lt
    value: 0.01   # < 1% errors
  - metric: latency_p99
    operator: lt
    value: 500.0  # p99 < 500ms
```
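The pass/fail semantics of the config above can be sketched as follows. This is an illustrative model of how `operator`/`value` checks map to the documented exit codes, not Lumen's actual internals:

```rust
/// Illustrative model of threshold evaluation (not Lumen's source).
enum Op { Lt, Gt }

#[allow(dead_code)]
struct Threshold {
    metric: &'static str, // e.g. "error_rate", "latency_p99"
    op: Op,
    value: f64,
}

/// A threshold passes when the observed metric satisfies the operator.
fn passes(t: &Threshold, observed: f64) -> bool {
    match t.op {
        Op::Lt => observed < t.value,
        Op::Gt => observed > t.value,
    }
}

/// Exit 0 when every threshold passes, 2 when any fails (per the README).
fn exit_code(checks: &[(Threshold, f64)]) -> i32 {
    if checks.iter().all(|(t, obs)| passes(t, *obs)) { 0 } else { 2 }
}
```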
## Installation

```sh
cargo install lmn
```

Docker (zero-install):

```sh
docker run --rm ghcr.io/talek-solutions/lmn:latest run -H http://host.docker.internal:3000/api
```

Homebrew and pre-built binaries: see the Installation docs.
## Quick Start

```sh
# 100 GET requests, see the latency table
lmn run -H https://httpbin.org/get

# POST with an inline JSON body
lmn run -H https://httpbin.org/post -M post -B '{"name":"alice"}'

# Run from a YAML config file
lmn run -f lmn.yaml
```

See the Quickstart guide for a full walkthrough.
## Features

- Dynamic request bodies — per-request random data from typed JSON templates
- Threshold-gated CI — exit code `2` on p99/error-rate/throughput failures; wires into any pipeline
- Load curves — staged virtual user ramp-up with linear or step profiles
- Auth & headers — `${ENV_VAR}` secret injection, `.env` auto-load, repeatable headers
- Response tracking — extract and aggregate fields from response bodies (e.g. API error codes)
- JSON output — machine-readable report for dashboards and CI artifacts
- Config files — full YAML config with CLI flag precedence
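The linear and step ramp profiles mentioned above can be illustrated with a generic virtual-user calculation. This is a sketch of the concept only; the function and parameter names are invented and do not reflect Lumen's implementation:

```rust
/// Generic illustration of a linear ramp: interpolate the virtual-user
/// count from `start` to `end` over `duration` seconds.
fn linear_vus(t: f64, duration: f64, start: u32, end: u32) -> u32 {
    let frac = (t / duration).clamp(0.0, 1.0);
    (start as f64 + frac * (end as f64 - start as f64)).round() as u32
}

/// Generic illustration of a step ramp: add `vus_per_step` virtual users
/// every `step_len` seconds, starting with one step's worth.
fn step_vus(t: f64, step_len: f64, vus_per_step: u32) -> u32 {
    ((t / step_len).floor() as u32 + 1) * vus_per_step
}
```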
## Observability

Stream traces to any OpenTelemetry-compatible backend:

```sh
export OTEL_EXPORTER_OTLP_ENDPOINT=http://my-collector:4318
lmn run -H https://api.example.com
```

Start a local Tempo + Grafana stack from `lmn-cli/`:

```sh
docker compose up -d
# Grafana at http://localhost:3000 → Explore → Tempo
```
## Reference
- CLI reference — full flag and config reference
- Template placeholders — request and response template reference
- JSON output schema — machine-readable report structure
## Project Structure

```
lmn/
├── lmn-core/   # engine, templates, HTTP, thresholds (library crate)
└── lmn-cli/    # CLI entry point, OTel setup (binary crate)
```

```sh
cargo build
cargo test
```
## License
Apache-2.0 — see LICENSE.