2 releases (Rust 2024 edition):

| Version | Date |
|---|---|
| 0.1.1 | Mar 1, 2026 |
| 0.1.0 | Mar 1, 2026 |
# llms

A tiny Rust CLI for streaming responses from OpenAI, Gemini, and Claude.
## What it does

- Streams model output to stdout as tokens/chunks arrive.
- Reads the prompt from `--prompt` or stdin.
- Supports provider API keys from env vars or a home config file.
- Includes config commands: `llms config set <provider> <key>`, `llms config get <provider>`, `llms config list`.
## Requirements

- Rust toolchain (stable)
- Network access to the model provider API
- An API key for the provider you use
## Build

```shell
cargo build --release
```

Binary path:

```shell
./target/release/llms
```

Or run directly with cargo:

```shell
cargo run -- --help
```
## Configure API keys

You can use either env vars or the built-in config file.

### Option A: env vars

```shell
export OPENAI_API_KEY="sk-..."
export GEMINI_API_KEY="..."
export ANTHROPIC_API_KEY="..."
```

### Option B: CLI config commands

```shell
llms config set openai sk-xxxx
llms config set gemini your-gemini-key
llms config set claude your-anthropic-key
```

Read a key:

```shell
llms config get openai
```

List configured providers:

```shell
llms config list
```

Config file location: `~/.llms/config.json`
Example file:

```json
{
  "openai": "sk-xxxx",
  "gemini": "your-gemini-key",
  "claude": "your-anthropic-key"
}
```
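If you prefer not to use the CLI commands, the same file can be created by hand. This is a sketch using only standard POSIX tools; the key values are placeholders:

```shell
# Create the documented config file manually (same path and JSON shape
# that `llms config set` writes; the key values here are placeholders).
mkdir -p ~/.llms
cat > ~/.llms/config.json <<'EOF'
{
  "openai": "sk-xxxx",
  "gemini": "your-gemini-key"
}
EOF
# The file holds secrets in plain text, so restrict it to your user.
chmod 600 ~/.llms/config.json
```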
Key precedence:

1. Provider env var (if set and non-empty)
2. `~/.llms/config.json`
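The lookup order can be sketched in shell. `resolve_openai_key` is a hypothetical helper (the real resolution happens inside `llms`), and the JSON parsing leans on `python3` rather than anything the CLI ships:

```shell
# Mirror the documented precedence for the OpenAI key:
# 1. env var if set and non-empty, 2. otherwise the "openai" field
# of ~/.llms/config.json (empty output if neither exists).
resolve_openai_key() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    printf '%s\n' "$OPENAI_API_KEY"
  else
    python3 - <<'PY'
import json, pathlib
p = pathlib.Path.home() / ".llms" / "config.json"
print(json.loads(p.read_text()).get("openai", "") if p.exists() else "")
PY
  fi
}
resolve_openai_key
```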
## Usage

Top-level help:

```shell
llms --help
```

### Basic prompt

```shell
llms --provider openai --model gpt-5 --prompt "Explain SSE in one sentence"
```

### Read prompt from stdin

```shell
echo "Write a haiku about rust" | llms --provider gemini --model gemini-3-flash-preview
```
### Provider defaults

If `--model` is omitted, the defaults are:

- `openai`: `gpt-5`
- `gemini`: `gemini-2.0-flash`
- `claude`: `claude-opus-4-6`
### Piping and redirection

The CLI writes model text to stdout and errors to stderr, so normal shell piping works.

Save output to a file:

```shell
llms --provider gemini --prompt "hello" > out.txt
```

Pipe to another process:

```shell
llms --provider gemini --prompt "hello" | tee out.txt
```

Split output and errors:

```shell
llms --provider gemini --prompt "hello" > out.txt 2> err.log
```

If running through cargo and you do not want cargo status lines:

```shell
cargo run --quiet -- --provider gemini --prompt "hello"
```
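The stdout/stderr split is plain shell behavior and can be checked without the CLI installed; `emit` below is a hypothetical stand-in playing the role of `llms`:

```shell
# Stand-in for llms: normal output to stdout, diagnostics to stderr.
emit() {
  printf 'model text\n'
  printf 'error detail\n' >&2
}

# Same redirection as above: stdout to out.txt, stderr to err.log.
emit > out.txt 2> err.log
cat out.txt   # model text only
cat err.log   # error detail only
```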
## Troubleshooting

- **Missing API key for `<provider>`**: set the matching env var, or run `llms config set <provider> <key>`.
- **No response when using stdin**: ensure stdin is not empty and not just whitespace.
- **Wrong provider name**: valid providers are `openai`, `gemini`, `claude`.
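For the missing-key case, a quick way to see which of the documented env vars are set; `check_keys` is a hypothetical helper, not part of the CLI:

```shell
# Report which provider env vars are currently set (values not printed).
check_keys() {
  for v in OPENAI_API_KEY GEMINI_API_KEY ANTHROPIC_API_KEY; do
    if [ -n "$(printenv "$v" 2>/dev/null)" ]; then
      echo "$v is set"
    else
      echo "$v is NOT set"
    fi
  done
}
check_keys
```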
## Notes

- API keys are stored in plain text in `~/.llms/config.json`.
- Treat that file as sensitive.
## License

MIT