Your AI pair programmer that lives in the terminal.
Edit files, run commands, analyze code, manage git — all through natural language. Limit connects to Anthropic Claude, OpenAI, z.ai, or a local LLM to help you code, with a beautiful TUI or a simple REPL.
```bash
curl -fsSL https://raw.githubusercontent.com/marioidival/limit/trunk/install.sh | bash
```

Or install via cargo:

```bash
cargo install limit-cli
```

Or build from source:

```bash
# 1. Clone and build
git clone https://github.com/marioidival/limit.git && cd limit
cargo build --release

# 2. Configure your LLM provider
echo 'provider = "anthropic"' > ~/.limit/config.toml
export ANTHROPIC_API_KEY="your-key-here"

# 3. Run
./target/release/lim
```

That's it! Start chatting with your AI coding assistant.
Use the `--no-tui` flag for the text-based REPL:

```bash
lim --no-tui
```
- Multi-Provider LLM Support — Anthropic Claude, OpenAI, z.ai, and local LLMs (Ollama, LM Studio, vLLM)
- 17 Built-in Tools — File I/O, Bash execution, Git operations, code analysis, web search/fetch
- Session Persistence — Auto-save/restore conversation history
- Token Tracking — SQLite-based usage tracking with cost estimation
- Docker Sandbox — Optional containerized tool execution for isolation
- LSP Integration — Go-to-definition, find-references
- AST-Aware Search — Code search that understands syntax (Rust, TypeScript, Python)
- Markdown Rendering — Rich formatting with syntax-highlighted code blocks
| Feature | Limit | Aider | Cursor | GitHub Copilot |
|---|---|---|---|---|
| Terminal-native | ✅ | ✅ | ❌ | ❌ |
| Multi-provider LLM | ✅ | ✅ | ❌ | ❌ |
| Local LLM support | ✅ | ✅ | ❌ | ❌ |
| Docker sandbox | ✅ | ❌ | ❌ | ❌ |
| Session persistence | ✅ | ✅ | ✅ | ✅ |
| Token tracking | ✅ | ❌ | ❌ | ✅ |
| Open source | ✅ | ✅ | ❌ | ❌ |
| AST-aware search | ✅ | ❌ | ✅ | ❌ |
| LSP integration | ✅ | ❌ | ✅ | ✅ |
Perfect for:
- 🖥️ Developers who live in the terminal
- 🔒 Privacy-conscious teams wanting Docker isolation
- 🏠 Local LLM enthusiasts running models offline
- 📊 Projects requiring audit trails of AI interactions
- 🔄 Multi-model workflows (switch between Claude, GPT-4, z.ai, local)
```bash
curl -fsSL https://raw.githubusercontent.com/marioidival/limit/trunk/install.sh | bash
```

This will install Limit to `~/.local/bin/lim`.
```bash
git clone https://github.com/marioidival/limit.git
cd limit
cargo build --workspace --release
```

One-line install:
- curl or wget
- Unix-like OS (Linux, macOS)
From source:
- Rust 1.70+
- git
- Unix-like OS (Linux, macOS)
📚 Quick Setup Guides:
- Anthropic Claude - Recommended for code analysis
- OpenAI GPT - Fast and reliable
- z.ai - Cost-effective alternative
- Local LLMs - Ollama, LM Studio, vLLM
- Full Configuration Guide - All options in one place
Create a configuration file at ~/.limit/config.toml:
```toml
provider = "anthropic"

[providers.anthropic]
# api_key is optional - falls back to ANTHROPIC_API_KEY env var
api_key = "sk-ant-api03-..."
model = "claude-sonnet-4-6-20260217" # or "claude-opus-4-6-20260205" for most capable
max_tokens = 4096
timeout = 60

# Optional: Custom API endpoint
# base_url = "https://api.custom-provider.com/v1/messages"
```

For OpenAI:

```toml
provider = "openai"

[providers.openai]
# api_key is optional - falls back to OPENAI_API_KEY or ZAI_API_KEY env var
api_key = "sk-..."
model = "gpt-5.4" # or "gpt-4" for legacy
max_tokens = 4096
timeout = 60

# base_url = "http://localhost:8080/v1/chat/completions"
# Note: Must include full endpoint path (e.g., /v1/chat/completions)
```

For z.ai:

```toml
provider = "zai"

[providers.zai]
# api_key is optional - falls back to ZAI_API_KEY env var
api_key = "..."
model = "glm-5" # 744B parameters, 200K context, best open-weights model
max_tokens = 4096
timeout = 60

# Optional: Enable thinking mode for complex reasoning tasks
# thinking_enabled = false
# clear_thinking = true # Set to false for Preserved Thinking in multi-turn
```

For local LLMs:

```toml
provider = "local" # or "ollama", "lmstudio", "vllm"

[providers.local]
model = "llama3.3" # or "qwen2.5", "deepseek-r1", "mistral"
base_url = "http://localhost:11434/v1/chat/completions"
# api_key not required for local servers
max_tokens = 4096
timeout = 120 # Longer timeout for local inference
```

See Local LLM Providers for detailed setup guides.
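Before pointing Limit at a local server, it can help to confirm the endpoint is actually up. A minimal probe, assuming Ollama's default port 11434 (adjust `BASE_URL` for LM Studio, vLLM, or a custom server):

```shell
#!/bin/sh
# Probe the local OpenAI-compatible endpoint before configuring Limit.
# Assumes Ollama's default port (11434); other servers use different ports.
BASE_URL="http://localhost:11434"

if curl -fsS --max-time 2 "$BASE_URL/v1/models" >/dev/null 2>&1; then
    echo "local server reachable at $BASE_URL"
else
    echo "no server at $BASE_URL - start one first (e.g. 'ollama serve')"
fi
```

If the probe fails, start the server first, then launch `lim`.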
Provider API keys can be set via environment variables as fallback:
| Variable | Provider |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic Claude |
| `OPENAI_API_KEY` | OpenAI |
| `ZAI_API_KEY` | z.ai |
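The precedence is: an `api_key` in `config.toml` wins, and the environment variable is used only when the config omits it. That fallback can be illustrated in shell (the variable names here are just for the sketch, not Limit internals):

```shell
#!/bin/sh
# Illustration only: the config value takes precedence, the env var is the fallback.
CONFIG_API_KEY=""                      # empty simulates an omitted api_key
export ANTHROPIC_API_KEY="sk-ant-from-env"

# ${A:-B} expands to A unless A is unset or empty, in which case it yields B.
EFFECTIVE_KEY="${CONFIG_API_KEY:-$ANTHROPIC_API_KEY}"
echo "using key: $EFFECTIVE_KEY"       # -> using key: sk-ant-from-env
```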
```bash
lim
```

Or without the TUI:

```bash
lim --no-tui
```

If installed from source without the install script:

```bash
cargo run --package limit-cli
```
lim> Read the file src/main.rs and explain what it does
[Reading src/main.rs...]
This file contains the main entry point for the CLI application...
| Command | Description |
|---|---|
| `/exit` | Save session and exit |
| `/clear` | Clear the screen |
| `/help` | Show available commands |
| `/model` | Show current model configuration |
| `/session list` | List all saved sessions |
| `/session new` | Create a new session |
| `/session load <id>` | Load a specific session by ID |
| Tool | Description |
|---|---|
| `file_read` | Read file contents (max 50MB) |
| `file_write` | Write content to file |
| `file_edit` | Edit file using diff-based replacement |
| Tool | Description |
|---|---|
| `bash` | Execute shell commands with timeout |
| Tool | Description |
|---|---|
| `git_status` | Show repository status |
| `git_diff` | Show changes |
| `git_log` | Show commit history |
| `git_add` | Stage files |
| `git_commit` | Create commit |
| `git_push` | Push to remote |
| `git_pull` | Pull from remote |
| `git_clone` | Clone repository |
| Tool | Description |
|---|---|
| `grep` | Search files with regex |
| `ast_grep` | AST-aware code search (Rust, TypeScript, Python) |
| `lsp` | LSP operations (go-to-definition, find-references) |
| Tool | Description |
|---|---|
| `web_search` | Search the web using Exa AI for current information |
| `web_fetch` | Fetch and convert web pages to markdown |
| Crate | Description |
|---|---|
| `limit-llm` | Multi-provider LLM client with streaming, SQLite tracking, binary persistence, model handoff |
| `limit-agent` | Agent runtime with tool registry, parallel execution, event system, Docker sandbox |
| `limit-cli` | REPL interface with 17 tools, markdown rendering, session management |
See DEVELOPMENT_GUIDE.md for:
- Building and testing
- Project structure
- Adding new tools/providers
- Code style guidelines
- Debugging tips
- Configuration Overview - All configuration options in one place
- Anthropic Claude Setup - Detailed guide for Claude setup
- OpenAI Setup - Detailed guide for GPT-5.4 setup
- z.ai Setup - Detailed guide for z.ai (GLM-5) setup
- Local LLM Providers - Ollama, LM Studio, vLLM, and custom servers
Sessions are automatically saved to ~/.limit/sessions/ and include:
- Conversation history (messages)
- Session metadata (SQLite)
- Agent state
- Token usage statistics
On startup, the last session is automatically restored.
Usage statistics are tracked in ~/.limit/tracking.db:
- Requests count
- Input/output tokens
- Cost estimation
- Duration metrics
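Cost estimation is, at its core, token counts multiplied by per-million-token prices. A back-of-the-envelope sketch (the rates below are placeholders for illustration, not Limit's actual pricing table):

```shell
#!/bin/sh
# Hypothetical rates: $3 per 1M input tokens, $15 per 1M output tokens.
INPUT_TOKENS=12000
OUTPUT_TOKENS=3500

COST=$(awk -v i="$INPUT_TOKENS" -v o="$OUTPUT_TOKENS" \
    'BEGIN { printf "%.4f", (i * 3 + o * 15) / 1000000 }')
echo "estimated cost: \$$COST"         # -> estimated cost: $0.0885
```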
Limit can execute tools in isolated Docker containers:
```bash
# Requires Docker to be installed and running
# Automatically detected and used if available
```

Sandbox features:
- 🔒 Isolated execution environment
- 📁 Read-only project mount
- 🚫 No network access
- 💾 512MB memory limit
- ⏱️ 60s timeout
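The constraints above map onto standard Docker flags. A rough sketch of what an equivalent manual invocation could look like (the image name and mount path are illustrative; Limit manages this internally, and the command is built as a string here so it can be inspected without Docker installed):

```shell
#!/bin/sh
# Illustrative docker run mirroring the sandbox constraints:
# no network, 512MB memory cap, read-only project mount, 60s timeout.
SANDBOX_CMD="docker run --rm --network none --memory 512m \
  -v \$PWD:/workspace:ro sandbox-image timeout 60 bash -c 'echo hello'"

echo "$SANDBOX_CMD"
```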
We welcome contributions!
- Fork and clone the repository
- Create a feature branch
- Run tests: `cargo test --workspace`
- Run lints: `cargo clippy --all-targets && cargo fmt`
- Submit a PR with a conventional commit message
| Limitation | Status |
|---|---|
| Unix-only TUI | Windows support planned |
| Max 50MB file reads | By design |
| Max 100 iterations per session | Configurable via `max_iterations` |
MIT
