#development-tools #llm #code-analysis #ai-agent #context-generator

bin+lib amdb

Turn your codebase into AI context. A high-performance context generator for LLMs (Cursor, Claude) using Tree-sitter and Vector Search.

8 releases

new 0.4.0 Feb 13, 2026
0.3.3 Feb 9, 2026
0.2.2 Feb 2, 2026
0.2.1 Jan 26, 2026
0.1.1 Jan 23, 2026

#2200 in Text processing

MIT license

69KB
1.5K SLoC

amdb: AI Context Generator


⚡ The Context Problem

AI coding assistants (Cursor, Windsurf, Claude) are powerful, but they are blind. They only see the files you open. They lack the deep, structural understanding of your entire codebase that you have.

amdb (Agent Memory Database) solves this. It scans your local project, builds a vector index of your code, and generates a single, highly-optimized Markdown context file. Feed this file to your AI, and watch it understand your project like never before.


📦 Installation

Option 1: Install Script (Recommended)

You don't need Rust installed. Just run this script to install the latest binary automatically. It works on macOS (Intel/Apple Silicon) and Linux (including WSL).

curl --proto '=https' --tlsv1.2 -LsSf https://github.com/BETAER-08/amdb/releases/latest/download/amdb-installer.sh | sh
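To confirm the installer put the binary on your PATH, a plain shell check (nothing amdb-specific is assumed here) is enough:

command -v amdb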

Option 2: Manual Download

Prefer to download the file yourself? Go to the Releases Page and download the version for your OS.

Option 3: Install via Cargo

If you have the Rust toolchain installed:

cargo install amdb

🚀 Quick Start

1. Initialize Project

Run this in your project root. amdb will scan your code (Rust, Python, JS/TS, and the other supported languages listed below), extract symbols, and build a vector database in a hidden .database/ folder.

amdb init

You can also specify a target directory:

amdb init ./my-project
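Once init finishes, the index lives in the hidden .database/ folder mentioned above. A quick listing (ordinary shell, shown only as a sanity check) confirms it was created:

# Verify the index directory exists after init
ls -a .database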

2. Generate Context

Create a full project summary. This generates .amdb/context.md, which contains a compressed map of your entire codebase.

amdb generate

🔥 Pro Tip: Drag and drop .amdb/context.md into your AI chat (Cursor/Claude) to give it "God Mode" understanding of your project.
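If you prefer the terminal over drag-and-drop, you can also inspect or copy the generated file directly. The commands below are ordinary shell utilities, not amdb subcommands:

# Preview the generated context
head -n 40 .amdb/context.md

# macOS only: copy the whole file to the clipboard for pasting into a chat
pbcopy < .amdb/context.md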


🧠 Advanced Usage: Focus Mode

For large projects, a full context might be too big. Use Focus Mode to generate a summary relevant to a specific feature or bug. amdb uses hybrid search (exact match first, then vector search) to find the most relevant files.

# Example: generating context for authentication logic
amdb generate --focus "login authentication jwt"

This creates a targeted summary (e.g., in .amdb/) containing only the symbols and files relevant to "login authentication jwt".

🎯 Depth Control: Expand Context with Call Graph

When using focus mode, you can control how deeply amdb explores related files using the call graph. The --depth flag determines how many levels of function calls to traverse from your initial matches.

# Depth 0: Only files that exactly match the query
amdb generate --focus "authenticate" --depth 0

# Depth 1 (default): Include files directly called by matched files
amdb generate --focus "authenticate" --depth 1

# Depth 2: Include files 2 levels deep in the call chain
amdb generate --focus "authenticate" --depth 2

How it works:

  1. Exact Match Priority: First looks for files and symbols that exactly match your query
  2. Vector Search Fallback: If no exact matches are found, falls back to semantic similarity search
  3. Call Graph Traversal: Expands the context by following function calls to depth N
  4. Smart Filtering: Only includes files within the similarity threshold (0.25) to keep the context relevant
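In practice this means a query that names an exact symbol is resolved by direct lookup, while a looser natural-language query falls back to the vector index. Both queries below are hypothetical; substitute names from your own codebase:

# Exact-match path: the query names a real symbol
amdb generate --focus "authenticate_user"

# Vector-search fallback: no literal match, so semantically similar
# files (auth handlers, token helpers, ...) are included instead
amdb generate --focus "how users sign in"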

Example Use Cases:

  • --depth 0: When you need only the core implementation (e.g., a single module)
  • --depth 1: When you need immediate dependencies (default, works for most cases)
  • --depth 2+: When debugging complex issues that span multiple layers
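As a concrete (hypothetical) example, a debugging session that spans several layers might combine the flags documented above and below:

# Trace a bug across multiple layers, with detailed logs
amdb generate --focus "payment processing webhook" --depth 2 --verbose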

🔄 Daemon Mode: Auto-Sync Your Context

Want your AI context to stay fresh automatically? Use Daemon Mode to watch your project for changes. When you edit, rename, or delete files, amdb instantly updates the database in the background.

amdb daemon

Or specify a directory:

amdb daemon ./my-project

The daemon will:

  • ✅ Automatically detect file changes (create, modify, delete, rename)
  • ✅ Update the vector database in real-time
  • ✅ Keep your context synchronized with your codebase
  • ✅ Run silently in the background

Pro Tip: Run the daemon in a separate terminal window while you code. Your AI context stays up-to-date without manually re-running amdb init.
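If you would rather not dedicate a terminal window to it, standard shell job control keeps the daemon alive in the background (this is plain POSIX shell, not an amdb feature):

# Start the daemon in the background and keep it running after the shell exits
nohup amdb daemon ./my-project > amdb-daemon.log 2>&1 &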


🛠 Supported Languages

amdb uses robust Tree-sitter parsers to fully understand the syntax and structure of:

  • Rust (.rs)
  • Python (.py)
  • JavaScript (.js, .jsx, .mjs)
  • TypeScript (.ts, .tsx)
  • C (.c, .h)
  • C++ (.cpp, .hpp, .cc, .cxx)
  • C# (.cs)
  • Go (.go)
  • Java (.java)
  • Ruby (.rb)
  • PHP (.php)
  • HTML (.html, .htm)
  • CSS (.css)
  • JSON (.json)
  • Bash (.sh, .bash)

⚙️ Configuration

Custom Configuration (Optional)

You can customize amdb behavior by creating an amdb.toml file in your project root:

server_port = 3000

exclude_patterns = [
    "target",
    ".git",
    "node_modules",
    ".amdb",
    ".fastembed_cache",
    "__pycache__",
    "dist",
    "build"
]

Configuration Options:

  • server_port: Port for future server features (default: 3000)
  • exclude_patterns: Directories and patterns to ignore during scanning

Verbose Mode

Need detailed logs for debugging? Add the --verbose (or -v) flag to any command:

amdb init --verbose
amdb generate --verbose
amdb daemon --verbose

This outputs detailed debug information about file scanning, parsing, and embedding generation.


📝 Git Configuration

amdb generates local files that should usually be ignored by Git. Add this to your .gitignore:

.database/
.amdb/
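Appending the two entries from the shell (ordinary output redirection, nothing amdb-specific) avoids opening an editor:

printf '.database/\n.amdb/\n' >> .gitignore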

Generated by amdb • The Missing Memory for AI Agents

For bug reports or inquiries, please email try.betaer@gmail.com.

Dependencies

~207MB
~5M SLoC