#task-manager #workflow #intent

bin+lib intent-engine

A command-line database service for tracking strategic intent, tasks, and events

58 releases (10 breaking)

new 0.11.1 Mar 10, 2026
0.10.11 Jan 1, 2026
0.10.10 Dec 29, 2025
0.8.4 Nov 30, 2025

#569 in Database interfaces

MIT/Apache

1.5MB
30K SLoC

  • Rust: 26K SLoC (0.1% comments)
  • Shell: 2K SLoC (0.1% comments)
  • TypeScript: 1.5K SLoC (0.0% comments)
  • PowerShell: 310 SLoC (0.1% comments)
  • Vue: 249 SLoC (0.0% comments)
  • JavaScript: 112 SLoC (0.1% comments)
  • Ruby: 62 SLoC
  • Batch: 2 SLoC

Intent-Engine



Persistent memory for AI coding assistants.


AI Forgets. Every Time.

Without Intent-Engine:

Day 1: "Build authentication"
       AI works brilliantly...
       [session ends]

Day 2: "Continue auth"
       AI: "What authentication?"

With Intent-Engine:

Day 1: "Build authentication"
       AI works, saves progress...
       [session ends]

Day 2: "Continue auth"
       AI: "Resuming #42: JWT auth.
            Done: token generation.
            Next: refresh tokens."

One command restores everything: ie status


Visual Dashboard

See your entire task structure at a glance:

ie dashboard


Features:

  • Task Navigator — Hierarchical tree view with search
  • Task Detail — Full spec with markdown rendering (mermaid diagrams, code blocks)
  • Decision Timeline — Chronological log of all decisions and notes
  • Multi-project Support — Switch between projects via tabs

Not Just Memory — Infrastructure

What actually happens when things go wrong:

  • Session ends → ✓ Persisted
  • Tool crashes → ✓ Recoverable
  • Week later → ✓ Full history
  • Multiple agents → ✓ Isolated
  • Complex project → ✓ Focus-driven

Why It Works

Minimal Footprint — ~200 tokens overhead, single binary, no daemons

Battle-Tested Stack — Rust + SQLite + FTS5; millisecond full-text queries over GB-scale history, all stored locally
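The FTS5 claim is easy to see in miniature with the stock sqlite3 CLI. A minimal sketch only: the `events` table below is hypothetical and says nothing about Intent-Engine's actual schema.

```shell
# Hypothetical one-table schema; demonstrates the FTS5 full-text pattern the stack relies on.
sqlite3 :memory: <<'SQL'
CREATE VIRTUAL TABLE events USING fts5(kind, body);
INSERT INTO events VALUES ('decision', 'chose JWT for auth tokens');
INSERT INTO events VALUES ('note', 'refresh tokens still pending');
-- MATCH uses the inverted index, so this stays fast at GB scale.
SELECT body FROM events WHERE events MATCH 'auth';
SQL
```

FTS5 indexes tokens rather than scanning text, which is why keyword search over years of history stays in the millisecond range.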


The Bigger Picture

The unsolved problem in AI agents: tasks that span days or weeks.

Intent-Engine provides the foundation:

Week-long refactoring:

├── Agent A (session: "api")    → focus: #12 REST endpoints
├── Agent B (session: "db")     → focus: #15 Schema migration
└── Agent C (session: "test")   → focus: #18 Integration tests
                                  depends_on: [#12, #15]

  • Interruptions → Persistent memory
  • Multi-agent → Session isolation
  • Scheduling → Dependency graph (depends_on)
  • Context explosion → Focus-driven retrieval

Result: Reliable multi-day, multi-agent workflows.
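The dependency graph above could be handed to `ie plan` as JSON. A sketch under assumptions: the field names below (`id`, `name`, `depends_on`) are illustrative guesses, not the documented schema.

```shell
# Illustrative payload only; field names are assumed, not the documented ie plan schema.
echo '{
  "tasks": [
    { "id": 12, "name": "REST endpoints" },
    { "id": 15, "name": "Schema migration" },
    { "id": 18, "name": "Integration tests", "depends_on": [12, 15] }
  ]
}' | ie plan
```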


Get Started

Claude Code

/plugin marketplace add wayfind/origin-task
/plugin install intent-engine

OpenCode

curl -fsSL https://raw.githubusercontent.com/wayfind/intent-engine/main/opencode-plugin/intent-engine.ts \
  -o ~/.config/opencode/plugin/intent-engine.ts

Manual Install

# Choose one
npm install -g @origintask/intent-engine
brew install wayfind/tap/intent-engine
cargo install intent-engine

# Or use the install script
curl -fsSL https://raw.githubusercontent.com/wayfind/intent-engine/main/scripts/install/ie-manager.sh | sh -s install

Core Commands

ie status                         # Restore context
ie dashboard                      # Open visual dashboard
echo '{"tasks":[...]}' | ie plan  # Create/update tasks
ie log decision "chose X"         # Record decisions
ie search "keyword"               # Search history

LLM-Powered Features (Optional)

Event-to-Task Synthesis - Automatically generate structured task summaries from event history:

# Configure LLM (one-time setup)
ie config set llm.endpoint "http://localhost:8080/v1/chat/completions"
ie config set llm.api_key "sk-your-key"
ie config set llm.model "gpt-3.5-turbo"  # Or local model

# Test connection
ie config test-llm

# Now when completing tasks, synthesis happens automatically for AI-owned tasks
ie task done 42  # Generates structured Goal/Approach/Decisions/Outcome summary

Cost Awareness:

  • 1,500 tokens per synthesis ($0.003 with GPT-3.5-turbo)
  • 20 tasks/day ≈ $22/year with GPT-3.5, or use local models (free)
  • Synthesis only happens when LLM configured (graceful degradation)
  • See LLM Use Cases for full details
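The yearly figure is straightforward arithmetic, shown here as a quick sanity check:

```shell
# $0.003 per synthesis * 20 tasks/day * 365 days ≈ $22/year
awk 'BEGIN { printf "$%.2f/year\n", 0.003 * 20 * 365 }'
```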

How It Works

Session Start  →  ie status  →  Full context restored
                                       ↓
Working        →  ie plan    →  Tasks tracked
               →  ie log     →  Decisions recorded
                                       ↓
Interruption   →  Auto-persisted
                                       ↓
Next Session   →  ie status  →  Continue where you left off

Documentation


MIT OR Apache-2.0 · GitHub

Give your AI the memory it deserves.

Dependencies

~58–110MB
~2M SLoC