Ship agents like software. Govern them like infrastructure.
MAPLE is the open-source runtime and supply-chain foundation for the MapleAI Agent OS. It combines governed execution, worldline identity, provenance, package artifacts, model control, and operational surfaces for agent systems that can create real consequences.
Brand: MapleAI
Legal entity: MapleAI Intelligence Inc.
- Runtime and consequence control: `maple-cli`, `palm-daemon`, `maple-runtime`, `worldline-*`, and `maple-kernel-*`
- Package and registry foundations: `maple-package`, `maple-init`, `maple-build`, `maple-package-trust`, and `maple-registry-client`
- Model-control foundations: `maple-model-core`, `maple-model-router`, `maple-model-server`, `maple-model-benchmark`, backend adapters, and PALM playground integration
- Governance and improvement: `maple-guard-*`, `maple-foundry-*`, `maple-fleet-*`, and `palm-*`
MAPLE's Docker-like layer is the agent package supply chain. The core idea is that an agent is a versioned artifact with an explicit contract, not a folder of prompts plus ad hoc glue.
Implemented foundations:
- `maple-package` parses and validates `Maplefile.yaml`
- `maple-init` generates templates and scaffold directories for package kinds
- `maple-build` resolves dependencies, assembles deterministic OCI layers, and writes `maple.lock`
- `maple-package-trust` signs digests, generates SBOMs, and creates build attestations
- `maple-registry-client` pushes, pulls, and mirrors OCI artifacts
- `maple-fleet-stack` defines stack YAML and dependency ordering for multi-service agent systems
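As an illustration of the package contract, a minimal `Maplefile.yaml` might look like the sketch below. The field names here are hypothetical and chosen only to convey the idea of a versioned artifact with an explicit contract; the implemented schema lives in docs/guides/maplefile.md.

```yaml
# Hypothetical sketch only -- field names are illustrative, not the implemented schema.
# See docs/guides/maplefile.md for the authoritative Maplefile.yaml reference.
name: demo-agent
version: 0.1.0
kind: agent
dependencies:
  - maple-runtime
entrypoint: src/main.rs
```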
Current instruction:
- Author a `Maplefile.yaml` using the implemented schema in docs/guides/maplefile.md.
- Scaffold package directories manually or via the `maple-init` crate.
- Use the build, trust, and registry crates from Rust or internal automation.
- Use PALM plus fleet crates for deployment control.
Important: the final top-level `maple build`, `maple sign`, `maple push`, and `maple up` UX is not exposed in `maple-cli` yet. The crates are present; the public product CLI is still converging.
MAPLE's Ollama-like layer is model control, not just local model download. The repo already implements model metadata, a local model store, routing policy, backend neutrality, and OpenAI-compatible serving types.
Implemented foundations:
- `maple-model-core` stores models under `~/.maple/models` and parses `MapleModelfile`
- `maple-model-router` routes across backends with policy, fallback, and circuit breaking
- `maple-model-server` provides OpenAI-compatible request and response types plus handler logic
- `maple-model-benchmark` defines benchmark suites and quality gates
- PALM playground can target `local_llama`, `open_ai`, `anthropic`, `grok`, and `gemini`
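Because `maple-model-server` uses OpenAI-compatible request and response types, the wire format is the familiar chat-completions JSON. The snippet below writes a minimal request body (the standard OpenAI chat schema; the model name is just an example from this guide) and shows, in a comment, how the same body would be sent to Ollama's OpenAI-compatible endpoint on its default port.

```shell
# Write a minimal OpenAI-compatible chat request body.
# The schema is the standard chat-completions shape; the model name is an example.
cat > /tmp/chat-request.json <<'EOF'
{
  "model": "llama3.2:3b",
  "messages": [
    {"role": "user", "content": "Summarize current runtime status"}
  ]
}
EOF
echo "wrote $(wc -c < /tmp/chat-request.json) bytes"

# With Ollama running locally, the same body works against its
# OpenAI-compatible endpoint (default bind assumed):
#   curl http://127.0.0.1:11434/v1/chat/completions \
#     -H 'Content-Type: application/json' \
#     -d @/tmp/chat-request.json
```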
Current instruction:
- Start Ollama locally:

  ```shell
  ollama serve
  ollama pull llama3.2:3b
  ```

- Verify the runtime path:

  ```shell
  cargo run -p maple-cli -- doctor --model llama3.2:3b
  ```

- Start PALM:

  ```shell
  cargo run -p maple-cli -- daemon start --foreground
  ```

- Point PALM playground at Ollama:

  ```shell
  cargo run -p palm -- playground set-backend \
    --kind local_llama \
    --model llama3.2:3b \
    --endpoint http://127.0.0.1:11434
  cargo run -p palm -- playground infer "Summarize current runtime status"
  ```

Important: the final `maple model pull`, `maple model run`, and `maple model serve` CLI is not exposed in `maple-cli` yet. Today the practical operator path is Ollama plus the PALM playground plus the model crates.
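Before running `doctor`, a quick reachability probe against Ollama can save a confusing failure. This sketch assumes Ollama's default bind of 127.0.0.1:11434 and uses its `/api/tags` endpoint, which lists pulled models.

```shell
# Probe Ollama's HTTP API on its default port; /api/tags lists pulled models.
if curl -fsS --max-time 2 http://127.0.0.1:11434/api/tags >/dev/null 2>&1; then
  echo "ollama: reachable"
else
  echo "ollama: not reachable on 127.0.0.1:11434 (is 'ollama serve' running?)"
fi
```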
```shell
# Inspect the current CLI surface
cargo run -q -p maple-cli -- --help

# Check PALM, Postgres, and Ollama connectivity
cargo run -p maple-cli -- doctor --model llama3.2:3b

# Run the local agent demo
cargo run -p maple-cli -- agent demo --prompt "log current runtime status"

# Start the daemon
cargo run -p maple-cli -- daemon start --foreground

# In another terminal: inspect the runtime
cargo run -p maple-cli -- kernel status
cargo run -p maple-cli -- worldline create --profile agent --label demo-agent
cargo run -p maple-cli -- worldline list

# Use PALM directly or via `maple palm ...`
cargo run -p palm -- playground backends
cargo run -p palm -- deployment list
```

```mermaid
graph TD
    A[Reference Agents] --> B[Fleet, Foundry, Guard]
    B --> C[Packages, Registry, Models]
    C --> D[WorldLine Kernel]
    D --> E[Types, Identity, Temporal Model, Cryptography]
```
- Reference agents: support, finance, compliance, and operator workflows
- Fleet / Foundry / Guard: rollout, eval, approvals, policy, and compliance
- Packages / Registry / Models: artifact supply chain plus model routing and serving foundations
- WorldLine kernel: commitment boundary, memory, provenance, and event fabric
- Foundation: types, identity, temporal, and cryptographic primitives
```
maple/
├── crates/      # Runtime, package, model, guard, fleet, and worldline crates
├── contracts/   # Packaging and conformance contracts
├── examples/    # Runnable worldline and runtime examples
├── docs/        # Canonical documentation set
├── ibank/       # Domain-specific financial application surfaces
└── deploy/      # Deployment assets when present
```
MAPLE is in the middle of the MapleAI Agent OS redesign.
- The runtime, worldline, daemon, and PALM operational surfaces are the most complete public interfaces today.
- The package, registry, model, guard, foundry, and fleet layers are implemented primarily as crates.
- The docs in this repo now distinguish between "implemented now" and "target product UX" where that boundary matters.
- Website: https://www.mapleai.org
- Docs: https://www.mapleai.org/docs
- Email: hello@mapleai.org
Copyright 2026 - MapleAI Intelligence Inc.