# ObjectiveAI API Server

Score everything. Rank everything. Simulate anyone.

A self-hostable API server for ObjectiveAI - run the full ObjectiveAI platform locally, or use the library to build your own custom server.

Website | API | GitHub | Discord
## Overview

This crate provides two ways to use the ObjectiveAI API:

- **Run the server** - start a local instance of the ObjectiveAI API
- **Import as a library** - build your own server with custom authentication, routing, or middleware
## Running Locally

### Prerequisites

- Rust (latest stable)
- An OpenRouter API key (for LLM access)
- Optionally, an ObjectiveAI API key (for Profile Computation)
### Quick Start

```bash
# Clone the repository
git clone https://github.com/ObjectiveAI/objectiveai
cd objectiveai/objectiveai-api

# Create a .env file
cat > .env << EOF
OPENROUTER_API_KEY=sk-or-...
OBJECTIVEAI_API_KEY=oai-... # Optional
EOF

# Run the server
cargo run --release
```
The server starts on `http://localhost:5000` by default.
### Environment Variables

| Variable | Default | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | (required) | Your OpenRouter API key |
| `OBJECTIVEAI_API_KEY` | (optional) | ObjectiveAI API key for caching and remote Functions |
| `OBJECTIVEAI_API_BASE` | `https://api.objective-ai.io` | ObjectiveAI API base URL |
| `OPENROUTER_API_BASE` | `https://openrouter.ai/api/v1` | OpenRouter API base URL |
| `ADDRESS` | `0.0.0.0` | Server bind address |
| `PORT` | `5000` | Server port |
| `USER_AGENT` | (optional) | User agent for upstream requests |
| `HTTP_REFERER` | (optional) | HTTP referer for upstream requests |
| `X_TITLE` | (optional) | `X-Title` header for upstream requests |
### Backoff Configuration

| Variable | Default | Description |
|---|---|---|
| `CHAT_COMPLETIONS_BACKOFF_INITIAL_INTERVAL` | `100` | Initial retry interval (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MAX_INTERVAL` | `1000` | Maximum retry interval (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MAX_ELAPSED_TIME` | `40000` | Maximum total retry time (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MULTIPLIER` | `1.5` | Backoff multiplier |
| `CHAT_COMPLETIONS_BACKOFF_RANDOMIZATION_FACTOR` | `0.5` | Randomization factor |
## Using as a Library

Add to your `Cargo.toml`:

```toml
[dependencies]
objectiveai-api = "0.1.0"
```
### Example: Custom Server

```rust
use objectiveai_api::{chat, ctx, vector, functions, ensemble, ensemble_llm};
use std::sync::Arc;

// Create your HTTP client
let http_client = reqwest::Client::new();

// Create the ObjectiveAI HTTP client
let objectiveai_client = Arc::new(objectiveai::HttpClient::new(
    http_client.clone(),
    Some("https://api.objective-ai.io".to_string()),
    Some("apk...".to_string()),
    None, None, None,
));

// Build the component stack
let ensemble_llm_fetcher = Arc::new(
    ensemble_llm::fetcher::CachingFetcher::new(Arc::new(
        ensemble_llm::fetcher::ObjectiveAiFetcher::new(objectiveai_client.clone()),
    )),
);

let chat_client = Arc::new(chat::completions::Client::new(
    ensemble_llm_fetcher.clone(),
    Arc::new(chat::completions::usage_handler::LogUsageHandler),
    // ... upstream client configuration
));

// Use in your own Axum/Actix/Warp routes
```
## Architecture

### Modules

| Module | Description |
|---|---|
| `auth` | Authentication and API key management |
| `chat` | Chat completions with Ensemble LLMs |
| `vector` | Vector completions for scoring and ranking |
| `functions` | Function execution and Profile management |
| `ensemble` | Ensemble management and caching |
| `ensemble_llm` | Ensemble LLM management and caching |
| `ctx` | Request context for dependency injection |
| `error` | Error response handling |
| `util` | Utilities for streaming and indexing |
### Component Stack

```text
Request
   │
   ▼
┌───────────────────────────────────────────────┐
│ Functions Client                              │
│  - Executes Function pipelines                │
│  - Handles Profile weights                    │
└───────────────────────────────────────────────┘
   │
   ▼
┌───────────────────────────────────────────────┐
│ Vector Completions Client                     │
│  - Runs ensemble voting                       │
│  - Combines votes into scores                 │
└───────────────────────────────────────────────┘
   │
   ▼
┌───────────────────────────────────────────────┐
│ Chat Completions Client                       │
│  - Sends prompts to individual LLMs           │
│  - Handles retries and backoff                │
└───────────────────────────────────────────────┘
   │
   ▼
┌───────────────────────────────────────────────┐
│ Upstream Client (OpenRouter)                  │
│  - Actual LLM API calls                       │
└───────────────────────────────────────────────┘
```
### Customization Points

Each layer uses traits for dependency injection:

- **Fetchers** - implement custom caching or data sources for Ensembles, Functions, and Profiles
- **Usage Handlers** - track usage, billing, or analytics
- **Context Extensions** - add per-request state (authentication, BYOK keys, etc.)
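As a sketch of the usage-handler customization point, the example below accumulates token counts for billing or metrics. The trait name and signature here are illustrative only; the crate's actual trait lives in `chat::completions::usage_handler` and may look different:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical trait mirroring the usage-handler customization point.
// The real trait's name and method signature may differ.
trait UsageHandler: Send + Sync {
    fn on_usage(&self, prompt_tokens: u64, completion_tokens: u64);
}

// A handler that accumulates total token usage across requests,
// e.g. to feed billing or analytics instead of just logging.
struct CountingUsageHandler {
    total_tokens: AtomicU64,
}

impl UsageHandler for CountingUsageHandler {
    fn on_usage(&self, prompt_tokens: u64, completion_tokens: u64) {
        self.total_tokens
            .fetch_add(prompt_tokens + completion_tokens, Ordering::Relaxed);
    }
}

fn main() {
    let handler = CountingUsageHandler { total_tokens: AtomicU64::new(0) };
    handler.on_usage(120, 48);
    handler.on_usage(80, 32);
    // Prints "total tokens: 280"
    println!("total tokens: {}", handler.total_tokens.load(Ordering::Relaxed));
}
```

A handler like this would be passed where the quick-start example uses `LogUsageHandler`.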
## API Endpoints

### Chat Completions

- `POST /chat/completions` - Create chat completion

### Vector Completions

- `POST /vector/completions` - Create vector completion
- `POST /vector/completions/{id}` - Get completion votes
- `POST /vector/completions/cache` - Get cached vote

### Functions

- `GET /functions` - List functions
- `GET /functions/{owner}/{repo}` - Get function
- `POST /functions/{owner}/{repo}` - Execute remote function with inline profile

### Profiles

- `GET /functions/profiles` - List profiles
- `GET /functions/profiles/{owner}/{repo}` - Get profile
- `POST /functions/{owner}/{repo}/profiles/{owner}/{repo}` - Execute remote function with remote profile
- `POST /functions/profiles/compute` - Train a profile

### Ensembles

- `GET /ensembles` - List ensembles
- `GET /ensembles/{id}` - Get ensemble
## License

MIT