Version 0.1.0 · released Jan 22, 2026
# Structured JSON Agent (Rust)

A typed, extensible Rust library for building and running iterative AI agents that guarantee structured JSON output.
This library orchestrates a Generator ↔ Reviewer cycle to ensure that the output from Large Language Models (LLMs) strictly adheres to a defined JSON Schema.
## Features
- Guaranteed JSON Output: Enforces strict adherence to JSON Schemas (Draft-07+).
- Iterative Self-Correction: Automatically detects validation errors and feeds them back to a "Reviewer" model to fix the output.
- Type-Safe: Built with Rust for safety and performance.
- Model Agnostic: Compatible with OpenAI by default (via `async-openai`), but extensible to other providers.
- Production Ready: Includes error handling, extensive validation, and a clean API.
## Installation
Add this to your Cargo.toml:
```toml
[dependencies]
structured-json-agent = "0.1.0"
serde_json = "1.0"
tokio = { version = "1.0", features = ["full"] }
```
## Usage
```rust
use structured_json_agent::{StructuredAgent, StructuredAgentConfig, OpenAIService};
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Define your schemas
    let input_schema = json!({
        "type": "object",
        "properties": {
            "topic": { "type": "string" },
            "depth": { "type": "string", "enum": ["basic", "advanced"] }
        },
        "required": ["topic", "depth"]
    });

    let output_schema = json!({
        "type": "object",
        "properties": {
            "title": { "type": "string" },
            "keyPoints": { "type": "array", "items": { "type": "string" } },
            "summary": { "type": "string" }
        },
        "required": ["title", "keyPoints", "summary"]
    });

    // 2. Initialize the agent
    let api_key = std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY not set");
    let llm_service = Arc::new(OpenAIService::new(api_key));

    let agent = StructuredAgent::new(StructuredAgentConfig {
        llm_service,
        generator_model: "gpt-4-turbo".to_string(),
        reviewer_model: "gpt-3.5-turbo".to_string(), // can be a faster/cheaper model for simple fixes
        input_schema,
        output_schema,
        system_prompt: "You are an expert summarizer. Create a structured summary based on the topic.".to_string(),
        max_iterations: Some(3), // optional: max correction attempts (default: 5)
    })?;

    // 3. Run the agent
    let result = agent.run(json!({
        "topic": "Clean Architecture",
        "depth": "advanced"
    })).await?;

    // The output is guaranteed to match `output_schema`.
    println!("Result: {}", serde_json::to_string_pretty(&result)?);

    Ok(())
}
```
## How It Works
- Validation: The input JSON is validated against the `input_schema`.
- Generation: The `generator_model` creates an initial response based on the system prompt and input.
- Verification Loop: The response is parsed and validated against the `output_schema`.
  - If Valid: The result is returned immediately.
  - If Invalid: The `reviewer_model` is invoked with the invalid JSON, the specific validation errors, and the expected schema. It attempts to fix the JSON.
- Convergence: This cycle repeats until a valid JSON is produced or `max_iterations` is reached.
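The control flow described above can be sketched in miniature. The mock below is illustrative only: `generate`, `review`, and `validate` are std-only stand-ins (a string-based "validator" and hard-coded model responses) for the real schema validator and the two LLM calls, but the loop structure mirrors the generate → validate → review cycle:

```rust
// Stand-in validator: reports which required fields are missing.
// (The real crate validates against a full JSON Schema.)
fn validate(output: &str) -> Vec<String> {
    let mut errors = Vec::new();
    for field in ["title", "summary"] {
        if !output.contains(&format!("\"{}\"", field)) {
            errors.push(format!("missing required field: {field}"));
        }
    }
    errors
}

// Stand-in generator: produces an intentionally incomplete first draft.
fn generate() -> String {
    r#"{"title": "Clean Architecture"}"#.to_string()
}

// Stand-in reviewer: "fixes" the draft given the validation errors.
fn review(draft: &str, errors: &[String]) -> String {
    let mut fixed = draft.trim_end_matches('}').to_string();
    for e in errors {
        if let Some(field) = e.strip_prefix("missing required field: ") {
            fixed.push_str(&format!(", \"{field}\": \"\""));
        }
    }
    fixed.push('}');
    fixed
}

// The convergence loop: retry until valid or out of iterations.
fn run(max_iterations: usize) -> Result<String, String> {
    let mut candidate = generate();
    for _ in 0..max_iterations {
        let errors = validate(&candidate);
        if errors.is_empty() {
            return Ok(candidate); // converged: valid output
        }
        candidate = review(&candidate, &errors); // feed errors back to the reviewer
    }
    Err(format!("no valid output after {max_iterations} iterations"))
}

fn main() {
    let result = run(3).expect("should converge");
    assert!(validate(&result).is_empty());
    println!("{result}");
}
```

In the real library the validator is a JSON Schema engine and the two stand-in functions are calls to the generator and reviewer models, but the retry-until-valid structure is the same.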
## Extensibility

You can implement the `LLMService` trait to support other LLM providers (Anthropic, Gemini, local models, etc.).
```rust
use async_trait::async_trait;
use structured_json_agent::{LLMService, AgentError};

pub struct MyCustomLLM;

#[async_trait]
impl LLMService for MyCustomLLM {
    async fn chat_completion(
        &self,
        model: &str,
        system_prompt: &str,
        user_prompt: &str,
    ) -> Result<String, AgentError> {
        // Call your provider's API here and return the raw completion text.
        Ok("{}".to_string())
    }
}
```
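For intuition about what a reviewer-side `chat_completion` call might receive, here is a hypothetical sketch of a reviewer prompt builder. The `build_reviewer_prompt` function and its exact wording are assumptions for illustration, not the crate's actual prompt; it simply combines the three inputs named in "How It Works" (the invalid JSON, the validation errors, and the expected schema):

```rust
// Hypothetical prompt builder -- NOT the crate's real prompt format.
// Shows how the invalid JSON, validation errors, and schema could be
// combined into a single user prompt for the reviewer model.
fn build_reviewer_prompt(invalid_json: &str, errors: &[String], schema: &str) -> String {
    let error_list = errors
        .iter()
        .map(|e| format!("- {e}"))
        .collect::<Vec<_>>()
        .join("\n");
    format!(
        "The following JSON failed validation:\n{invalid_json}\n\n\
         Errors:\n{error_list}\n\n\
         Return corrected JSON that conforms to this schema:\n{schema}"
    )
}

fn main() {
    let prompt = build_reviewer_prompt(
        r#"{"title": 42}"#,
        &["\"title\" must be a string".to_string()],
        r#"{"type": "object", "properties": {"title": {"type": "string"}}}"#,
    );
    assert!(prompt.contains("failed validation"));
    println!("{prompt}");
}
```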
## License
MIT