Mill-IO
A lightweight, production-ready event loop library for Rust that provides efficient non-blocking I/O management without relying on heavyweight async runtimes.

Mill-IO is a reactor-based event loop implementation built on top of mio that offers:
- Runtime-agnostic: No dependency on Tokio or other async runtimes
- Cross-platform: Leverages mio's polling abstraction (epoll, kqueue, IOCP)
- Thread pool integration: Configurable worker threads for handling I/O events
- Compute pool: Dedicated priority-based thread pool for CPU-intensive tasks
- High-level networking: TCP server/client with connection management
- Object pooling: Reduces allocation overhead for frequent operations
- Clean API: Simple registration and handler interface
Installation
Add Mill-IO to your Cargo.toml:
```toml
[dependencies]
mill-io = "2.0.1"
```
With networking support:
```toml
[dependencies]
mill-io = { version = "2.0.1", features = ["net"] }
```
For unstable features:
```toml
[dependencies]
mill-io = { version = "2.0.1", features = ["unstable"] }
```
Quick Start
```rust
use mill_io::{EventLoop, EventHandler};
use mio::{net::TcpListener, Interest, Token, event::Event};

struct EchoHandler;

impl EventHandler for EchoHandler {
    fn handle_event(&self, event: &Event) {
        // Handle incoming connections
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let event_loop = EventLoop::default();
    let mut listener = TcpListener::bind("127.0.0.1:8080".parse()?)?;

    event_loop.register(
        &mut listener,
        Token(1),
        Interest::READABLE,
        EchoHandler,
    )?;

    println!("Server listening on 127.0.0.1:8080");
    event_loop.run()?;
    Ok(())
}
```
See examples/scratch_echo_server.rs for a complete implementation.
High-Level TCP Networking
Mill-IO provides a high-level TCP API that handles connection management automatically. Enable it with the `net` feature.
TCP Server
```rust
use mill_io::net::tcp::{TcpServer, TcpServerConfig, traits::*, ServerContext};
use mill_io::{EventLoop, error::Result};
use std::sync::Arc;

struct EchoHandler;

impl NetworkHandler for EchoHandler {
    fn on_connect(&self, _ctx: &ServerContext, conn_id: ConnectionId) -> Result<()> {
        println!("Client connected: {:?}", conn_id);
        Ok(())
    }

    fn on_data(&self, ctx: &ServerContext, conn_id: ConnectionId, data: &[u8]) -> Result<()> {
        // Echo the data back to the sender
        ctx.send_to(conn_id, data)?;
        Ok(())
    }

    fn on_disconnect(&self, _ctx: &ServerContext, conn_id: ConnectionId) -> Result<()> {
        println!("Client disconnected: {:?}", conn_id);
        Ok(())
    }
}

fn main() -> Result<()> {
    let event_loop = Arc::new(EventLoop::default());

    let config = TcpServerConfig::builder()
        .address("127.0.0.1:8080".parse().unwrap())
        .buffer_size(8192)
        .max_connections(1000)
        .no_delay(true)
        .build();

    let server = Arc::new(TcpServer::new(config, EchoHandler)?);
    server.start(&event_loop, mio::Token(0))?;

    event_loop.run()?;
    Ok(())
}
```
Server Context Operations
The ServerContext provides methods for interacting with connections:
```rust
// Send data to a specific connection
ctx.send_to(conn_id, b"Hello")?;

// Broadcast to all connections
ctx.broadcast(b"Message to all")?;

// Close a connection
ctx.close_connection(conn_id)?;
```
Compute Thread Pool
Mill-IO includes a dedicated thread pool for CPU-intensive operations, keeping the I/O event loop responsive. Tasks support priority scheduling.
Basic Usage
```rust
use mill_io::{EventLoop, TaskPriority};

let event_loop = EventLoop::default();

// Spawn with default (Normal) priority
event_loop.spawn_compute(|| {
    // CPU-intensive work here
    let result = expensive_calculation();
    println!("Result: {}", result);
});

// Spawn with specific priority
event_loop.spawn_compute_with_priority(|| {
    // Critical computation
}, TaskPriority::Critical);
```
Task Priorities
Tasks are executed based on priority (highest first):
- `TaskPriority::Critical` - Urgent tasks, processed first
- `TaskPriority::High` - Important tasks
- `TaskPriority::Normal` - Default priority
- `TaskPriority::Low` - Background tasks
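The documented order (Critical before High before Normal before Low) can be illustrated with a std-only sketch; `Priority` and `drain_by_priority` are illustrative names, not Mill-IO's internal queue implementation:

```rust
use std::collections::BinaryHeap;

// Hypothetical priority levels mirroring TaskPriority's declaration order.
// Derived Ord ranks later variants higher, so Critical > High > Normal > Low.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Priority {
    Low,
    Normal,
    High,
    Critical,
}

// Drain tasks highest-priority first, the order the compute pool documents.
fn drain_by_priority(tasks: Vec<(Priority, &str)>) -> Vec<&str> {
    let mut queue: BinaryHeap<(Priority, &str)> = tasks.into_iter().collect();
    let mut order = Vec::new();
    while let Some((_prio, task)) = queue.pop() {
        order.push(task);
    }
    order
}

fn main() {
    let order = drain_by_priority(vec![
        (Priority::Normal, "resize image"),
        (Priority::Critical, "rotate keys"),
        (Priority::Low, "compact logs"),
        (Priority::High, "compress upload"),
    ]);
    println!("{:?}", order);
}
```

A max-heap keyed on the priority enum is one common way to realize this ordering; ties within a priority level here fall back to comparing the task labels.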
Monitoring Metrics
```rust
let metrics = event_loop.get_compute_metrics();

println!("Tasks submitted: {}", metrics.tasks_submitted());
println!("Tasks completed: {}", metrics.tasks_completed());
println!("Tasks failed: {}", metrics.tasks_failed());
println!("Active workers: {}", metrics.active_workers());
println!(
    "Queue depths - Low: {}, Normal: {}, High: {}, Critical: {}",
    metrics.queue_depth_low(),
    metrics.queue_depth_normal(),
    metrics.queue_depth_high(),
    metrics.queue_depth_critical()
);
println!("Total execution time: {}ms", metrics.total_execution_time_ns() / 1_000_000);
```
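Derived figures can be computed from the raw counters; a minimal sketch, with plain integers standing in for the metrics accessors (`average_task_ms` is an illustrative helper, not part of the Mill-IO API):

```rust
// Illustrative only: average execution time per completed task, derived
// from the nanosecond total the metrics expose.
fn average_task_ms(total_execution_time_ns: u64, tasks_completed: u64) -> f64 {
    if tasks_completed == 0 {
        return 0.0; // avoid division by zero before any task finishes
    }
    (total_execution_time_ns as f64 / tasks_completed as f64) / 1_000_000.0
}

fn main() {
    // e.g. 250ms of total execution time across 5 completed tasks
    println!("avg = {}ms", average_task_ms(250_000_000, 5));
}
```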
Use Cases
- Cryptographic operations (hashing, encryption)
- Image/video processing
- Data compression
- Complex calculations
- File parsing
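These workloads share one pattern: run the expensive step off the event thread and hand the result back when it is ready. A std-only sketch of that pattern, with `thread::spawn` standing in for `spawn_compute` so the example runs without Mill-IO:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for an expensive computation (e.g. hashing or compression).
fn expensive_calculation(input: &[u8]) -> u64 {
    input.iter().map(|&b| b as u64).sum()
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let data = vec![1u8, 2, 3, 4];

    // In Mill-IO this closure would go to spawn_compute; thread::spawn is a
    // stand-in here so the sketch is self-contained.
    thread::spawn(move || {
        let result = expensive_calculation(&data);
        tx.send(result).expect("receiver alive");
    });

    // The calling thread stays free for I/O and collects the result later.
    let result = rx.recv().expect("worker finished");
    println!("result = {}", result);
}
```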
Examples
Mill-IO includes several practical examples demonstrating different use cases; see the examples directory.
Configuration
Mill-IO provides flexible configuration options:
Default Configuration
```rust
use mill_io::EventLoop;

// Uses CPU cores for workers, 1024 event capacity, 150ms timeout
let event_loop = EventLoop::default();
```
Custom Configuration
```rust
use mill_io::EventLoop;

let event_loop = EventLoop::new(
    8,    // Number of worker threads
    2048, // Maximum events per poll iteration
    50,   // Poll timeout in milliseconds
)?;
```
TCP Server Configuration
```rust
use mill_io::net::tcp::TcpServerConfig;
use std::time::Duration;

let config = TcpServerConfig::builder()
    .address("0.0.0.0:8080".parse().unwrap())
    .buffer_size(16384)                         // Read buffer size in bytes
    .max_connections(10000)                     // Connection limit
    .no_delay(true)                             // Disable Nagle's algorithm
    .keep_alive(Some(Duration::from_secs(60)))  // TCP keep-alive interval
    .build();
```
Thread Pool Sizing Guidelines
- CPU-bound tasks: Number of CPU cores
- I/O-bound tasks: 2-4x number of CPU cores
- Mixed workloads: Start with CPU cores + 2
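The guidelines above can be expressed as a small helper; a sketch using std's `available_parallelism` (the `Workload` enum and the 3x midpoint for I/O-bound pools are illustrative choices, not part of Mill-IO):

```rust
use std::thread;

// Hypothetical workload classes matching the sizing guidelines.
enum Workload {
    CpuBound,
    IoBound,
    Mixed,
}

fn suggested_workers(workload: Workload) -> usize {
    // Fall back to 1 if the core count cannot be determined.
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    match workload {
        Workload::CpuBound => cores,     // one worker per core
        Workload::IoBound => cores * 3,  // 2-4x cores; 3x as a midpoint
        Workload::Mixed => cores + 2,    // starting point, tune from here
    }
}

fn main() {
    println!("CPU-bound: {}", suggested_workers(Workload::CpuBound));
    println!("I/O-bound: {}", suggested_workers(Workload::IoBound));
    println!("Mixed:     {}", suggested_workers(Workload::Mixed));
}
```

Treat these as starting points: profile the queue depths and worker utilization under a realistic load before fixing a pool size.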
Architecture
For detailed architectural documentation, see the Architecture Guide.
Platform Support
Mill-IO supports all major platforms through mio:
- Linux: epoll-based polling
- macOS: kqueue-based polling
- Windows: IOCP-based polling
- FreeBSD/OpenBSD: kqueue-based polling
Minimum supported Rust version: 1.70
License
Licensed under the Apache License, Version 2.0. See LICENSE for details.
Contributing
Contributions are welcome! Please read our Contributing Guide for details on our development process, coding standards, and how to submit pull requests.
For questions or discussions, feel free to open an issue or reach out to the maintainers.