# axactor

Tokio actor runtime with a local runtime and an optional distributed cluster runtime.
## Capabilities

- `#[actor]`/`#[msg]` macro-based API
- user/control mailbox split
- restart policy + restart mailbox policy (`Keep`/`DrainAndDrop`)
- link / monitor / trap-exit
- lifecycle stream (`Started`, `Restarted`, `Stopped`)
- keyed `Registry` with idle reaper
- distributed cluster:
  - shard lease ownership + epoch fencing
  - ensure/direct routing
  - monitor/demonitor/down protocol
  - handshake dedup + request/response auth + protocol negotiation
  - runtime session ids with authenticated same-logical session rotation
  - control/data plane separation
  - overlays: `InMemoryOverlay`, `TcpOverlay`
- ref API split: `LocalRef<M>`, `RemoteRef<C: RefContract>`, `AnyRef<C>`/`Addr<C>`
## Install

```toml
[dependencies]
axactor = "0.2.2"
tokio = { version = "1", features = ["full"] }
```

Optional Redis lease backend:

```toml
axactor = { version = "0.2.2", features = ["redis-lease"] }
```
## Local Quick Example

```rust
use axactor::{actor, Context, SpawnConfig, System};
use std::sync::Arc;

pub struct Counter {
    n: i64,
}

#[actor]
impl Counter {
    pub fn new() -> Self {
        Self { n: 0 }
    }

    #[msg]
    fn inc(&mut self) {
        self.n += 1;
    }

    #[msg]
    fn get(&mut self) -> i64 {
        self.n
    }

    async fn on_start(&mut self, _ctx: &mut Context) {}
}

#[tokio::main]
async fn main() {
    let system = Arc::new(System::new());
    let cfg = SpawnConfig::new("counter", 256);
    let h = system.spawn_with(Counter::new, cfg).unwrap();
    let r = h.actor();
    r.inc().unwrap();
    assert_eq!(r.get().await.unwrap(), 1);
}
```
## Registry

`Registry::get_or_spawn(...) -> Result<R, RegistryError>`

```rust
use axactor::{registry::Registry, SpawnConfig, System};
use std::sync::Arc;

let system = Arc::new(System::new());
let reg: Registry<String, CounterRef> = Registry::new(system.clone());
let cfg = SpawnConfig::new("counter", 256);
let c = reg.get_or_spawn("k1".to_string(), cfg, Counter::new).await?;
c.inc()?;
# Ok::<(), axactor::registry::RegistryError>(())
```
## Cluster Essentials

Secure mode is the default:

- `allow_insecure_auth = false`
- `auth_token` must be set (otherwise `ClusterNode::start(...)` panics)
```rust
use axactor::cluster::{
    ClusterConfig, ClusterNode, InMemoryOverlay, MemoryLeaseStore, NodeId,
};
use std::sync::Arc;

let store = Arc::new(MemoryLeaseStore::new());
let overlay = InMemoryOverlay::new();
let cfg = ClusterConfig {
    auth_token: Some("shared-token".to_string()),
    allow_insecure_auth: false,
    ..ClusterConfig::default()
};
let _node = ClusterNode::start(NodeId::new("node-a"), store, overlay, cfg);
```
`NodeId::new("node-a")` is the logical node name. `ClusterNode::start(...)` derives a runtime wire id by appending `@boot-<hex>-session-<n>`, and only that exact suffix shape is treated as a runtime session marker. This avoids collapsing arbitrary logical names that merely contain `@boot-`.

Remote non-handshake traffic is trusted only after the peer's current runtime session has completed the hello/auth flow. Session-rotated peers can continue to resolve pending responses after a successful re-handshake, but unauthenticated same-logical spoofing is rejected before host execution.
Use `TcpOverlay` for multi-process transport (`address_book` is constructed elsewhere):

```rust
use axactor::cluster::{TcpKeepaliveOptions, TcpOverlay};
use std::time::Duration;

let overlay = TcpOverlay::with_keepalive_options(
    address_book,
    1024 * 1024,
    Duration::from_secs(2),
    TcpKeepaliveOptions {
        enabled: true,
        idle: Some(Duration::from_secs(30)),
        interval: Some(Duration::from_secs(10)),
        probes: Some(5),
    },
);
```
## Ref API Split

`LocalRef<M>`: local hot path, no network/serialization branch.

```rust
let local: axactor::LocalRef<MyActorMsg> = my_actor_ref.as_local_ref();
local.try_send(MyActorMsg::Ping {})?;
```

`RemoteRef<C>`: wire-safe contract (`RemoteSafe` bounds on Tell/Ask/Reply).

```rust
use axactor::{RefContract, RemoteRef};

struct ChatContract;

impl RefContract for ChatContract {
    type Tell = ChatTell;
    type Ask = ChatAsk;
    type Reply = ChatReply;
}
```

`AnyRef<C>` / `Addr<C>`: optional location-transparent wrapper. `AnyRef::monitor()` is supported.

For local contracts, attach the monitor adapter explicitly:

```rust
let local = axactor::LocalContractRef::<MyContract>::new(tell, send_wait, ask)
    .with_monitor_fn(|| Box::pin(async move {
        // return tokio::sync::mpsc::Receiver<axactor::cluster::DownEvent>
        todo!()
    }));
```
## Production Notes

- `InMemoryOverlay` + `MemoryLeaseStore` are for tests/dev. For production, use the TCP overlay + an external lease store (Redis/etcd-equivalent).
- Keep `allow_insecure_auth = false` and set a shared auth token (or stronger transport auth).
- Validate both the default and reduced feature sets if you maintain trybuild snapshots:
  - `cargo test -p axactor`
  - `cargo test -p axactor --no-default-features`
  - `cargo test -p axactor --all-features`
See the workspace docs for more detail.
## License

MIT