http-cache-tower-server
Server-side HTTP response caching middleware for Tower-based frameworks (Axum, Hyper, Tonic).
Overview
This crate provides Tower middleware for caching your server's HTTP responses to improve performance and reduce load. Unlike client-side caching, this middleware caches responses after your handlers execute, making it ideal for expensive operations like database queries or complex computations.
When to Use This
Use http-cache-tower-server when you want to:
- Cache expensive API responses (database queries, aggregations)
- Reduce load on backend services
- Improve response times for read-heavy workloads
- Cache server-rendered content
- Speed up responses that are computed but rarely change
Client vs Server Caching
| Crate | Purpose | Use Case |
|---|---|---|
| http-cache-tower | Client-side caching | Cache responses from external APIs you call |
| http-cache-tower-server | Server-side caching | Cache your own application's responses |
Important: If you're experiencing issues with path parameter extraction or routing when using http-cache-tower in a server application, you should use this crate instead. See Issue #121 for details.
Installation
cargo add http-cache-tower-server
Features
By default, manager-cacache is enabled.
- `manager-cacache` (default): Enable the cacache disk-based cache backend
- `manager-moka`: Enable the moka in-memory cache backend
- `manager-foyer`: Enable the foyer hybrid in-memory + disk cache backend
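To use a different backend, disable the default feature and enable the one you want. One possible invocation for the in-memory moka backend:

cargo add http-cache-tower-server --no-default-features --features manager-moka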
Quick Start
Basic Example (Axum)
use axum::{Router, routing::get, response::IntoResponse};
use http_cache_tower_server::ServerCacheLayer;
use http_cache::CACacheManager;
use std::path::PathBuf;

async fn expensive_handler() -> impl IntoResponse {
    // Simulate expensive operation
    tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;

    // Set cache control to cache for 60 seconds
    (
        [("cache-control", "max-age=60")],
        "This response is cached for 60 seconds"
    )
}

#[tokio::main]
async fn main() {
    // Create cache manager
    let manager = CACacheManager::new(PathBuf::from("./cache"), false);

    // Create router with cache layer
    let app = Router::new()
        .route("/expensive", get(expensive_handler))
        .layer(ServerCacheLayer::new(manager));

    // Run server
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}
How It Works
- Request arrives → Routing layer processes it (path params extracted)
- Cache lookup → Check if response is cached
- Cache hit → Return cached response immediately
- Cache miss → Call your handler
- Handler returns → Check Cache-Control headers
- Should cache? → Store response if cacheable
- Return response → Send to client
Cache Status Headers
Responses include an x-cache header indicating cache status:
- `x-cache: HIT` → Response served from cache
- `x-cache: MISS` → Response generated by handler (may be cached)
- No header → Response not cacheable
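These headers make the caching behaviour easy to verify. The sketch below is not from the crate's own docs; it assumes tokio and tower as dev-dependencies, reuses the `ServerCacheLayer` setup from the Quick Start, and assumes the `./test-cache` directory starts empty so the first request is a MISS.

```rust
use axum::{body::Body, response::IntoResponse, routing::get, Router};
use http::Request;
use http_cache::CACacheManager;
use http_cache_tower_server::ServerCacheLayer;
use std::path::PathBuf;
use tower::ServiceExt; // provides `oneshot`

async fn cached_handler() -> impl IntoResponse {
    ([("cache-control", "max-age=60")], "cached body")
}

#[tokio::test]
async fn reports_miss_then_hit() {
    let manager = CACacheManager::new(PathBuf::from("./test-cache"), false);
    let app = Router::new()
        .route("/cached", get(cached_handler))
        .layer(ServerCacheLayer::new(manager));

    // First request: the handler runs and the response is stored.
    let first = app
        .clone()
        .oneshot(Request::get("/cached").body(Body::empty()).unwrap())
        .await
        .unwrap();
    assert_eq!(
        first.headers().get("x-cache").map(|v| v.to_str().unwrap()),
        Some("MISS")
    );

    // Second request: served straight from the cache.
    let second = app
        .oneshot(Request::get("/cached").body(Body::empty()).unwrap())
        .await
        .unwrap();
    assert_eq!(
        second.headers().get("x-cache").map(|v| v.to_str().unwrap()),
        Some("HIT")
    );
}
```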
Cache Key Generation
Built-in Keyers
DefaultKeyer (default)
Caches based on HTTP method and path:
use http_cache_tower_server::{ServerCacheLayer, DefaultKeyer};
let layer = ServerCacheLayer::new(manager);
// GET /users/123 → "GET /users/123"
// GET /users/456 → "GET /users/456"
QueryKeyer
Includes query parameters in cache key:
use http_cache_tower_server::{ServerCacheLayer, QueryKeyer};
let layer = ServerCacheLayer::with_keyer(manager, QueryKeyer);
// GET /search?q=rust → "GET /search?q=rust"
// GET /search?q=http → "GET /search?q=http"
CustomKeyer
For advanced scenarios (authentication, content negotiation, etc.):
use http_cache_tower_server::{ServerCacheLayer, CustomKeyer};
use http::Request;

// Include user ID from headers in cache key
let keyer = CustomKeyer::new(|req: &Request<()>| {
    let user_id = req.headers()
        .get("x-user-id")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("anonymous");
    format!("{} {} user:{}", req.method(), req.uri().path(), user_id)
});

let layer = ServerCacheLayer::with_keyer(manager, keyer);

// GET /dashboard with x-user-id: 123 → "GET /dashboard user:123"
// GET /dashboard with x-user-id: 456 → "GET /dashboard user:456"
Configuration Options
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions};
use std::time::Duration;

let options = ServerCacheOptions {
    // Default TTL when no Cache-Control header present
    default_ttl: Some(Duration::from_secs(60)),
    // Maximum TTL (even if response specifies longer)
    max_ttl: Some(Duration::from_secs(3600)),
    // Minimum TTL (even if response specifies shorter)
    min_ttl: Some(Duration::from_secs(10)),
    // Add X-Cache headers (HIT/MISS)
    cache_status_headers: true,
    // Maximum response body size to cache (128 MB)
    max_body_size: 128 * 1024 * 1024,
    // Cache responses without explicit Cache-Control
    cache_by_default: false,
    // Respect Vary header (currently extracted but not enforced)
    respect_vary: true,
};

let layer = ServerCacheLayer::new(manager)
    .with_options(options);
Caching Behavior (RFC 9111 Compliant)
This middleware implements a shared cache per RFC 9111 (HTTP Caching).
Cached Responses
Responses are cached when they have:
- Status code: 2xx (200, 201, 204, etc.)
- Cache-Control: `max-age=X` → Cached for X seconds
- Cache-Control: `s-maxage=X` → Cached for X seconds (shared cache specific)
- Cache-Control: `public` → Cached with the default TTL
Never Cached
Responses are never cached if they have:
- Status code: Non-2xx (3xx, 4xx, 5xx)
- Cache-Control: `no-store` → Prevents all caching
- Cache-Control: `no-cache` → Requires revalidation (not supported)
- Cache-Control: `private` → Only for private caches
Directive Precedence
When multiple directives are present:
1. `s-maxage` (shared cache specific) takes precedence
2. `max-age` (general directive)
3. `public` (uses the default TTL)
4. Expires header (fallback, not currently parsed)
Example Headers
// Cached for 60 seconds
("cache-control", "max-age=60")
// Cached for 120 seconds (s-maxage overrides max-age for shared caches)
("cache-control", "max-age=60, s-maxage=120")
// Cached with default TTL
("cache-control", "public")
// Never cached
("cache-control", "no-store")
("cache-control", "private")
("cache-control", "no-cache")
Security Considerations
⚠️ This is a Shared Cache
Critical: Cached responses are served to ALL users. Never cache user-specific data without appropriate measures.
Safe Usage Patterns
✅ Public Content
async fn public_page() -> impl IntoResponse {
    (
        [("cache-control", "max-age=300")],
        "Public content safe to cache"
    )
}
✅ User-Specific with CustomKeyer
// Include user ID in cache key
let keyer = CustomKeyer::new(|req: &Request<()>| {
    let user_id = extract_user_id(req);
    format!("{} {} user:{}", req.method(), req.uri().path(), user_id)
});
❌ UNSAFE: User Data Without Keyer
// ❌ DANGEROUS: Will serve user123's data to user456!
async fn user_profile() -> impl IntoResponse {
    let user_data = get_current_user_data().await;
    (
        [("cache-control", "max-age=60")], // ❌ Don't do this!
        user_data
    )
}
✅ User Data with Private Directive
// ✅ Safe: Won't be cached
async fn user_profile() -> impl IntoResponse {
    let user_data = get_current_user_data().await;
    (
        [("cache-control", "private")], // Won't be cached
        user_data
    )
}
Best Practices
- Never cache authenticated endpoints unless you use a CustomKeyer that includes a session/user ID
- Use Cache-Control: private for user-specific responses
- Validate cache keys to prevent cache poisoning (see the sketch after this list)
- Set body size limits to prevent DoS attacks
- Use TTL constraints to prevent cache bloat
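One way to approach key validation, sketched with the CustomKeyer API shown earlier (the sanitization rules are illustrative assumptions, not part of the crate): bound and sanitize the client-controlled value before it becomes part of the cache key, so arbitrary header values cannot inflate the keyspace.

```rust
use http::Request;
use http_cache_tower_server::{CustomKeyer, ServerCacheLayer};

let keyer = CustomKeyer::new(|req: &Request<()>| {
    // Keep only a short, sanitized slice of the client-controlled header
    // so it cannot be used to flood the cache with unique keys.
    let lang: String = req
        .headers()
        .get("accept-language")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("en")
        .chars()
        .filter(|c| c.is_ascii_alphanumeric() || *c == '-')
        .take(8)
        .collect();
    let lang = if lang.is_empty() { "en".to_string() } else { lang };
    format!("{} {} lang:{}", req.method(), req.uri().path(), lang)
});

let layer = ServerCacheLayer::with_keyer(manager, keyer);
```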
Advanced Examples
Content Negotiation
For responses that vary by Accept-Language:
let keyer = CustomKeyer::new(|req: &Request<()>| {
    let lang = req.headers()
        .get("accept-language")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("en");
    format!("{} {} lang:{}", req.method(), req.uri().path(), lang)
});

let layer = ServerCacheLayer::with_keyer(manager, keyer);
Conditional Caching
Only cache certain routes by applying the layer to a sub-router and merging it with the uncached routes:

use axum::{routing::get, Router};
use http_cache_tower_server::ServerCacheLayer;

// Routes registered on `api` (here, GET /api/items) go through the cache;
// routes merged in afterwards bypass it entirely.
// `list_items` and `health` stand in for your own handlers.
let api = Router::new()
    .route("/api/items", get(list_items))
    .layer(ServerCacheLayer::new(manager));

let app = Router::new()
    .route("/health", get(health))
    .merge(api);
TTL by Route
async fn long_cache_handler() -> impl IntoResponse {
    (
        [("cache-control", "max-age=3600")], // 1 hour
        "Rarely changing content"
    )
}

async fn short_cache_handler() -> impl IntoResponse {
    (
        [("cache-control", "max-age=60")], // 1 minute
        "Frequently updated content"
    )
}
Limitations
Vary Header
The middleware extracts Vary headers but does not currently enforce them during cache lookup. For content negotiation:
- Use a CustomKeyer that includes the relevant headers in the cache key (see the Content Negotiation example above), OR
- Set Cache-Control: private to prevent caching
Authorization Header
The middleware does not check for Authorization headers in requests. Authenticated endpoints should either:
- Use Cache-Control: private (won't be cached), OR
- Use a CustomKeyer that includes a user/session ID (a sketch follows this list), OR
- Not be cached at all
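As a hedged illustration of the CustomKeyer route (not an endorsement of caching authenticated responses in general), a hash of the Authorization header can be folded into the key so each credential maps to its own cache entry:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

use http::Request;
use http_cache_tower_server::{CustomKeyer, ServerCacheLayer};

let keyer = CustomKeyer::new(|req: &Request<()>| {
    // Hash the Authorization value so the raw credential never appears in
    // the cache key. DefaultHasher is fine for key separation; it is not a
    // security primitive.
    let mut hasher = DefaultHasher::new();
    req.headers()
        .get("authorization")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("anonymous")
        .hash(&mut hasher);
    format!("{} {} auth:{:x}", req.method(), req.uri().path(), hasher.finish())
});

let layer = ServerCacheLayer::with_keyer(manager, keyer);
```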
Expires Header
The Expires header is recognized but not currently parsed. Modern applications should use Cache-Control directives instead.
Examples
See the examples directory:
- axum_basic.rs: Basic usage with Axum
Run with:
cargo run --example axum_basic --features manager-cacache
Comparison with Other Crates
vs axum-response-cache
- This crate: RFC 9111 compliant, respects Cache-Control headers
- axum-response-cache: Simpler API, less RFC compliant
vs tower-cache-control
- This crate: Full caching implementation with storage
- tower-cache-control: Only sets Cache-Control headers
Minimum Supported Rust Version (MSRV)
1.88.0
Contributing
Contributions are welcome! Please see the main repository for contribution guidelines.
License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.