Introduction to Oxidite
Welcome to the comprehensive guide to Oxidite, a modern, high-performance web framework for Rust. This guide takes you from installation to advanced features, covering everything needed to build production-ready applications.
What is Oxidite?
Oxidite is a batteries-included web framework that combines Rust’s performance with developer-friendly APIs. It provides a complete ecosystem for building scalable web applications, from REST APIs to full-stack server-side rendered apps.
Key Features
- High Performance: Built on hyper and tokio for blazing speed
- Advanced ORM: Complete database layer with relationships, soft deletes, validation
- Powerful CLI: Scaffolding, migrations, hot-reload dev server, code generators
- Batteries Included: RBAC/PBAC, API Keys, Queues, Caching, Email, Storage, Plugins
- Enterprise Security: Password hashing, JWT, OAuth2, 2FA, rate limiting
- Template Engine: Jinja2-style templates with inheritance and auto-escaping
- Real-time: WebSockets and Redis pub/sub support
- Type-Safe: Strong typing for requests, responses, and database queries
- Auto-Documentation: OpenAPI/Swagger UI generation
Philosophy
Oxidite follows the philosophy of “convention over configuration” while maintaining the flexibility to build everything from simple APIs to complex full-stack applications. The framework provides sensible defaults while allowing customization where needed.
Who Should Read This Guide?
This guide is designed for:
- Rust developers looking to build web applications
- Developers familiar with frameworks like Express.js, FastAPI, or Laravel who want to leverage Rust’s performance
- Teams building scalable web services
- Anyone interested in modern web development patterns with type safety
How to Use This Guide
This guide is structured to take you from beginner to advanced concepts:
- Start with the Getting Started section to learn the basics
- Move through Core Concepts to understand the fundamentals
- Explore Advanced Features to unlock the full power of Oxidite
- Learn about the Ecosystem to integrate with other tools
- Review Deployment sections to prepare for production
Each chapter builds on the previous ones, but you can also jump to specific sections as needed.
Installation
This chapter covers how to install Oxidite and set up your development environment.
Prerequisites
Before installing Oxidite, you'll need:
- Rust 1.75 or higher
- Cargo (comes with Rust)
- Git
You can install Rust using rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Installing the Oxidite CLI
Install the oxidite-cli package. It provides the oxidite executable:
# Install from source (recommended for development)
cargo install --path oxidite-cli
# Or install from crates.io
cargo install oxidite-cli
# Or pin a specific version
cargo install oxidite-cli --version 2.1.0-gen
Creating Your First Project
Once you have the CLI installed, create a new project:
oxidite new my-app
cd my-app
oxidite --version
This creates a new Oxidite project with the expected directories, configuration, and generator layout.
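The exact layout depends on your CLI version, but a generated project typically looks something like the sketch below (illustrative only; directory names are assumptions based on the features covered later in this guide, not a guaranteed layout):

```
my-app/
├── Cargo.toml
├── src/
│   └── main.rs
├── config/        # application configuration
├── migrations/    # database migrations (see the ORM chapters)
└── templates/     # server-rendered templates (see the template engine chapter)
```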
Manual Installation
If you prefer to add Oxidite to an existing project manually, add it to your Cargo.toml:
[dependencies]
oxidite = { version = "2.1", features = ["full"] }
tokio = { version = "1.0", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
Development Dependencies
For testing and development, you may also want to add:
[dev-dependencies]
oxidite-testing = "2.0"
tokio-test = "0.4"
Verifying Installation
To verify your installation, create a simple test file:
use oxidite::prelude::*;
async fn hello(_req: Request) -> Result<Response> {
Ok(Response::text("Hello, Oxidite!"))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/", hello);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Run this with:
cargo run
You should see your server running on http://127.0.0.1:3000.
Troubleshooting
If you encounter issues:
- Ensure you have the latest version of Rust installed
- Make sure your Cargo is up to date
- Check that you have all required build tools for your platform
- Verify that you're using the correct features for your use case
Common features include:
- full: All features enabled
- database: Database ORM capabilities
- auth: Authentication and authorization
- queue: Background job processing
- cache: Caching capabilities
- realtime: WebSocket and SSE support
- templates: Server-side template rendering
- mail: Email sending capabilities
- storage: File storage (local/S3)
- graphql: GraphQL support
- plugin: Plugin system support
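For example, an API-only service that needs the ORM, authentication, and caching can enable just those features instead of full (a sketch; pick the feature names from the list above that match your use case):

```toml
[dependencies]
oxidite = { version = "2.1", features = ["database", "auth", "cache"] }
tokio = { version = "1.0", features = ["full"] }
```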
Hello World
Let’s start with the classic “Hello, World!” example to get familiar with Oxidite’s basic concepts.
The Simplest Application
Here’s the most basic Oxidite application:
use oxidite::prelude::*;
async fn hello(_req: Request) -> Result<Response> {
Ok(Response::text("Hello, World!"))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/", hello);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Let’s break this down:
- use oxidite::prelude::*; - Imports all the essential types and functions
- async fn hello(...) - Defines a handler function that takes a request and returns a response
- _req: Request - The incoming request (the leading _ marks it as unused)
- Ok(Response::text(...)) - Creates a text response
- Result<Response> - The handler returns a Result with either a Response or an Error
- Router::new() - Creates a new router to define routes
- router.get("/", hello) - Registers the hello function to handle GET requests to "/"
- Server::new(router) - Creates a server with the configured router
- .listen(...) - Starts the server on port 3000
Different Response Types
Let’s explore different ways to respond:
JSON Response
async fn api_hello(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"message": "Hello, World!",
"timestamp": chrono::Utc::now().to_rfc3339()
})))
}
HTML Response
async fn html_hello(_req: Request) -> Result<Response> {
Ok(Response::html(r#"
<!DOCTYPE html>
<html>
<head><title>Hello</title></head>
<body>
<h1>Hello, World!</h1>
<p>Welcome to Oxidite!</p>
</body>
</html>
"#.to_string()))
}
Different Routes
use oxidite::prelude::*;
async fn home(_req: Request) -> Result<Response> {
Ok(Response::text("Welcome to the home page!"))
}
async fn about(_req: Request) -> Result<Response> {
Ok(Response::text("About us page"))
}
async fn contact(_req: Request) -> Result<Response> {
Ok(Response::text("Contact information"))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/", home);
router.get("/about", about);
router.get("/contact", contact);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Using Path Parameters
Oxidite supports path parameters that you can extract:
use oxidite::prelude::*;
async fn greet(Path(name): Path<String>) -> Result<Response> {
Ok(Response::text(format!("Hello, {}!", name)))
}
async fn user_details(Path(user_id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": user_id,
"name": format!("User {}", user_id),
"active": true
})))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Path parameter: /greet/Alice will extract "Alice"
router.get("/greet/:name", greet);
// Numeric parameter: /users/123 will extract 123
router.get("/users/:user_id", user_details);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Using Query Parameters
You can also extract query parameters:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct GreetingParams {
name: Option<String>,
title: Option<String>,
}
async fn personalized_greeting(Query(params): Query<GreetingParams>) -> Result<Response> {
let name = params.name.unwrap_or_else(|| "World".to_string());
let title = params.title.unwrap_or_else(|| "".to_string());
let greeting = if title.is_empty() {
format!("Hello, {}!", name)
} else {
format!("Hello, {} {}!", title, name)
};
Ok(Response::text(greeting))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/greet", personalized_greeting);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
// This handles URLs like:
// /greet?name=Alice
// /greet?name=Bob&title=Mr.
Using Request Body
For POST requests, you can extract JSON data:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct CreateUser {
name: String,
email: String,
}
async fn create_user(Json(payload): Json<CreateUser>) -> Result<Response> {
// Process the payload...
Ok(Response::json(serde_json::json!({
"message": "User created successfully",
"user": {
"id": 123, // In a real app, this would come from your database
"name": payload.name,
"email": payload.email
}
})))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.post("/users", create_user);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Error Handling
Let’s add some error handling:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct ErrorQuery {
error: Option<bool>,
}
async fn maybe_error(Query(params): Query<ErrorQuery>) -> Result<Response> {
if params.error.unwrap_or(false) {
return Err(Error::BadRequest("Something went wrong".to_string()));
}
Ok(Response::text("Success!"))
}
async fn not_found_handler(_req: Request) -> Result<Response> {
Err(Error::NotFound)
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/maybe-error", maybe_error);
router.get("/not-found", not_found_handler);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Complete Example with Multiple Features
Here’s a more complete example combining multiple features:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct UserQuery {
limit: Option<u32>,
offset: Option<u32>,
}
async fn home(_req: Request) -> Result<Response> {
Ok(Response::html(r#"
<!DOCTYPE html>
<html>
<head><title>Oxidite Demo</title></head>
<body>
<h1>Welcome to Oxidite!</h1>
<nav>
<a href="/api/hello">API Hello</a> |
<a href="/users?page=1">Users API</a> |
<a href="/greet/World">Greet Route</a>
</nav>
</body>
</html>
"#.to_string()))
}
async fn api_hello(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"message": "Hello from API",
"framework": "Oxidite",
"version": "2.0"
})))
}
async fn get_users(Query(params): Query<UserQuery>) -> Result<Response> {
let limit = params.limit.unwrap_or(10);
let offset = params.offset.unwrap_or(0);
Ok(Response::json(serde_json::json!({
"users": [
{"id": 1, "name": "Alice", "email": "alice@example.com"},
{"id": 2, "name": "Bob", "email": "bob@example.com"}
],
"pagination": {
"limit": limit,
"offset": offset,
"total": 2
}
})))
}
async fn greet_user(Path(name): Path<String>) -> Result<Response> {
Ok(Response::text(format!("Hello, {}!", name)))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/", home);
router.get("/api/hello", api_hello);
router.get("/users", get_users);
router.get("/greet/:name", greet_user);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Running Your Application
To run any of these examples:
- Create a new Rust project: cargo new hello-oxidite
- Add Oxidite to your Cargo.toml:
[dependencies]
oxidite = { version = "2.1", features = ["full"] }
tokio = { version = "1.0", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
chrono = { version = "0.4", features = ["serde"] }
- Replace the contents of src/main.rs with your example
- Run with cargo run
- Visit http://127.0.0.1:3000 in your browser
This Hello World example demonstrates the fundamental concepts of Oxidite: handlers, routes, responses, and request data extraction. These concepts form the foundation for building more complex applications.
Basic Routing
Routing is how your Oxidite application maps HTTP requests to handler functions. This chapter covers the fundamentals of routing in Oxidite.
Basic Route Definitions
Routes in Oxidite are defined by mapping HTTP methods and paths to handler functions:
use oxidite::prelude::*;
// Define a handler function
async fn hello_world(_req: Request) -> Result<Response> {
Ok(Response::text("Hello, World!"))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Register a GET route at "/"
router.get("/", hello_world);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Supported HTTP Methods
Oxidite supports all standard HTTP methods:
use oxidite::prelude::*;
async fn handle_get(_req: Request) -> Result<Response> {
Ok(Response::text("GET request handled"))
}
async fn handle_post(_req: Request) -> Result<Response> {
Ok(Response::text("POST request handled"))
}
async fn handle_put(_req: Request) -> Result<Response> {
Ok(Response::text("PUT request handled"))
}
async fn handle_delete(_req: Request) -> Result<Response> {
Ok(Response::text("DELETE request handled"))
}
async fn handle_patch(_req: Request) -> Result<Response> {
Ok(Response::text("PATCH request handled"))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/resource", handle_get);
router.post("/resource", handle_post);
router.put("/resource", handle_put);
router.delete("/resource", handle_delete);
router.patch("/resource", handle_patch);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Path Parameters
Oxidite supports path parameters that can be extracted using the Path extractor:
use oxidite::prelude::*;
use serde::Deserialize;
// Handler with path parameter
async fn get_user(Path(user_id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": user_id,
"name": format!("User {}", user_id),
"email": format!("user{}@example.com", user_id)
})))
}
// Handler with multiple path parameters
async fn get_user_post(Path((user_id, post_id)): Path<(u32, u32)>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": user_id,
"post_id": post_id,
"title": format!("Post {} by User {}", post_id, user_id)
})))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Single parameter: /users/123
router.get("/users/:user_id", get_user);
// Multiple parameters: /users/123/posts/456
router.get("/users/:user_id/posts/:post_id", get_user_post);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Named Struct for Path Parameters
You can also use a named struct for better organization:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct UserId {
user_id: u32,
}
#[derive(Deserialize)]
struct UserPostId {
user_id: u32,
post_id: u32,
}
async fn get_user_by_struct(Path(params): Path<UserId>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": params.user_id,
"name": format!("User {}", params.user_id)
})))
}
async fn get_user_post_by_struct(Path(params): Path<UserPostId>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": params.user_id,
"post_id": params.post_id,
"title": format!("Post {} by User {}", params.post_id, params.user_id)
})))
}
Query Parameters
Query parameters can be extracted using the Query extractor:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct UserQuery {
page: Option<u32>,
limit: Option<u32>,
sort: Option<String>,
active: Option<bool>,
}
async fn get_users(Query(params): Query<UserQuery>) -> Result<Response> {
let page = params.page.unwrap_or(1);
let limit = params.limit.unwrap_or(10);
let sort = params.sort.unwrap_or_else(|| "id".to_string());
let active = params.active.unwrap_or(true);
Ok(Response::json(serde_json::json!({
"users": [], // In a real app, this would come from your database
"pagination": {
"page": page,
"limit": limit,
"total": 100 // In a real app, this would be the actual count
},
"filters": {
"sort": sort,
"active": active
}
})))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Handles: /users?page=2&limit=20&sort=name&active=true
router.get("/users", get_users);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Route Groups and Nesting
You can group related routes for better organization:
use oxidite::prelude::*;
// API versioning example
async fn v1_users(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({ "version": "v1", "endpoint": "users" })))
}
async fn v2_users(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({ "version": "v2", "endpoint": "users", "enhanced": true })))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Versioned APIs
router.get("/api/v1/users", v1_users);
router.get("/api/v2/users", v2_users);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Wildcard Routes
Oxidite supports wildcard routes for catch-all functionality:
use oxidite::prelude::*;
async fn catch_all(_req: Request) -> Result<Response> {
Ok(Response::text("Page not found".to_string()))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Register your specific routes first
router.get("/", |_req: Request| async { Ok(Response::text("Home page".to_string())) });
router.get("/about", |_req: Request| async { Ok(Response::text("About page".to_string())) });
// Wildcard route should be registered last
// This will catch any routes not matched by previous handlers
router.get("/*", catch_all);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Route Middleware
You can apply middleware to specific routes or route groups:
use oxidite::prelude::*;
async fn logging_middleware(req: Request, next: Next) -> Result<Response> {
println!("Request: {} {}", req.method(), req.uri());
let response = next.run(req).await?;
println!("Response: {}", response.status());
Ok(response)
}
async fn protected_route(_req: Request) -> Result<Response> {
Ok(Response::text("This is a protected route".to_string()))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Apply middleware to a specific route
router.get("/protected")
.middleware(logging_middleware)
.handler(protected_route);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Route Organization Best Practices
1. Order Matters
Register more specific routes before general ones:
// Correct order
router.get("/users/list", list_users); // More specific than /users/:id
router.get("/users/:id", get_user);
// Wrong order - /users/list would never be reached
// router.get("/users/:id", get_user); // Would match /users/list first
// router.get("/users/list", list_users);
2. Group Related Routes
Keep related functionality together:
// Group user-related routes
router.get("/users", get_users);
router.post("/users", create_user);
router.get("/users/:id", get_user);
router.put("/users/:id", update_user);
router.delete("/users/:id", delete_user);
// Group post-related routes
router.get("/posts", get_posts);
router.post("/posts", create_post);
router.get("/posts/:id", get_post);
3. Use Descriptive Names
Make your route patterns descriptive and consistent:
// Good: clear and RESTful
"/users/:user_id/posts/:post_id/comments"
"/api/v1/users/search"
"/admin/dashboard/stats"
// Less ideal: unclear or inconsistent
"/u/:id/p/:pid/c"
"/search/v1/user"
"/dashboard/admin/stats"
Summary
Routing in Oxidite is straightforward and flexible:
- Use .get(), .post(), .put(), .delete(), etc. to register routes
- Extract path parameters with Path<T>
- Extract query parameters with Query<T>
- Organize routes logically and consistently
- Register specific routes before general/wildcard routes
- Apply middleware as needed for specific routes or groups
With these routing fundamentals, you can create well-structured applications that handle various types of requests effectively.
Requests
The Request type in Oxidite represents incoming HTTP requests. This chapter covers how to work with requests, extract information from them, and handle different types of request data.
Overview
In Oxidite, the Request type wraps the underlying hyper::Request and provides access to all the information contained in an HTTP request. While extractors provide a convenient way to access specific parts of the request, sometimes you need direct access to the request object itself.
Basic Request Access
You can access a request directly in your handler:
use oxidite::prelude::*;
async fn inspect_request(req: Request) -> Result<Response> {
let method = req.method();
let uri = req.uri();
let version = req.version();
Ok(Response::json(serde_json::json!({
"method": method.to_string(),
"uri": uri.to_string(),
"version": format!("{:?}", version),
"headers": extract_headers(&req)
})))
}
fn extract_headers(req: &Request) -> serde_json::Value {
let mut headers = serde_json::Map::new();
for (name, value) in req.headers() {
if let Ok(value_str) = value.to_str() {
headers.insert(name.as_str().to_string(), serde_json::Value::String(value_str.to_string()));
}
}
serde_json::Value::Object(headers)
}
Accessing Request Headers
You can access headers from the request object:
use oxidite::prelude::*;
async fn handle_headers(req: Request) -> Result<Response> {
// Access specific headers
let content_type = req.headers().get("content-type")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown");
let user_agent = req.headers().get("user-agent")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown");
let authorization = req.headers().get("authorization")
.and_then(|hv| hv.to_str().ok())
.map(|s| s.to_string());
Ok(Response::json(serde_json::json!({
"content_type": content_type,
"user_agent": user_agent,
"has_auth": authorization.is_some(),
"auth_scheme": authorization.as_ref().map(|auth| {
auth.split_whitespace().next().unwrap_or("unknown").to_string()
})
})))
}
// Case-insensitive header access
use hyper::header::USER_AGENT;
async fn handle_specific_header(req: Request) -> Result<Response> {
let user_agent = req.headers()
.get(USER_AGENT)
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown");
Ok(Response::json(serde_json::json!({ "user_agent": user_agent })))
}
Accessing Request URI and Query Parameters
You can access the URI and its components:
use oxidite::prelude::*;
async fn inspect_uri(req: Request) -> Result<Response> {
let uri = req.uri();
Ok(Response::json(serde_json::json!({
"scheme": uri.scheme().map(|s| s.to_string()).unwrap_or_default(),
"authority": uri.authority().map(|a| a.to_string()).unwrap_or_default(),
"path": uri.path(),
"query": uri.query().unwrap_or_default(),
"full_uri": uri.to_string()
})))
}
// Parse query parameters manually (though Query extractor is preferred)
use std::collections::HashMap;
async fn manual_query_parsing(req: Request) -> Result<Response> {
let query_string = req.uri().query().unwrap_or_default();
let params: HashMap<String, String> = query_string
.split('&')
.filter_map(|pair| {
let mut parts = pair.split('=');
let key = parts.next()?;
let value = parts.next().unwrap_or("");
Some((key.to_string(), value.to_string()))
})
.collect();
Ok(Response::json(serde_json::json!(params)))
}
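Note that the split-based parser above leaves %XX escapes and + signs encoded, and a value containing = would be truncated. The following standalone sketch (std-only; decode_component and parse_query are names introduced here, not Oxidite APIs) also percent-decodes each component:

```rust
use std::collections::HashMap;

/// Map an ASCII hex digit to its value, or None for non-hex bytes.
fn hex_val(b: u8) -> Option<u8> {
    match b {
        b'0'..=b'9' => Some(b - b'0'),
        b'a'..=b'f' => Some(b - b'a' + 10),
        b'A'..=b'F' => Some(b - b'A' + 10),
        _ => None,
    }
}

/// Decode one application/x-www-form-urlencoded component:
/// '+' becomes a space, and valid %XX escapes become bytes.
fn decode_component(s: &str) -> String {
    let bytes = s.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'+' {
            out.push(b' ');
            i += 1;
        } else if bytes[i] == b'%' && i + 2 < bytes.len() {
            if let (Some(hi), Some(lo)) = (hex_val(bytes[i + 1]), hex_val(bytes[i + 2])) {
                out.push(hi * 16 + lo);
                i += 3;
            } else {
                // Malformed escape: keep the '%' literally
                out.push(b'%');
                i += 1;
            }
        } else {
            out.push(bytes[i]);
            i += 1;
        }
    }
    String::from_utf8_lossy(&out).into_owned()
}

/// Parse a query string into key/value pairs, splitting each pair
/// on the first '=' only so values may contain '='.
fn parse_query(query: &str) -> HashMap<String, String> {
    query
        .split('&')
        .filter(|pair| !pair.is_empty())
        .filter_map(|pair| {
            let mut parts = pair.splitn(2, '=');
            let key = parts.next()?;
            let value = parts.next().unwrap_or("");
            Some((decode_component(key), decode_component(value)))
        })
        .collect()
}

fn main() {
    let params = parse_query("name=Alice%20Smith&title=Mr.&q=a+b");
    assert_eq!(params["name"], "Alice Smith");
    assert_eq!(params["title"], "Mr.");
    assert_eq!(params["q"], "a b");
}
```

In a real handler you would still prefer the Query extractor; this is only for the cases where you need to parse req.uri().query() by hand.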
Accessing Request Method and Version
You can inspect the HTTP method and protocol version:
use oxidite::prelude::*;
async fn method_inspector(req: Request) -> Result<Response> {
let method = req.method();
let version = req.version();
let method_str = match *method {
http::Method::GET => "GET",
http::Method::POST => "POST",
http::Method::PUT => "PUT",
http::Method::DELETE => "DELETE",
http::Method::PATCH => "PATCH",
http::Method::HEAD => "HEAD",
http::Method::OPTIONS => "OPTIONS",
_ => "OTHER",
};
let version_str = match version {
http::Version::HTTP_09 => "HTTP/0.9",
http::Version::HTTP_10 => "HTTP/1.0",
http::Version::HTTP_11 => "HTTP/1.1",
http::Version::HTTP_2 => "HTTP/2.0",
http::Version::HTTP_3 => "HTTP/3.0",
_ => "UNKNOWN",
};
Ok(Response::json(serde_json::json!({
"method": method_str,
"version": version_str,
"is_standard_method": method_str != "OTHER"
})))
}
Working with Request Extensions
Request extensions provide a way to store and access custom data:
use oxidite::prelude::*;
// Define custom extension types
#[derive(Debug, Clone)]
struct RequestMetadata {
request_id: String,
timestamp: chrono::DateTime<chrono::Utc>,
}
async fn handle_extensions(mut req: Request) -> Result<Response> {
// Store data in request extensions
req.extensions_mut().insert(RequestMetadata {
request_id: uuid::Uuid::new_v4().to_string(),
timestamp: chrono::Utc::now(),
});
// Later, you can retrieve it
if let Some(metadata) = req.extensions().get::<RequestMetadata>() {
return Ok(Response::json(serde_json::json!({
"request_id": metadata.request_id,
"timestamp": metadata.timestamp.to_rfc3339()
})));
}
Ok(Response::json(serde_json::json!({ "status": "no_metadata" })))
}
Accessing the Request Body
While extractors are preferred for body access, you can access the raw body directly:
use oxidite::prelude::*;
use http_body_util::BodyExt;
async fn access_raw_body(mut req: Request) -> Result<Response> {
// Collect the entire body
let body_bytes = req
.body_mut()
.collect()
.await
.map_err(|e| Error::InternalServerError(e.to_string()))?
.to_bytes();
let body_str = String::from_utf8_lossy(&body_bytes);
Ok(Response::json(serde_json::json!({
"body_size": body_bytes.len(),
"body_content": body_str.to_string(),
"is_valid_utf8": std::str::from_utf8(&body_bytes).is_ok()
})))
}
Request Validation
You can perform validation directly on the request:
use oxidite::prelude::*;
async fn validate_request(req: Request) -> Result<Response> {
// Validate content type
let content_type = req.headers()
.get("content-type")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("");
if !content_type.starts_with("application/json") {
return Err(Error::BadRequest("Content-Type must be application/json".to_string()));
}
// Validate method
if *req.method() != http::Method::POST {
return Err(Error::BadRequest("Only POST method allowed".to_string()));
}
// Validate size limits
let content_length = req.headers()
.get("content-length")
.and_then(|hv| hv.to_str().ok())
.and_then(|s| s.parse::<usize>().ok())
.unwrap_or(0);
const MAX_SIZE: usize = 1024 * 1024; // 1MB
if content_length > MAX_SIZE {
return Err(Error::BadRequest("Request body too large".to_string()));
}
Ok(Response::json(serde_json::json!({ "status": "validated" })))
}
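The Content-Length check in the handler above can be factored into a small standalone helper (std-only sketch; content_length_ok is a name introduced here for illustration, not an Oxidite API). Like the handler, it treats a missing or unparseable header as a zero-length body:

```rust
/// Returns true when the declared body size fits within `max` bytes.
/// A missing or unparseable Content-Length is treated as 0, matching
/// the handler above (a stricter server might reject it instead).
fn content_length_ok(raw: Option<&str>, max: usize) -> bool {
    let declared = raw
        .and_then(|s| s.parse::<usize>().ok())
        .unwrap_or(0);
    declared <= max
}

fn main() {
    const MAX_SIZE: usize = 1024 * 1024; // 1MB, as in the handler
    assert!(content_length_ok(Some("512"), MAX_SIZE));
    assert!(!content_length_ok(Some("2097152"), MAX_SIZE)); // 2MB rejected
    assert!(content_length_ok(None, MAX_SIZE)); // absent header passes
}
```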
Request Context and State
Access application state alongside the request:
use oxidite::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct AppContext {
app_name: String,
version: String,
maintenance_mode: bool,
}
async fn contextual_handler(
req: Request,
State(ctx): State<Arc<AppContext>>
) -> Result<Response> {
// Combine request data with application context
let user_agent = req.headers()
.get("user-agent")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown");
if ctx.maintenance_mode {
return Err(Error::ServiceUnavailable("Service temporarily unavailable".to_string()));
}
Ok(Response::json(serde_json::json!({
"app": {
"name": ctx.app_name,
"version": ctx.version
},
"client": {
"user_agent": user_agent
},
"request_info": {
"method": req.method().to_string(),
"path": req.uri().path()
}
})))
}
Middleware with Request Access
You can access and modify requests in middleware:
use oxidite::prelude::*;
async fn request_logging_middleware(req: Request, next: Next) -> Result<Response> {
let method = req.method().clone();
let uri = req.uri().clone();
println!("Incoming request: {} {}", method, uri);
let start = std::time::Instant::now();
let response = next.run(req).await?;
let duration = start.elapsed();
println!("Request completed in {:?}", duration);
Ok(response)
}
// Newtype for type-keyed extension storage (avoids collisions)
#[derive(Clone)]
struct RequestId(String);
async fn add_request_id_middleware(mut req: Request, next: Next) -> Result<Response> {
// Generate a request ID and store it in the request extensions
let request_id = uuid::Uuid::new_v4().to_string();
req.extensions_mut().insert(RequestId(request_id.clone()));
let mut response = next.run(req).await?;
// Propagate the request ID back to the client in a response header
response.headers_mut().insert("X-Request-ID", request_id.parse().unwrap());
Ok(response)
}
}
Security Considerations
When working with requests, consider these security aspects:
use oxidite::prelude::*;
async fn secure_request_handler(req: Request) -> Result<Response> {
// Log client-supplied forwarding headers; these are trivially spoofed
// unless they were set by a proxy you control
let forwarded_headers = ["x-forwarded-for", "x-real-ip", "x-client-ip"];
for header in forwarded_headers {
if req.headers().get(header).is_some() {
println!("Note: {} present; trust it only behind a known proxy", header);
}
}
// Validate host header to prevent host header attacks
if let Some(host) = req.headers().get("host") {
if let Ok(host_str) = host.to_str() {
// Validate against allowed hosts
if !is_allowed_host(host_str) {
return Err(Error::BadRequest("Invalid Host header".to_string()));
}
}
}
// Check for potential SQL injection patterns in URI
let uri_path = req.uri().path();
if contains_sql_patterns(uri_path) {
return Err(Error::BadRequest("Potential SQL injection detected".to_string()));
}
Ok(Response::json(serde_json::json!({ "status": "secure" })))
}
fn is_allowed_host(host: &str) -> bool {
// In a real app, check against allowed hosts
host.ends_with(".yourdomain.com") || host.starts_with("localhost")
}
fn contains_sql_patterns(text: &str) -> bool {
let sql_patterns = ["'", "\"", "--", "/*", "*/", "xp_", "sp_"];
sql_patterns.iter().any(|pattern| text.to_lowercase().contains(pattern))
}
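Be aware of how coarse this pattern check is: it flags any apostrophe, so legitimate paths can be rejected. The helper can be exercised standalone to see both the hit and the false positive:

```rust
// Same helper as above, reproduced so this sketch compiles on its own
fn contains_sql_patterns(text: &str) -> bool {
    let sql_patterns = ["'", "\"", "--", "/*", "*/", "xp_", "sp_"];
    sql_patterns.iter().any(|pattern| text.to_lowercase().contains(pattern))
}

fn main() {
    assert!(contains_sql_patterns("/users/1'--"));    // classic injection probe is caught
    assert!(!contains_sql_patterns("/users/123"));    // a normal path passes
    assert!(contains_sql_patterns("/files/o'brien")); // false positive: legitimate apostrophe
}
```

For real protection, rely on parameterized queries at the database layer rather than path filtering; treat checks like this as logging signals at most.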
Performance Tips
- Use Extractors: Use extractors instead of manual request parsing when possible; they’re optimized and handle errors properly.
- Minimize Body Access: Access the request body only when necessary, as reading it consumes the body stream.
- Cache Computed Values: If you compute something from the request multiple times, store it in extensions.
- Validate Early: Perform validation early in the request lifecycle to fail fast.
Summary
Working with requests in Oxidite involves:
- Direct access to request metadata (method, URI, headers, version)
- Use of extensions for storing custom request data
- Proper validation and security checks
- Integration with state and middleware
- Following security best practices
While extractors are often the preferred approach for accessing specific request data, direct request access gives you full control over request inspection and manipulation.
Responses
In Oxidite, responses are how you send data back to clients. The framework provides multiple ways to create responses, from simple text to complex HTML templates.
Basic Response Types
Oxidite provides several convenience methods on the Response type to create different kinds of responses:
JSON Responses
The most common response type for APIs is JSON:
use oxidite::prelude::*;
async fn api_handler(_req: Request) -> Result<Response> {
let data = serde_json::json!({
"message": "Hello, World!",
"status": "success"
});
Ok(Response::json(data))
}
HTML Responses
For server-rendered content, you can create HTML responses:
async fn home_page(_req: Request) -> Result<Response> {
Ok(Response::html("<h1>Welcome to Oxidite!</h1>".to_string()))
}
Text Responses
For plain text responses:
async fn plain_text_handler(_req: Request) -> Result<Response> {
Ok(Response::text("This is plain text"))
}
Empty Responses
Sometimes you just need to return an empty response with a specific status:
async fn empty_ok(_req: Request) -> Result<Response> {
Ok(Response::ok()) // 200 OK
}
async fn no_content(_req: Request) -> Result<Response> {
Ok(Response::no_content()) // 204 No Content
}
Using the Response Utilities
While the direct methods on Response are preferred, you can also use the response utilities:
use oxidite::response;
async fn alternative_json(_req: Request) -> Result<Response> {
Ok(response::json(serde_json::json!({ "data": "value" })))
}
async fn alternative_html(_req: Request) -> Result<Response> {
Ok(response::html("<p>Alternative HTML</p>".to_string()))
}
Custom Responses
For more control, you can create custom responses with specific headers and status codes:
use hyper::header::{CONTENT_TYPE, LOCATION};
use http::StatusCode;
async fn custom_response(_req: Request) -> Result<Response> {
use http_body_util::Full;
use bytes::Bytes;
let response = hyper::Response::builder()
.status(StatusCode::CREATED)
.header(CONTENT_TYPE, "application/json")
.header(LOCATION, "/resources/123")
.body(Full::new(Bytes::from(r#"{"id": 123, "status": "created"}"#)))
.map_err(|e| Error::InternalServerError(e.to_string()))?;
Ok(response)
}
Template Responses
When using the template engine, you can render templates directly as responses:
use oxidite::prelude::*;
use oxidite_template::{TemplateEngine, Context};
async fn template_handler(_req: Request) -> Result<Response> {
let mut engine = TemplateEngine::new();
engine.add_template("index", "<h1>Hello {{ name }}!</h1>")?;
let mut context = Context::new();
context.set("name", "Oxidite");
// Render directly as response
let response = engine.render_response("index", &context)?;
Ok(response)
}
Error Responses
Oxidite provides various error response types that automatically map to appropriate HTTP status codes:
// resource_exists(), valid_input(), and the other checks below are placeholder predicates for illustration
async fn error_example(_req: Request) -> Result<Response> {
// This will return a 404 Not Found
if !resource_exists() {
return Err(Error::NotFound);
}
// This will return a 400 Bad Request
if !valid_input() {
return Err(Error::BadRequest("Invalid input".to_string()));
}
// This will return a 401 Unauthorized
if !authenticated() {
return Err(Error::Unauthorized("Authentication required".to_string()));
}
// This will return a 403 Forbidden
if !authorized() {
return Err(Error::Forbidden("Access denied".to_string()));
}
// This will return a 409 Conflict
if conflict_exists() {
return Err(Error::Conflict("Resource conflict".to_string()));
}
// This will return a 422 Unprocessable Entity
if validation_fails() {
return Err(Error::Validation("Validation failed".to_string()));
}
// This will return a 429 Too Many Requests
if rate_limited() {
return Err(Error::RateLimited);
}
// This will return a 503 Service Unavailable
if service_unavailable() {
return Err(Error::ServiceUnavailable("Service temporarily unavailable".to_string()));
}
// Success response
Ok(Response::json(serde_json::json!({ "status": "success" })))
}
Response Headers
You can also add custom headers to your responses. While the direct Response methods don’t expose headers directly, you can create custom responses when needed:
use hyper::header::{CACHE_CONTROL, CONTENT_TYPE};
async fn cached_response(_req: Request) -> Result<Response> {
use http_body_util::Full;
use bytes::Bytes;
let response = hyper::Response::builder()
.status(http::StatusCode::OK)
.header(CONTENT_TYPE, "application/json")
.header(CACHE_CONTROL, "public, max-age=3600")
.body(Full::new(Bytes::from(r#"{"data": "cached"}"#)))
.map_err(|e| Error::InternalServerError(e.to_string()))?;
Ok(response)
}
Streaming Responses
For large data or streaming content, you can create responses with streaming bodies, though this requires more advanced usage:
use bytes::Bytes;
use futures::stream;
use http_body_util::{BodyExt, StreamBody};
use hyper::body::Frame;
async fn streaming_response(_req: Request) -> Result<Response> {
let stream = stream::iter(vec![
Ok::<_, hyper::Error>(Frame::data(Bytes::from("chunk1"))),
Ok::<_, hyper::Error>(Frame::data(Bytes::from("chunk2"))),
Ok::<_, hyper::Error>(Frame::data(Bytes::from("chunk3"))),
]);
let body = StreamBody::new(stream);
let response = hyper::Response::builder()
.status(http::StatusCode::OK)
.header(hyper::header::CONTENT_TYPE, "text/plain")
.body(body.boxed())
.map_err(|e| Error::InternalServerError(e.to_string()))?;
Ok(response)
}
Summary
The Response API in Oxidite is designed to be intuitive and flexible:
- Use Response::json(), Response::html(), and Response::text() for the most common response types
- Use Response::ok() and Response::no_content() for empty responses
- Use the template engine’s render_response() method for server-side rendering
- Handle errors with appropriate Error variants that map to correct HTTP status codes
- Fall back to manual response construction for complex scenarios
This approach provides both convenience for common use cases and flexibility for advanced scenarios.
Request Extractors
Request extractors are a key feature in Oxidite that allow you to extract data from incoming HTTP requests in a type-safe manner. This chapter covers all the available extractors and how to use them effectively.
Overview
Extractors in Oxidite implement the FromRequest trait, which allows them to automatically extract data from requests when used as parameters in handler functions. This provides a clean and type-safe way to access different parts of the request.
Available Extractors
Path Extractor
The Path extractor extracts path parameters from the URL:
use oxidite::prelude::*;
// Single parameter
async fn get_user(Path(user_id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": user_id,
"name": format!("User {}", user_id)
})))
}
// Multiple parameters as tuple
async fn get_user_post(Path((user_id, post_id)): Path<(u32, u32)>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": user_id,
"post_id": post_id
})))
}
// Parameters as struct
use serde::Deserialize;
#[derive(Deserialize)]
struct UserParams {
user_id: u32,
}
async fn get_user_by_struct(Path(params): Path<UserParams>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": params.user_id,
"name": format!("User {}", params.user_id)
})))
}
Query Extractor
The Query extractor extracts query parameters from the URL:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct UserQuery {
page: Option<u32>,
limit: Option<u32>,
sort: Option<String>,
active: Option<bool>,
}
async fn get_users(Query(params): Query<UserQuery>) -> Result<Response> {
let page = params.page.unwrap_or(1);
let limit = params.limit.unwrap_or(10);
let sort = params.sort.unwrap_or_else(|| "id".to_string());
let active = params.active.unwrap_or(true);
Ok(Response::json(serde_json::json!({
"pagination": {
"page": page,
"limit": limit
},
"sorting": sort,
"filter": { "active": active }
})))
}
// Capture arbitrary query parameters as a generic JSON value
async fn handle_raw_query(Query(raw): Query<serde_json::Value>) -> Result<Response> {
Ok(Response::json(raw))
}
Json Extractor
The Json extractor parses JSON from the request body:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct CreateUser {
name: String,
email: String,
age: u8,
}
async fn create_user(Json(payload): Json<CreateUser>) -> Result<Response> {
// payload contains the deserialized JSON data
Ok(Response::json(serde_json::json!({
"message": "User created successfully",
"user": {
"id": 123, // In a real app, this would come from your database
"name": payload.name,
"email": payload.email,
"age": payload.age
}
})))
}
// Generic JSON handling
async fn handle_generic_json(Json(data): Json<serde_json::Value>) -> Result<Response> {
// Process any JSON data
Ok(Response::json(serde_json::json!({
"received": data,
"processed": true
})))
}
Form Extractor
The Form extractor handles application/x-www-form-urlencoded data:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct LoginForm {
username: String,
password: String,
remember_me: Option<bool>,
}
async fn login_handler(Form(login_data): Form<LoginForm>) -> Result<Response> {
// login_data contains the deserialized form data
if login_data.username == "admin" && login_data.password == "secret" {
Ok(Response::json(serde_json::json!({
"status": "success",
"message": "Login successful",
"remember_me": login_data.remember_me.unwrap_or(false)
})))
} else {
Err(Error::Unauthorized("Invalid credentials".to_string()))
}
}
// Generic form handling
async fn handle_generic_form(Form(data): Form<serde_json::Value>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"form_data": data,
"status": "received"
})))
}
Cookies Extractor
The Cookies extractor provides access to request cookies:
use oxidite::prelude::*;
async fn handle_cookies(cookies: Cookies) -> Result<Response> {
let mut response_data = serde_json::json!({
"cookie_count": 0,
"cookies": {}
});
let mut cookies_map = serde_json::Map::new();
let mut count = 0;
for (name, value) in cookies.iter() {
cookies_map.insert(name.to_string(), serde_json::Value::String(value.to_string()));
count += 1;
}
if count > 0 {
response_data["cookie_count"] = serde_json::Value::Number(count.into());
response_data["cookies"] = serde_json::Value::Object(cookies_map);
}
Ok(Response::json(response_data))
}
// Access specific cookies
async fn get_session(cookies: Cookies) -> Result<Response> {
let session_id = cookies.get("session_id");
let theme = cookies.get("theme").unwrap_or("light");
Ok(Response::json(serde_json::json!({
"session_id": session_id,
"theme": theme,
"has_session": session_id.is_some()
})))
}
Body Extractor
The Body extractor provides access to the raw request body:
use oxidite::prelude::*;
// Extract as string
async fn handle_text_body(Body(raw_body): Body<String>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"length": raw_body.len(),
"content": raw_body,
"type": "text"
})))
}
// Extract as bytes
async fn handle_binary_body(Body(bytes): Body<Vec<u8>>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"size": bytes.len(),
"type": "binary"
})))
}
// Extract as Bytes
use bytes::Bytes;
async fn handle_bytes_body(Body(bytes): Body<Bytes>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"size": bytes.len(),
"type": "bytes"
})))
}
State Extractor
The State extractor provides access to application state:
use oxidite::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
app_name: String,
version: String,
database_url: String,
}
async fn handler_with_state(State(state): State<Arc<AppState>>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"app_name": state.app_name,
"version": state.version,
"has_database": !state.database_url.is_empty()
})))
}
#[tokio::main]
async fn main() -> Result<()> {
let app_state = Arc::new(AppState {
app_name: "MyApp".to_string(),
version: "1.0.0".to_string(),
database_url: "postgresql://localhost/myapp".to_string(),
});
let mut router = Router::new();
// Attach state to router
router.with_state(app_state);
router.get("/info", handler_with_state);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Combining Multiple Extractors
You can use multiple extractors in a single handler:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
#[derive(Deserialize, Serialize)]
struct CommentQuery {
page: Option<u32>,
limit: Option<u32>,
}
#[derive(Deserialize, Serialize)]
struct CreateComment {
content: String,
parent_id: Option<u32>,
}
// Example combining multiple extractors (AppState is the struct from the State Extractor section)
async fn complex_handler(
Path((post_id, comment_id)): Path<(u32, u32)>,
Query(params): Query<CommentQuery>,
Json(payload): Json<CreateComment>,
cookies: Cookies,
State(app_state): State<Arc<AppState>>
) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"post_id": post_id,
"comment_id": comment_id,
"query_params": params,
"request_body": payload,
"cookies_present": cookies.iter().count(),
"app_info": app_state.app_name
})))
}
Custom Extractors
You can create custom extractors by implementing the FromRequest trait:
use oxidite::prelude::*;
use std::future::Future;
use std::pin::Pin;
// Custom extractor for authenticated users
#[derive(Clone)]
struct AuthenticatedUser {
id: u32,
username: String,
permissions: Vec<String>,
}
impl FromRequest for AuthenticatedUser {
type Error = Error;
type Future = Pin<Box<dyn Future<Output = Result<Self, Self::Error>>>>;
fn from_request(req: &Request) -> Self::Future {
Box::pin(async move {
// Extract auth token from headers
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
match auth_header {
Some(token) if token.starts_with("Bearer ") => {
let token = token.trim_start_matches("Bearer ");
// Validate token and fetch user (simplified)
if validate_and_fetch_user(token).await.is_ok() {
Ok(AuthenticatedUser {
id: 1,
username: "john_doe".to_string(),
permissions: vec!["read".to_string(), "write".to_string()],
})
} else {
Err(Error::Unauthorized("Invalid token".to_string()))
}
}
_ => Err(Error::Unauthorized("Missing or invalid token".to_string()))
}
})
}
}
async fn validate_and_fetch_user(_token: &str) -> Result<(), ()> {
// In a real app, validate against your auth system
Ok(())
}
async fn protected_handler(user: AuthenticatedUser) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": user.id,
"username": user.username,
"permissions": user.permissions
})))
}
Error Handling with Extractors
Extractors automatically handle parsing errors and return appropriate HTTP status codes:
use oxidite::prelude::*;
// If JSON parsing fails, returns 400 Bad Request
async fn handle_bad_json(Json(data): Json<serde_json::Value>) -> Result<Response> {
Ok(Response::json(data))
}
// If path parameter parsing fails (e.g., "abc" for u32), returns 400 Bad Request
async fn handle_bad_path(Path(id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({ "id": id })))
}
// If query parameter parsing fails, returns 400 Bad Request
use serde::Deserialize;
#[derive(Deserialize)]
struct BadQuery {
number: u32,
}
async fn handle_bad_query(Query(params): Query<BadQuery>) -> Result<Response> {
Ok(Response::json(serde_json::json!({ "number": params.number })))
}
Performance Considerations
- Extractor Ordering: Place extractors that are most likely to fail early in the handler signature to fail fast.
- Body Consumption: Be aware that the request body can only be consumed once. If you need to access the body multiple times, parse it once and store the result (for example, in state or request extensions).
- Validation: Consider validating data after extraction rather than during extraction for better performance:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Serialize)]
struct UserData {
email: String,
age: u8,
}
async fn create_user_validated(Json(user): Json<UserData>) -> Result<Response> {
// Validate after extraction
if !is_valid_email(&user.email) {
return Err(Error::Validation("Invalid email format".to_string()));
}
if user.age < 13 {
return Err(Error::Validation("User must be at least 13 years old".to_string()));
}
// Process valid user
Ok(Response::json(serde_json::json!({ "status": "created", "user": user })))
}
fn is_valid_email(email: &str) -> bool {
email.contains('@') && email.contains('.')
}
Summary
Request extractors provide a powerful and type-safe way to access different parts of HTTP requests:
- Use Path<T> for path parameters
- Use Query<T> for query parameters
- Use Json<T> for JSON request bodies
- Use Form<T> for form data
- Use Cookies for cookie access
- Use Body<T> for raw request bodies
- Use State<T> for application state
- Combine multiple extractors as needed
- Handle errors appropriately
- Consider performance implications
Extractors are a fundamental part of Oxidite’s design and enable clean, readable handler functions.
Middleware
Middleware in Oxidite provides a way to modify requests and responses globally or for specific routes. This chapter covers how to create, use, and compose middleware.
Overview
Middleware in Oxidite is a function that sits between the server and your route handlers. It can:
- Modify incoming requests
- Modify outgoing responses
- Perform authentication/validation
- Log requests and responses
- Handle cross-cutting concerns
Basic Middleware
A basic middleware function has the signature async fn(Request, Next) -> Result<Response>:
use oxidite::prelude::*;
async fn basic_middleware(req: Request, next: Next) -> Result<Response> {
// Process request before handler
println!("Request received: {} {}", req.method(), req.uri());
// Call the next handler in the chain
let response = next.run(req).await?;
// Process response after handler
println!("Response sent with status: {}", response.status());
Ok(response)
}
Adding Middleware to Routes
You can add middleware to specific routes:
use oxidite::prelude::*;
async fn handler(_req: Request) -> Result<Response> {
Ok(Response::text("Hello from protected route".to_string()))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Add middleware to a specific route
router.get("/protected")
.middleware(basic_middleware)
.handler(handler);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Global Middleware
Add middleware to apply to all routes:
use oxidite::prelude::*;
async fn global_middleware(req: Request, next: Next) -> Result<Response> {
println!("Global middleware: {}", req.uri());
next.run(req).await
}
async fn home(_req: Request) -> Result<Response> {
Ok(Response::text("Home page".to_string()))
}
async fn about(_req: Request) -> Result<Response> {
Ok(Response::text("About page".to_string()))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Add global middleware
router.middleware(global_middleware);
router.get("/", home);
router.get("/about", about);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Request/Response Modification
Middleware can modify both requests and responses:
use oxidite::prelude::*;
async fn request_modifier(req: Request, next: Next) -> Result<Response> {
// Modify the request (e.g., add headers)
let mut req = req;
req.headers_mut().insert("X-Request-Processed", "true".parse().unwrap());
let mut response = next.run(req).await?;
// Modify the response
response.headers_mut().insert("X-Response-Processed", "true".parse().unwrap());
Ok(response)
}
async fn response_modifier(req: Request, next: Next) -> Result<Response> {
let start_time = std::time::Instant::now();
let mut response = next.run(req).await?;
let duration = start_time.elapsed();
// Add timing information to response
response.headers_mut().insert(
"X-Response-Time",
format!("{:.2?}", duration).parse().unwrap()
);
Ok(response)
}
Authentication Middleware
A common use case is authentication:
use oxidite::prelude::*;
async fn auth_middleware(req: Request, next: Next) -> Result<Response> {
// Check for authentication token
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
match auth_header {
Some(token) if token.starts_with("Bearer ") => {
let token = token.trim_start_matches("Bearer ");
if validate_token(token).await {
// Token is valid, continue with request
next.run(req).await
} else {
// Invalid token
Err(Error::Unauthorized("Invalid token".to_string()))
}
}
_ => {
// No valid token provided
Err(Error::Unauthorized("Missing or invalid authorization header".to_string()))
}
}
}
async fn validate_token(token: &str) -> bool {
// In a real app, validate against your auth system
token == "valid-token"
}
async fn protected_route(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({ "message": "Access granted" })))
}
Logging Middleware
A comprehensive logging middleware:
use oxidite::prelude::*;
use chrono::Utc;
async fn logging_middleware(req: Request, next: Next) -> Result<Response> {
let start = std::time::Instant::now();
let method = req.method().clone();
let uri = req.uri().clone();
let user_agent = req.headers()
.get("user-agent")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown")
.to_string();
println!(
"[{}] {} {} - User-Agent: {}",
Utc::now().format("%Y-%m-%d %H:%M:%S"),
method,
uri,
user_agent
);
let response = next.run(req).await?;
let duration = start.elapsed();
println!(
"[{}] {} {} - {} - {:.2?}",
Utc::now().format("%Y-%m-%d %H:%M:%S"),
method,
uri,
response.status(),
duration
);
Ok(response)
}
CORS Middleware
Cross-Origin Resource Sharing middleware:
use oxidite::prelude::*;
async fn cors_middleware(req: Request, next: Next) -> Result<Response> {
// Handle preflight requests
if req.method() == http::Method::OPTIONS {
let mut response = Response::ok();
set_cors_headers(response.headers_mut());
return Ok(response);
}
let mut response = next.run(req).await?;
set_cors_headers(response.headers_mut());
Ok(response)
}
fn set_cors_headers(headers: &mut http::HeaderMap) {
headers.insert(
"Access-Control-Allow-Origin",
"*".parse().unwrap()
);
headers.insert(
"Access-Control-Allow-Methods",
"GET, POST, PUT, DELETE, OPTIONS".parse().unwrap()
);
headers.insert(
"Access-Control-Allow-Headers",
"Content-Type, Authorization".parse().unwrap()
);
}
Rate Limiting Middleware
Simple rate limiting middleware:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
struct RateLimiter {
requests: Arc<Mutex<HashMap<String, Vec<Instant>>>>,
max_requests: u32,
window_duration: Duration,
}
impl RateLimiter {
fn new(max_requests: u32, window_seconds: u64) -> Self {
Self {
requests: Arc::new(Mutex::new(HashMap::new())),
max_requests,
window_duration: Duration::from_secs(window_seconds),
}
}
fn is_allowed(&self, key: &str) -> bool {
let mut requests = self.requests.lock().unwrap();
let now = Instant::now();
let window_start = now - self.window_duration;
// Clean old requests
if let Some(times) = requests.get_mut(key) {
times.retain(|time| *time > window_start);
}
// Check if we're over the limit
let current_count = requests
.entry(key.to_string())
.or_insert_with(Vec::new)
.len();
if current_count < self.max_requests as usize {
requests.get_mut(key).unwrap().push(now);
true
} else {
false
}
}
}
async fn rate_limit_middleware(
req: Request,
next: Next
) -> Result<Response> {
// Create a rate limiter (in a real app, this would be shared state)
thread_local! {
static RATE_LIMITER: RateLimiter = RateLimiter::new(10, 60); // 10 requests per minute
}
let client_ip = req.headers()
.get("x-forwarded-for")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown")
.to_string();
if !RATE_LIMITER.with(|limiter| limiter.is_allowed(&client_ip)) {
return Err(Error::RateLimited);
}
next.run(req).await
}
Middleware Composition
You can compose multiple middleware functions:
use oxidite::prelude::*;
async fn middleware_a(req: Request, next: Next) -> Result<Response> {
println!("A: Before");
let result = next.run(req).await;
println!("A: After");
result
}
async fn middleware_b(req: Request, next: Next) -> Result<Response> {
println!("B: Before");
let result = next.run(req).await;
println!("B: After");
result
}
async fn middleware_c(req: Request, next: Next) -> Result<Response> {
println!("C: Before");
let result = next.run(req).await;
println!("C: After");
result
}
async fn handler(_req: Request) -> Result<Response> {
println!("Handler executed");
Ok(Response::text("Response".to_string()))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// Middlewares are executed in the order they're added
router.middleware(middleware_a);
router.middleware(middleware_b);
router.middleware(middleware_c);
router.get("/", handler);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
// Output would be:
// A: Before
// B: Before
// C: Before
// Handler executed
// C: After
// B: After
// A: After
Error Handling in Middleware
Middleware can catch and handle errors:
use oxidite::prelude::*;
async fn error_handling_middleware(req: Request, next: Next) -> Result<Response> {
match next.run(req).await {
Ok(response) => Ok(response),
Err(Error::NotFound) => {
Ok(Response::json(serde_json::json!({
"error": "Resource not found",
"code": 404
})))
}
Err(Error::Unauthorized(msg)) => {
Ok(Response::json(serde_json::json!({
"error": "Unauthorized",
"message": msg,
"code": 401
})))
}
Err(other_error) => Err(other_error),
}
}
Conditional Middleware
Apply middleware conditionally:
use oxidite::prelude::*;
async fn conditional_middleware(req: Request, next: Next) -> Result<Response> {
// Only apply to certain paths
if req.uri().path().starts_with("/api/") {
println!("API request: {}", req.uri());
}
next.run(req).await
}
async fn api_handler(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({ "endpoint": "api" })))
}
async fn web_handler(_req: Request) -> Result<Response> {
Ok(Response::text("Web page".to_string()))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.middleware(conditional_middleware);
router.get("/api/data", api_handler);
router.get("/web/page", web_handler);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Stateful Middleware
Middleware can use application state:
use oxidite::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
maintenance_mode: bool,
}
async fn stateful_middleware(
req: Request,
next: Next,
State(state): State<Arc<AppState>>
) -> Result<Response> {
if state.maintenance_mode && req.method() != http::Method::GET {
return Err(Error::ServiceUnavailable("Maintenance mode".to_string()));
}
next.run(req).await
}
#[tokio::main]
async fn main() -> Result<()> {
let app_state = Arc::new(AppState {
maintenance_mode: false,
});
let mut router = Router::new();
router.with_state(app_state);
router.middleware(stateful_middleware);
// ... add routes
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Built-in Middleware
Oxidite provides several built-in middleware options:
use oxidite::prelude::*;
use oxidite_middleware::{Logger, RateLimiter, Cors};
// Logger middleware
async fn with_logger() -> Result<()> {
let mut router = Router::new();
// Add logging middleware
router.middleware(Logger::new());
// ... add routes
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
// Rate limiting middleware
async fn with_rate_limit() -> Result<()> {
let mut router = Router::new();
// Add rate limiting middleware
router.middleware(RateLimiter::new(100, std::time::Duration::from_secs(60))); // 100 requests per minute
// ... add routes
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Performance Considerations
- Order Matters: Put fast-executing middleware first
- Avoid Heavy Computation: Don’t perform heavy operations in middleware
- Use Efficient Data Structures: Use appropriate data structures for rate limiting, etc.
- Early Exit: Return early when possible to avoid unnecessary processing
Security Considerations
- Input Validation: Validate inputs in middleware
- Rate Limiting: Protect against abuse
- Authentication: Verify credentials before processing
- Logging: Log appropriately without exposing sensitive data
Summary
Middleware in Oxidite is a powerful way to handle cross-cutting concerns:
- Use the async fn(Request, Next) -> Result<Response> signature
- Apply globally with router.middleware() or to specific routes
- Modify requests before and responses after the handler
- Handle authentication, logging, CORS, rate limiting, etc.
- Compose multiple middleware functions
- Consider performance and security implications
Middleware provides a clean separation of concerns and keeps your route handlers focused on business logic.
Architecture
This chapter explains how Oxidite is split across crates and how requests move through the system.
Workspace Structure
- oxidite: top-level facade and feature flags.
- oxidite-core: router, request/response, server primitives.
- oxidite-middleware: common cross-cutting layers.
- oxidite-db + oxidite-macros: ORM, derive macros, migrations.
- oxidite-auth, oxidite-cache, oxidite-queue, oxidite-realtime, oxidite-template: batteries-included runtime capabilities.
- oxidite-cli: scaffolding, migration, and developer workflow tooling.
Request Lifecycle
- The server accepts an HTTP request in oxidite-core.
- The router matches method/path and prepares extractors.
- Middleware chain runs pre-handler logic.
- Handler executes with typed extractors.
- Handler returns typed response.
- Middleware chain runs post-handler logic.
- Response is serialized and returned to the client.
Database Layer Design
The Oxidite ORM sits on top of sqlx::Any:
- The Database trait abstracts pool/transaction execution.
- The Model trait provides typed CRUD and validation hooks.
- ModelQuery offers builder ergonomics.
- Relationship helpers (HasMany, HasOne, BelongsTo) keep joins and loading explicit.
- Raw SQL remains first-class through execute_query / fetch_all / fetch_one.
Extension Strategy
Prefer adding capabilities in dedicated crates and surfacing stable public APIs through oxidite.
This keeps compile times predictable and avoids making core crates monolithic.
Framework Guide: Building Real Applications with Oxidite
This guide is a practical map for building production services with Oxidite.
How Oxidite is structured
At a high level:
- oxidite-core handles HTTP primitives (request/response/router/server).
- Feature crates layer capabilities on top (db/auth/queue/realtime/cache/storage/etc.).
- The oxidite umbrella crate re-exports these capabilities behind feature flags.
Typical project structure
src/
main.rs
routes/
handlers/
models/
services/
middleware/
jobs/
Recommended ownership:
- handlers: HTTP boundary only
- services: business logic
- models/repositories: persistence logic
- jobs: async/background flows
Request lifecycle
- Router matches method + path.
- Middleware stack runs (request ID, auth, rate limit, etc).
- Extractors parse input (Path, Query, Json, State, Cookies, Form).
- Handler executes business logic.
- Response is serialized and returned.
Error handling strategy
Use typed errors per domain and map them at the HTTP boundary.
- validation -> 400
- auth errors -> 401/403
- missing resources -> 404
- conflicts -> 409
- internal failures -> 500
Prefer explicit error enums instead of stringly-typed errors.
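To make this concrete, here is a small, self-contained sketch of an explicit domain error enum mapped to status codes at the HTTP boundary. The UserError type, its variants, and the status_code method are illustrative assumptions, not part of Oxidite's API:

```rust
// Hypothetical domain error enum for a user service.
#[derive(Debug)]
pub enum UserError {
    InvalidEmail(String),   // validation failure
    NotAuthenticated,       // no credentials presented
    NotAuthorized,          // valid credentials, insufficient rights
    NotFound(u32),          // missing resource (user id)
    DuplicateEmail(String), // conflict with an existing row
}

impl UserError {
    /// Map each domain variant to the HTTP status it should surface as.
    pub fn status_code(&self) -> u16 {
        match self {
            UserError::InvalidEmail(_) => 400,
            UserError::NotAuthenticated => 401,
            UserError::NotAuthorized => 403,
            UserError::NotFound(_) => 404,
            UserError::DuplicateEmail(_) => 409,
        }
    }
}
```

Because the mapping lives in one place, handlers stay free of ad-hoc status-code decisions.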
Data access strategy
Use oxidite-db with three tiers:
- basic CRUD via the Model derive
- typed query composition via ModelQuery
- raw SQL for advanced joins/analytics/hot paths
Security baseline checklist
- hash passwords with oxidite-auth helpers
- validate and sanitize untrusted input (oxidite-security)
- apply rate limiting middleware
- enforce RBAC/PBAC checks in handlers/services
- keep secrets in config/env, not code
Observability baseline checklist
- request IDs on all incoming requests
- structured logs at handler/service boundaries
- latency and error counters per route/domain
- retry/dead-letter metrics for async workers
Testing strategy
- unit tests for pure business logic
- handler tests with the oxidite-testing test server/request/response helpers
- integration tests for migrations + DB transactions
- contract tests for public API payloads
Performance strategy
- cache expensive read endpoints
- paginate list endpoints
- stream large responses where useful
- avoid N+1 query patterns (use eager loading)
- benchmark hot endpoints before/after changes
Deployment strategy
- ship behind health checks
- use staged rollout (canary/weighted)
- preserve rollback path for each release
- run schema changes with backward compatibility windows
Handler and Service Patterns
This chapter shows patterns that keep Oxidite apps maintainable at scale.
Thin handlers, thick services
Handler responsibilities:
- parse input
- call service
- map domain result to HTTP response
Service responsibilities:
- validate business rules
- coordinate repositories/external calls
- return typed domain errors
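A minimal sketch of this split, using plain stand-in types instead of Oxidite's Request/Response so the layering stays visible (all names here are illustrative):

```rust
// Stand-in input/output types; a real handler would use Oxidite extractors.
struct CreateUserInput {
    email: String,
}

struct User {
    id: u32,
    email: String,
}

#[derive(Debug, PartialEq)]
enum DomainError {
    InvalidEmail,
}

// Service layer: business rules only, no HTTP types in sight.
fn create_user(input: CreateUserInput) -> Result<User, DomainError> {
    if !input.email.contains('@') {
        return Err(DomainError::InvalidEmail);
    }
    Ok(User { id: 1, email: input.email })
}

// Handler layer: parse input, call the service, map the result to HTTP.
fn create_user_handler(input: CreateUserInput) -> (u16, String) {
    match create_user(input) {
        Ok(user) => (201, format!("created user {} <{}>", user.id, user.email)),
        Err(DomainError::InvalidEmail) => (400, "invalid email".to_string()),
    }
}
```

The service can now be unit-tested without any HTTP machinery.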
Pattern: command/query split
Use separate methods for:
- command paths (writes)
- query paths (reads)
Benefits:
- clearer performance tuning
- easier authorization policies
- simpler testing
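A sketch of the split on a single service type (ArticleService and its methods are hypothetical names):

```rust
// Command/query split: writes and reads live in clearly separate methods.
struct ArticleService {
    published: Vec<String>,
}

impl ArticleService {
    fn new() -> Self {
        Self { published: Vec::new() }
    }

    // Command path: mutates state; a natural place for write-side authorization.
    fn publish(&mut self, title: &str) -> usize {
        self.published.push(title.to_string());
        self.published.len() - 1 // return the new article's index as its id
    }

    // Query path: read-only (&self), so it can be cached or tuned independently.
    fn list_titles(&self) -> &[String] {
        &self.published
    }
}
```

The borrow checker itself then documents which paths mutate (&mut self) and which only read (&self).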
Pattern: explicit transactions
For multi-step writes:
- open transaction
- perform all related changes
- commit only on full success
- rollback on any failure
Use DbPool::with_transaction for concise transaction boundaries.
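The commit-on-success / rollback-on-failure shape can be sketched with an in-memory stand-in. FakeDb is purely illustrative and only approximates the closure-based boundary that DbPool::with_transaction presumably provides:

```rust
// In-memory model of transactional semantics: snapshot, apply, commit or restore.
struct FakeDb {
    rows: Vec<String>,
}

impl FakeDb {
    fn with_transaction<T, E>(
        &mut self,
        f: impl FnOnce(&mut Vec<String>) -> Result<T, E>,
    ) -> Result<T, E> {
        let snapshot = self.rows.clone(); // "begin": remember pre-transaction state
        match f(&mut self.rows) {
            Ok(value) => Ok(value), // "commit": keep all changes
            Err(e) => {
                self.rows = snapshot; // "rollback": restore on any failure
                Err(e)
            }
        }
    }
}
```

The key property is all-or-nothing: a failure partway through a multi-step write leaves no partial state behind.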
Pattern: pagination first
All list endpoints should accept:
- page/per_page or limit/offset
- deterministic sort order
Use Pagination::from_page(...) + order_by(...) for stable paging.
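The underlying limit/offset math is simple enough to sketch directly; this Pagination struct is illustrative and Oxidite's actual type may differ:

```rust
// Page-based pagination reduced to SQL LIMIT/OFFSET values.
#[derive(Debug, Clone, Copy)]
struct Pagination {
    page: u32,     // 1-based page index
    per_page: u32, // rows per page
}

impl Pagination {
    fn from_page(page: u32, per_page: u32) -> Self {
        // Clamp inputs so page=0 or per_page=10_000 cannot produce bad queries.
        Self {
            page: page.max(1),
            per_page: per_page.clamp(1, 100),
        }
    }

    fn limit(&self) -> u32 {
        self.per_page
    }

    fn offset(&self) -> u32 {
        (self.page - 1) * self.per_page
    }
}
```

Clamping client-supplied values is the important part: unbounded per_page is a common accidental denial-of-service vector.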
Pattern: idempotent writes
For retry-prone endpoints/jobs:
- accept idempotency key
- persist dedup marker in DB
- return prior result on duplicate key
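The three steps above can be sketched in memory; a real service would persist the dedup marker in the database, in the same transaction as the write itself (the IdempotencyStore name is hypothetical):

```rust
use std::collections::HashMap;

// Idempotency: run a write at most once per key, replay the result on retries.
struct IdempotencyStore {
    completed: HashMap<String, String>, // idempotency key -> prior result
}

impl IdempotencyStore {
    fn new() -> Self {
        Self { completed: HashMap::new() }
    }

    /// Execute `write` once per key; duplicates get the stored prior result.
    fn execute(&mut self, key: &str, write: impl FnOnce() -> String) -> String {
        if let Some(prior) = self.completed.get(key) {
            return prior.clone(); // duplicate key: skip the write entirely
        }
        let result = write();
        self.completed.insert(key.to_string(), result.clone());
        result
    }
}
```

Clients retrying a timed-out request with the same key then observe the original outcome instead of creating a second resource.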
Pattern: explicit authorization
Run authorization checks close to business decisions.
- route-level guards for broad access
- service-level checks for resource ownership rules
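An ownership check at the service layer can be as small as a predicate next to the business decision it guards (Document and can_edit are illustrative names):

```rust
// Resource-level authorization: route guards grant broad access,
// this enforces the per-resource ownership rule.
struct Document {
    id: u32,
    owner_id: u32,
}

fn can_edit(doc: &Document, user_id: u32, is_admin: bool) -> bool {
    is_admin || doc.owner_id == user_id
}
```

Keeping the rule in one named function makes it easy to test and to audit.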
Pattern: consistent response envelopes
Adopt stable JSON envelopes:
- success: data + metadata
- error: code + message + details
This simplifies frontend and monitoring integration.
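A minimal sketch of both shapes. The strings are hand-rolled here purely to show the layout; a real handler would serialize structs with serde instead, and the `meta.total` field is an assumed metadata choice:

```rust
// Success envelope: data plus metadata.
fn success_envelope(data: &str, total: u64) -> String {
    format!(r#"{{"data":{},"meta":{{"total":{}}}}}"#, data, total)
}

// Error envelope: stable code plus human-readable message.
fn error_envelope(code: &str, message: &str) -> String {
    format!(r#"{{"error":{{"code":"{}","message":"{}"}}}}"#, code, message)
}
```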
Error Handling and Diagnostics
Great DX comes from precise errors and predictable behavior.
Error layers
Use distinct error types per layer:
- transport errors (HTTP/extractor)
- auth errors
- domain validation errors
- persistence errors
- external integration errors
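A sketch of the layering with two illustrative enums: each layer owns its error type, and a `From` impl lets `?` lift errors upward without losing which layer failed.

```rust
// Domain layer: business-rule violations.
#[derive(Debug)]
enum DomainError {
    Validation(String),
}

// Application layer: aggregates lower layers.
#[derive(Debug)]
enum AppError {
    Domain(DomainError),
    Persistence(String),
}

impl From<DomainError> for AppError {
    fn from(e: DomainError) -> Self {
        AppError::Domain(e)
    }
}

fn check_email(email: &str) -> Result<(), DomainError> {
    if email.contains('@') {
        Ok(())
    } else {
        Err(DomainError::Validation("email is invalid".into()))
    }
}

fn register(email: &str) -> Result<(), AppError> {
    check_email(email)?; // DomainError lifts into AppError via From
    Ok(())
}
```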
Public API error design
A practical error payload:
{
"error": {
"code": "validation_failed",
"message": "email is invalid",
"details": {"field": "email"}
}
}
Guidelines:
- stable code values for machines
- human-readable message for people
- optional structured details
Mapping typed errors to status codes
Recommended map:
- validation -> 400
- unauthenticated -> 401
- unauthorized -> 403
- not found -> 404
- conflict -> 409
- rate limited -> 429
- dependency failure -> 502/503
- internal -> 500
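The table translates directly into a single match over a typed error. `ApiError` here is illustrative, not an Oxidite type:

```rust
// One typed error per outcome class, mapped to an HTTP status code.
#[derive(Debug)]
enum ApiError {
    Validation,
    Unauthenticated,
    Unauthorized,
    NotFound,
    Conflict,
    RateLimited,
    DependencyFailure,
    Internal,
}

fn status_code(err: &ApiError) -> u16 {
    match err {
        ApiError::Validation => 400,
        ApiError::Unauthenticated => 401,
        ApiError::Unauthorized => 403,
        ApiError::NotFound => 404,
        ApiError::Conflict => 409,
        ApiError::RateLimited => 429,
        ApiError::DependencyFailure => 503, // or 502 for upstream protocol errors
        ApiError::Internal => 500,
    }
}
```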
Logging and tracing
- include request ID in all error logs
- include domain entity IDs where safe
- avoid leaking secrets or tokens
- log root cause once; propagate typed context upward
Macro diagnostics
For oxidite-macros derive errors:
- keep model fields explicit
- use supported attribute forms
- rely on compile-time diagnostics for incorrect types/attributes
Migration safety diagnostics
Before a migration rollout:
- run parity checks for response and error shape
- run DB constraint violation scenarios
- verify not-found and authorization edge cases
Database ORM
Oxidite provides a powerful Object-Relational Mapping (ORM) system that allows you to work with databases using Rust structs. This chapter covers how to define models, perform database operations, and use relationships.
Overview
The Oxidite ORM provides:
- Type-safe database operations
- Model definitions with derive macros
- Relationship management
- Migrations and schema management
- Query building capabilities
- Validation and hooks
Model Definition
Define your database models using the Model derive macro:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Model, Serialize, Deserialize)]
#[model(table = "users")]
pub struct User {
#[model(primary_key)]
pub id: i32,
#[model(unique, not_null)]
pub email: String,
#[model(not_null)]
pub name: String,
#[model(default = "now")]
pub created_at: String,
#[model(updated_at)]
pub updated_at: String,
#[model(default = "false")]
pub active: bool,
}
// Helper function for default timestamp
fn now() -> String {
chrono::Utc::now().to_rfc3339()
}
Basic CRUD Operations
Creating Records
use oxidite::prelude::*;
async fn create_user() -> Result<()> {
let user = User {
id: 0, // Will be auto-generated
email: "john@example.com".to_string(),
name: "John Doe".to_string(),
created_at: now(),
updated_at: now(),
active: true,
};
let saved_user = user.save().await?;
println!("Created user with ID: {}", saved_user.id);
Ok(())
}
// Alternative: Using create method
async fn create_user_alternative() -> Result<()> {
let user = User::create(User {
id: 0,
email: "jane@example.com".to_string(),
name: "Jane Smith".to_string(),
created_at: now(),
updated_at: now(),
active: true,
}).await?;
println!("Created user: {}", user.name);
Ok(())
}
Reading Records
async fn find_users() -> Result<()> {
// Find all users
let all_users = User::find_all().await?;
println!("Found {} users", all_users.len());
// Find user by ID
if let Some(user) = User::find_by_id(1).await? {
println!("Found user: {}", user.name);
} else {
println!("User not found");
}
// Find users with conditions (simplified example)
let active_users = User::find_where("active = true").await?;
println!("Found {} active users", active_users.len());
Ok(())
}
Updating Records
async fn update_user() -> Result<()> {
if let Some(mut user) = User::find_by_id(1).await? {
user.name = "John Updated".to_string();
user.updated_at = now();
let updated_user = user.save().await?;
println!("Updated user: {}", updated_user.name);
}
Ok(())
}
// Bulk update
async fn bulk_update() -> Result<()> {
let updated_count = User::update_where(
"active = false",
&[("updated_at", &now())]
).await?;
println!("Updated {} users", updated_count);
Ok(())
}
Deleting Records
async fn delete_user() -> Result<()> {
if let Some(user) = User::find_by_id(1).await? {
user.delete().await?;
println!("Deleted user: {}", user.name);
}
Ok(())
}
// Bulk delete
async fn bulk_delete() -> Result<()> {
let deleted_count = User::delete_where("created_at < '2023-01-01'").await?;
println!("Deleted {} old users", deleted_count);
Ok(())
}
Relationships
Define relationships between models:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Model, Serialize, Deserialize)]
#[model(table = "posts")]
pub struct Post {
#[model(primary_key)]
pub id: i32,
pub title: String,
pub content: String,
pub user_id: i32, // Foreign key
#[model(created_at)]
pub created_at: String,
}
#[derive(Model, Serialize, Deserialize)]
#[model(table = "comments")]
pub struct Comment {
#[model(primary_key)]
pub id: i32,
pub content: String,
pub user_id: i32, // Foreign key
pub post_id: i32, // Foreign key
#[model(created_at)]
pub created_at: String,
}
// Update User model to include relationships
#[derive(Model, Serialize, Deserialize)]
#[model(table = "users")]
pub struct User {
#[model(primary_key)]
pub id: i32,
#[model(unique, not_null)]
pub email: String,
#[model(not_null)]
pub name: String,
#[model(default = "now")]
pub created_at: String,
#[model(updated_at)]
pub updated_at: String,
#[model(default = "false")]
pub active: bool,
}
// Access related records
async fn work_with_relationships() -> Result<()> {
// Find a user
if let Some(user) = User::find_by_id(1).await? {
// Find user's posts (format! shown for brevity; prefer bound parameters in real code)
let posts = Post::find_where(&format!("user_id = {}", user.id)).await?;
println!("User {} has {} posts", user.name, posts.len());
// Find user's comments
let comments = Comment::find_where(&format!("user_id = {}", user.id)).await?;
println!("User {} has {} comments", user.name, comments.len());
}
Ok(())
}
Query Building
Use the query builder for complex queries:
use oxidite::prelude::*;
async fn complex_queries() -> Result<()> {
// Find users with custom conditions
let users = User::find_where("name LIKE '%John%' AND active = true").await?;
println!("Found {} users matching criteria", users.len());
// Find with ordering
let recent_users = User::find_where("active = true")
.order_by("created_at DESC")
.limit(10)
.await?;
// Find with joins (conceptual - exact syntax may vary)
let users_with_posts = execute_raw_query("
SELECT u.*, COUNT(p.id) as post_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
WHERE u.active = true
GROUP BY u.id
ORDER BY post_count DESC
").await?;
Ok(())
}
async fn execute_raw_query<T>(_sql: &str) -> Result<Vec<T>> {
// Implementation would depend on the specific database connector
Ok(vec![])
}
Migrations
Database migrations allow you to manage schema changes:
use oxidite_db::Migration;
pub struct CreateUsersTable;
impl Migration for CreateUsersTable {
fn version(&self) -> i64 {
20231201000001 // YYYYMMDDHHMMSS
}
fn name(&self) -> &'static str {
"create_users_table"
}
fn up(&self) -> &'static str {
r#"
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email VARCHAR(255) UNIQUE NOT NULL,
name VARCHAR(255) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
active BOOLEAN DEFAULT TRUE
)
"#
}
fn down(&self) -> &'static str {
"DROP TABLE users"
}
}
pub struct CreatePostsTable;
impl Migration for CreatePostsTable {
fn version(&self) -> i64 {
20231201000002
}
fn name(&self) -> &'static str {
"create_posts_table"
}
fn up(&self) -> &'static str {
r#"
CREATE TABLE posts (
id SERIAL PRIMARY KEY,
title VARCHAR(255) NOT NULL,
content TEXT NOT NULL,
user_id INTEGER REFERENCES users(id),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
"#
}
fn down(&self) -> &'static str {
"DROP TABLE posts"
}
}
Validation
Add validation to your models:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Model, Serialize, Deserialize)]
#[model(table = "users")]
pub struct ValidatedUser {
#[model(primary_key)]
pub id: i32,
#[model(unique, not_null)]
pub email: String,
#[model(not_null)]
pub name: String,
#[model(validate = "validate_age")]
pub age: u8,
#[model(default = "now")]
pub created_at: String,
#[model(updated_at)]
pub updated_at: String,
}
impl ValidatedUser {
// Validation method
fn validate_age(&self) -> Result<(), String> {
if self.age < 13 {
Err("User must be at least 13 years old".to_string())
} else if self.age > 120 {
Err("Invalid age".to_string())
} else {
Ok(())
}
}
// Hook methods
fn before_save(&mut self) -> Result<(), String> {
self.updated_at = now();
self.validate_age() // Run validation before saving
}
fn after_save(&self) -> Result<(), String> {
println!("User {} saved with ID {}", self.name, self.id);
Ok(())
}
}
Transactions
Perform operations within transactions:
use oxidite::prelude::*;
async fn transaction_example() -> Result<()> {
// Start a transaction
let tx = begin_transaction().await?;
match async {
// Create user
let user = User {
id: 0,
email: "transaction@example.com".to_string(),
name: "Transaction User".to_string(),
created_at: now(),
updated_at: now(),
active: true,
};
let saved_user = user.save().await?;
// Create a post for the user
let post = Post {
id: 0,
title: "First Post".to_string(),
content: "Hello, world!".to_string(),
user_id: saved_user.id,
created_at: now(),
};
post.save().await?;
Ok::<_, Error>(saved_user.id)
}.await {
Ok(user_id) => {
// Commit the transaction
tx.commit().await?;
println!("Successfully created user {} and associated post", user_id);
}
Err(e) => {
// Rollback the transaction
tx.rollback().await?;
println!("Transaction failed: {:?}", e);
return Err(e);
}
}
Ok(())
}
async fn begin_transaction() -> Result<Transaction> {
// Implementation would depend on the database connector
Ok(Transaction {})
}
pub struct Transaction;
impl Transaction {
pub async fn commit(self) -> Result<()> {
Ok(())
}
pub async fn rollback(self) -> Result<()> {
Ok(())
}
}
Soft Deletes
Models can support soft deletes:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Model, Serialize, Deserialize)]
#[model(table = "soft_delete_users", soft_delete = true)]
pub struct SoftDeleteUser {
#[model(primary_key)]
pub id: i32,
#[model(unique, not_null)]
pub email: String,
#[model(not_null)]
pub name: String,
#[model(deleted_at)] // Special field for soft deletes
pub deleted_at: Option<String>,
#[model(default = "now")]
pub created_at: String,
#[model(updated_at)]
pub updated_at: String,
}
async fn soft_delete_example() -> Result<()> {
// Find all users (includes soft-deleted ones)
let all_users = SoftDeleteUser::find_all_with_deleted().await?;
// Find only active users (excludes soft-deleted ones)
let active_users = SoftDeleteUser::find_all().await?;
// Soft delete a user
if let Some(mut user) = SoftDeleteUser::find_by_id(1).await? {
user.delete().await?; // This sets deleted_at instead of removing the record
println!("User soft-deleted");
}
// Restore a soft-deleted user
if let Some(mut user) = SoftDeleteUser::find_by_id_trashed(1).await? {
user.restore().await?; // This clears the deleted_at field
println!("User restored");
}
Ok(())
}
Connection Management
Configure database connections:
use oxidite::prelude::*;
async fn configure_database() -> Result<()> {
// Configure database connection
let db_config = DatabaseConfig {
url: std::env::var("DATABASE_URL").unwrap_or("sqlite::memory:".to_string()),
pool_size: 10,
timeout: std::time::Duration::from_secs(30),
};
// Initialize the database connection
init_database(db_config).await?;
Ok(())
}
pub struct DatabaseConfig {
pub url: String,
pub pool_size: usize,
pub timeout: std::time::Duration,
}
async fn init_database(_config: DatabaseConfig) -> Result<()> {
// Implementation would depend on the specific database connector
Ok(())
}
Error Handling
Handle database errors appropriately:
use oxidite::prelude::*;
async fn error_handling_example() -> Result<()> {
match User::find_by_id(999999).await {
Ok(Some(user)) => {
println!("Found user: {}", user.name);
}
Ok(None) => {
println!("User not found");
}
Err(Error::InternalServerError(msg)) => {
eprintln!("Database error: {}", msg);
return Err(Error::InternalServerError(msg));
}
Err(e) => {
eprintln!("Unexpected error: {:?}", e);
return Err(e);
}
}
Ok(())
}
Performance Considerations
- Use Indexes: Add database indexes for frequently queried fields
- Batch Operations: Use batch operations when possible
- Connection Pooling: Use connection pooling for better performance
- N+1 Queries: Be aware of N+1 query problems with relationships
- Caching: Consider caching frequently accessed data
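For the N+1 point: instead of one posts query per user, fetch the related rows for the whole page in one query and group them in memory. A std-only sketch, where the input `Vec<Post>` stands in for the result of a single `WHERE user_id IN (...)` query:

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Post {
    user_id: i64,
    title: String,
}

// Group the eagerly-fetched posts by owner so each user's posts are a
// map lookup rather than an extra query.
fn group_posts_by_user(posts: Vec<Post>) -> HashMap<i64, Vec<Post>> {
    let mut by_user: HashMap<i64, Vec<Post>> = HashMap::new();
    for post in posts {
        by_user.entry(post.user_id).or_default().push(post);
    }
    by_user
}
```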
Security Considerations
- SQL Injection: The ORM protects against SQL injection by using parameterized queries
- Input Validation: Always validate input before saving to the database
- Access Control: Implement proper access control for database operations
- Data Encryption: Consider encrypting sensitive data at rest
Summary
The Oxidite ORM provides a comprehensive solution for database operations:
- Define models with the Model derive macro
- Perform CRUD operations with type safety
- Define and work with relationships
- Handle migrations for schema management
- Add validation and hooks to models
- Use transactions for data consistency
- Support for soft deletes
- Proper error handling and security considerations
The ORM abstracts away the complexity of raw SQL while providing the flexibility to execute custom queries when needed.
ORM Deep Dive
This chapter covers model design, query ergonomics, and practical escape hatches.
Model Conventions
A typical model uses:
- id: i64 primary key
- optional created_at: i64, updated_at: i64
- optional deleted_at: Option<i64> for soft deletes
use oxidite_db::{Model, sqlx};
#[derive(Model, sqlx::FromRow)]
#[model(table = "users")]
struct User {
id: i64,
name: String,
email: String,
created_at: i64,
updated_at: i64,
deleted_at: Option<i64>,
}
Query API
Use ModelQuery for common cases:
- filter_eq, filter_like, filter_is_null, filter_is_not_null
- order_by + SortDirection
- paginate(Pagination)
- with_deleted() for soft-deleted records
For advanced DB-specific behavior, use raw SQL with bound parameters.
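The exact ModelQuery signatures aren't reproduced here, so the sketch below is a toy builder with hypothetical method names mirroring the list above. It only renders SQL text, which is enough to see how the calls compose; note the filter emits a bound-parameter placeholder rather than inlining a value.

```rust
// Toy builder mirroring the ModelQuery call shapes; not the real Oxidite API.
struct Query {
    table: String,
    filters: Vec<String>,
    order: Option<String>,
    limit_offset: Option<(u64, u64)>,
}

impl Query {
    fn table(name: &str) -> Self {
        Self { table: name.into(), filters: vec![], order: None, limit_offset: None }
    }
    fn filter_eq(mut self, col: &str) -> Self {
        self.filters.push(format!("{} = ?", col)); // bound parameter, not inlined
        self
    }
    fn order_by(mut self, col: &str, dir: &str) -> Self {
        self.order = Some(format!("{} {}", col, dir));
        self
    }
    fn paginate(mut self, limit: u64, offset: u64) -> Self {
        self.limit_offset = Some((limit, offset));
        self
    }
    fn to_sql(&self) -> String {
        let mut sql = format!("SELECT * FROM {}", self.table);
        if !self.filters.is_empty() {
            sql += &format!(" WHERE {}", self.filters.join(" AND "));
        }
        if let Some(o) = &self.order {
            sql += &format!(" ORDER BY {}", o);
        }
        if let Some((l, off)) = self.limit_offset {
            sql += &format!(" LIMIT {} OFFSET {}", l, off);
        }
        sql
    }
}
```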
Save Semantics
Model::save() delegates to is_persisted():
- true => update
- false => create
Derived models implement is_persisted() as id > 0 by default. Override when needed.
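A minimal sketch of that default, with a standalone struct standing in for a derived model:

```rust
// Stand-in for a derived model; only the persistence check matters here.
struct User {
    id: i64,
}

impl User {
    // Default rule described above: id > 0 means "already stored".
    fn is_persisted(&self) -> bool {
        self.id > 0
    }

    // save() would branch on this: UPDATE when persisted, INSERT when not.
    fn save_action(&self) -> &'static str {
        if self.is_persisted() { "update" } else { "create" }
    }
}
```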
Batch Operations
Use trait helpers for simple batches:
- insert_many
- update_many
For high-volume workloads, use explicit transaction + raw SQL/bulk SQL patterns.
Error Handling
- Result<T> for SQL-layer operations (sqlx::Error)
- OrmResult<T> for ORM-layer typed errors (OrmError)
Prefer OrmResult at user-facing API boundaries where diagnostics matter.
Authentication
Authentication in Oxidite provides multiple methods to verify user identity. This chapter covers various authentication mechanisms including JWT, sessions, API keys, and OAuth2.
Overview
Oxidite provides comprehensive authentication support including:
- JSON Web Tokens (JWT)
- Session-based authentication
- API key authentication
- OAuth2 integration
- Role-based access control (RBAC)
- Password hashing and verification
JWT Authentication
JSON Web Tokens provide stateless authentication:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
use chrono::{Duration, Utc};
#[derive(Debug, Serialize, Deserialize)]
struct Claims {
sub: String, // Subject (user ID)
exp: i64, // Expiration time
iat: i64, // Issued at time
role: String, // User role
}
async fn generate_jwt(user_id: &str, role: &str) -> Result<String> {
let expiration = Utc::now()
.checked_add_signed(Duration::hours(24))
.expect("valid timestamp")
.timestamp();
let claims = Claims {
sub: user_id.to_string(),
exp: expiration,
iat: Utc::now().timestamp(),
role: role.to_string(),
};
// In a real app, use a proper JWT library like jsonwebtoken
// This is a simplified example
let token = create_jwt_token(&claims)?;
Ok(token)
}
fn create_jwt_token(_claims: &Claims) -> Result<String> {
// Implementation would use a proper JWT library
Ok("fake.jwt.token".to_string())
}
async fn verify_jwt(token: &str) -> Result<Claims> {
// In a real app, verify the JWT token
// This is a simplified example
verify_jwt_token(token)
}
fn verify_jwt_token(_token: &str) -> Result<Claims> {
// Implementation would use a proper JWT library
Ok(Claims {
sub: "123".to_string(),
exp: chrono::Utc::now().timestamp() + 86400, // 24 hours
iat: chrono::Utc::now().timestamp(),
role: "user".to_string(),
})
}
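The placeholder functions above deliberately skip real signing. As a purely illustrative, std-only sketch of the header.payload.signature shape, the toy below uses DefaultHasher, which is NOT a cryptographic MAC and must never be used for real tokens; real JWTs also base64url-encode each part and should be produced with a crate like jsonwebtoken.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy "signature": deterministic hash of payload + secret. Insecure by design;
// shown only to make the three-part token structure concrete.
fn toy_sign(payload: &str, secret: &str) -> u64 {
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    secret.hash(&mut h);
    h.finish()
}

fn toy_encode(payload: &str, secret: &str) -> String {
    format!("hdr.{}.{}", payload, toy_sign(payload, secret))
}

// Returns the payload only if the signature checks out.
fn toy_verify(token: &str, secret: &str) -> Option<String> {
    let mut parts = token.splitn(3, '.');
    let (_hdr, payload, sig) = (parts.next()?, parts.next()?, parts.next()?);
    if sig.parse::<u64>().ok()? == toy_sign(payload, secret) {
        Some(payload.to_string())
    } else {
        None // signature mismatch: tampered token or wrong secret
    }
}
```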
// Login endpoint
#[derive(Deserialize)]
struct LoginRequest {
username: String,
password: String,
}
async fn login(Json(credentials): Json<LoginRequest>) -> Result<Response> {
// Verify credentials (simplified)
if verify_credentials(&credentials.username, &credentials.password).await {
let token = generate_jwt(&credentials.username, "user").await?;
Ok(Response::json(serde_json::json!({
"token": token,
"expires_in": 86400 // 24 hours in seconds
})))
} else {
Err(Error::Unauthorized("Invalid credentials".to_string()))
}
}
async fn verify_credentials(username: &str, password: &str) -> bool {
// In a real app, verify against your user database
username == "admin" && password == "password"
}
JWT Middleware
Create middleware to protect routes with JWT authentication:
use oxidite::prelude::*;
async fn jwt_auth_middleware(req: Request, next: Next) -> Result<Response> {
// Extract token from Authorization header
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
match auth_header {
Some(auth) if auth.starts_with("Bearer ") => {
let token = auth.trim_start_matches("Bearer ").trim();
match verify_jwt(token).await {
Ok(claims) => {
// Add user info to request extensions
let mut req = req;
req.extensions_mut().insert(AuthenticatedUser {
id: claims.sub,
role: claims.role,
});
next.run(req).await
}
Err(_) => Err(Error::Unauthorized("Invalid or expired token".to_string())),
}
}
_ => Err(Error::Unauthorized("Missing or invalid authorization header".to_string())),
}
}
#[derive(Clone)]
struct AuthenticatedUser {
id: String,
role: String,
}
// Protected route using authenticated user
async fn protected_route(user: AuthenticatedUser) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"message": "Access granted",
"user_id": user.id,
"role": user.role
})))
}
Session Authentication
Session-based authentication stores user state on the server:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
#[derive(Clone)]
struct SessionStore {
sessions: Arc<Mutex<HashMap<String, Session>>>,
}
#[derive(Clone)]
struct Session {
user_id: String,
role: String,
expires_at: u64,
}
impl SessionStore {
fn new() -> Self {
Self {
sessions: Arc::new(Mutex::new(HashMap::new())),
}
}
fn create_session(&self, user_id: String, role: String) -> String {
let session_id = generate_session_id();
let expires_at = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() + (24 * 3600); // 24 hours
let session = Session {
user_id,
role,
expires_at,
};
self.sessions.lock().unwrap().insert(session_id.clone(), session);
session_id
}
fn validate_session(&self, session_id: &str) -> Option<Session> {
let sessions = self.sessions.lock().unwrap();
if let Some(session) = sessions.get(session_id) {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
if now < session.expires_at {
Some(session.clone())
} else {
// Session expired, remove it
drop(sessions); // Release the lock
self.sessions.lock().unwrap().remove(session_id);
None
}
} else {
None
}
}
fn destroy_session(&self, session_id: &str) {
self.sessions.lock().unwrap().remove(session_id);
}
}
fn generate_session_id() -> String {
use uuid::Uuid;
Uuid::new_v4().to_string()
}
// Session authentication middleware
async fn session_auth_middleware(
req: Request,
next: Next,
State(session_store): State<Arc<SessionStore>>
) -> Result<Response> {
// Get session ID from cookies
let cookies = Cookies::from_request(&req).await?;
let session_id = cookies.get("session_id");
match session_id {
Some(id) => {
if let Some(session) = session_store.validate_session(id) {
let mut req = req;
req.extensions_mut().insert(AuthenticatedUser {
id: session.user_id,
role: session.role,
});
next.run(req).await
} else {
Err(Error::Unauthorized("Invalid or expired session".to_string()))
}
}
None => Err(Error::Unauthorized("No session found".to_string())),
}
}
// Login handler for session-based auth
async fn session_login(
Json(credentials): Json<LoginRequest>,
State(session_store): State<Arc<SessionStore>>
) -> Result<Response> {
if verify_credentials(&credentials.username, &credentials.password).await {
let session_id = session_store.create_session(
credentials.username,
"user".to_string()
);
// Create response with session cookie
let mut response = Response::json(serde_json::json!({
"message": "Login successful",
"session_id": session_id
}));
// Add session cookie
use http::header::{SET_COOKIE, HeaderValue};
let cookie_header = format!("session_id={}; HttpOnly; Secure; Max-Age={}; Path=/",
session_id, 24 * 3600); // 24 hours
response.headers_mut().insert(SET_COOKIE, HeaderValue::from_str(&cookie_header).unwrap());
Ok(response)
} else {
Err(Error::Unauthorized("Invalid credentials".to_string()))
}
}
API Key Authentication
API key authentication for service-to-service communication:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
#[derive(Clone)]
struct ApiKeyStore {
keys: Arc<Mutex<HashMap<String, ApiKey>>>,
}
#[derive(Clone)]
struct ApiKey {
user_id: String,
permissions: Vec<String>,
created_at: String,
}
impl ApiKeyStore {
fn new() -> Self {
let mut keys = HashMap::new();
// Add some example keys (in a real app, load from database)
keys.insert(
"sk_live_abc123".to_string(),
ApiKey {
user_id: "user123".to_string(),
permissions: vec!["read".to_string(), "write".to_string()],
created_at: chrono::Utc::now().to_rfc3339(),
}
);
Self {
keys: Arc::new(Mutex::new(keys)),
}
}
fn validate_key(&self, key: &str) -> Option<ApiKey> {
let keys = self.keys.lock().unwrap();
keys.get(key).cloned()
}
}
// API key authentication middleware
async fn api_key_auth_middleware(
req: Request,
next: Next,
State(api_keys): State<Arc<ApiKeyStore>>
) -> Result<Response> {
// Check for API key in header
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
if let Some(auth) = auth_header {
let api_key = if auth.starts_with("Bearer ") {
auth.trim_start_matches("Bearer ").trim()
} else {
auth
};
if let Some(key_info) = api_keys.validate_key(api_key) {
let mut req = req;
req.extensions_mut().insert(ApiKeyUser {
user_id: key_info.user_id,
permissions: key_info.permissions,
});
return next.run(req).await;
}
}
// Check for API key in query parameter as fallback
use serde::Deserialize;
#[derive(Deserialize)]
struct ApiKeyQuery {
api_key: Option<String>,
}
if let Ok(Query(query)) = Query::<ApiKeyQuery>::from_request(&req).await {
if let Some(api_key) = query.api_key {
if let Some(key_info) = api_keys.validate_key(&api_key) {
let mut req = req;
req.extensions_mut().insert(ApiKeyUser {
user_id: key_info.user_id,
permissions: key_info.permissions,
});
return next.run(req).await;
}
}
}
Err(Error::Unauthorized("Invalid API key".to_string()))
}
#[derive(Clone)]
struct ApiKeyUser {
user_id: String,
permissions: Vec<String>,
}
// Protected endpoint for API key users
async fn api_protected_route(user: ApiKeyUser) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"message": "API access granted",
"user_id": user.user_id,
"permissions": user.permissions
})))
}
Password Hashing
Secure password handling with hashing:
use oxidite::prelude::*;
pub struct PasswordHasher;
impl PasswordHasher {
pub fn hash(password: &str) -> Result<String> {
// In a real app, use a proper hashing library like argon2 or bcrypt
// This is a placeholder implementation
use sha2::{Sha256, Digest};
let salt = generate_salt();
let mut hasher = Sha256::new();
hasher.update(password.as_bytes());
hasher.update(&salt);
let hash = hasher.finalize();
let hash_hex = format!("{:x}", hash);
Ok(format!("sha256:{}:{}", base64::encode(&salt), hash_hex))
}
pub fn verify(password: &str, hashed: &str) -> Result<bool> {
if !hashed.starts_with("sha256:") {
return Err(Error::InternalServerError("Unsupported hash format".to_string()));
}
let parts: Vec<&str> = hashed.split(':').collect();
if parts.len() != 3 {
return Err(Error::InternalServerError("Invalid hash format".to_string()));
}
let salt = base64::decode(parts[1]).map_err(|_| Error::InternalServerError("Invalid salt".to_string()))?;
let mut hasher = sha2::Sha256::new();
hasher.update(password.as_bytes());
hasher.update(&salt);
let hash = hasher.finalize();
let hash_hex = format!("{:x}", hash);
Ok(hash_hex == parts[2])
}
}
fn generate_salt() -> Vec<u8> {
use rand::RngCore;
let mut salt = [0u8; 32];
rand::thread_rng().fill_bytes(&mut salt);
salt.to_vec()
}
// Example usage in user registration
#[derive(Deserialize)]
struct RegisterRequest {
username: String,
email: String,
password: String,
}
async fn register_user(Json(registration): Json<RegisterRequest>) -> Result<Response> {
// Hash the password
let password_hash = PasswordHasher::hash(&registration.password)?;
// Save user to database (simplified)
let user = UserRegistration {
username: registration.username,
email: registration.email,
password_hash,
created_at: chrono::Utc::now().to_rfc3339(),
};
// In a real app, save to database
save_user_to_db(user).await?;
Ok(Response::json(serde_json::json!({
"message": "User registered successfully"
})))
}
#[derive(Clone)]
struct UserRegistration {
username: String,
email: String,
password_hash: String,
created_at: String,
}
async fn save_user_to_db(_user: UserRegistration) -> Result<()> {
// Implementation would save to database
Ok(())
}
OAuth2 Integration
OAuth2 support for third-party authentication:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct OAuthCallback {
code: String,
state: Option<String>,
}
// OAuth2 redirect for Google (example)
async fn google_oauth_redirect(_req: Request) -> Result<Response> {
let client_id = std::env::var("GOOGLE_CLIENT_ID").unwrap();
let redirect_uri = "http://localhost:3000/auth/google/callback";
let scopes = "email profile";
let state = generate_state();
let auth_url = format!(
"https://accounts.google.com/o/oauth2/auth?client_id={}&redirect_uri={}&scope={}&response_type=code&state={}",
client_id, redirect_uri, scopes, state
);
// In a real app, redirect to the auth URL
Ok(Response::json(serde_json::json!({
"redirect_url": auth_url,
"state": state
})))
}
// OAuth2 callback handler
async fn google_oauth_callback(
Query(params): Query<OAuthCallback>,
State(session_store): State<Arc<SessionStore>>
) -> Result<Response> {
// Verify state parameter (security measure)
if let Some(expected_state) = params.state {
if !validate_state(&expected_state).await {
return Err(Error::BadRequest("Invalid state parameter".to_string()));
}
}
// Exchange code for token
let token_response = exchange_code_for_token(&params.code).await?;
// Get user info from Google
let user_info = get_google_user_info(&token_response.access_token).await?;
// Create session for the user
let session_id = session_store.create_session(
user_info.id.clone(),
"oauth_user".to_string()
);
// Return session info or redirect
Ok(Response::json(serde_json::json!({
"message": "Authentication successful",
"session_id": session_id,
"user": user_info
})))
}
struct TokenResponse {
access_token: String,
token_type: String,
expires_in: u32,
}
struct GoogleUserInfo {
id: String,
email: String,
name: String,
verified_email: bool,
}
async fn exchange_code_for_token(_code: &str) -> Result<TokenResponse> {
// In a real app, make HTTP request to token endpoint
Ok(TokenResponse {
access_token: "fake_access_token".to_string(),
token_type: "Bearer".to_string(),
expires_in: 3600,
})
}
async fn get_google_user_info(_access_token: &str) -> Result<GoogleUserInfo> {
// In a real app, make HTTP request to userinfo endpoint
Ok(GoogleUserInfo {
id: "google_user_123".to_string(),
email: "user@example.com".to_string(),
name: "Google User".to_string(),
verified_email: true,
})
}
fn generate_state() -> String {
use uuid::Uuid;
Uuid::new_v4().to_string()
}
async fn validate_state(_state: &str) -> bool {
// In a real app, verify against stored states
true
}
Role-Based Access Control (RBAC)
Implement role-based access control:
use oxidite::prelude::*;
use std::collections::HashSet;
#[derive(Clone)]
struct Permission {
resource: String,
action: String,
}
#[derive(Clone)]
struct Role {
name: String,
permissions: Vec<Permission>,
}
#[derive(Clone)]
struct UserRole {
user_id: String,
role: String,
}
#[derive(Clone)]
struct RbacStore {
roles: Vec<Role>,
user_roles: Vec<UserRole>,
}
impl RbacStore {
fn new() -> Self {
// Define roles and permissions
let admin_role = Role {
name: "admin".to_string(),
permissions: vec![
Permission { resource: "users".to_string(), action: "read".to_string() },
Permission { resource: "users".to_string(), action: "write".to_string() },
Permission { resource: "users".to_string(), action: "delete".to_string() },
Permission { resource: "posts".to_string(), action: "read".to_string() },
Permission { resource: "posts".to_string(), action: "write".to_string() },
],
};
let user_role = Role {
name: "user".to_string(),
permissions: vec![
Permission { resource: "users".to_string(), action: "read".to_string() },
Permission { resource: "posts".to_string(), action: "read".to_string() },
Permission { resource: "posts".to_string(), action: "write".to_string() },
],
};
Self {
roles: vec![admin_role, user_role],
user_roles: vec![
UserRole { user_id: "admin123".to_string(), role: "admin".to_string() },
UserRole { user_id: "user456".to_string(), role: "user".to_string() },
],
}
}
fn user_has_permission(&self, user_id: &str, resource: &str, action: &str) -> bool {
// Get user's roles
let user_roles: Vec<&str> = self.user_roles
.iter()
.filter(|ur| ur.user_id == user_id)
.map(|ur| ur.role.as_str())
.collect();
// Check if any role grants the required permission
for role_name in user_roles {
if let Some(role) = self.roles.iter().find(|r| r.name == role_name) {
if role.permissions.iter().any(|perm| {
perm.resource == resource && perm.action == action
}) {
return true;
}
}
}
false
}
}
// RBAC middleware
async fn rbac_middleware(
req: Request,
next: Next,
State(rbac): State<Arc<RbacStore>>
) -> Result<Response> {
// Get authenticated user from request extensions
if let Some(user) = req.extensions().get::<AuthenticatedUser>() {
// Extract resource and action from the request
let resource = extract_resource_from_path(req.uri().path());
let action = req.method().as_str().to_lowercase();
if rbac.user_has_permission(&user.id, &resource, &action) {
return next.run(req).await;
}
return Err(Error::Forbidden("Insufficient permissions".to_string()));
}
Err(Error::Unauthorized("User not authenticated".to_string()))
}
fn extract_resource_from_path(path: &str) -> String {
// Simplified resource extraction
// In a real app, you'd have more sophisticated routing
path.split('/').nth(1).unwrap_or("").to_string()
}
// Example route with RBAC protection
async fn admin_only_route(user: AuthenticatedUser) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"message": "Admin access granted",
"user_id": user.id
})))
}
Two-Factor Authentication (2FA)
Implement two-factor authentication:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use qrcode::QrCode;
use base32;
#[derive(Clone)]
struct TotpSecret {
secret: String,
user_id: String,
verified: bool,
}
#[derive(Clone)]
struct TotpStore {
secrets: std::sync::Arc<Mutex<HashMap<String, TotpSecret>>>,
}
impl TotpStore {
fn new() -> Self {
Self {
secrets: std::sync::Arc::new(Mutex::new(HashMap::new())),
}
}
fn generate_secret(&self, user_id: &str) -> String {
// Generate a random secret
let secret: Vec<u8> = (0..32).map(|_| rand::random::<u8>()).collect();
let secret_base32 = base32::encode(base32::Alphabet::RFC4648 { padding: false }, &secret);
// Store the secret
self.secrets.lock().unwrap().insert(
user_id.to_string(),
TotpSecret {
secret: secret_base32.clone(),
user_id: user_id.to_string(),
verified: false,
}
);
secret_base32
}
fn verify_token(&self, user_id: &str, token: &str) -> bool {
// In a real app, compute the expected TOTP code from the secret
// stored for `user_id` (e.g. with a dedicated TOTP crate) and
// compare it in constant time. This simplified check only
// validates the token's format.
let _secret = self.secrets.lock().unwrap().get(user_id).cloned();
token.len() == 6 && token.chars().all(|c| c.is_ascii_digit())
}
fn enable_2fa(&self, user_id: &str) {
let mut secrets = self.secrets.lock().unwrap();
if let Some(secret) = secrets.get_mut(user_id) {
secret.verified = true;
}
}
}
// Generate 2FA setup
async fn generate_2fa_setup(
user: AuthenticatedUser,
State(totp_store): State<Arc<TotpStore>>
) -> Result<Response> {
let secret = totp_store.generate_secret(&user.id);
let issuer = "Oxidite App";
let account = &user.id;
// Generate QR code URL
let otpauth_url = format!(
"otpauth://totp/{}:{}?secret={}&issuer={}",
issuer, account, secret, issuer
);
// Generate QR code
let qr_code = QrCode::new(otpauth_url.as_bytes()).unwrap();
// Render as terminal-friendly unicode blocks via the qrcode crate's
// unicode renderer (colors inverted so codes scan on dark terminals)
use qrcode::render::unicode;
let qr_string = qr_code
.render::<unicode::Dense1x2>()
.dark_color(unicode::Dense1x2::Light)
.light_color(unicode::Dense1x2::Dark)
.build();
Ok(Response::json(serde_json::json!({
"secret": secret,
"qr_code": qr_string,
"manual_entry": format!("{} {}", issuer, account)
})))
}
// Verify 2FA token
#[derive(Deserialize)]
struct Verify2faRequest {
token: String,
}
async fn verify_2fa(
user: AuthenticatedUser,
Json(payload): Json<Verify2faRequest>,
State(totp_store): State<Arc<TotpStore>>
) -> Result<Response> {
if totp_store.verify_token(&user.id, &payload.token) {
totp_store.enable_2fa(&user.id);
Ok(Response::json(serde_json::json!({
"message": "2FA enabled successfully"
})))
} else {
Err(Error::BadRequest("Invalid 2FA token".to_string()))
}
}
// 2FA middleware
async fn twofa_middleware(
req: Request,
next: Next,
State(totp_store): State<Arc<TotpStore>>
) -> Result<Response> {
// Check if user has 2FA enabled
if let Some(user) = req.extensions().get::<AuthenticatedUser>() {
let secrets = totp_store.secrets.lock().unwrap();
if let Some(secret) = secrets.get(&user.id) {
if !secret.verified {
// 2FA not verified yet
return Err(Error::Unauthorized("2FA verification required".to_string()));
}
}
}
next.run(req).await
}
Security Best Practices
1. Secure Token Storage
// Store tokens securely, never in plain text
const TOKEN_LENGTH: usize = 32; // 256 bits
fn generate_secure_token() -> String {
use rand::RngCore;
let mut bytes = [0u8; TOKEN_LENGTH];
rand::thread_rng().fill_bytes(&mut bytes);
hex::encode(bytes)
}
2. Rate Limiting for Auth Attempts
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
#[derive(Clone)]
struct RateLimiter {
attempts: Arc<Mutex<HashMap<String, Vec<Instant>>>>,
max_attempts: u32,
window: Duration,
}
impl RateLimiter {
fn new(max_attempts: u32, window_minutes: u64) -> Self {
Self {
attempts: Arc::new(Mutex::new(HashMap::new())),
max_attempts,
window: Duration::from_secs(window_minutes * 60),
}
}
fn is_allowed(&self, identifier: &str) -> bool {
let mut attempts = self.attempts.lock().unwrap();
let now = Instant::now();
let window_start = now - self.window;
let times = attempts.entry(identifier.to_string()).or_default();
// Drop attempts that have aged out of the sliding window
times.retain(|time| *time > window_start);
// Record this attempt only if it is still under the limit
if times.len() < self.max_attempts as usize {
times.push(now);
true
} else {
false
}
}
}
3. Session Regeneration
// Regenerate session ID after login to prevent session fixation
async fn regenerate_session(
old_session_id: &str,
new_user_info: AuthenticatedUser,
session_store: &SessionStore
) -> String {
session_store.destroy_session(old_session_id);
// Create new session with new ID
session_store.create_session(new_user_info.id, new_user_info.role)
}
Summary
Authentication in Oxidite provides multiple secure methods:
- JWT: Stateless authentication with tokens
- Sessions: Server-side state management
- API Keys: Service-to-service authentication
- Password Hashing: Secure credential storage
- OAuth2: Third-party authentication integration
- RBAC: Role-based access control
- 2FA: Two-factor authentication support
Key security practices include:
- Using strong password hashing
- Implementing rate limiting
- Securing token storage and transmission
- Validating all inputs
- Following authentication best practices
Choose the appropriate authentication method based on your application’s requirements and security needs.
Template Engine
Oxidite provides a powerful template engine for server-side rendering. The engine supports Jinja2-style syntax with features like variable interpolation, control structures, and template inheritance.
Basic Template Usage
Setting Up the Template Engine
First, you need to set up the template engine:
use oxidite::prelude::*;
use oxidite_template::{TemplateEngine, Context};
async fn setup_template_example(_req: Request) -> Result<Response> {
// Create a new template engine
let mut engine = TemplateEngine::new();
// Add a simple template
engine.add_template(
"hello",
"<h1>Hello {{ name }}!</h1><p>Welcome to {{ framework }}.</p>"
)?;
// Create context with data
let mut context = Context::new();
context.set("name", "Developer");
context.set("framework", "Oxidite");
// Render the template as an HTML response
let response = engine.render_response("hello", &context)?;
Ok(response)
}
Loading Templates from Files
You can load templates from a directory structure:
use std::path::PathBuf;
async fn file_templates_example(_req: Request) -> Result<Response> {
let mut engine = TemplateEngine::new();
// Load all templates from a directory (assuming you have template files)
let templates_dir = PathBuf::from("templates");
let count = engine.load_dir(&templates_dir)?;
println!("Loaded {} templates", count);
let mut context = Context::new();
context.set("title", "My Page");
context.set("content", "Page content here");
let response = engine.render_response("index.html", &context)?;
Ok(response)
}
Template Syntax
Variables
Variables in templates are wrapped in {{ }}:
<p>Hello {{ name }}!</p>
<p>Your email is {{ user.email }}.</p> <!-- Dotted notation -->
Control Structures
The template engine supports basic control structures:
<!-- Conditionals -->
{% if user.admin %}
<p>Welcome, administrator!</p>
{% else %}
<p>Welcome, {{ user.name }}!</p>
{% endif %}
<!-- Loops -->
<ul>
{% for item in items %}
<li>{{ item }}</li>
{% endfor %}
</ul>
Template Context
The Context struct is used to pass data to templates:
use oxidite_template::Context;
use serde_json::json;
// Create context in different ways
let mut context = Context::new();
// Set simple values
context.set("name", "Alice");
context.set("age", 30);
// Set complex objects
context.set("user", json!({
"name": "Bob",
"email": "bob@example.com",
"active": true
}));
// Set arrays
context.set("items", vec!["apple", "banana", "cherry"]);
// Create context from JSON
let json_data = json!({
"title": "My Blog",
"posts": [
{"title": "Post 1", "content": "Content 1"},
{"title": "Post 2", "content": "Content 2"}
]
});
let context = Context::from_json(json_data);
Rendering Templates
You can render templates in several ways:
Render to String
use oxidite_template::{TemplateEngine, Context};
let mut engine = TemplateEngine::new();
engine.add_template("greeting", "Hello {{ name }}!")?;
let mut context = Context::new();
context.set("name", "World");
let html = engine.render("greeting", &context)?;
assert_eq!(html, "Hello World!");
Render Directly as Response
use oxidite::prelude::*;
use oxidite_template::{TemplateEngine, Context};
async fn render_as_response(_req: Request) -> Result<Response> {
let mut engine = TemplateEngine::new();
engine.add_template("page", "<h1>{{ title }}</h1><div>{{ content }}</div>")?;
let mut context = Context::new();
context.set("title", "My Page");
context.set("content", "Page content");
// Render directly as HTML response
let response = engine.render_response("page", &context)?;
Ok(response)
}
Template Inheritance
Template inheritance allows you to create base templates that other templates can extend:
Base template (base.html):
<!DOCTYPE html>
<html>
<head>
<title>{% block title %}Default Title{% endblock %}</title>
</head>
<body>
<header>
{% block header %}
<h1>Default Header</h1>
{% endblock %}
</header>
<main>
{% block content %}{% endblock %}
</main>
<footer>
{% block footer %}
<p>© 2025</p>
{% endblock %}
</footer>
</body>
</html>
Child template (page.html):
{% extends "base.html" %}
{% block title %}My Page Title{% endblock %}
{% block content %}
<h2>Page Content</h2>
<p>This is the main content of the page.</p>
{% endblock %}
Filters
Filters allow you to transform variables:
<!-- Uppercase filter -->
<p>{{ name | upper }}</p>
<!-- Length filter -->
<p>Items count: {{ items | length }}</p>
<!-- Default value if variable is not set -->
<p>Name: {{ user.name | default("Anonymous") }}</p>
Static File Serving
The template engine also includes utilities for serving static files:
use oxidite::prelude::*;
use oxidite_template::serve_static;
// In your router, register the static file handler
// Note: This should be registered last to avoid blocking other routes
// router.get("/*", serve_static); // Serves files from "public" directory
Complete Example
Here’s a complete example showing template usage in a web application:
use oxidite::prelude::*;
use oxidite_template::{TemplateEngine, Context};
use serde_json::json;
// The state is cloned into each route closure below, so AppState must
// be Clone (wrap the engine in an Arc if TemplateEngine is not Clone)
#[derive(Clone)]
struct AppState {
template_engine: TemplateEngine,
}
async fn home_page(state: State<AppState>) -> Result<Response> {
let mut context = Context::new();
context.set("title", "Home Page");
context.set("welcome_message", "Welcome to our application!");
context.set("features", vec![
"Fast performance",
"Easy to use",
"Type-safe",
"Full-featured"
]);
let response = state.template_engine
.render_response("home", &context)?;
Ok(response)
}
async fn blog_page(state: State<AppState>) -> Result<Response> {
let posts = vec![
json!({"title": "First Post", "date": "2025-01-01", "excerpt": "This is the first post"}),
json!({"title": "Second Post", "date": "2025-01-02", "excerpt": "This is the second post"}),
];
let mut context = Context::new();
context.set("title", "Blog");
context.set("posts", posts);
let response = state.template_engine
.render_response("blog", &context)?;
Ok(response)
}
#[tokio::main]
async fn main() -> Result<()> {
// Set up template engine
let mut template_engine = TemplateEngine::new();
// Add some templates
template_engine.add_template("home", r#"
<!DOCTYPE html>
<html>
<head><title>{{ title }}</title></head>
<body>
<h1>{{ welcome_message }}</h1>
<ul>
{% for feature in features %}
<li>{{ feature }}</li>
{% endfor %}
</ul>
</body>
</html>
"#)?;
template_engine.add_template("blog", r#"
<!DOCTYPE html>
<html>
<head><title>{{ title }}</title></head>
<body>
<h1>{{ title }}</h1>
{% for post in posts %}
<article>
<h2>{{ post.title }}</h2>
<small>{{ post.date }}</small>
<p>{{ post.excerpt }}</p>
</article>
{% endfor %}
</body>
</html>
"#)?;
let app_state = AppState { template_engine };
let mut router = Router::new();
router.get("/", {
let state = app_state.clone();
// clone per invocation so the closure can be called more than once
move |_| home_page(State(state.clone()))
});
router.get("/blog", {
let state = app_state.clone();
move |_| blog_page(State(state.clone()))
});
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Best Practices
- Organize Templates: Keep templates in a dedicated directory (usually templates/)
- Use Base Templates: Create base templates with common layout elements
- Context Management: Use structured context data rather than individual variables
- Error Handling: Always handle template rendering errors appropriately
- Caching: Consider implementing template caching for production applications
- Security: The template engine automatically escapes HTML to prevent XSS
Security Considerations
The Oxidite template engine includes built-in security features:
- Automatic HTML escaping to prevent XSS
- Context isolation between different template renders
- Input validation for template variables
Remember to always validate and sanitize user input before passing it to templates, especially when dealing with dynamic content.
Features
This chapter consolidates all the features of the Oxidite framework into a single comprehensive overview.
Core Features
1. High-Performance Web Server
- Built on top of Hyper and Tokio for async/await support
- Supports HTTP/1.1, HTTP/2, and HTTP/3 protocols
- Zero-copy transfers for optimal performance
- Concurrent request handling with async runtime
use oxidite::prelude::*;
async fn hello_world(_req: Request) -> Result<Response> {
Ok(Response::text("Hello, World!"))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
router.get("/", hello_world);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
2. Type-Safe Request Handling
- Strongly typed request extractors
- Compile-time validation of route parameters
- Automatic serialization/deserialization with Serde
- Error handling with custom error types
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct QueryParams {
page: Option<u32>,
limit: Option<u32>,
}
async fn api_handler(
Path(user_id): Path<u32>,
Query(params): Query<QueryParams>,
Json(payload): Json<serde_json::Value>
) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"user_id": user_id,
"query_params": params,
"payload": payload
})))
}
3. Comprehensive Response System
- Multiple response types (JSON, HTML, text, etc.)
- Consistent API with the Response::method() pattern
- Template engine integration
- Proper HTTP status codes
use oxidite::prelude::*;
async fn various_responses(_req: Request) -> Result<Response> {
// JSON response
let json_resp = Response::json(serde_json::json!({ "type": "json" }));
// HTML response
let html_resp = Response::html("<h1>Hello HTML</h1>");
// Text response
let text_resp = Response::text("Plain text");
// Empty responses
let ok_resp = Response::ok();
let no_content_resp = Response::no_content();
// Return one of them
Ok(json_resp)
}
Advanced Features
4. Database ORM
- Model definitions with derive macros
- Type-safe database operations
- Relationship management
- Migrations and schema management
- Validation and hooks
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Model, Deserialize)]
#[model(table = "users")]
pub struct User {
#[model(primary_key)]
pub id: i32,
#[model(unique, not_null)]
pub email: String,
#[model(not_null)]
pub name: String,
#[model(default = "now")]
pub created_at: String,
}
fn now() -> String {
chrono::Utc::now().to_rfc3339()
}
async fn user_operations() -> Result<()> {
// Create
let user = User {
id: 0,
email: "john@example.com".to_string(),
name: "John Doe".to_string(),
created_at: now(),
};
let saved_user = user.save().await?;
// Read
let users = User::find_all().await?;
// Update
let mut user = saved_user;
user.name = "John Updated".to_string();
user.save().await?;
// Delete
user.delete().await?;
Ok(())
}
5. Authentication & Authorization
- JWT token support
- Session-based authentication
- API key authentication
- OAuth2 integration
- Role-based access control (RBAC)
- Two-factor authentication (2FA)
use oxidite::prelude::*;
// JWT authentication middleware
async fn jwt_auth_middleware(req: Request, next: Next) -> Result<Response> {
// Extract and validate JWT token
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
match auth_header {
Some(auth) if auth.starts_with("Bearer ") => {
let token = auth.trim_start_matches("Bearer ").trim();
if verify_jwt(token).await.is_ok() {
next.run(req).await
} else {
Err(Error::Unauthorized("Invalid token".to_string()))
}
}
_ => Err(Error::Unauthorized("Missing token".to_string())),
}
}
async fn verify_jwt(_token: &str) -> Result<()> {
// Implementation would verify the JWT
Ok(())
}
6. Middleware System
- Global and route-specific middleware
- Request/response modification
- Cross-cutting concerns
- Built-in middleware for common tasks
use oxidite::prelude::*;
async fn logging_middleware(req: Request, next: Next) -> Result<Response> {
println!("Request: {} {}", req.method(), req.uri());
let response = next.run(req).await?;
println!("Response: {}", response.status());
Ok(response)
}
async fn cors_middleware(req: Request, next: Next) -> Result<Response> {
let mut response = if req.method() == http::Method::OPTIONS {
Response::ok()
} else {
next.run(req).await?
};
// Add CORS headers
use http::header::{HeaderName, HeaderValue};
response.headers_mut().insert(
HeaderName::from_static("access-control-allow-origin"),
HeaderValue::from_static("*")
);
Ok(response)
}
7. Template Engine
- Server-side template rendering
- Template inheritance and composition
- Context variable binding
- Direct integration with Response system
use oxidite::prelude::*;
use oxidite_template::{TemplateEngine, Context};
async fn template_example(_req: Request) -> Result<Response> {
let mut template_engine = TemplateEngine::new();
// Add a template
template_engine.add_template("welcome", r#"
<html>
<head><title>{{ title }}</title></head>
<body>
<h1>{{ greeting }}</h1>
<p>Welcome, {{ name }}!</p>
<ul>
{% for item in items %}
<li>{{ item }}</li>
{% endfor %}
</ul>
</body>
</html>
"#)?;
// Create context
let mut context = Context::new();
context.set("title", "Welcome Page");
context.set("greeting", "Hello!");
context.set("name", "User");
context.set("items", vec!["Feature 1", "Feature 2", "Feature 3"]);
// Render as response
let response = template_engine.render_response("welcome", &context)
.map_err(|e| Error::InternalServerError(e.to_string()))?;
Ok(response)
}
8. Background Jobs & Queues
- Asynchronous job processing
- Multiple backend support (Redis, PostgreSQL, memory)
- Job scheduling and retries
- Worker management
use oxidite_queue::{Job, Queue, Worker};
// Define a job
#[derive(serde::Serialize, serde::Deserialize)]
struct EmailJob {
recipient: String,
subject: String,
body: String,
}
#[async_trait::async_trait]
impl Job for EmailJob {
type Output = Result<(), String>;
async fn execute(self) -> Self::Output {
// Send email logic here
println!("Sending email to {} with subject: {}",
self.recipient, self.subject);
Ok(())
}
}
async fn queue_example() -> Result<()> {
// Create a queue
let queue = Queue::memory();
// Enqueue a job
let email_job = EmailJob {
recipient: "user@example.com".to_string(),
subject: "Welcome!".to_string(),
body: "Thank you for joining.".to_string(),
};
queue.enqueue(email_job).await?;
// Start a worker
let worker = Worker::new(queue.clone());
worker.start().await?;
Ok(())
}
9. Real-time Features
- WebSocket support
- Server-Sent Events (SSE)
- Pub/Sub messaging
- Live updates and notifications
use oxidite::prelude::*;
use oxidite_realtime::websocket::{WebSocket, Message};
async fn websocket_handler(ws: WebSocket) -> Result<()> {
ws.on_message(|msg| async move {
match msg {
Message::Text(text) => {
println!("Received: {}", text);
// Echo back
Ok(Message::Text(format!("Echo: {}", text)))
}
Message::Binary(data) => {
println!("Received binary: {} bytes", data.len());
Ok(Message::Binary(data))
}
}
}).await?;
Ok(())
}
async fn sse_example(_req: Request) -> Result<Response> {
use oxidite_realtime::sse::EventStream;
let mut stream = EventStream::new();
stream.send("Connected", Some("connection"), None).await?;
// Send periodic updates from a background task; a clone keeps the
// original stream available for building the response below
// (assumes EventStream is cheaply cloneable around a shared channel)
let mut sender = stream.clone();
tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
sender.send("Update", Some("data"), None).await.ok();
}
});
Ok(stream.response())
}
10. File Upload & Storage
- Multipart form handling
- File validation and sanitization
- Multiple storage backends (local, S3, etc.)
- Streaming uploads for large files
use oxidite::prelude::*;
async fn upload_handler(_req: Request) -> Result<Response> {
// In a real implementation, handle multipart form data
// and save files to configured storage backend
Ok(Response::json(serde_json::json!({
"status": "uploaded",
"files": []
})))
}
11. Security Features
- Rate limiting
- CSRF protection
- XSS prevention
- SQL injection prevention
- Input validation
- Secure headers
use oxidite::prelude::*;
async fn security_middleware(req: Request, next: Next) -> Result<Response> {
// Rate limiting
if !is_request_allowed(&req).await {
return Err(Error::RateLimited);
}
// Add security headers
let mut response = next.run(req).await?;
use http::header::{HeaderName, HeaderValue};
response.headers_mut().insert(
HeaderName::from_static("x-content-type-options"),
HeaderValue::from_static("nosniff")
);
response.headers_mut().insert(
HeaderName::from_static("x-frame-options"),
HeaderValue::from_static("DENY")
);
Ok(response)
}
async fn is_request_allowed(_req: &Request) -> bool {
// Implementation would check rate limits
true
}
Enterprise Features
12. Configuration Management
- Environment-based configuration
- Multiple configuration sources
- Type-safe configuration loading
- Hot reloading support
use oxidite_config::Config;
#[derive(serde::Deserialize)]
struct AppConfig {
database_url: String,
server_port: u16,
jwt_secret: String,
#[serde(default)]
debug: bool,
}
async fn load_config() -> Result<AppConfig> {
let config = Config::builder()
.add_source(ConfigSource::Env)
.add_source(ConfigSource::File("config.json"))
.build()
.await?;
let app_config: AppConfig = config.try_deserialize()?;
Ok(app_config)
}
enum ConfigSource {
Env,
File(String),
}
13. CLI Tools
- Project scaffolding
- Code generation
- Database migrations
- Development server with hot reload
# Create a new project
oxidite new my-app
# Generate a model
oxidite generate model User email:string name:string
# Run migrations
oxidite migrate
# Start development server
oxidite dev
14. Testing Utilities
- Built-in test utilities
- Mock request/response objects
- Test server for integration tests
- Fixture management
use oxidite::prelude::*;
use oxidite_testing::{TestServer, RequestBuilder};
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_hello_world() {
let server = TestServer::new(|router| {
router.get("/", hello_world);
}).await;
let response = server.get("/").send().await;
assert_eq!(response.status(), 200);
let body = response.text().await;
assert_eq!(body, "Hello, World!");
}
}
15. OpenAPI Integration
- Automatic API documentation generation
- Schema inference from types
- Interactive API explorer
- Validation against OpenAPI spec
use oxidite::prelude::*;
use oxidite_openapi::OpenApi;
// Route metadata is attached with an attribute macro
// (a derive cannot be applied to a function)
#[openapi(path = "/users/{id}", method = "GET")]
async fn get_user(Path(id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"id": id,
"name": format!("User {}", id)
})))
}
async fn setup_openapi_docs() -> Result<()> {
let mut openapi = OpenApi::new();
openapi.add_route(get_user).await?;
// Serve documentation at /docs
// Implementation would serve the OpenAPI JSON and UI
Ok(())
}
16. Plugin System
- Extensible architecture
- Hooks and lifecycle events
- Third-party integrations
- Custom middleware and handlers
use oxidite::prelude::*;
trait Plugin {
fn name(&self) -> &str;
fn initialize(&self, _router: &mut Router) -> Result<()>;
fn on_request(&self, _req: &mut Request) -> Result<()>;
fn on_response(&self, _resp: &mut Response) -> Result<()>;
}
struct LoggingPlugin;
impl Plugin for LoggingPlugin {
fn name(&self) -> &str { "logging" }
fn initialize(&self, _router: &mut Router) -> Result<()> {
println!("Logging plugin initialized");
Ok(())
}
fn on_request(&self, req: &mut Request) -> Result<()> {
println!("Processing request: {} {}", req.method(), req.uri());
Ok(())
}
fn on_response(&self, resp: &mut Response) -> Result<()> {
println!("Sending response: {}", resp.status());
Ok(())
}
}
Summary
Oxidite is a comprehensive web framework that combines:
- Performance: Built on async/await with Hyper and Tokio
- Safety: Type-safe request handling with compile-time validation
- Flexibility: Extensible architecture with middleware and plugins
- Security: Built-in security features and best practices
- Productivity: Rich ecosystem with ORM, authentication, etc.
- Scalability: Designed for high-concurrency applications
The framework provides everything needed to build modern web applications, from basic routing to enterprise-level features like authentication, real-time communication, and background jobs.
Background Jobs
Background jobs allow you to process tasks asynchronously outside of the main request-response cycle. This chapter covers how to create, queue, and process background jobs in Oxidite.
Overview
Background jobs are essential for:
- Processing long-running tasks
- Sending emails
- Processing files
- Integrating with external services
- Periodic maintenance tasks
Job Definition
Define jobs by implementing the Job trait:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize)]
pub struct SendEmailJob {
pub recipient: String,
pub subject: String,
pub body: String,
}
#[async_trait::async_trait]
impl Job for SendEmailJob {
type Output = Result<(), String>;
async fn execute(self) -> Self::Output {
// Simulate sending an email
println!("Sending email to: {}", self.recipient);
println!("Subject: {}", self.subject);
println!("Body: {}", self.body);
// In a real app, this would connect to an email service
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
Ok(())
}
}
// Another example: Image processing job
#[derive(Serialize, Deserialize)]
pub struct ProcessImageJob {
pub image_path: String,
pub width: u32,
pub height: u32,
}
#[async_trait::async_trait]
impl Job for ProcessImageJob {
type Output = Result<String, String>;
async fn execute(self) -> Self::Output {
println!("Processing image: {}", self.image_path);
println!("Resizing to {}x{}", self.width, self.height);
// Simulate image processing
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
Ok(format!("processed_{}", self.image_path))
}
}
Queue Configuration
Configure queues for job processing:
use oxidite::prelude::*;
use oxidite_queue::{Queue, QueueBackend, RedisBackend};
async fn configure_queues() -> Result<()> {
// Configure Redis backend
let redis_backend = RedisBackend::new("redis://127.0.0.1:6379").await?;
// Create queues
let email_queue = Queue::new(redis_backend.clone());
let image_queue = Queue::new(redis_backend.clone());
let default_queue = Queue::new(redis_backend);
// Store queues in application state
// This would typically be done during app initialization
Ok(())
}
Enqueuing Jobs
Add jobs to the queue for processing:
use oxidite::prelude::*;
async fn enqueue_examples() -> Result<()> {
// Get the queue (in a real app, this would come from state)
let queue = get_queue("emails").await?;
// Create and enqueue an email job
let email_job = SendEmailJob {
recipient: "user@example.com".to_string(),
subject: "Welcome!".to_string(),
body: "Thank you for joining our platform.".to_string(),
};
// Enqueue immediately
let job_id = queue.enqueue(email_job).await?;
println!("Enqueued email job with ID: {}", job_id);
// Enqueue with delay (for scheduled tasks)
let delayed_job = SendEmailJob {
recipient: "user@example.com".to_string(),
subject: "Reminder".to_string(),
body: "This is a reminder about your account.".to_string(),
};
let delayed_job_id = queue.enqueue_delayed(delayed_job, std::time::Duration::from_secs(3600)).await?;
println!("Enqueued delayed job with ID: {}", delayed_job_id);
// Batch enqueue multiple jobs
let jobs = vec![
SendEmailJob {
recipient: "user1@example.com".to_string(),
subject: "Newsletter".to_string(),
body: "Here's your weekly newsletter.".to_string(),
},
SendEmailJob {
recipient: "user2@example.com".to_string(),
subject: "Newsletter".to_string(),
body: "Here's your weekly newsletter.".to_string(),
},
];
let batch_ids = queue.enqueue_batch(jobs).await?;
println!("Enqueued {} jobs in batch", batch_ids.len());
Ok(())
}
async fn get_queue(_name: &str) -> Result<Queue> {
// In a real app, this would return the configured queue
Ok(Queue::memory())
}
// Illustrative stubs so this snippet stands alone; the real
// oxidite_queue::Queue is constructed from a backend instead
pub struct Queue {
name: String,
}
impl Queue {
pub fn new(name: &str) -> Self {
Self { name: name.to_string() }
}
pub fn memory() -> Self {
Self::new("memory")
}
pub async fn enqueue<T: Job>(&self, _job: T) -> Result<String> {
Ok("job_id".to_string())
}
pub async fn enqueue_delayed<T: Job>(&self, _job: T, _delay: std::time::Duration) -> Result<String> {
Ok("delayed_job_id".to_string())
}
pub async fn enqueue_batch<T: Job>(&self, _jobs: Vec<T>) -> Result<Vec<String>> {
Ok(vec!["job1".to_string(), "job2".to_string()])
}
}
#[async_trait::async_trait]
pub trait Job: Send + Sync + serde::Serialize + serde::de::DeserializeOwned {
type Output;
async fn execute(self) -> Self::Output;
}
Worker Configuration
Set up workers to process jobs:
use oxidite::prelude::*;
use oxidite_queue::{Worker, Queue};
async fn start_workers() -> Result<()> {
let queue = get_queue("emails").await?;
// Create a worker
let mut worker = Worker::new(queue);
// Configure worker settings
worker
.set_concurrency(5) // Process up to 5 jobs concurrently
.set_poll_interval(std::time::Duration::from_millis(100)) // Poll every 100ms
.set_max_retries(3) // Retry failed jobs up to 3 times
.set_timeout(std::time::Duration::from_secs(30)); // Timeout after 30 seconds
// Add error handling
worker.on_error(|job_id, error| {
eprintln!("Job {} failed: {}", job_id, error);
// In a real app, log to monitoring system
});
// Start processing jobs
worker.start().await?;
Ok(())
}
// Graceful shutdown example
async fn graceful_shutdown_worker() -> Result<()> {
let queue = get_queue("emails").await?;
let mut worker = Worker::new(queue);
worker.set_concurrency(3);
// Handle shutdown signal
let shutdown_signal = tokio::signal::ctrl_c();
tokio::select! {
result = worker.start() => {
result?;
}
_ = shutdown_signal => {
println!("Shutdown signal received, stopping worker...");
worker.stop().await?;
println!("Worker stopped gracefully");
}
}
Ok(())
}
Job Monitoring
Monitor job queues and their status:
use oxidite::prelude::*;
async fn monitor_jobs() -> Result<()> {
let queue = get_queue("emails").await?;
// Get queue statistics
let stats = queue.stats().await?;
println!("Queue Stats:");
println!(" Pending: {}", stats.pending);
println!(" Running: {}", stats.running);
println!(" Completed: {}", stats.completed);
println!(" Failed: {}", stats.failed);
// Get specific job status
let job_status = queue.get_job_status("some-job-id").await?;
println!("Job Status: {:?}", job_status);
// List recent jobs
let recent_jobs = queue.list_recent_jobs(10).await?;
for job in recent_jobs {
println!("Recent Job: {} - {}", job.id, job.status);
}
Ok(())
}
pub struct QueueStats {
pub pending: u64,
pub running: u64,
pub completed: u64,
pub failed: u64,
}
impl Queue {
pub async fn stats(&self) -> Result<QueueStats> {
Ok(QueueStats {
pending: 5,
running: 2,
completed: 50,
failed: 1,
})
}
pub async fn get_job_status(&self, _job_id: &str) -> Result<JobStatus> {
Ok(JobStatus::Completed)
}
pub async fn list_recent_jobs(&self, _limit: usize) -> Result<Vec<ListedJob>> {
Ok(vec![
ListedJob { id: "job1".to_string(), status: JobStatus::Completed },
ListedJob { id: "job2".to_string(), status: JobStatus::Pending },
])
}
}
pub enum JobStatus {
Pending,
Running,
Completed,
Failed,
Cancelled,
}
pub struct ListedJob {
pub id: String,
pub status: JobStatus,
}
Retry Logic and Error Handling
Implement robust error handling and retry mechanisms:
use oxidite::prelude::*;
#[derive(Clone, Serialize, Deserialize)] // Clone lets the retry loop re-run the job
pub struct RobustJob {
pub attempt_number: u32,
pub data: String,
}
#[async_trait::async_trait]
impl Job for RobustJob {
type Output = Result<(), JobError>;
async fn execute(self) -> Self::Output {
// Simulate a job that might fail occasionally
if self.attempt_number < 2 && rand::random::<bool>() {
return Err(JobError::TemporaryFailure(
"Random failure for demonstration".to_string()
));
}
// Job succeeded
println!("Job executed successfully on attempt {}", self.attempt_number);
Ok(())
}
}
#[derive(Debug, Serialize, Deserialize)]
pub enum JobError {
TemporaryFailure(String),
PermanentFailure(String),
ValidationError(String),
}
impl std::fmt::Display for JobError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
JobError::TemporaryFailure(msg) => write!(f, "Temporary failure: {}", msg),
JobError::PermanentFailure(msg) => write!(f, "Permanent failure: {}", msg),
JobError::ValidationError(msg) => write!(f, "Validation error: {}", msg),
}
}
}
impl std::error::Error for JobError {}
// Retry strategy
pub struct RetryStrategy {
pub max_attempts: u32,
pub base_delay: std::time::Duration,
pub backoff_multiplier: f64,
}
impl RetryStrategy {
pub fn calculate_delay(&self, attempt: u32) -> std::time::Duration {
let multiplier = self.backoff_multiplier.powf(attempt as f64 - 1.0);
let delay_ms = (self.base_delay.as_millis() as f64 * multiplier) as u64;
std::time::Duration::from_millis(delay_ms.min(300_000)) // Cap at 5 minutes
}
}
// Example usage with retry strategy
async fn execute_with_retry(mut job: RobustJob, strategy: &RetryStrategy) -> Result<()> {
    let mut attempt = 1;
    loop {
        // Keep the job's own attempt counter in sync with the retry loop
        job.attempt_number = attempt;
        match job.clone().execute().await {
Ok(_) => return Ok(()),
Err(JobError::PermanentFailure(_)) => {
eprintln!("Permanent failure, not retrying");
return Err(Error::InternalServerError("Permanent job failure".to_string()));
}
            // ValidationError is treated as retryable here for demonstration;
            // in practice validation errors are usually permanent
            Err(JobError::TemporaryFailure(_)) | Err(JobError::ValidationError(_)) => {
if attempt >= strategy.max_attempts {
eprintln!("Max attempts reached, failing permanently");
return Err(Error::InternalServerError("Job failed after max retries".to_string()));
}
let delay = strategy.calculate_delay(attempt);
println!("Attempt {} failed, retrying in {:?}", attempt, delay);
tokio::time::sleep(delay).await;
attempt += 1;
}
}
}
}
Scheduled Jobs
Schedule jobs to run at specific times:
use oxidite::prelude::*;
#[derive(Serialize, Deserialize)]
pub struct ScheduledReportJob {
pub report_type: String,
pub recipient: String,
pub schedule_time: String, // ISO 8601 formatted
}
#[async_trait::async_trait]
impl Job for ScheduledReportJob {
type Output = Result<(), String>;
async fn execute(self) -> Self::Output {
println!("Generating {} report for {}", self.report_type, self.recipient);
// Generate and send report
// In a real app, this would connect to reporting systems
Ok(())
}
}
// Schedule recurring jobs
pub struct Scheduler {
queue: Queue,
}
impl Scheduler {
pub fn new(queue: Queue) -> Self {
Self { queue }
}
pub async fn schedule_daily_report(&self, recipient: String) -> Result<()> {
// Calculate next occurrence (tomorrow at 9 AM)
let tomorrow = chrono::Local::now()
.date_naive()
.succ_opt()
.unwrap()
.and_hms_opt(9, 0, 0)
.unwrap();
let job = ScheduledReportJob {
report_type: "daily_summary".to_string(),
recipient,
schedule_time: tomorrow.and_utc().to_rfc3339(),
};
        // Enqueue to run at the computed time (tomorrow at 9 AM),
        // rather than a flat 24-hour delay that would drift from schedule_time
        let delay_seconds = (tomorrow.and_utc() - chrono::Utc::now())
            .num_seconds()
            .max(0) as u64;
        self.queue.enqueue_delayed(job, std::time::Duration::from_secs(delay_seconds)).await?;
Ok(())
}
    pub async fn schedule_weekly_report(&self, recipient: String) -> Result<()> {
        // Schedule for next Monday at 10 AM
        // (if today is Monday, schedule for the following Monday)
        let now = chrono::Local::now();
        let mut days_until_monday = (7 - now.weekday().num_days_from_monday()) % 7;
        if days_until_monday == 0 {
            days_until_monday = 7;
        }
        let next_monday = (now.date_naive() + chrono::Duration::days(days_until_monday as i64))
            .and_hms_opt(10, 0, 0)
            .unwrap();
        let job = ScheduledReportJob {
            report_type: "weekly_summary".to_string(),
            recipient,
            schedule_time: next_monday.and_utc().to_rfc3339(),
        };
        let delay_seconds = (next_monday.and_utc() - chrono::Utc::now())
            .num_seconds()
            .max(0) as u64;
        self.queue.enqueue_delayed(job, std::time::Duration::from_secs(delay_seconds)).await?;
        Ok(())
    }
}
Job Dependencies
Chain jobs that depend on each other:
use oxidite::prelude::*;
#[derive(Serialize, Deserialize)]
pub struct ProcessUserDataJob {
pub user_id: String,
}
#[async_trait::async_trait]
impl Job for ProcessUserDataJob {
type Output = Result<String, String>; // Returns processed data ID
async fn execute(self) -> Self::Output {
println!("Processing user data for: {}", self.user_id);
tokio::time::sleep(std::time::Duration::from_millis(200)).await;
Ok(format!("processed_data_{}", self.user_id))
}
}
#[derive(Serialize, Deserialize)]
pub struct SendNotificationJob {
pub user_id: String,
pub processed_data_id: String,
}
#[async_trait::async_trait]
impl Job for SendNotificationJob {
type Output = Result<(), String>;
async fn execute(self) -> Self::Output {
println!("Sending notification to {} about {}",
self.user_id, self.processed_data_id);
Ok(())
}
}
// Chain jobs with dependencies
pub struct JobChainer {
queue: Queue,
}
impl JobChainer {
pub fn new(queue: Queue) -> Self {
Self { queue }
}
    pub async fn process_user_with_notification(&self, user_id: String) -> Result<()> {
        // First job processes user data and returns an ID
        let process_job = ProcessUserDataJob {
            user_id: user_id.clone(),
        };
        let _process_job_id = self.queue.enqueue(process_job).await?;
        // Second job waits for the first to complete
        // In a real implementation, this would use job callbacks or a workflow system
        tokio::spawn({
            let queue = self.queue.clone();
            async move {
                // Poll for job completion (simplified)
                tokio::time::sleep(std::time::Duration::from_secs(2)).await;
                let notification_job = SendNotificationJob {
                    // Build the derived ID before moving `user_id` into the struct
                    processed_data_id: format!("processed_data_{}", user_id),
                    user_id,
                };
                queue.enqueue(notification_job).await.ok();
            }
        });
        Ok(())
    }
}
Performance Considerations
Optimize job processing for performance:
use oxidite::prelude::*;
pub struct JobProcessorConfig {
pub concurrency: usize,
pub batch_size: usize,
pub memory_limit_mb: usize,
pub timeout_seconds: u64,
}
impl JobProcessorConfig {
pub fn production_defaults() -> Self {
Self {
concurrency: num_cpus::get(), // Use all CPU cores
batch_size: 10, // Process jobs in batches
memory_limit_mb: 512, // Limit memory usage
timeout_seconds: 300, // 5 minute timeout
}
}
pub fn development_defaults() -> Self {
Self {
concurrency: 2,
batch_size: 5,
memory_limit_mb: 128,
timeout_seconds: 60,
}
}
}
// Memory-efficient job processor
pub struct MemoryEfficientProcessor<J: Job> {
queue: Queue,
config: JobProcessorConfig,
phantom: std::marker::PhantomData<J>,
}
impl<J: Job> MemoryEfficientProcessor<J> {
pub fn new(queue: Queue, config: JobProcessorConfig) -> Self {
Self {
queue,
config,
phantom: std::marker::PhantomData,
}
}
pub async fn process_batch(&self) -> Result<()> {
// Fetch and process jobs in memory-conscious way
for _ in 0..self.config.batch_size {
// Process individual job with memory limits
// Implementation would handle memory monitoring
}
Ok(())
}
}
Error Recovery and Monitoring
Implement robust error recovery:
use oxidite::prelude::*;
pub struct JobRecoverySystem {
dead_letter_queue: Queue,
monitoring_client: MonitoringClient,
}
impl JobRecoverySystem {
pub fn new(dead_letter_queue: Queue, monitoring_client: MonitoringClient) -> Self {
Self {
dead_letter_queue,
monitoring_client,
}
}
pub async fn handle_failed_job<T: Job>(&self, job: T, error: JobError) -> Result<()> {
// Log the error
self.monitoring_client.log_error(&error.to_string()).await;
// Move to dead letter queue for manual inspection
self.dead_letter_queue.enqueue(DeadLetterJob {
original_job: serde_json::to_value(&job)?,
error: error.to_string(),
failed_at: chrono::Utc::now().to_rfc3339(),
}).await?;
Ok(())
}
}
#[derive(Serialize, Deserialize)]
pub struct DeadLetterJob {
pub original_job: serde_json::Value,
pub error: String,
pub failed_at: String,
}
pub struct MonitoringClient;
impl MonitoringClient {
    pub async fn log_error(&self, error: &str) {
        // In a real app, send to a monitoring system like Sentry or Datadog
        println!("Error logged: {}", error);
}
}
Integration with HTTP Handlers
Trigger jobs from HTTP requests:
use oxidite::prelude::*;
#[derive(Deserialize)]
pub struct EmailRequest {
pub to: String,
pub subject: String,
pub body: String,
}
// HTTP handler that triggers a background job
async fn send_email_handler(
Json(request): Json<EmailRequest>,
State(queue): State<Queue>
) -> Result<Response> {
let job = SendEmailJob {
recipient: request.to,
subject: request.subject,
body: request.body,
};
let job_id = queue.enqueue(job).await
.map_err(|e| Error::InternalServerError(format!("Failed to queue email: {}", e)))?;
Ok(Response::json(serde_json::json!({
"status": "queued",
"job_id": job_id,
"message": "Email queued for sending"
})))
}
// Check job status endpoint
async fn check_job_status(
Path(job_id): Path<String>,
State(queue): State<Queue>
) -> Result<Response> {
let status = queue.get_job_status(&job_id).await
.map_err(|e| Error::InternalServerError(format!("Failed to get job status: {}", e)))?;
Ok(Response::json(serde_json::json!({
"job_id": job_id,
"status": match status {
JobStatus::Pending => "pending",
JobStatus::Running => "running",
JobStatus::Completed => "completed",
JobStatus::Failed => "failed",
JobStatus::Cancelled => "cancelled",
}
})))
}
Summary
Background jobs in Oxidite provide:
- Asynchronous Processing: Handle long-running tasks without blocking requests
- Reliability: Built-in retry logic and error handling
- Scalability: Concurrency controls and resource management
- Monitoring: Job status tracking and statistics
- Scheduling: Delayed execution and recurring tasks
- Integration: Easy to trigger from HTTP handlers
Jobs are essential for building responsive applications that need to handle time-consuming operations while keeping the user experience smooth.
Real-time Features
Real-time features enable live updates, bidirectional communication, and interactive experiences in your Oxidite applications. This chapter covers WebSocket support, Server-Sent Events (SSE), and pub/sub messaging.
Overview
Real-time features in Oxidite include:
- WebSocket connections for bidirectional communication
- Server-Sent Events for unidirectional server-to-client updates
- Pub/Sub messaging for event distribution
- Live updates and notifications
- Real-time collaboration features
WebSocket Support
WebSockets provide full-duplex communication channels over a single TCP connection:
use oxidite::prelude::*;
use oxidite_realtime::websocket::{WebSocket, Message, WebSocketHandler};
async fn websocket_handler(ws: WebSocket) -> Result<()> {
// Set up message handler
ws.on_message(|msg| async move {
match msg {
Message::Text(text) => {
println!("Received text: {}", text);
// Echo the message back
Ok(Message::Text(format!("Echo: {}", text)))
}
Message::Binary(data) => {
println!("Received binary: {} bytes", data.len());
// Echo the binary data back
Ok(Message::Binary(data))
}
Message::Ping(data) => {
// Respond with pong
Ok(Message::Pong(data))
}
Message::Pong(_) => {
// Pong received, usually no action needed
Ok(Message::Pong(vec![]))
}
Message::Close(frame) => {
// Close frame received
Ok(Message::Close(frame))
}
}
}).await?;
// Handle connection close
ws.on_close(|reason| async move {
println!("WebSocket closed: {:?}", reason);
}).await?;
Ok(())
}
// WebSocket upgrade endpoint
async fn websocket_upgrade(_req: Request) -> Result<Response> {
// This would typically be handled by the framework
// The actual WebSocket handler is registered separately
Ok(Response::text("WebSocket endpoint".to_string()))
}
WebSocket with State Management
Manage WebSocket connections with shared state:
use oxidite::prelude::*;
use oxidite_realtime::websocket::{WebSocket, Message};
use std::sync::Arc;
use tokio::sync::broadcast;
#[derive(Clone)]
struct ChatState {
users: Arc<tokio::sync::Mutex<std::collections::HashMap<String, WebSocket>>>,
broadcast_tx: broadcast::Sender<String>,
}
async fn chat_websocket_handler(ws: WebSocket, state: ChatState) -> Result<()> {
// Generate a unique user ID
let user_id = uuid::Uuid::new_v4().to_string();
// Add user to chat room
{
let mut users = state.users.lock().await;
users.insert(user_id.clone(), ws.clone());
}
// Send welcome message
ws.send(Message::Text(format!("Welcome to chat, {}!", user_id))).await?;
    // Listen for messages; clone state and user_id so the originals
    // remain usable for the broadcast and close handlers below
    let handler_state = state.clone();
    let handler_user_id = user_id.clone();
    ws.on_message(move |msg| {
        let state = handler_state.clone();
        let user_id = handler_user_id.clone();
        async move {
match msg {
Message::Text(text) => {
// Broadcast message to all users
let message = format!("[{}] {}", user_id, text);
if state.broadcast_tx.send(message.clone()).is_err() {
// Channel is closed, return error
return Err("Broadcast channel closed".to_string());
}
Ok(Message::Text("Message sent".to_string()))
}
Message::Binary(_) => {
Ok(Message::Text("Binary messages not supported".to_string()))
}
_ => Ok(msg), // Return other messages as-is
}
}
}).await?;
// Listen for broadcasts
let mut rx = state.broadcast_tx.subscribe();
let ws_clone = ws.clone();
tokio::spawn(async move {
while let Ok(message) = rx.recv().await {
if let Err(e) = ws_clone.send(Message::Text(message)).await {
eprintln!("Failed to send broadcast: {}", e);
break;
}
}
});
// Handle connection close
ws.on_close({
let state = state.clone();
let user_id = user_id.clone();
move |_| {
let state = state.clone();
let user_id = user_id.clone();
async move {
let mut users = state.users.lock().await;
users.remove(&user_id);
println!("User {} left chat", user_id);
}
}
}).await?;
Ok(())
}
Server-Sent Events (SSE)
Server-Sent Events provide unidirectional server-to-client communication:
use oxidite::prelude::*;
use oxidite_realtime::sse::EventStream;
async fn sse_handler(_req: Request) -> Result<Response> {
let mut stream = EventStream::new();
// Send initial connection event
stream.send("Connected", Some("connection"), None).await?;
// Send periodic updates
let stream_clone = stream.clone();
tokio::spawn(async move {
let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(5));
loop {
interval.tick().await;
            let data = serde_json::json!({
                "timestamp": chrono::Utc::now().to_rfc3339(),
                "message": "Periodic update"
            }).to_string();
if let Err(e) = stream_clone.send(&data, Some("periodic"), None).await {
eprintln!("Failed to send SSE: {}", e);
break;
}
}
});
// Send live metrics
let stream_clone = stream.clone();
tokio::spawn(async move {
let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(10));
loop {
interval.tick().await;
// Simulate some metrics
let metrics = serde_json::json!({
"users_active": 42,
"messages_sent": 1234,
"server_uptime": "24h"
});
if let Err(e) = stream_clone.send(&metrics.to_string(), Some("metrics"), None).await {
eprintln!("Failed to send metrics SSE: {}", e);
break;
}
}
});
Ok(stream.response())
}
// SSE endpoint with authentication
async fn authenticated_sse_handler(
    _req: Request,
    user: AuthenticatedUser // Assume this comes from auth middleware
) -> Result<Response> {
    let mut stream = EventStream::new();
    // Send user-specific data
    stream.send(
        &serde_json::json!({
            "user_id": user.id,
            "message": "Welcome to personalized feed"
        }).to_string(),
        Some("welcome"),
        None
    ).await?;
    Ok(stream.response())
}
#[derive(Clone)]
struct AuthenticatedUser {
id: String,
role: String,
}
Pub/Sub Messaging
Implement publish-subscribe messaging for event distribution:
use oxidite::prelude::*;
use std::sync::Arc;
use tokio::sync::broadcast;
#[derive(Clone)]
pub struct PubSub {
channels: Arc<tokio::sync::RwLock<std::collections::HashMap<String, broadcast::Sender<Event>>>>,
}
#[derive(serde::Serialize, serde::Deserialize, Clone)]
pub struct Event {
pub topic: String,
pub data: serde_json::Value,
pub timestamp: String,
pub sender: Option<String>,
}
impl PubSub {
pub fn new() -> Self {
Self {
channels: Arc::new(tokio::sync::RwLock::new(std::collections::HashMap::new())),
}
}
    pub async fn subscribe(&self, topic: &str) -> broadcast::Receiver<Event> {
        let mut channels = self.channels.write().await;
        channels
            .entry(topic.to_string())
            .or_insert_with(|| broadcast::channel(100).0) // Buffer 100 messages
            .subscribe()
    }
pub async fn publish(&self, event: Event) -> Result<()> {
let channels = self.channels.read().await;
        if let Some(tx) = channels.get(&event.topic) {
            // An Err here only means there are no active subscribers; safe to ignore
            let _ = tx.send(event);
        }
Ok(())
}
pub async fn create_topic(&self, topic: &str) -> Result<()> {
let mut channels = self.channels.write().await;
if !channels.contains_key(topic) {
let (tx, _) = broadcast::channel(100);
channels.insert(topic.to_string(), tx);
}
Ok(())
}
}
// Example usage in a handler
async fn pubsub_example(
Json(payload): Json<serde_json::Value>,
State(pubsub): State<Arc<PubSub>>
) -> Result<Response> {
let event = Event {
topic: "user_activity".to_string(),
data: payload,
timestamp: chrono::Utc::now().to_rfc3339(),
sender: Some("api".to_string()),
};
pubsub.publish(event).await?;
Ok(Response::json(serde_json::json!({ "status": "published" })))
}
// Subscribe to events in a WebSocket
async fn event_stream_websocket_handler(
ws: WebSocket,
State(pubsub): State<Arc<PubSub>>
) -> Result<()> {
let mut receiver = pubsub.subscribe("notifications").await;
// Forward events to WebSocket
let ws_clone = ws.clone();
tokio::spawn(async move {
while let Ok(event) = receiver.recv().await {
if let Err(e) = ws_clone.send(Message::Text(serde_json::to_string(&event).unwrap())).await {
eprintln!("Failed to send event to WebSocket: {}", e);
break;
}
}
});
Ok(())
}
Real-time Notifications
Build a notification system with real-time delivery:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::broadcast;
#[derive(Clone)]
pub struct NotificationService {
subscribers: Arc<tokio::sync::RwLock<HashMap<String, broadcast::Sender<Notification>>>>,
pubsub: Arc<PubSub>,
}
#[derive(serde::Serialize, serde::Deserialize, Clone)]
pub struct Notification {
pub id: String,
pub user_id: String,
pub title: String,
pub body: String,
pub category: String,
pub timestamp: String,
pub read: bool,
}
impl NotificationService {
pub fn new(pubsub: Arc<PubSub>) -> Self {
Self {
subscribers: Arc::new(tokio::sync::RwLock::new(HashMap::new())),
pubsub,
}
}
    pub async fn subscribe_user(&self, user_id: &str) -> broadcast::Receiver<Notification> {
        let mut subscribers = self.subscribers.write().await;
        subscribers
            .entry(user_id.to_string())
            .or_insert_with(|| broadcast::channel(50).0)
            .subscribe()
    }
pub async fn send_notification(&self, notification: Notification) -> Result<()> {
// Publish to user-specific channel
let user_event = Event {
topic: format!("notifications:{}", notification.user_id),
            data: serde_json::to_value(&notification)?,
timestamp: chrono::Utc::now().to_rfc3339(),
sender: Some("notification_service".to_string()),
};
self.pubsub.publish(user_event).await?;
// Also send to user's subscription if online
        if let Some(tx) = self.subscribers.read().await.get(&notification.user_id) {
let _ = tx.send(notification.clone());
}
Ok(())
}
    pub async fn get_user_notifications(&self, _user_id: &str) -> Result<Vec<Notification>> {
        // In a real app, this would fetch unread notifications from the database
        Ok(vec![])
    }
    pub async fn mark_as_read(&self, _user_id: &str, _notification_id: &str) -> Result<()> {
        // In a real app, this would update the notification row in the database
        Ok(())
    }
}
// Notification WebSocket handler
async fn notification_websocket_handler(
ws: WebSocket,
State(notification_service): State<Arc<NotificationService>>,
user: AuthenticatedUser
) -> Result<()> {
let mut receiver = notification_service.subscribe_user(&user.id).await;
// Send existing unread notifications
let existing_notifications = notification_service.get_user_notifications(&user.id).await?;
for notification in existing_notifications {
        ws.send(Message::Text(serde_json::to_string(&notification)?)).await?;
}
// Listen for new notifications
let ws_clone = ws.clone();
tokio::spawn(async move {
while let Ok(notification) = receiver.recv().await {
            if let Err(e) = ws_clone.send(Message::Text(serde_json::to_string(&notification).unwrap())).await {
eprintln!("Failed to send notification: {}", e);
break;
}
}
});
Ok(())
}
Real-time Analytics
Track real-time metrics and analytics:
use oxidite::prelude::*;
use std::sync::Arc;
use tokio::sync::mpsc;
#[derive(Clone)]
pub struct AnalyticsService {
event_sender: mpsc::UnboundedSender<AnalyticsEvent>,
metrics: Arc<tokio::sync::RwLock<Metrics>>,
}
#[derive(serde::Serialize, serde::Deserialize, Clone)]
pub struct AnalyticsEvent {
pub event_type: String,
pub user_id: Option<String>,
pub properties: std::collections::HashMap<String, serde_json::Value>,
pub timestamp: String,
pub session_id: Option<String>,
}
#[derive(Default, Clone)]
pub struct Metrics {
pub page_views: u64,
pub unique_visitors: std::collections::HashSet<String>,
pub active_users: std::collections::HashMap<String, chrono::DateTime<chrono::Utc>>,
pub event_counts: std::collections::HashMap<String, u64>,
}
impl AnalyticsService {
pub fn new() -> (Self, mpsc::UnboundedReceiver<AnalyticsEvent>) {
let (sender, receiver) = mpsc::unbounded_channel();
let service = Self {
event_sender: sender,
metrics: Arc::new(tokio::sync::RwLock::new(Metrics::default())),
};
(service, receiver)
}
pub fn track_event(&self, event: AnalyticsEvent) -> Result<()> {
self.event_sender.send(event)
.map_err(|e| Error::InternalServerError(format!("Failed to track event: {}", e)))?;
Ok(())
}
pub async fn get_metrics(&self) -> Metrics {
self.metrics.read().await.clone()
}
    pub async fn start_processing(&self, mut event_receiver: mpsc::UnboundedReceiver<AnalyticsEvent>) {
        // Pass in the receiver half returned by `AnalyticsService::new`
        let metrics = self.metrics.clone();
        // Spawn a task to process events
        tokio::spawn(async move {
            // UnboundedReceiver::recv returns Option, so loop with `while let Some`
            while let Some(event) = event_receiver.recv().await {
let mut metrics = metrics.write().await;
                // Update metrics based on the event
                match event.event_type.as_str() {
                    "page_view" => metrics.page_views += 1,
                    "user_login" => {
                        if let Some(user_id) = event.user_id.clone() {
                            metrics.unique_visitors.insert(user_id);
                        }
                    }
                    _ => {
                        // Clone: `event.event_type` is still borrowed by the match scrutinee
                        *metrics.event_counts.entry(event.event_type.clone()).or_insert(0) += 1;
                    }
                }
                // Track active users (seen within the last 5 minutes)
                if let Some(user_id) = event.user_id {
                    metrics.active_users.insert(user_id, chrono::Utc::now());
                }
// Clean up old active users periodically
let now = chrono::Utc::now();
metrics.active_users.retain(|_, timestamp| {
(now - *timestamp).num_minutes() < 5
});
}
});
}
}
// Analytics tracking endpoint
async fn track_analytics(
Json(event): Json<AnalyticsEvent>,
State(analytics): State<Arc<AnalyticsService>>
) -> Result<Response> {
analytics.track_event(event)?;
Ok(Response::json(serde_json::json!({ "status": "tracked" })))
}
// Real-time metrics endpoint
async fn real_time_metrics(State(analytics): State<Arc<AnalyticsService>>) -> Result<Response> {
let metrics = analytics.get_metrics().await;
Ok(Response::json(serde_json::json!({
"page_views": metrics.page_views,
"unique_visitors": metrics.unique_visitors.len(),
"active_users": metrics.active_users.len(),
"event_counts": metrics.event_counts,
"timestamp": chrono::Utc::now().to_rfc3339()
})))
}
Performance Optimization
Optimize real-time features for performance:
use oxidite::prelude::*;
pub struct RealTimeConfig {
pub websocket_max_connections: usize,
pub sse_buffer_size: usize,
pub broadcast_channel_capacity: usize,
pub heartbeat_interval: std::time::Duration,
pub connection_timeout: std::time::Duration,
}
impl RealTimeConfig {
pub fn default_production() -> Self {
Self {
websocket_max_connections: 10_000,
sse_buffer_size: 100,
broadcast_channel_capacity: 1000,
heartbeat_interval: std::time::Duration::from_secs(30),
connection_timeout: std::time::Duration::from_secs(60),
}
}
pub fn default_development() -> Self {
Self {
websocket_max_connections: 100,
sse_buffer_size: 10,
broadcast_channel_capacity: 100,
heartbeat_interval: std::time::Duration::from_secs(60),
connection_timeout: std::time::Duration::from_secs(300),
}
}
}
// Connection pool for WebSockets
pub struct WebSocketPool {
connections: std::collections::HashMap<String, WebSocket>,
config: RealTimeConfig,
}
impl WebSocketPool {
pub fn new(config: RealTimeConfig) -> Self {
Self {
connections: std::collections::HashMap::new(),
config,
}
}
pub fn add_connection(&mut self, id: String, ws: WebSocket) -> Result<()> {
if self.connections.len() >= self.config.websocket_max_connections {
return Err(Error::InternalServerError("Maximum connections reached".to_string()));
}
self.connections.insert(id, ws);
Ok(())
}
pub fn remove_connection(&mut self, id: &str) -> Option<WebSocket> {
self.connections.remove(id)
}
    pub async fn broadcast_to_all(&self, message: Message) -> Result<()> {
        for ws in self.connections.values() {
            // Sending is async; in practice you would handle errors per connection
            // (e.g., dropping connections whose sends fail)
            let _ = ws.send(message.clone()).await;
        }
        Ok(())
    }
}
Security Considerations
Secure real-time features properly:
use oxidite::prelude::*;
// Secure WebSocket middleware
async fn secure_websocket_middleware(
req: Request,
next: Next,
State(rate_limiter): State<Arc<RateLimiter>>
) -> Result<Response> {
// Rate limiting for WebSocket connections
let client_ip = req.headers()
.get("x-forwarded-for")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown");
if !rate_limiter.is_allowed(client_ip, "websocket").await {
return Err(Error::TooManyRequests);
}
// Validate origin for WebSocket upgrades
if let Some(origin) = req.headers().get("origin") {
if !is_valid_origin(origin)? {
return Err(Error::Forbidden("Invalid origin".to_string()));
}
}
next.run(req).await
}
fn is_valid_origin(origin: &http::HeaderValue) -> Result<bool> {
let origin_str = origin.to_str().map_err(|_| Error::BadRequest("Invalid origin header".to_string()))?;
// In a real app, validate against allowed origins
let allowed_origins = ["http://localhost:3000", "https://yourdomain.com"];
Ok(allowed_origins.iter().any(|&allowed| origin_str.starts_with(allowed)))
}
// Rate limiter for real-time features
#[derive(Clone)]
pub struct RateLimiter {
limits: Arc<tokio::sync::Mutex<std::collections::HashMap<String, ClientLimits>>>,
max_messages_per_minute: u32,
}
struct ClientLimits {
    message_count: u32,
    last_reset: std::time::Instant,
}
impl Default for ClientLimits {
    // `Instant` does not implement `Default`, so derive(Default) would not compile
    fn default() -> Self {
        Self {
            message_count: 0,
            last_reset: std::time::Instant::now(),
        }
    }
}
impl RateLimiter {
pub fn new(max_messages_per_minute: u32) -> Self {
Self {
limits: Arc::new(tokio::sync::Mutex::new(std::collections::HashMap::new())),
max_messages_per_minute,
}
}
pub async fn is_allowed(&self, client_id: &str, _feature: &str) -> bool {
let mut limits = self.limits.lock().await;
let now = std::time::Instant::now();
let client_limit = limits.entry(client_id.to_string()).or_default();
// Reset counter if more than a minute has passed
if now.duration_since(client_limit.last_reset).as_secs() >= 60 {
client_limit.message_count = 0;
client_limit.last_reset = now;
}
if client_limit.message_count >= self.max_messages_per_minute {
return false;
}
client_limit.message_count += 1;
true
}
}
// Message validation
pub struct MessageValidator;
impl MessageValidator {
pub fn validate_websocket_message(&self, msg: &Message) -> Result<()> {
match msg {
Message::Text(text) => {
// Check message size
if text.len() > 64 * 1024 { // 64KB limit
return Err(Error::PayloadTooLarge);
}
// Check for malicious content
if contains_malicious_content(text) {
return Err(Error::BadRequest("Malicious content detected".to_string()));
}
Ok(())
}
Message::Binary(data) => {
// Check binary size
if data.len() > 1024 * 1024 { // 1MB limit
return Err(Error::PayloadTooLarge);
}
Ok(())
}
_ => Ok(()),
}
}
}
fn contains_malicious_content(text: &str) -> bool {
// Simple check for potential malicious patterns
let dangerous_patterns = ["<script", "javascript:", "vbscript:", "onload=", "onerror="];
dangerous_patterns.iter().any(|pattern| text.to_lowercase().contains(pattern))
}
Integration with Frontend
Provide frontend integration examples:
use oxidite::prelude::*;
// Endpoint to get WebSocket connection details
async fn websocket_config(_user: AuthenticatedUser) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"websocket_url": "ws://localhost:3000/ws",
"heartbeat_interval": 30000,
"reconnect_attempts": 5,
"reconnect_delay": 1000
})))
}
// Frontend JavaScript example (as documentation):
/*
// Connect to WebSocket
const wsUrl = 'ws://localhost:3000/ws';
const socket = new WebSocket(wsUrl);
socket.onopen = function(event) {
console.log('Connected to WebSocket');
// Send initial authentication
socket.send(JSON.stringify({
type: 'auth',
token: localStorage.getItem('authToken')
}));
};
socket.onmessage = function(event) {
const data = JSON.parse(event.data);
console.log('Received:', data);
// Handle different message types
switch(data.type) {
case 'notification':
showNotification(data.payload);
break;
case 'chat':
displayChatMessage(data.payload);
break;
case 'analytics':
updateAnalytics(data.payload);
break;
}
};
socket.onclose = function(event) {
console.log('WebSocket closed:', event.code, event.reason);
// Attempt to reconnect
setTimeout(() => {
// Reconnection logic here
}, 1000);
};
*/
// SSE connection helper
async fn sse_connection_helper(_user: AuthenticatedUser) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"sse_url": "/api/events",
"retry_interval": 3000,
"supported_events": ["notifications", "chat", "analytics"]
})))
}
Summary
Real-time features in Oxidite provide:
- WebSocket Support: Full-duplex communication for interactive applications
- Server-Sent Events: Unidirectional server-to-client updates
- Pub/Sub Messaging: Event distribution system
- Live Notifications: Real-time alert delivery
- Real-time Analytics: Live metrics and tracking
- Performance Optimization: Connection pooling and rate limiting
- Security: Authentication, validation, and rate limiting
- Frontend Integration: Easy client-side implementation
These features enable building highly interactive and responsive web applications with real-time updates and bidirectional communication capabilities.
Observability
Production services need structured logs, traces, and metrics.
Logging
Use request-scoped IDs and structured JSON logs for correlation.
Recommended fields:
- request id
- route pattern
- HTTP method/status
- duration
- user id (if authenticated)
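The exact logging stack is up to you (the `tracing` crate with a JSON formatter is a common choice in the Rust ecosystem), but the shape of a correlated log line can be sketched with the standard library alone. The `RequestLog` type and its naive JSON rendering below are illustrative, not an Oxidite API:

```rust
use std::time::Duration;

/// One request-scoped log record carrying the recommended correlation fields.
struct RequestLog {
    request_id: String,
    route: String,
    method: String,
    status: u16,
    duration: Duration,
    user_id: Option<String>,
}

impl RequestLog {
    /// Render the record as a single JSON line.
    /// (Naive formatting: assumes field values need no JSON escaping.)
    fn to_json_line(&self) -> String {
        let user = self
            .user_id
            .as_deref()
            .map(|u| format!("\"{}\"", u))
            .unwrap_or_else(|| "null".to_string());
        format!(
            "{{\"request_id\":\"{}\",\"route\":\"{}\",\"method\":\"{}\",\"status\":{},\"duration_ms\":{},\"user_id\":{}}}",
            self.request_id, self.route, self.method, self.status,
            self.duration.as_millis(), user
        )
    }
}
```

One line per request, with the route *pattern* (not the concrete path) keeps log volume predictable and makes grouping by endpoint trivial.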
Tracing
Instrument critical paths:
- DB queries and transactions
- outbound HTTP calls
- queue enqueue/dequeue
- websocket room operations
Use span boundaries around handlers and service-layer operations.
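A span is just a named region with a start and an end; libraries like `tracing` handle this for you, but the mechanics can be sketched with a std-only RAII guard. The `Span` and `SpanSink` names here are illustrative, not an Oxidite API:

```rust
use std::sync::{Arc, Mutex};
use std::time::Instant;

/// Shared sink collecting (span name, elapsed microseconds) pairs.
type SpanSink = Arc<Mutex<Vec<(&'static str, u128)>>>;

/// Minimal span guard: records its name and duration when dropped.
struct Span {
    name: &'static str,
    start: Instant,
    sink: SpanSink,
}

impl Span {
    fn enter(name: &'static str, sink: SpanSink) -> Self {
        Span { name, start: Instant::now(), sink }
    }
}

impl Drop for Span {
    fn drop(&mut self) {
        let elapsed = self.start.elapsed().as_micros();
        self.sink.lock().unwrap().push((self.name, elapsed));
    }
}

/// Wrap a handler and one of its service-layer calls in nested spans.
fn traced_get_user(sink: SpanSink) {
    let _handler = Span::enter("handler:get_user", sink.clone());
    {
        let _db = Span::enter("db:select_user", sink.clone());
        // ... run the query here ...
    } // db span closes first
} // handler span closes second
```

Because inner guards drop before outer ones, nested spans finish innermost-first, which is exactly the ordering a trace viewer expects.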
Metrics
Track at minimum:
- request rate
- latency percentiles
- error rate by route
- queue depth and retry count
- cache hit/miss ratio
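In production you would export these through a metrics library to a system such as Prometheus, but the bookkeeping itself is small. A std-only sketch (illustrative types, not an Oxidite API) with per-route request and error counters plus a nearest-rank latency percentile:

```rust
use std::collections::HashMap;

/// Minimal in-process metrics: per-route counters and raw latency samples.
#[derive(Default)]
struct Metrics {
    requests: HashMap<String, u64>,
    errors: HashMap<String, u64>,
    latencies_ms: Vec<u64>,
}

impl Metrics {
    fn record(&mut self, route: &str, status: u16, latency_ms: u64) {
        *self.requests.entry(route.to_string()).or_insert(0) += 1;
        if status >= 500 {
            *self.errors.entry(route.to_string()).or_insert(0) += 1;
        }
        self.latencies_ms.push(latency_ms);
    }

    /// Nearest-rank percentile over recorded latencies (p in 0..=100).
    fn latency_percentile(&self, p: f64) -> Option<u64> {
        if self.latencies_ms.is_empty() {
            return None;
        }
        let mut sorted = self.latencies_ms.clone();
        sorted.sort_unstable();
        let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
        Some(sorted[rank.saturating_sub(1).min(sorted.len() - 1)])
    }
}
```

Real systems keep latency histograms (fixed buckets) rather than raw samples so memory stays bounded; the nearest-rank version above is only to show what the percentile means.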
Practical Deployment Notes
- Keep high-cardinality labels out of metric keys.
- Sample traces in high-throughput environments.
- Tie request IDs across logs and traces.
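The last point deserves a sketch: generate one ID per request and stamp it on every log line and span label so the two can be joined later. This std-only version derives an ID from a timestamp plus a counter; a real service would typically use UUIDs or propagate an incoming `traceparent` header instead, and the names here are illustrative:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static COUNTER: AtomicU64 = AtomicU64::new(0);

/// Generate a process-unique request ID.
fn next_request_id() -> String {
    let seq = COUNTER.fetch_add(1, Ordering::Relaxed);
    let millis = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_millis())
        .unwrap_or(0);
    format!("req-{}-{}", millis, seq)
}

/// Stamp the same ID on a log line and a span label so both can be joined later.
fn correlate(request_id: &str, route: &str) -> (String, String) {
    let log_line = format!("{{\"request_id\":\"{}\",\"route\":\"{}\"}}", request_id, route);
    let span_label = format!("request[{}] {}", request_id, route);
    (log_line, span_label)
}
```

With the ID present in both places, a single grep or trace-viewer search reconstructs the full story of one request.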
API Versioning
API versioning allows you to manage changes to your API over time while maintaining backward compatibility. This chapter covers various approaches to API versioning in Oxidite.
Overview
API versioning strategies include:
- URL-based versioning (e.g., /api/v1/users)
- Header-based versioning (e.g., Accept: application/vnd.api.v1+json)
- Query parameter versioning (e.g., ?version=1)
- Media type versioning
- Semantic versioning practices
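Whichever transport carries the version, the server still needs a policy for which versions it will serve. A minimal sketch (the policy and type names are illustrative, not an Oxidite API) that rejects unknown major versions and flags deprecated ones:

```rust
/// How to treat a requested major version against what the service supports.
#[derive(Debug, PartialEq)]
enum VersionDecision {
    Serve { deprecated: bool },
    Unsupported,
}

/// Serve anything between the oldest still-supported and the current major
/// version; anything older or newer is rejected outright.
fn classify_version(requested: u32, min_supported: u32, current: u32) -> VersionDecision {
    if requested < min_supported || requested > current {
        VersionDecision::Unsupported
    } else {
        VersionDecision::Serve { deprecated: requested < current }
    }
}
```

A handler can map `Unsupported` to a 400/406 response and attach a deprecation warning header when `deprecated` is true, giving clients time to migrate before the old version is dropped.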
URL-Based Versioning
The most common approach is to include the version in the URL path:
use oxidite::prelude::*;
// V1 API routes
async fn v1_get_users(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!([
{"id": 1, "name": "John", "email": "john@example.com"}
])))
}
async fn v1_get_user(Path(user_id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"id": user_id,
"name": format!("User {}", user_id),
"email": format!("user{}@example.com", user_id)
})))
}
// V2 API routes - with breaking changes
async fn v2_get_users(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!([
{
"id": 1,
"name": "John",
"email": "john@example.com",
"profile": {
"bio": "Software developer",
"avatar_url": "https://example.com/avatar.jpg"
}
}
])))
}
async fn v2_get_user(Path(user_id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"id": user_id,
"name": format!("User {}", user_id),
"email": format!("user{}@example.com", user_id),
"profile": {
"bio": "User bio",
"avatar_url": format!("https://example.com/avatar/{}.jpg", user_id),
"preferences": {
"theme": "light",
"notifications": true
}
}
})))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut router = Router::new();
// V1 API
router.get("/api/v1/users", v1_get_users);
router.get("/api/v1/users/:id", v1_get_user);
// V2 API
router.get("/api/v2/users", v2_get_users);
router.get("/api/v2/users/:id", v2_get_user);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Header-Based Versioning
Use HTTP headers to specify the API version:
use oxidite::prelude::*;
// Middleware to extract version from headers
async fn version_middleware(req: Request, next: Next) -> Result<Response> {
// Extract version from Accept header
let version = req.headers()
.get("accept")
.and_then(|hv| hv.to_str().ok())
.and_then(|accept| {
// Look for version in vendor media type
// e.g., application/vnd.myapi.v1+json
let prefix = "application/vnd.myapi.v";
accept.find(prefix).and_then(|pos| {
    accept[pos + prefix.len()..]
        .chars()
        .take_while(|c| c.is_ascii_digit())
        .collect::<String>()
        .parse::<u32>()
        .ok()
})
})
.or_else(|| {
// Fallback to custom header
req.headers()
.get("x-api-version")
.and_then(|hv| hv.to_str().ok())
.and_then(|version_str| version_str.parse::<u32>().ok())
})
.unwrap_or(1); // Default to v1
// Add version to request extensions
let mut req = req;
req.extensions_mut().insert(ApiVersion(version));
next.run(req).await
}
#[derive(Clone)]
struct ApiVersion(u32);
// Route handlers that check the version
async fn get_users_by_version(req: Request) -> Result<Response> {
if let Some(ApiVersion(version)) = req.extensions().get::<ApiVersion>() {
match version {
1 => v1_get_users(req).await,
2 => v2_get_users(req).await,
_ => Err(Error::NotImplemented),
}
} else {
v1_get_users(req).await // Default to v1
}
}
async fn v1_get_users(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!([
{"id": 1, "name": "John", "email": "john@example.com"}
])))
}
async fn v2_get_users(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!([
{
"id": 1,
"name": "John",
"email": "john@example.com",
"profile": {
"bio": "Software developer",
"avatar_url": "https://example.com/avatar.jpg"
}
}
])))
}
Query Parameter Versioning
Use query parameters to specify the API version:
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct VersionedQuery {
version: Option<u32>,
}
async fn versioned_handler(Query(params): Query<VersionedQuery>) -> Result<Response> {
let version = params.version.unwrap_or(1);
match version {
1 => v1_response(),
2 => v2_response(),
_ => Err(Error::NotImplemented),
}
}
fn v1_response() -> Result<Response> {
Ok(Response::json(serde_json::json!({
"data": [
{"id": 1, "name": "John"}
],
"version": "v1"
})))
}
fn v2_response() -> Result<Response> {
Ok(Response::json(serde_json::json!({
"data": [
{
"id": 1,
"name": "John",
"metadata": {
"created_at": "2023-01-01T00:00:00Z",
"updated_at": "2023-01-02T00:00:00Z"
}
}
],
"version": "v2",
"pagination": {
"page": 1,
"per_page": 10,
"total": 100
}
})))
}
// Alternative: Middleware approach for query versioning
async fn query_version_middleware(req: Request, next: Next) -> Result<Response> {
// Extract version from query parameters
let version = req.uri().query()
.and_then(|q| {
q.split('&')
.find(|param| param.starts_with("version="))
.map(|param| param.split('=').nth(1)?.parse::<u32>().ok())
})
.flatten()
.unwrap_or(1);
// Add version to request extensions
let mut req = req;
req.extensions_mut().insert(ApiVersion(version));
next.run(req).await
}
Versioned Models and Serializers
Handle different versions of data models:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
// V1 User Model
#[derive(Serialize, Deserialize)]
pub struct UserV1 {
pub id: u32,
pub name: String,
pub email: String,
}
// V2 User Model - with additional fields
#[derive(Serialize, Deserialize)]
pub struct UserV2 {
pub id: u32,
pub name: String,
pub email: String,
pub profile: UserProfile,
}
#[derive(Serialize, Deserialize)]
pub struct UserProfile {
pub bio: Option<String>,
pub avatar_url: Option<String>,
pub preferences: UserPreferences,
}
#[derive(Serialize, Deserialize)]
pub struct UserPreferences {
pub theme: String,
pub notifications: bool,
}
// Version-aware handler
async fn get_user_versioned(
Path(user_id): Path<u32>,
req: Request
) -> Result<Response> {
// Fetch user from database (simplified)
let user = fetch_user_from_db(user_id).await?;
if let Some(ApiVersion(version)) = req.extensions().get::<ApiVersion>() {
match version {
1 => {
let v1_user = UserV1 {
id: user.id,
name: user.name,
email: user.email,
};
Ok(Response::json(v1_user))
}
2 => {
let v2_user = UserV2 {
id: user.id,
name: user.name,
email: user.email,
profile: UserProfile {
bio: Some("Default bio".to_string()),
avatar_url: Some(format!("https://example.com/avatar/{}.jpg", user.id)),
preferences: UserPreferences {
theme: "light".to_string(),
notifications: true,
},
},
};
Ok(Response::json(v2_user))
}
_ => Err(Error::NotImplemented),
}
} else {
// Default to v1
let v1_user = UserV1 {
id: user.id,
name: user.name,
email: user.email,
};
Ok(Response::json(v1_user))
}
}
// Simulated database fetch
async fn fetch_user_from_db(id: u32) -> Result<UserV2> {
Ok(UserV2 {
id,
name: format!("User {}", id),
email: format!("user{}@example.com", id),
profile: UserProfile {
bio: Some("Sample bio".to_string()),
avatar_url: Some(format!("https://example.com/avatar/{}.jpg", id)),
preferences: UserPreferences {
theme: "light".to_string(),
notifications: true,
},
},
})
}
// Convert between versions
impl UserV2 {
pub fn to_v1(self) -> UserV1 {
UserV1 {
id: self.id,
name: self.name,
email: self.email,
}
}
}
impl UserV1 {
pub fn to_v2(self) -> UserV2 {
UserV2 {
id: self.id,
name: self.name,
email: self.email,
profile: UserProfile {
bio: None,
avatar_url: None,
preferences: UserPreferences {
theme: "light".to_string(),
notifications: false,
},
},
}
}
}
Version Negotiation
Implement automatic version negotiation:
use oxidite::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct ApiVersionManager {
supported_versions: std::collections::HashSet<u32>,
default_version: u32,
}
impl ApiVersionManager {
fn new() -> Self {
let mut supported = std::collections::HashSet::new();
supported.insert(1);
supported.insert(2);
supported.insert(3);
Self {
supported_versions: supported,
default_version: 1,
}
}
fn negotiate_version(&self, req: &Request) -> u32 {
// Try header version first
if let Some(version) = self.extract_header_version(req) {
if self.supported_versions.contains(&version) {
return version;
}
}
// Try query parameter
if let Some(version) = self.extract_query_version(req) {
if self.supported_versions.contains(&version) {
return version;
}
}
// Fall back to default
self.default_version
}
fn extract_header_version(&self, req: &Request) -> Option<u32> {
req.headers()
.get("accept")
.and_then(|hv| hv.to_str().ok())
.and_then(|accept| {
let prefix = "application/vnd.myapi.v";
accept.find(prefix).and_then(|pos| {
    accept[pos + prefix.len()..]
        .chars()
        .take_while(|c| c.is_ascii_digit())
        .collect::<String>()
        .parse::<u32>()
        .ok()
})
})
.or_else(|| {
req.headers()
.get("x-api-version")
.and_then(|hv| hv.to_str().ok())
.and_then(|version_str| version_str.parse::<u32>().ok())
})
}
fn extract_query_version(&self, req: &Request) -> Option<u32> {
req.uri().query()
.and_then(|q| {
q.split('&')
.find(|param| param.starts_with("version="))
.map(|param| param.split('=').nth(1)?.parse::<u32>().ok())
})
.flatten()
}
}
// Version negotiation middleware
async fn version_negotiation_middleware(
req: Request,
next: Next,
State(version_manager): State<Arc<ApiVersionManager>>
) -> Result<Response> {
let negotiated_version = version_manager.negotiate_version(&req);
let mut req = req;
req.extensions_mut().insert(ApiVersion(negotiated_version));
// Add version to response headers
let mut response = next.run(req).await?;
response.headers_mut().insert(
"X-API-Version",
format!("{}", negotiated_version).parse().unwrap()
);
Ok(response)
}
// Get version info endpoint
async fn api_version_info(State(version_manager): State<Arc<ApiVersionManager>>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"current_version": version_manager.default_version,
"supported_versions": version_manager.supported_versions.iter().collect::<Vec<_>>(),
"latest_stable": 2,
"deprecation_warning": null
})))
}
Deprecation and Sunset Policies
Manage deprecated versions:
use oxidite::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct VersionDeprecationPolicy {
deprecated_versions: std::collections::HashMap<u32, DeprecationInfo>,
}
#[derive(Clone)]
struct DeprecationInfo {
deprecation_date: chrono::DateTime<chrono::Utc>,
sunset_date: chrono::DateTime<chrono::Utc>,
migration_guide_url: String,
alternative_endpoints: Vec<String>,
}
impl VersionDeprecationPolicy {
fn new() -> Self {
let mut deprecated = std::collections::HashMap::new();
// Example: deprecate v1 on 2024-01-01, sunset on 2024-07-01
deprecated.insert(1, DeprecationInfo {
deprecation_date: chrono::DateTime::parse_from_rfc3339("2024-01-01T00:00:00Z").unwrap().into(),
sunset_date: chrono::DateTime::parse_from_rfc3339("2024-07-01T00:00:00Z").unwrap().into(),
migration_guide_url: "https://docs.example.com/v1-to-v2-migration".to_string(),
alternative_endpoints: vec!["/api/v2/users".to_string()],
});
Self {
deprecated_versions: deprecated,
}
}
fn check_deprecation(&self, version: u32) -> Option<&DeprecationInfo> {
self.deprecated_versions.get(&version)
}
fn is_sunset(&self, version: u32) -> bool {
if let Some(info) = self.check_deprecation(version) {
chrono::Utc::now() > info.sunset_date
} else {
false
}
}
}
// Deprecation middleware
async fn deprecation_middleware(
req: Request,
next: Next,
State(policy): State<Arc<VersionDeprecationPolicy>>
) -> Result<Response> {
if let Some(ApiVersion(version)) = req.extensions().get::<ApiVersion>() {
if policy.is_sunset(*version) {
return Err(Error::Gone("This API version has been sunset. Please upgrade to a newer version.".to_string()));
}
if let Some(deprecation_info) = policy.check_deprecation(*version) {
let mut response = next.run(req).await?;
// Add deprecation headers
response.headers_mut().insert(
"X-API-Deprecated",
"true".parse().unwrap()
);
response.headers_mut().insert(
"X-API-Deprecation-Date",
deprecation_info.deprecation_date.to_rfc3339().parse().unwrap()
);
response.headers_mut().insert(
"X-API-Sunset-Date",
deprecation_info.sunset_date.to_rfc3339().parse().unwrap()
);
response.headers_mut().insert(
"X-API-Migration-Guide",
deprecation_info.migration_guide_url.parse().unwrap()
);
return Ok(response);
}
}
next.run(req).await
}
Content Negotiation
Handle different content types based on version:
use oxidite::prelude::*;
// Content negotiation middleware
async fn content_negotiation_middleware(req: Request, next: Next) -> Result<Response> {
// Determine response format based on Accept header and version
let accept_header = req.headers()
.get("accept")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("*/*");
let mut response = next.run(req).await?;
// Set content type based on requested format
if accept_header.contains("application/json") {
response.headers_mut().insert(
"Content-Type",
"application/json".parse().unwrap()
);
} else if accept_header.contains("text/html") {
response.headers_mut().insert(
"Content-Type",
"text/html".parse().unwrap()
);
} else {
response.headers_mut().insert(
"Content-Type",
"application/json".parse().unwrap()
);
}
Ok(response)
}
// Version-specific content types
async fn versioned_content_handler(req: Request) -> Result<Response> {
let accept_header = req.headers()
.get("accept")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("*/*");
if let Some(ApiVersion(version)) = req.extensions().get::<ApiVersion>() {
match (version, accept_header) {
(1, accept) if accept.contains("application/vnd.myapi.v1+json") => {
// V1 JSON response
Ok(Response::json(serde_json::json!({
"users": [
{"id": 1, "name": "John"}
]
})))
}
(2, accept) if accept.contains("application/vnd.myapi.v2+json") => {
// V2 JSON response with more fields
Ok(Response::json(serde_json::json!({
"data": {
"users": [
{
"id": 1,
"name": "John",
"meta": {
"total": 1
}
}
]
},
"links": {
"self": "/api/v2/users",
"next": "/api/v2/users?page=2"
}
})))
}
_ => {
// Fallback response
Ok(Response::json(serde_json::json!({
"error": "Unsupported version or content type"
})))
}
}
} else {
// Default response
Ok(Response::json(serde_json::json!({
"users": [
{"id": 1, "name": "John"}
]
})))
}
}
Version-Specific Middleware
Apply different middleware based on API version:
use oxidite::prelude::*;
// Version-specific rate limiting
async fn v1_rate_limit_middleware(req: Request, next: Next) -> Result<Response> {
// V1 has stricter limits
let max_requests = 100; // per hour for v1
check_rate_limit(&req, max_requests, "v1")?;
next.run(req).await
}
async fn v2_rate_limit_middleware(req: Request, next: Next) -> Result<Response> {
// V2 has higher limits
let max_requests = 1000; // per hour for v2
check_rate_limit(&req, max_requests, "v2")?;
next.run(req).await
}
fn check_rate_limit(_req: &Request, _limit: usize, _version: &str) -> Result<()> {
// Implementation would check rate limits
Ok(())
}
// Version-aware router
async fn versioned_router(req: Request) -> Result<Response> {
if let Some(ApiVersion(version)) = req.extensions().get::<ApiVersion>() {
match version {
1 => {
// Apply v1-specific middleware and handlers
v1_rate_limit_middleware(req, Next::new(|_req| async {
// V1 handler
Ok(Response::json(serde_json::json!({"version": "v1"})))
})).await
}
2 => {
// Apply v2-specific middleware and handlers
v2_rate_limit_middleware(req, Next::new(|_req| async {
// V2 handler
Ok(Response::json(serde_json::json!({"version": "v2"})))
})).await
}
_ => Err(Error::NotImplemented),
}
} else {
// Default to v1
v1_rate_limit_middleware(req, Next::new(|_req| async {
Ok(Response::json(serde_json::json!({"version": "v1", "default": true})))
})).await
}
}
// Simplified stand-in for the framework's Next type, shown for illustration
struct Next<F> {
handler: F,
}
impl<F> Next<F> {
fn new(handler: F) -> Self {
Self { handler }
}
async fn run(self, req: Request) -> Result<Response> {
(self.handler)(req).await
}
}
Testing Versioned APIs
Write tests for versioned APIs:
use oxidite::prelude::*;
use oxidite_testing::{TestServer, RequestBuilder};
#[cfg(test)]
mod version_tests {
use super::*;
#[tokio::test]
async fn test_v1_api() {
let server = TestServer::new(|router| {
router.get("/api/v1/users", v1_get_users);
}).await;
let response = server
.get("/api/v1/users")
.header("Accept", "application/json")
.send()
.await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert!(json.as_array().unwrap().first().unwrap()["email"].is_string());
}
#[tokio::test]
async fn test_v2_api() {
let server = TestServer::new(|router| {
router.get("/api/v2/users", v2_get_users);
}).await;
let response = server
.get("/api/v2/users")
.header("Accept", "application/json")
.send()
.await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert!(json.as_array().unwrap().first().unwrap()["profile"].is_object());
}
#[tokio::test]
async fn test_header_versioning() {
let server = TestServer::new(|router| {
router.get("/users")
.middleware(version_middleware)
.handler(get_users_by_version);
}).await;
let response = server
.get("/users")
.header("Accept", "application/vnd.myapi.v2+json")
.send()
.await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert!(json.as_array().unwrap().first().unwrap()["profile"].is_object());
}
#[tokio::test]
async fn test_deprecated_version() {
let policy = Arc::new(VersionDeprecationPolicy::new());
let server = TestServer::new(move |router| {
let policy_clone = policy.clone();
router.get("/users")
.with_state(policy_clone)
.middleware(deprecation_middleware)
.handler(|_| async { Ok(Response::json(serde_json::json!({"test": true}))) });
}).await;
let response = server
.get("/users")
.header("X-API-Version", "1")
.send()
.await;
assert_eq!(response.status(), 200);
assert!(response.headers().get("X-API-Deprecated").is_some());
}
}
Migration Strategies
Plan for API migrations:
use oxidite::prelude::*;
use std::sync::Arc;
// Migration guide endpoint
async fn migration_guide(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"v1_to_v2": {
"breaking_changes": [
"User objects now include profile information",
"Response format changed to include metadata",
"New pagination structure"
],
"migration_steps": [
"Update client libraries",
"Modify data processing logic",
"Update error handling",
"Test with v2 endpoints"
],
"timeline": {
"deprecation_date": "2024-01-01",
"sunset_date": "2024-07-01",
"recommended_action": "Migrate to v2 before deprecation date"
}
}
})))
}
// Feature flags for gradual rollout
#[derive(Clone)]
struct FeatureFlags {
enabled_features: std::collections::HashSet<String>,
}
impl FeatureFlags {
fn new() -> Self {
let mut features = std::collections::HashSet::new();
features.insert("new_user_format".to_string());
features.insert("enhanced_pagination".to_string());
Self {
enabled_features: features,
}
}
fn is_enabled(&self, feature: &str) -> bool {
self.enabled_features.contains(feature)
}
}
// Version with feature flags
async fn feature_flagged_handler(
req: Request,
State(flags): State<Arc<FeatureFlags>>
) -> Result<Response> {
if let Some(ApiVersion(version)) = req.extensions().get::<ApiVersion>() {
match version {
2 => {
let mut response_data = serde_json::json!({
"users": [
{
"id": 1,
"name": "John"
}
]
});
// Conditionally add features based on flags
if flags.is_enabled("enhanced_pagination") {
response_data["pagination"] = serde_json::json!({
"page": 1,
"per_page": 10,
"total": 100,
"pages": 10
});
}
if flags.is_enabled("new_user_format") {
if let Some(users) = response_data["users"].as_array_mut() {
for user in users {
user["profile"] = serde_json::json!({
"bio": "Software developer",
"avatar_url": "https://example.com/avatar.jpg"
});
}
}
}
Ok(Response::json(response_data))
}
_ => v1_response(), // Default to v1 behavior
}
} else {
v1_response()
}
}
Summary
API versioning in Oxidite supports multiple strategies:
- URL-based: /api/v1/resource (most common)
- Header-based: Accept: application/vnd.api.v1+json
- Query parameter: ?version=1
- Media type versioning: custom content types
Best practices include:
- Clear deprecation policies with advance notice
- Automated version negotiation
- Proper error handling for unsupported versions
- Comprehensive testing across versions
- Gradual migration strategies
- Feature flags for controlled rollouts
Choose the versioning strategy that best fits your API consumers’ needs and your team’s capabilities.
CLI Tools
The Oxidite CLI package is oxidite-cli, and the installed executable is oxidite.
Installation
# Install from crates.io
cargo install oxidite-cli
# Install this generated build explicitly
cargo install oxidite-cli --version 2.1.0-gen
# Install from the workspace checkout
cargo install --path oxidite-cli
Verify the binary:
oxidite --version
oxidite version
Project Scaffolding
Create a new project:
# Interactive project creation
oxidite new my_app
# Explicit project type
oxidite new my_api --project-type api
oxidite new my_api --type api
# Template aliases
oxidite new my_web --template web
oxidite new my_fullstack --template fullstack
oxidite new my_minimal --template minimal
Supported project kinds:
- api
- fullstack
- web (alias for fullstack)
- microservice
- minimal (alias for api)
- serverless
The generated project includes the directories the CLI expects for development:
my_app/
├── Cargo.toml
├── README.md
├── oxidite.toml
├── migrations/
├── seeds/
├── src/
│ ├── main.rs
│ ├── controllers/
│ ├── events/
│ ├── jobs/
│ ├── middleware/
│ ├── models/
│ ├── policies/
│ ├── routes/
│ ├── services/
│ └── validators/
└── tests/
Code Generation
Use generate for new workflows. make remains as a hidden compatibility alias.
# Models
oxidite generate model User
oxidite generate model User email:string age:integer
# Route modules
oxidite generate route users
# Controllers and middleware
oxidite generate controller UserController
oxidite generate middleware AuthMiddleware
# Other supported generators
oxidite generate service Billing
oxidite generate validator CreateUser
oxidite generate job SendDigest
oxidite generate policy Post
oxidite generate event UserSignedUp
# File-based database artifacts
oxidite generate migration create_users_table
oxidite generate seeder users_seed
Supported model field types:
string, text, integer, float, decimal, boolean, uuid, json, timestamp
Example generated model:
use serde::{Deserialize, Serialize};
use oxidite::db::{Model, sqlx};
#[derive(Debug, Clone, Serialize, Deserialize, Model, sqlx::FromRow)]
#[model(table = "users")]
pub struct User {
pub id: i64,
pub email: String,
pub age: i64,
}
Database Migrations
Create a migration file:
oxidite migrate create create_users_table
oxidite generate migration create_users_table
The generated file uses file-based SQL sections:
-- migrate:up
CREATE TABLE users (
id INTEGER PRIMARY KEY,
email TEXT NOT NULL
);
-- migrate:down
DROP TABLE users;
Run migrations:
# Canonical command
oxidite migrate run
# Bare command also runs pending migrations
oxidite migrate
Check or revert migrations:
oxidite migrate status
oxidite migrate revert
# Compatibility alias retained by the CLI
oxidite migrate:rollback
Seeders
# Create a seeder file
oxidite seed create users_seed
oxidite generate seeder users_seed
# Run seeders
oxidite seed run
oxidite seed
# Compatibility alias
oxidite db:seed
Queue Commands
Canonical queue commands:
oxidite queue work --workers 4
oxidite queue list
oxidite queue dlq
oxidite queue clear
Compatibility aliases that still work:
oxidite queue:work --workers 4
oxidite queue:list
oxidite queue:dlq
oxidite queue:clear
Development Workflow
Start the development server with hot reload:
oxidite dev
oxidite dev --port 8080
oxidite dev --host 0.0.0.0 --env development
oxidite dev --watch src --watch templates
oxidite dev --ignore dist
oxidite dev --no-hot-reload
The CLI forwards these overrides to the generated app via:
- SERVER_HOST
- SERVER_PORT
- OXIDITE_ENV
Start the current project in release mode:
oxidite serve
oxidite serve --addr 0.0.0.0:8080
oxidite serve --env production
Build the current project:
oxidite build
oxidite build --release
oxidite build --profile release
oxidite build --target x86_64-unknown-linux-musl
oxidite build --features "database,queue"
oxidite build --verbose
Diagnostics
oxidite doctor
The doctor command checks:
- Rust and Cargo availability
- project files
- migration directory presence
- common environment variables
Help
oxidite --help
oxidite migrate --help
oxidite generate --help
Testing
Oxidite provides comprehensive tools for unit testing, integration testing, and end-to-end testing. This chapter covers all aspects of testing Oxidite applications.
Overview
Oxidite provides:
- Unit testing for individual components
- Integration testing for routes and middleware
- End-to-end testing with simulated HTTP requests
- Test utilities for mocking dependencies
- Test fixtures and factories
- Property-based testing support
Setting Up Tests
Basic test setup in your project:
# In your Cargo.toml
[dev-dependencies]
tokio = { version = "1.0", features = ["full"] }
oxidite-testing = "2.0.0"
serial_test = "3.0"
// In your src/lib.rs or src/main.rs
#[cfg(test)]
mod tests {
use super::*;
use oxidite_testing::TestServer;
#[tokio::test]
async fn test_basic_functionality() {
assert_eq!(2 + 2, 4);
}
}
Unit Testing
Test individual functions and components:
use oxidite::prelude::*;
// Function to test
pub fn calculate_discount(price: f64, discount_percent: f64) -> f64 {
if discount_percent <= 0.0 || discount_percent > 100.0 {
return price;
}
price * (1.0 - discount_percent / 100.0)
}
pub fn is_valid_email(email: &str) -> bool {
email.contains('@') && email.contains('.') && email.len() > 5
}
#[cfg(test)]
mod unit_tests {
use super::*;
#[test]
fn test_calculate_discount() {
assert_eq!(calculate_discount(100.0, 10.0), 90.0);
assert_eq!(calculate_discount(50.0, 20.0), 40.0);
assert_eq!(calculate_discount(100.0, 0.0), 100.0);
assert_eq!(calculate_discount(100.0, 100.0), 0.0);
}
#[test]
fn test_calculate_discount_edge_cases() {
assert_eq!(calculate_discount(100.0, -10.0), 100.0); // Invalid discount
assert_eq!(calculate_discount(100.0, 150.0), 100.0); // Too high discount
assert_eq!(calculate_discount(0.0, 50.0), 0.0); // Zero price
}
#[test]
fn test_is_valid_email() {
assert!(is_valid_email("user@example.com"));
assert!(is_valid_email("test.user@domain.co.uk"));
assert!(!is_valid_email("invalid-email"));
assert!(!is_valid_email("missing@dot"));
assert!(!is_valid_email("short@x"));
}
}
Integration Testing
Test routes and middleware integration:
use oxidite::prelude::*;
use oxidite_testing::{TestServer, RequestBuilder};
// Sample route handler
async fn hello_handler(_req: Request) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"message": "Hello, World!",
"status": "success"
})))
}
async fn user_handler(Path(user_id): Path<u32>) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"id": user_id,
"name": format!("User {}", user_id),
"email": format!("user{}@example.com", user_id)
})))
}
#[cfg(test)]
mod integration_tests {
use super::*;
#[tokio::test]
async fn test_hello_endpoint() {
let server = TestServer::new(|router| {
router.get("/hello", hello_handler);
}).await;
let response = server.get("/hello").send().await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert_eq!(json["message"], "Hello, World!");
assert_eq!(json["status"], "success");
}
#[tokio::test]
async fn test_user_endpoint() {
let server = TestServer::new(|router| {
router.get("/users/:id", user_handler);
}).await;
let response = server.get("/users/123").send().await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert_eq!(json["id"], 123);
assert_eq!(json["name"], "User 123");
assert_eq!(json["email"], "user123@example.com");
}
#[tokio::test]
async fn test_not_found() {
let server = TestServer::new(|router| {
router.get("/hello", hello_handler);
}).await;
let response = server.get("/nonexistent").send().await;
assert_eq!(response.status(), 404);
}
}
Testing with State and Dependencies
Test routes that use application state:
use oxidite::prelude::*;
use std::sync::Arc;
#[derive(Clone)]
struct AppState {
app_name: String,
version: String,
}
async fn stateful_handler(
_req: Request,
State(state): State<Arc<AppState>>
) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"app_name": state.app_name,
"version": state.version
})))
}
#[cfg(test)]
mod state_tests {
use super::*;
#[tokio::test]
async fn test_stateful_handler() {
let app_state = Arc::new(AppState {
app_name: "Test App".to_string(),
version: "1.0.0".to_string(),
});
let server = TestServer::new(move |router| {
let state_clone = app_state.clone();
router.with_state(state_clone);
router.get("/info", stateful_handler);
}).await;
let response = server.get("/info").send().await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert_eq!(json["app_name"], "Test App");
assert_eq!(json["version"], "1.0.0");
}
}
Testing Middleware
Test middleware functionality:
use oxidite::prelude::*;
async fn logging_middleware(req: Request, next: Next) -> Result<Response> {
println!("Request: {} {}", req.method(), req.uri());
let response = next.run(req).await?;
println!("Response: {}", response.status());
Ok(response)
}
async fn auth_middleware(req: Request, next: Next) -> Result<Response> {
// Check for auth header
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
if auth_header.is_none() {
return Err(Error::Unauthorized("Missing authorization header".to_string()));
}
next.run(req).await
}
#[cfg(test)]
mod middleware_tests {
use super::*;
#[tokio::test]
async fn test_auth_middleware_success() {
let server = TestServer::new(|router| {
router.get("/protected")
.middleware(auth_middleware)
.handler(|_req| async { Ok(Response::text("Protected content".to_string())) });
}).await;
let response = server
.get("/protected")
.header("Authorization", "Bearer token123")
.send()
.await;
assert_eq!(response.status(), 200);
assert_eq!(response.text().await, "Protected content");
}
#[tokio::test]
async fn test_auth_middleware_failure() {
let server = TestServer::new(|router| {
router.get("/protected")
.middleware(auth_middleware)
.handler(|_req| async { Ok(Response::text("Protected content".to_string())) });
}).await;
let response = server.get("/protected").send().await;
assert_eq!(response.status(), 401);
}
}
Database Testing
Test database operations with test databases:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Model, Serialize, Deserialize)]
#[model(table = "test_users")]
pub struct TestUser {
#[model(primary_key)]
pub id: i32,
#[model(unique, not_null)]
pub email: String,
#[model(not_null)]
pub name: String,
}
#[cfg(test)]
mod database_tests {
use super::*;
async fn setup_test_db() -> Result<()> {
// Create test database schema
// This would typically run migrations or create tables
Ok(())
}
async fn teardown_test_db() -> Result<()> {
// Clean up test database
Ok(())
}
#[tokio::test]
async fn test_user_crud_operations() {
setup_test_db().await.unwrap();
// Test create
let user = TestUser {
id: 0,
email: "test@example.com".to_string(),
name: "Test User".to_string(),
};
let saved_user = user.save().await.unwrap();
assert_ne!(saved_user.id, 0);
// Test read
let found_user = TestUser::find_by_id(saved_user.id).await.unwrap().unwrap();
assert_eq!(found_user.email, "test@example.com");
assert_eq!(found_user.name, "Test User");
// Test update
let mut updated_user = found_user;
updated_user.name = "Updated Name".to_string();
let updated_user = updated_user.save().await.unwrap();
assert_eq!(updated_user.name, "Updated Name");
// Test delete
updated_user.delete().await.unwrap();
let deleted_user = TestUser::find_by_id(updated_user.id).await.unwrap();
assert!(deleted_user.is_none());
teardown_test_db().await.unwrap();
}
#[tokio::test]
async fn test_duplicate_email_fails() {
setup_test_db().await.unwrap();
let user1 = TestUser {
id: 0,
email: "duplicate@example.com".to_string(),
name: "User 1".to_string(),
};
user1.save().await.unwrap();
let user2 = TestUser {
id: 0,
email: "duplicate@example.com".to_string(), // Same email
name: "User 2".to_string(),
};
// This should fail due to unique constraint
let result = user2.save().await;
assert!(result.is_err());
teardown_test_db().await.unwrap();
}
}
Mocking and Test Doubles
Create mocks for external dependencies:
use oxidite::prelude::*;
use std::sync::Arc;
// Service to be mocked
#[async_trait::async_trait]
pub trait EmailService: Send + Sync {
async fn send_email(&self, to: &str, subject: &str, body: &str) -> Result<(), String>;
}
pub struct RealEmailService;
#[async_trait::async_trait]
impl EmailService for RealEmailService {
async fn send_email(&self, to: &str, subject: &str, body: &str) -> Result<(), String> {
// Actually send email
println!("Sending email to: {}, subject: {}, body: {}", to, subject, body);
Ok(())
}
}
// Handler that uses the service
async fn contact_handler(
Json(payload): Json<ContactRequest>,
State(email_service): State<Arc<dyn EmailService>>
) -> Result<Response> {
email_service
.send_email(&payload.email, &payload.subject, &payload.message)
.await
.map_err(|e| Error::InternalServerError(e))?;
Ok(Response::json(serde_json::json!({
"status": "sent",
"message": "Email sent successfully"
})))
}
#[derive(serde::Deserialize)]
struct ContactRequest {
email: String,
subject: String,
message: String,
}
// Mock implementation for testing
pub struct MockEmailService {
pub sent_emails: std::sync::Arc<tokio::sync::Mutex<Vec<SentEmail>>>,
}
#[derive(Clone)]
pub struct SentEmail {
pub to: String,
pub subject: String,
pub body: String,
}
impl MockEmailService {
pub fn new() -> Self {
Self {
sent_emails: std::sync::Arc::new(tokio::sync::Mutex::new(Vec::new())),
}
}
pub async fn get_sent_emails(&self) -> Vec<SentEmail> {
self.sent_emails.lock().await.clone()
}
}
#[async_trait::async_trait]
impl EmailService for MockEmailService {
async fn send_email(&self, to: &str, subject: &str, body: &str) -> Result<(), String> {
let mut emails = self.sent_emails.lock().await;
emails.push(SentEmail {
to: to.to_string(),
subject: subject.to_string(),
body: body.to_string(),
});
Ok(())
}
}
#[cfg(test)]
mod mock_tests {
use super::*;
#[tokio::test]
async fn test_contact_handler_with_mock() {
let mock_service = std::sync::Arc::new(MockEmailService::new());
let service_clone = mock_service.clone();
let server = TestServer::new(move |router| {
router.post("/contact")
.with_state(service_clone.clone() as Arc<dyn EmailService>)
.handler(contact_handler);
}).await;
let response = server
.post("/contact")
.json(&serde_json::json!({
"email": "user@example.com",
"subject": "Test Subject",
"message": "Test message"
}))
.send()
.await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert_eq!(json["status"], "sent");
// Verify email was sent via mock
let sent_emails = mock_service.get_sent_emails().await;
assert_eq!(sent_emails.len(), 1);
assert_eq!(sent_emails[0].to, "user@example.com");
assert_eq!(sent_emails[0].subject, "Test Subject");
assert_eq!(sent_emails[0].body, "Test message");
}
}
Property-Based Testing
Use property-based testing for comprehensive validation:
use oxidite::prelude::*;
// Function to test with property-based testing
pub fn reverse_string(s: &str) -> String {
s.chars().rev().collect()
}
pub fn is_palindrome(s: &str) -> bool {
let cleaned: String = s.chars()
.filter(|c| c.is_alphanumeric())
.flat_map(|c| c.to_lowercase()) // to_lowercase can yield more than one char
.collect();
cleaned == reverse_string(&cleaned)
}
#[cfg(test)]
mod property_tests {
use super::*;
use proptest::prelude::*;
// Test that reversing a string twice gives the original
proptest! {
#[test]
fn test_reverse_twice_is_identity(s in ".*") {
let reversed_once = reverse_string(&s);
let reversed_twice = reverse_string(&reversed_once);
prop_assert_eq!(s, reversed_twice);
}
}
// Test palindrome properties
proptest! {
#[test]
fn test_palindromes(s in "[a-zA-Z]{1,10}") {
// A string concatenated with its reverse should be a palindrome
let reversed = reverse_string(&s);
let palindrome = format!("{}{}", s, reversed);
prop_assert!(is_palindrome(&palindrome));
}
}
}
Test Fixtures and Factories
Create reusable test data:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
#[derive(Model, Serialize, Deserialize, Clone)]
#[model(table = "test_posts")]
pub struct TestPost {
#[model(primary_key)]
pub id: i32,
#[model(not_null)]
pub title: String,
#[model(not_null)]
pub content: String,
pub user_id: i32,
}
#[derive(Model, Serialize, Deserialize, Clone)]
#[model(table = "test_comments")]
pub struct TestComment {
#[model(primary_key)]
pub id: i32,
#[model(not_null)]
pub content: String,
pub post_id: i32,
pub user_id: i32,
}
// Test factory for creating test data
pub struct TestFactory;
impl TestFactory {
pub fn create_user(email: &str, name: &str) -> TestUser {
TestUser {
id: 0,
email: email.to_string(),
name: name.to_string(),
}
}
pub fn create_post(title: &str, content: &str, user_id: i32) -> TestPost {
TestPost {
id: 0,
title: title.to_string(),
content: content.to_string(),
user_id,
}
}
pub fn create_comment(content: &str, post_id: i32, user_id: i32) -> TestComment {
TestComment {
id: 0,
content: content.to_string(),
post_id,
user_id,
}
}
}
#[cfg(test)]
mod fixture_tests {
use super::*;
#[tokio::test]
async fn test_blog_post_with_comments() {
setup_test_db().await.unwrap();
// Create test data using factory
let user = TestFactory::create_user("author@example.com", "Author Name");
let saved_user = user.save().await.unwrap();
let post = TestFactory::create_post("Test Post", "Post content", saved_user.id);
let saved_post = post.save().await.unwrap();
let comment = TestFactory::create_comment("Great post!", saved_post.id, saved_user.id);
let saved_comment = comment.save().await.unwrap();
// Verify relationships
assert_eq!(saved_comment.post_id, saved_post.id);
assert_eq!(saved_comment.user_id, saved_user.id);
// Clean up
saved_comment.delete().await.unwrap();
saved_post.delete().await.unwrap();
saved_user.delete().await.unwrap();
teardown_test_db().await.unwrap();
}
}
Test Configuration
Configure test-specific settings:
// In your Cargo.toml
[features]
test_utils = []
// Test utilities module
#[cfg(any(test, feature = "test_utils"))]
pub mod test_utils {
use oxidite::prelude::*;
use std::sync::Arc;
use tokio::sync::Mutex;
#[derive(Clone)]
pub struct TestContext {
pub db_url: String,
// TempDir is not Clone, so share it behind an Arc; the directory is
// deleted when the last clone is dropped.
pub temp_dir: Arc<tempfile::TempDir>,
pub cleanup_hooks: Arc<Mutex<Vec<Box<dyn FnMut() + Send>>>>,
}
impl TestContext {
pub async fn new() -> Self {
let temp_dir = Arc::new(tempfile::tempdir().expect("Failed to create temp dir"));
Self {
db_url: format!("sqlite://{}/test.db", temp_dir.path().display()),
temp_dir,
cleanup_hooks: Arc::new(Mutex::new(Vec::new())),
}
}
pub async fn add_cleanup_hook<F>(&self, hook: F)
where
F: FnMut() + Send + 'static
{
let mut hooks = self.cleanup_hooks.lock().await;
hooks.push(Box::new(hook));
}
pub async fn run_cleanup(&self) {
let mut hooks = self.cleanup_hooks.lock().await;
for hook in hooks.iter_mut() {
hook();
}
}
}
// Test server wrapper with context
pub struct TestServerWithContext {
pub server: TestServer,
pub context: TestContext,
}
impl TestServerWithContext {
pub async fn new<F>(setup_fn: F) -> Self
where
F: FnOnce(&mut Router, TestContext) + Send + 'static
{
let context = TestContext::new().await;
let context_clone = context.clone();
let server = TestServer::new(move |router| {
setup_fn(router, context_clone);
}).await;
Self { server, context }
}
}
}
#[cfg(test)]
mod configured_tests {
use super::*;
use test_utils::*;
#[tokio::test]
async fn test_with_context() {
let test_server = TestServerWithContext::new(|router, _ctx| {
router.get("/test", |_req| async {
Ok(Response::text("Test response".to_string()))
});
}).await;
let response = test_server.server.get("/test").send().await;
assert_eq!(response.status(), 200);
assert_eq!(response.text().await, "Test response");
test_server.context.run_cleanup().await;
}
}
Parallel Test Execution
Handle parallel test execution safely:
use oxidite::prelude::*;
use serial_test::serial;
// Use serial_test attribute for tests that can't run in parallel
#[tokio::test]
#[serial]
async fn test_shared_resource() {
// This test accesses a shared resource and must run serially
// For example, a test that modifies global configuration
println!("Running serial test");
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
}
#[tokio::test]
async fn test_independent_functionality() {
// This test can run in parallel with others
assert_eq!(2 + 2, 4);
}
// Test isolation utilities
pub mod test_isolation {
use std::sync::atomic::{AtomicUsize, Ordering};
static TEST_COUNTER: AtomicUsize = AtomicUsize::new(0);
pub fn get_unique_test_id() -> String {
let id = TEST_COUNTER.fetch_add(1, Ordering::SeqCst);
format!("test_{}", id)
}
pub fn get_unique_table_name() -> String {
format!("test_table_{}", get_unique_test_id())
}
pub fn get_unique_db_name() -> String {
format!("test_db_{}.db", get_unique_test_id())
}
}
#[cfg(test)]
mod isolated_tests {
use super::*;
use test_isolation::*;
#[tokio::test]
async fn test_with_unique_resources() {
let unique_id = get_unique_test_id();
let table_name = get_unique_table_name();
println!("Using unique resources: {} - {}", unique_id, table_name);
// Test using isolated resources
assert!(table_name.starts_with("test_table_test_"));
}
}
Test Coverage
Measure and improve test coverage:
// Coverage is easiest to measure with `cargo llvm-cov` or `cargo tarpaulin`.
// For manual instrumentation, in your .cargo/config.toml:
// [build]
// rustflags = ["-C", "instrument-coverage"]
use oxidite::prelude::*;
// Complex function to test thoroughly
pub fn process_order(
amount: f64,
tax_rate: f64,
discount_percent: f64,
shipping_cost: f64,
is_international: bool
) -> Result<OrderSummary, String> {
if amount <= 0.0 {
return Err("Amount must be positive".to_string());
}
if tax_rate < 0.0 || tax_rate > 1.0 {
return Err("Tax rate must be between 0 and 1".to_string());
}
if discount_percent < 0.0 || discount_percent > 100.0 {
return Err("Discount percent must be between 0 and 100".to_string());
}
let discount_amount = amount * (discount_percent / 100.0);
let subtotal = amount - discount_amount;
let tax_amount = subtotal * tax_rate;
let total = subtotal + tax_amount + shipping_cost;
let international_fee = if is_international { total * 0.05 } else { 0.0 };
let final_total = total + international_fee;
Ok(OrderSummary {
subtotal,
tax_amount,
shipping_cost,
discount_amount,
international_fee,
total: final_total,
})
}
#[derive(Debug, PartialEq)]
pub struct OrderSummary {
pub subtotal: f64,
pub tax_amount: f64,
pub shipping_cost: f64,
pub discount_amount: f64,
pub international_fee: f64,
pub total: f64,
}
#[cfg(test)]
mod coverage_tests {
use super::*;
#[test]
fn test_process_order_normal_case() {
let result = process_order(100.0, 0.1, 10.0, 5.0, false).unwrap();
assert_eq!(result.subtotal, 90.0); // 100 - 10% discount
assert_eq!(result.tax_amount, 9.0); // 90 * 10% tax
assert_eq!(result.shipping_cost, 5.0);
assert_eq!(result.discount_amount, 10.0);
assert_eq!(result.international_fee, 0.0);
assert_eq!(result.total, 104.0); // 90 + 9 + 5 + 0
}
#[test]
fn test_process_order_international() {
let result = process_order(100.0, 0.1, 0.0, 5.0, true).unwrap();
assert_eq!(result.subtotal, 100.0);
assert_eq!(result.tax_amount, 10.0);
assert_eq!(result.international_fee, 5.75); // (100 + 10 + 5) * 5%
assert_eq!(result.total, 120.75);
}
#[test]
fn test_process_order_zero_values() {
let result = process_order(100.0, 0.0, 0.0, 0.0, false).unwrap();
assert_eq!(result.subtotal, 100.0);
assert_eq!(result.tax_amount, 0.0);
assert_eq!(result.total, 100.0);
}
#[test]
fn test_process_order_edge_cases() {
// Test with very small values
let result = process_order(0.01, 0.01, 0.01, 0.01, false).unwrap();
assert!(result.total > 0.0);
// Test maximum values within bounds
let result = process_order(1000000.0, 0.99, 99.99, 1000.0, true).unwrap();
assert!(result.total > 0.0);
}
#[test]
fn test_process_order_errors() {
// Test negative amount
assert!(process_order(-1.0, 0.1, 10.0, 5.0, false).is_err());
// Test invalid tax rate
assert!(process_order(100.0, -0.1, 10.0, 5.0, false).is_err());
assert!(process_order(100.0, 1.5, 10.0, 5.0, false).is_err());
// Test invalid discount percent
assert!(process_order(100.0, 0.1, -1.0, 5.0, false).is_err());
assert!(process_order(100.0, 0.1, 101.0, 5.0, false).is_err());
}
}
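Because `process_order` does its arithmetic in `f64`, exact `assert_eq!` comparisons can fail for inputs whose intermediate results are not exactly representable in binary floating point. A small tolerance helper (a sketch, not part of Oxidite) keeps such tests robust:

```rust
/// Assert two floats are equal within an absolute tolerance.
fn assert_close(actual: f64, expected: f64, eps: f64) {
    assert!(
        (actual - expected).abs() < eps,
        "expected {expected}, got {actual}"
    );
}

fn main() {
    // 0.1 + 0.2 != 0.3 exactly in f64, but they agree within 1e-9.
    assert_close(0.1 + 0.2, 0.3, 1e-9);
}
```

The tests above happen to pass with exact equality because their intermediate values round cleanly, but a tolerance-based comparison will not break when a discount rate or tax rate changes to a value that does not.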
Test Reporting
Generate test reports and summaries:
use oxidite::prelude::*;
// Test result aggregator
#[derive(Default)]
pub struct TestResults {
pub passed: usize,
pub failed: usize,
pub ignored: usize,
pub measured: usize,
}
impl TestResults {
pub fn add_result(&mut self, result: TestResult) {
match result.status {
TestStatus::Passed => self.passed += 1,
TestStatus::Failed => self.failed += 1,
TestStatus::Ignored => self.ignored += 1,
TestStatus::Measured => self.measured += 1,
}
}
pub fn total(&self) -> usize {
self.passed + self.failed + self.ignored + self.measured
}
pub fn success_rate(&self) -> f64 {
if self.total() == 0 {
0.0
} else {
(self.passed as f64 / self.total() as f64) * 100.0
}
}
pub fn print_summary(&self) {
println!("Test Results Summary:");
println!(" Total: {}", self.total());
println!(" Passed: {} ({:.1}%)", self.passed, self.success_rate());
println!(" Failed: {}", self.failed);
println!(" Ignored: {}", self.ignored);
println!(" Measured: {}", self.measured);
}
}
pub struct TestResult {
pub name: String,
pub status: TestStatus,
pub duration: std::time::Duration,
pub error: Option<String>,
}
pub enum TestStatus {
Passed,
Failed,
Ignored,
Measured,
}
// Example of integrating with a test runner
pub struct TestRunner {
pub results: TestResults,
}
impl TestRunner {
pub fn new() -> Self {
Self {
results: TestResults::default(),
}
}
pub async fn run_test<F>(&mut self, name: &str, test_fn: F)
where
F: std::future::Future<Output = Result<(), String>>
{
let start = std::time::Instant::now();
match test_fn.await {
Ok(()) => {
let result = TestResult {
name: name.to_string(),
status: TestStatus::Passed,
duration: start.elapsed(),
error: None,
};
self.results.add_result(result);
}
Err(error) => {
let result = TestResult {
name: name.to_string(),
status: TestStatus::Failed,
duration: start.elapsed(),
error: Some(error),
};
self.results.add_result(result);
}
}
}
}
#[cfg(test)]
mod runner_tests {
use super::*;
#[tokio::test]
async fn test_runner_functionality() {
let mut runner = TestRunner::new();
// Run a passing test
runner.run_test("passing_test", async { Ok(()) }).await;
// Run a failing test
runner.run_test("failing_test", async {
Err("Test failed intentionally".to_string())
}).await;
// Run another passing test
runner.run_test("another_passing_test", async { Ok(()) }).await;
assert_eq!(runner.results.passed, 2);
assert_eq!(runner.results.failed, 1);
assert_eq!(runner.results.total(), 3);
let success_rate = runner.results.success_rate();
// Avoid exact equality on floats; compare within a tolerance
assert!((success_rate - 200.0 / 3.0).abs() < 1e-9);
}
}
Summary
Testing in Oxidite provides comprehensive tools for:
- Unit Testing: Individual function and component testing
- Integration Testing: Route and middleware integration
- Database Testing: ORM and database operation testing
- Mocking: External dependency simulation
- Property-Based Testing: Comprehensive validation
- Fixtures: Reusable test data creation
- Parallel Execution: Safe concurrent test running
- Coverage Analysis: Thorough testing measurement
- Reporting: Detailed test results and summaries
Following these practices yields reliable, maintainable Oxidite applications and confidence when changing code.
Plugins
Plugins in Oxidite provide a way to extend the framework’s functionality with modular, reusable components. This chapter covers creating, configuring, and using plugins in your Oxidite applications.
Overview
Oxidite plugins allow you to:
- Extend framework functionality
- Share common features across applications
- Create modular, reusable components
- Hook into framework lifecycle events
- Customize request/response processing
Plugin Architecture
The plugin system is built around traits and hooks:
use oxidite::prelude::*;
use std::sync::Arc;
/// Core plugin trait that all plugins must implement
#[async_trait::async_trait]
pub trait Plugin: Send + Sync {
/// Plugin name for identification
fn name(&self) -> &str;
/// Plugin version
fn version(&self) -> &str {
"1.0.0"
}
/// Initialize the plugin
async fn initialize(&self, _router: &mut Router) -> Result<()> {
Ok(())
}
/// Called before request processing
async fn before_request(&self, _req: &mut Request) -> Result<()> {
Ok(())
}
/// Called after request processing
async fn after_request(&self, _req: &Request, _resp: &mut Response) -> Result<()> {
Ok(())
}
/// Called on application shutdown
async fn shutdown(&self) -> Result<()> {
Ok(())
}
}
/// Plugin manager to handle multiple plugins
pub struct PluginManager {
plugins: Vec<Arc<dyn Plugin>>,
}
impl PluginManager {
pub fn new() -> Self {
Self {
plugins: Vec::new(),
}
}
pub fn register_plugin(&mut self, plugin: Arc<dyn Plugin>) {
self.plugins.push(plugin);
}
pub async fn initialize_plugins(&self, router: &mut Router) -> Result<()> {
for plugin in &self.plugins {
plugin.initialize(router).await?;
}
Ok(())
}
pub async fn before_request(&self, req: &mut Request) -> Result<()> {
for plugin in &self.plugins {
plugin.before_request(req).await?;
}
Ok(())
}
pub async fn after_request(&self, req: &Request, resp: &mut Response) -> Result<()> {
for plugin in &self.plugins {
plugin.after_request(req, resp).await?;
}
Ok(())
}
pub async fn shutdown_plugins(&self) -> Result<()> {
for plugin in &self.plugins {
plugin.shutdown().await?;
}
Ok(())
}
}
Creating a Basic Plugin
Create your first plugin:
use oxidite::prelude::*;
use std::sync::Arc;
/// A simple logging plugin
pub struct LoggingPlugin {
log_level: String,
}
impl LoggingPlugin {
pub fn new(log_level: &str) -> Self {
Self {
log_level: log_level.to_string(),
}
}
}
#[async_trait::async_trait]
impl Plugin for LoggingPlugin {
fn name(&self) -> &str {
"logging"
}
fn version(&self) -> &str {
"1.0.0"
}
async fn before_request(&self, req: &mut Request) -> Result<()> {
println!("[{}] {} {}", self.log_level, req.method(), req.uri());
Ok(())
}
async fn after_request(&self, _req: &Request, resp: &mut Response) -> Result<()> {
println!("[{}] Response: {}", self.log_level, resp.status());
Ok(())
}
}
// Usage example
#[tokio::main]
async fn main() -> Result<()> {
let mut plugin_manager = PluginManager::new();
// Register the logging plugin
plugin_manager.register_plugin(Arc::new(LoggingPlugin::new("INFO")));
let mut router = Router::new();
// Initialize plugins
plugin_manager.initialize_plugins(&mut router).await?;
// Add routes
router.get("/", |_req| async { Ok(Response::text("Hello, World!".to_string())) });
// Create server with plugin middleware
let server = Server::new(router)
.with_plugin_manager(plugin_manager);
server.listen("127.0.0.1:3000".parse()?).await
}
// Extend Server to support plugins. Note: an inherent `impl Server` only
// compiles inside the crate that defines `Server`; in application code,
// provide this builder through an extension trait or a newtype wrapper.
impl Server {
pub fn with_plugin_manager(mut self, plugin_manager: PluginManager) -> Self {
self.plugin_manager = Some(plugin_manager);
self
}
}
Middleware Plugins
Create plugins that act as middleware:
use oxidite::prelude::*;
use std::sync::Arc;
/// CORS plugin for handling cross-origin requests
pub struct CorsPlugin {
allowed_origins: Vec<String>,
allowed_methods: Vec<String>,
allowed_headers: Vec<String>,
}
impl CorsPlugin {
pub fn new() -> Self {
Self {
allowed_origins: vec!["*".to_string()],
allowed_methods: vec![
"GET".to_string(),
"POST".to_string(),
"PUT".to_string(),
"DELETE".to_string(),
"OPTIONS".to_string(),
],
allowed_headers: vec![
"Content-Type".to_string(),
"Authorization".to_string(),
"X-Requested-With".to_string(),
],
}
}
pub fn with_origins(mut self, origins: Vec<&str>) -> Self {
self.allowed_origins = origins.iter().map(|s| s.to_string()).collect();
self
}
pub fn with_methods(mut self, methods: Vec<&str>) -> Self {
self.allowed_methods = methods.iter().map(|s| s.to_string()).collect();
self
}
pub fn with_headers(mut self, headers: Vec<&str>) -> Self {
self.allowed_headers = headers.iter().map(|s| s.to_string()).collect();
self
}
}
#[async_trait::async_trait]
impl Plugin for CorsPlugin {
fn name(&self) -> &str {
"cors"
}
async fn after_request(&self, req: &Request, resp: &mut Response) -> Result<()> {
// Handle preflight requests
if req.method() == http::Method::OPTIONS {
*resp = Response::ok();
}
// Add CORS headers. Note: browsers accept only a single value for
// Access-Control-Allow-Origin; with several allowed origins you would
// echo back the request's Origin header when it matches.
resp.headers_mut().insert(
"Access-Control-Allow-Origin",
self.allowed_origins.join(", ").parse().unwrap()
);
resp.headers_mut().insert(
"Access-Control-Allow-Methods",
self.allowed_methods.join(", ").parse().unwrap()
);
resp.headers_mut().insert(
"Access-Control-Allow-Headers",
self.allowed_headers.join(", ").parse().unwrap()
);
Ok(())
}
}
// Rate limiting plugin
use std::collections::HashMap;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;
pub struct RateLimitPlugin {
max_requests: u32,
window_duration: Duration,
requests: Arc<RwLock<HashMap<String, Vec<Instant>>>>,
}
impl RateLimitPlugin {
pub fn new(max_requests: u32, window_seconds: u64) -> Self {
Self {
max_requests,
window_duration: Duration::from_secs(window_seconds),
requests: Arc::new(RwLock::new(HashMap::new())),
}
}
}
#[async_trait::async_trait]
impl Plugin for RateLimitPlugin {
fn name(&self) -> &str {
"rate_limit"
}
async fn before_request(&self, req: &mut Request) -> Result<()> {
// Extract client identifier (IP address)
let client_id = req.headers()
.get("x-forwarded-for")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown")
.to_string();
let now = Instant::now();
let window_start = now - self.window_duration;
{
let mut requests = self.requests.write().await;
let times = requests.entry(client_id).or_insert_with(Vec::new);
// Drop requests that have fallen out of the sliding window
times.retain(|time| *time > window_start);
// Check rate limit
if times.len() >= self.max_requests as usize {
return Err(Error::TooManyRequests);
}
// Record this request
times.push(now);
}
Ok(())
}
}
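The heart of the rate limiter is the sliding-window check. Stripped of the framework types, the logic can be sketched with plain integer-second timestamps (a standalone illustration, not Oxidite API):

```rust
/// Sliding-window rate check: drop timestamps older than the window,
/// then admit the request only if the count is under the limit.
fn allow_request(times: &mut Vec<u64>, now: u64, window: u64, max: usize) -> bool {
    let window_start = now.saturating_sub(window);
    times.retain(|t| *t > window_start);
    if times.len() >= max {
        return false; // over the limit within this window
    }
    times.push(now);
    true
}

fn main() {
    let mut times = Vec::new();
    assert!(allow_request(&mut times, 100, 60, 2));
    assert!(allow_request(&mut times, 110, 60, 2));
    assert!(!allow_request(&mut times, 120, 60, 2)); // third request rejected
    assert!(allow_request(&mut times, 200, 60, 2)); // old entries expired
}
```

Note that this admits at most `max` requests in any trailing window, unlike a fixed-window counter, which can allow bursts of up to `2 * max` at a window boundary.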
Database Plugins
Create plugins that integrate with databases:
use oxidite::prelude::*;
use std::sync::Arc;
/// Database connection plugin
pub struct DatabasePlugin {
connection_string: String,
pool_size: usize,
}
impl DatabasePlugin {
pub fn new(connection_string: &str, pool_size: usize) -> Self {
Self {
connection_string: connection_string.to_string(),
pool_size,
}
}
}
#[async_trait::async_trait]
impl Plugin for DatabasePlugin {
fn name(&self) -> &str {
"database"
}
async fn initialize(&self, _router: &mut Router) -> Result<()> {
// Initialize database connection pool
// This would typically connect to the actual database
println!("Initializing database connection to: {}", self.connection_string);
// Store connection in router state
// _router.with_state(Arc::new(DatabaseConnection::new(&self.connection_string)?));
Ok(())
}
async fn shutdown(&self) -> Result<()> {
// Close database connections
println!("Closing database connections");
Ok(())
}
}
// Example database connection wrapper
pub struct DatabaseConnection {
// Connection pool or client
}
impl DatabaseConnection {
pub fn new(_connection_string: &str) -> Result<Self> {
// In a real implementation, this would create the actual connection
Ok(Self {})
}
}
// Migration plugin
pub struct MigrationPlugin {
migrations_path: String,
}
impl MigrationPlugin {
pub fn new(migrations_path: &str) -> Self {
Self {
migrations_path: migrations_path.to_string(),
}
}
}
#[async_trait::async_trait]
impl Plugin for MigrationPlugin {
fn name(&self) -> &str {
"migrations"
}
async fn initialize(&self, _router: &mut Router) -> Result<()> {
println!("Running migrations from: {}", self.migrations_path);
// Run pending migrations
Ok(())
}
}
Authentication Plugins
Create authentication plugins:
use oxidite::prelude::*;
use std::sync::Arc;
/// JWT authentication plugin
pub struct JwtAuthPlugin {
secret: String,
expiration: std::time::Duration,
}
impl JwtAuthPlugin {
pub fn new(secret: &str, expiration_hours: u64) -> Self {
Self {
secret: secret.to_string(),
expiration: std::time::Duration::from_secs(expiration_hours * 3600),
}
}
}
#[async_trait::async_trait]
impl Plugin for JwtAuthPlugin {
fn name(&self) -> &str {
"jwt_auth"
}
async fn before_request(&self, req: &mut Request) -> Result<()> {
// Check for JWT token in Authorization header
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
if let Some(auth) = auth_header {
if auth.starts_with("Bearer ") {
let token = auth.trim_start_matches("Bearer ").trim();
if !self.verify_token(token).await {
return Err(Error::Unauthorized("Invalid token".to_string()));
}
} else {
return Err(Error::Unauthorized("Invalid authorization format".to_string()));
}
} else {
// For public endpoints, this might be acceptable
// Return Ok(()) to continue processing
}
Ok(())
}
}
impl JwtAuthPlugin {
async fn verify_token(&self, _token: &str) -> bool {
// In a real implementation, verify the JWT token
// This is a placeholder
_token == "valid_token"
}
}
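The Authorization-header parsing above can be factored into a small pure helper that is easy to unit test in isolation (a sketch; `before_request` could call a function like this instead of inlining the checks):

```rust
/// Extract the token from an Authorization header value.
/// Returns None for non-Bearer schemes or an empty token.
fn bearer_token(header: &str) -> Option<&str> {
    header
        .strip_prefix("Bearer ")
        .map(str::trim)
        .filter(|t| !t.is_empty())
}

fn main() {
    assert_eq!(bearer_token("Bearer abc.def.ghi"), Some("abc.def.ghi"));
    assert_eq!(bearer_token("Basic dXNlcjpwYXNz"), None); // wrong scheme
    assert_eq!(bearer_token("Bearer "), None); // empty token
}
```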
/// API Key authentication plugin
pub struct ApiKeyPlugin {
valid_keys: Vec<String>,
}
impl ApiKeyPlugin {
pub fn new(valid_keys: Vec<&str>) -> Self {
Self {
valid_keys: valid_keys.iter().map(|k| k.to_string()).collect(),
}
}
}
#[async_trait::async_trait]
impl Plugin for ApiKeyPlugin {
fn name(&self) -> &str {
"api_key_auth"
}
async fn before_request(&self, req: &mut Request) -> Result<()> {
// Check for API key in header or query parameter
let api_key = req.headers()
.get("x-api-key")
.and_then(|hv| hv.to_str().ok())
.or_else(|| {
req.uri().query().and_then(|q| {
q.split('&')
.find(|param| param.starts_with("api_key="))
.map(|param| param.split('=').nth(1).unwrap_or(""))
})
});
if let Some(key) = api_key {
if self.valid_keys.contains(&key.to_string()) {
// Add user info to request extensions
req.extensions_mut().insert(ApiKeyUser {
key: key.to_string(),
permissions: vec!["read".to_string(), "write".to_string()],
});
return Ok(());
}
}
Err(Error::Unauthorized("Invalid or missing API key".to_string()))
}
}
#[derive(Clone)]
struct ApiKeyUser {
key: String,
permissions: Vec<String>,
}
Template Engine Plugins
Create plugins that integrate with template engines:
use oxidite::prelude::*;
use std::sync::Arc;
/// Template engine plugin
pub struct TemplatePlugin {
templates_dir: String,
cache_enabled: bool,
}
impl TemplatePlugin {
pub fn new(templates_dir: &str) -> Self {
Self {
templates_dir: templates_dir.to_string(),
cache_enabled: true,
}
}
pub fn with_cache(mut self, enabled: bool) -> Self {
self.cache_enabled = enabled;
self
}
}
#[async_trait::async_trait]
impl Plugin for TemplatePlugin {
fn name(&self) -> &str {
"template_engine"
}
async fn initialize(&self, router: &mut Router) -> Result<()> {
// Initialize template engine
let mut template_engine = oxidite_template::TemplateEngine::new();
// Load templates from directory
// This would scan the templates directory and load all templates
println!("Loading templates from: {}", self.templates_dir);
// Store template engine in router state
router.with_state(Arc::new(template_engine));
Ok(())
}
async fn after_request(&self, _req: &Request, resp: &mut Response) -> Result<()> {
// Template rendering happens in route handlers
// This plugin primarily manages the template engine
Ok(())
}
}
Plugin Configuration
Configure plugins with options:
use oxidite::prelude::*;
use serde::Deserialize;
/// Configuration for plugins
#[derive(Deserialize, Clone)]
pub struct PluginConfig {
pub enabled: bool,
pub settings: std::collections::HashMap<String, serde_json::Value>,
}
impl PluginConfig {
pub fn get<T>(&self, key: &str) -> Option<T>
where
T: serde::de::DeserializeOwned,
{
self.settings.get(key)
.and_then(|value| serde_json::from_value(value.clone()).ok())
}
pub fn get_or<T>(&self, key: &str, default: T) -> T
where
T: serde::de::DeserializeOwned,
{
self.get(key).unwrap_or(default)
}
}
/// Configurable plugin base
pub struct ConfigurablePlugin {
name: String,
config: PluginConfig,
}
impl ConfigurablePlugin {
pub fn new(name: &str, config: PluginConfig) -> Self {
Self {
name: name.to_string(),
config,
}
}
pub fn get_config(&self) -> &PluginConfig {
&self.config
}
}
#[async_trait::async_trait]
impl Plugin for ConfigurablePlugin {
fn name(&self) -> &str {
&self.name
}
async fn initialize(&self, _router: &mut Router) -> Result<()> {
if !self.config.enabled {
return Ok(());
}
println!("Initializing configurable plugin: {}", self.name);
Ok(())
}
}
// Example configuration file
/*
plugins:
cors:
enabled: true
settings:
allowed_origins: ["http://localhost:3000", "https://myapp.com"]
allowed_methods: ["GET", "POST", "PUT", "DELETE"]
rate_limit:
enabled: true
settings:
max_requests: 100
window_seconds: 60
jwt_auth:
enabled: true
settings:
secret: "my_secret_key"
expiration_hours: 24
*/
Plugin Registry
Create a registry for managing plugins:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::Arc;
/// Plugin registry to manage plugin lifecycle
pub struct PluginRegistry {
plugins: HashMap<String, Arc<dyn Plugin>>,
initialized: bool,
}
impl PluginRegistry {
pub fn new() -> Self {
Self {
plugins: HashMap::new(),
initialized: false,
}
}
pub fn register(&mut self, plugin: Arc<dyn Plugin>) -> Result<()> {
let name = plugin.name().to_string();
if self.plugins.contains_key(&name) {
return Err(Error::InternalServerError(format!("Plugin '{}' already registered", name)));
}
self.plugins.insert(name, plugin);
Ok(())
}
pub fn get(&self, name: &str) -> Option<&Arc<dyn Plugin>> {
self.plugins.get(name)
}
pub async fn initialize_all(&mut self, router: &mut Router) -> Result<()> {
if self.initialized {
return Ok(());
}
for plugin in self.plugins.values() {
plugin.initialize(router).await?;
}
self.initialized = true;
Ok(())
}
pub async fn shutdown_all(&self) -> Result<()> {
for plugin in self.plugins.values() {
plugin.shutdown().await?;
}
Ok(())
}
pub fn list_plugins(&self) -> Vec<String> {
self.plugins.keys().cloned().collect()
}
}
// Plugin factory for creating plugins from configuration
pub struct PluginFactory;
impl PluginFactory {
pub fn create_from_config(config: &PluginConfig) -> Result<Vec<Arc<dyn Plugin>>> {
let mut plugins = Vec::new();
// Example: create CORS plugin if configured
if config.enabled {
// In a real implementation, this would check the plugin type
// and create the appropriate plugin instance
}
Ok(plugins)
}
pub fn create_cors_plugin(settings: &serde_json::Value) -> Result<Arc<dyn Plugin>> {
let cors_plugin = CorsPlugin::new()
.with_origins(
settings.get("allowed_origins")
.and_then(|origins| origins.as_array())
.map(|arr| arr.iter().filter_map(|v| v.as_str()).collect())
.unwrap_or_else(|| vec!["*"])
);
Ok(Arc::new(cors_plugin))
}
pub fn create_rate_limit_plugin(settings: &serde_json::Value) -> Result<Arc<dyn Plugin>> {
let max_requests = settings.get("max_requests")
.and_then(|v| v.as_u64())
.unwrap_or(100) as u32;
let window_seconds = settings.get("window_seconds")
.and_then(|v| v.as_u64())
.unwrap_or(60);
Ok(Arc::new(RateLimitPlugin::new(max_requests, window_seconds)))
}
}
Plugin Dependencies
Handle plugin dependencies and ordering:
use oxidite::prelude::*;
use std::sync::Arc;
/// Plugin with dependencies
pub struct DependencyAwarePlugin {
name: String,
dependencies: Vec<String>,
plugin: Arc<dyn Plugin>,
}
impl DependencyAwarePlugin {
pub fn new(name: &str, plugin: Arc<dyn Plugin>, dependencies: Vec<String>) -> Self {
Self {
name: name.to_string(),
dependencies,
plugin,
}
}
pub fn get_dependencies(&self) -> &[String] {
&self.dependencies
}
pub fn get_plugin(&self) -> &Arc<dyn Plugin> {
&self.plugin
}
}
/// Topological sorter for plugin dependencies
pub struct PluginDependencySorter;
impl PluginDependencySorter {
pub fn sort_plugins(plugins: Vec<DependencyAwarePlugin>) -> Result<Vec<DependencyAwarePlugin>> {
let mut sorted = Vec::new();
let mut remaining: Vec<_> = plugins.into_iter().enumerate().collect();
let mut processed = std::collections::HashSet::new();
while !remaining.is_empty() {
let mut progress = false;
let mut i = 0;
while i < remaining.len() {
let (_, plugin) = &remaining[i];
// Check if all dependencies are satisfied
let all_deps_satisfied = plugin.get_dependencies()
.iter()
.all(|dep| processed.contains(dep));
if all_deps_satisfied {
let (_, plugin) = remaining.remove(i);
// Record the name before the plugin is moved into `sorted`
processed.insert(plugin.name.clone());
sorted.push(plugin);
progress = true;
} else {
i += 1;
}
}
if !progress && !remaining.is_empty() {
return Err(Error::InternalServerError("Circular dependency detected in plugins".to_string()));
}
}
Ok(sorted)
}
}
// Plugin with explicit dependency example
pub struct DatabaseDependentPlugin {
db_plugin_name: String,
}
#[async_trait::async_trait]
impl Plugin for DatabaseDependentPlugin {
fn name(&self) -> &str {
"db_dependent"
}
async fn initialize(&self, _router: &mut Router) -> Result<()> {
// This plugin expects a database connection to be available
// It would access the database connection from router state
println!("Initializing plugin that depends on database");
Ok(())
}
}
// Create a dependency-aware version
pub fn create_db_dependent_plugin() -> DependencyAwarePlugin {
DependencyAwarePlugin::new(
"db_dependent",
Arc::new(DatabaseDependentPlugin {
db_plugin_name: "database".to_string(),
}),
vec!["database".to_string()]
)
}
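The sorter's behavior is easiest to see on plain (name, dependencies) pairs. This standalone sketch mirrors `sort_plugins` without the framework types: each pass moves every plugin whose dependencies are already processed, and a pass with no progress means a cycle.

```rust
use std::collections::HashSet;

/// Topologically sort (name, dependencies) pairs, erroring on cycles.
fn sort_by_dependencies(
    mut remaining: Vec<(String, Vec<String>)>,
) -> Result<Vec<String>, String> {
    let mut sorted = Vec::new();
    let mut processed: HashSet<String> = HashSet::new();
    while !remaining.is_empty() {
        let before = remaining.len();
        let mut i = 0;
        while i < remaining.len() {
            if remaining[i].1.iter().all(|d| processed.contains(d)) {
                let (name, _) = remaining.remove(i);
                processed.insert(name.clone());
                sorted.push(name);
            } else {
                i += 1;
            }
        }
        if remaining.len() == before {
            return Err("circular dependency detected".to_string());
        }
    }
    Ok(sorted)
}

fn main() {
    let plugins = vec![
        ("db_dependent".to_string(), vec!["database".to_string()]),
        ("database".to_string(), vec![]),
    ];
    let order = sort_by_dependencies(plugins).unwrap();
    // The database plugin initializes before the plugin that needs it.
    assert_eq!(order, vec!["database", "db_dependent"]);
}
```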
Plugin Marketplace Concept
Concept for a plugin marketplace:
use oxidite::prelude::*;
use std::sync::Arc;
/// Plugin manifest for distribution
#[derive(serde::Deserialize, serde::Serialize)]
pub struct PluginManifest {
pub name: String,
pub version: String,
pub description: String,
pub author: String,
pub license: String,
pub repository: Option<String>,
pub homepage: Option<String>,
pub dependencies: Vec<PluginDependency>,
pub hooks: Vec<String>, // Events the plugin hooks into
pub config_schema: Option<serde_json::Value>, // JSON Schema for configuration
}
#[derive(serde::Deserialize, serde::Serialize)]
pub struct PluginDependency {
pub name: String,
pub version_requirement: String,
}
/// Plugin loader for external plugins
pub struct PluginLoader {
plugin_dirs: Vec<String>,
}
impl PluginLoader {
pub fn new(plugin_dirs: Vec<String>) -> Self {
Self { plugin_dirs }
}
pub async fn load_external_plugin(&self, name: &str) -> Result<Arc<dyn Plugin>> {
// In a real implementation, this would:
// 1. Locate the plugin file in plugin directories
// 2. Load the dynamic library (if compiled as dylib)
// 3. Validate the plugin manifest
// 4. Instantiate the plugin
// For now, return a dummy plugin
Ok(Arc::new(DummyPlugin::new(name)))
}
pub fn validate_manifest(&self, manifest: &PluginManifest) -> Result<()> {
// Validate plugin manifest
if manifest.name.is_empty() {
return Err(Error::InternalServerError("Plugin name is required".to_string()));
}
if manifest.version.is_empty() {
return Err(Error::InternalServerError("Plugin version is required".to_string()));
}
Ok(())
}
}
// Dummy plugin for demonstration
struct DummyPlugin {
name: String,
}
impl DummyPlugin {
fn new(name: &str) -> Self {
Self { name: name.to_string() }
}
}
#[async_trait::async_trait]
impl Plugin for DummyPlugin {
fn name(&self) -> &str {
&self.name
}
}
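The `version_requirement` field in `PluginDependency` implies some form of version matching when the loader resolves dependencies. A minimal caret-style check might look like this (an illustrative sketch using only the standard library; a real loader would use the `semver` crate, and full semver's special `^0.x` rules are not handled here):

```rust
/// Parse "major.minor.patch" into a tuple; missing components default to 0.
fn parse_version(v: &str) -> Option<(u64, u64, u64)> {
    let mut parts = v.split('.').map(|p| p.parse::<u64>().ok());
    let major = parts.next()??;
    let minor = parts.next().flatten().unwrap_or(0);
    let patch = parts.next().flatten().unwrap_or(0);
    Some((major, minor, patch))
}

/// A requirement like "^1.2" is satisfied by any 1.x version >= 1.2.0.
fn satisfies(version: &str, requirement: &str) -> bool {
    let req = requirement.trim_start_matches('^');
    match (parse_version(version), parse_version(req)) {
        // Same major version, and at least the required version overall
        (Some(v), Some(r)) => v.0 == r.0 && v >= r,
        _ => false,
    }
}

fn main() {
    assert!(satisfies("1.4.2", "^1.2"));
    assert!(!satisfies("2.0.0", "^1.2")); // major version bump breaks compatibility
    assert!(!satisfies("1.1.9", "^1.2")); // older than the requirement
    println!("version checks passed");
}
```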
Testing Plugins
Test your plugins properly:
use oxidite::prelude::*;
use oxidite_testing::TestServer;
#[cfg(test)]
mod plugin_tests {
use super::*;
// Test plugin for testing purposes
#[derive(Default)]
pub struct TestPlugin {
pub before_request_called: std::sync::Arc<tokio::sync::Mutex<bool>>,
pub after_request_called: std::sync::Arc<tokio::sync::Mutex<bool>>,
pub initialize_called: std::sync::Arc<tokio::sync::Mutex<bool>>,
}
#[async_trait::async_trait]
impl Plugin for TestPlugin {
fn name(&self) -> &str {
"test_plugin"
}
async fn before_request(&self, _req: &mut Request) -> Result<()> {
let mut called = self.before_request_called.lock().await;
*called = true;
Ok(())
}
async fn after_request(&self, _req: &Request, _resp: &mut Response) -> Result<()> {
let mut called = self.after_request_called.lock().await;
*called = true;
Ok(())
}
async fn initialize(&self, _router: &mut Router) -> Result<()> {
let mut called = self.initialize_called.lock().await;
*called = true;
Ok(())
}
}
#[tokio::test]
async fn test_plugin_execution() {
let test_plugin = TestPlugin::default();
let plugin_arc = Arc::new(test_plugin);
let before_called = plugin_arc.before_request_called.clone();
let after_called = plugin_arc.after_request_called.clone();
let init_called = plugin_arc.initialize_called.clone();
// The server itself is not exercised here; the plugin hooks are called directly below
let _server = TestServer::new(move |router| {
router.get("/test", |_req| async {
Ok(Response::text("test response".to_string()))
});
}).await;
// Manually test plugin methods
let mut req = Request::builder()
.uri("/test")
.body(Default::default())
.unwrap();
// Test before_request
plugin_arc.before_request(&mut req).await.unwrap();
assert!(*before_called.lock().await);
// Test initialize
let mut router = Router::new();
plugin_arc.initialize(&mut router).await.unwrap();
assert!(*init_called.lock().await);
// Test after_request
let mut resp = Response::ok();
plugin_arc.after_request(&req, &mut resp).await.unwrap();
assert!(*after_called.lock().await);
}
#[tokio::test]
async fn test_cors_plugin() {
let cors_plugin = Arc::new(CorsPlugin::new());
let server = TestServer::new(move |router| {
router.get("/api/test")
.handler(|_req| async { Ok(Response::text("API response".to_string())) });
}).await;
// Test preflight request
let response = server
.request(http::Method::OPTIONS, "/api/test")
.header("Origin", "http://localhost:3000")
.header("Access-Control-Request-Method", "POST")
.send()
.await;
assert_eq!(response.status(), 200);
}
#[tokio::test]
async fn test_rate_limit_plugin() {
let rate_limit_plugin = Arc::new(RateLimitPlugin::new(2, 1)); // 2 requests per 1 second
// Test rate limiting by calling the plugin directly
let mut req = Request::builder()
.uri("/test")
.header("X-Forwarded-For", "127.0.0.1")
.body(Default::default())
.unwrap();
// First request should succeed
assert!(rate_limit_plugin.before_request(&mut req).await.is_ok());
// Second request should succeed
assert!(rate_limit_plugin.before_request(&mut req).await.is_ok());
// Third request should be rate limited
match rate_limit_plugin.before_request(&mut req).await {
Err(Error::TooManyRequests) => (), // Expected
_ => panic!("Expected TooManyRequests error"),
}
}
}
Plugin Best Practices
Follow these best practices when creating plugins:
use oxidite::prelude::*;
use std::sync::Arc;
/// Well-designed plugin example
pub struct WellDesignedPlugin {
config: PluginConfig,
// Use appropriate data structures for state
state: Arc<tokio::sync::RwLock<PluginState>>,
}
#[derive(Default)]
struct PluginState {
initialized: bool,
stats: PluginStats,
}
#[derive(Default)]
struct PluginStats {
requests_processed: u64,
errors_encountered: u64,
}
impl WellDesignedPlugin {
pub fn new(config: PluginConfig) -> Self {
Self {
config,
state: Arc::new(tokio::sync::RwLock::new(PluginState::default())),
}
}
}
#[async_trait::async_trait]
impl Plugin for WellDesignedPlugin {
/// A plugin should have a clear, descriptive name
fn name(&self) -> &str {
"well_designed"
}
/// Document what the plugin does:
/// this plugin demonstrates best practices for plugin development
async fn before_request(&self, req: &mut Request) -> Result<()> {
// Use proper error handling
if !self.config.enabled {
return Ok(());
}
// Update statistics safely, holding the lock only briefly
{
let mut state = self.state.write().await;
state.stats.requests_processed += 1;
}
// Implement proper logging
println!("WellDesignedPlugin processing request: {} {}",
req.method(), req.uri());
Ok(())
}
/// Clean shutdown is important
async fn shutdown(&self) -> Result<()> {
let state = self.state.read().await;
println!("Plugin stats - processed: {}, errors: {}",
state.stats.requests_processed, state.stats.errors_encountered);
Ok(())
}
}
// Plugin documentation checklist:
// ✓ Clear purpose and functionality
// ✓ Proper error handling
// ✓ Configuration options
// ✓ Performance considerations
// ✓ Security best practices
// ✓ Proper shutdown/cleanup
// ✓ Testing strategy
// ✓ Documentation
// ✓ Dependency management
// ✓ Compatibility considerations
Summary
Oxidite plugins provide a powerful way to:
- Extend functionality: Add new features to the framework
- Modular design: Keep applications organized and maintainable
- Share components: Reuse code across multiple applications
- Hook into lifecycle: Intercept and modify request/response flow
- Configure behavior: Customize plugin behavior through settings
- Manage dependencies: Handle plugin interdependencies
- Ensure testability: Make plugins easy to test in isolation
The plugin system enables building rich, extensible Oxidite applications while maintaining clean separation of concerns and promoting code reuse.
GraphQL Integration
GraphQL provides a powerful alternative to REST APIs, allowing clients to request exactly the data they need. This chapter covers how to integrate GraphQL into your Oxidite applications.
Overview
Oxidite’s GraphQL integration includes:
- Schema definition with Rust types
- Query and mutation resolvers
- Subscription support
- Integration with Oxidite’s routing system
- Type safety with Juniper integration
- Real-time subscriptions
Basic GraphQL Setup
Set up a basic GraphQL endpoint:
use oxidite::prelude::*;
use juniper::{EmptyMutation, EmptySubscription, RootNode};
// Define a simple user object
#[derive(juniper::GraphQLObject)]
#[graphql(description = "A user in the system")]
struct User {
id: juniper::ID,
name: String,
email: String,
created_at: String,
}
// Define the query root
struct QueryRoot;
#[juniper::graphql_object]
impl QueryRoot {
/// Get a user by ID
async fn user(id: juniper::ID) -> Option<User> {
// In a real app, fetch from database
if id == juniper::ID::from("1") {
Some(User {
id: id.clone(),
name: "John Doe".to_string(),
email: "john@example.com".to_string(),
created_at: chrono::Utc::now().to_rfc3339(),
})
} else {
None
}
}
/// Get all users
async fn users() -> Vec<User> {
vec![
User {
id: juniper::ID::from("1"),
name: "John Doe".to_string(),
email: "john@example.com".to_string(),
created_at: chrono::Utc::now().to_rfc3339(),
},
User {
id: juniper::ID::from("2"),
name: "Jane Smith".to_string(),
email: "jane@example.com".to_string(),
created_at: chrono::Utc::now().to_rfc3339(),
},
]
}
}
// Create the schema
type Schema = juniper::RootNode<'static, QueryRoot, EmptyMutation, EmptySubscription>;
fn create_schema() -> Schema {
Schema::new(QueryRoot, EmptyMutation::new(), EmptySubscription::new())
}
// GraphQL endpoint handler
async fn graphql_handler(
mut req: Request,
State(schema): State<Schema>
) -> Result<Response> {
// Collect the request body
use http_body_util::BodyExt;
let body_bytes = req
.body_mut()
.collect()
.await
.map_err(|e| Error::InternalServerError(e.to_string()))?
.to_bytes();
let body_str = String::from_utf8_lossy(&body_bytes);
// Parse GraphQL request
let gql_request: juniper::http::GraphQLRequest =
serde_json::from_str(&body_str)
.map_err(|e| Error::BadRequest(format!("Invalid GraphQL request: {}", e)))?;
// Execute the request (this basic schema uses the default unit context)
let response = gql_request.execute(&schema, &()).await;
// Serialize and return the response
let json_response = serde_json::to_value(&response)
.map_err(|e| Error::InternalServerError(format!("Serialization error: {}", e)))?;
Ok(Response::json(json_response))
}
// Context for GraphQL resolvers (used by the later examples)
struct DatabaseContext;
impl juniper::Context for DatabaseContext {}
// In a real app, implement your database access here
Advanced Schema Definition
Define more complex schemas with mutations and relationships:
use oxidite::prelude::*;
use juniper::{EmptySubscription, FieldResult, GraphQLInputObject};
// Enhanced user with more fields
#[derive(juniper::GraphQLObject, Clone)]
#[graphql(description = "A user in the system")]
struct User {
id: juniper::ID,
name: String,
email: String,
age: i32,
posts: Vec<Post>,
created_at: String,
}
// Post object
#[derive(juniper::GraphQLObject, Clone)]
#[graphql(description = "A blog post")]
struct Post {
id: juniper::ID,
title: String,
content: String,
author: User,
published: bool,
created_at: String,
}
// Input object for mutations
#[derive(GraphQLInputObject)]
#[graphql(description = "Properties for creating a new user")]
struct NewUser {
name: String,
email: String,
age: i32,
}
// Input object for creating a post
#[derive(GraphQLInputObject)]
#[graphql(description = "Properties for creating a new post")]
struct NewPost {
title: String,
content: String,
author_id: juniper::ID,
}
// Enhanced query root
struct QueryRoot;
#[juniper::graphql_object(Context = DatabaseContext)]
impl QueryRoot {
/// Get a user by ID
async fn user(id: juniper::ID, context: &DatabaseContext) -> FieldResult<Option<User>> {
// In a real app, fetch from database
Ok(Some(User {
id: id.clone(),
name: "John Doe".to_string(),
email: "john@example.com".to_string(),
age: 30,
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
}))
}
/// Get all users
async fn users(context: &DatabaseContext) -> FieldResult<Vec<User>> {
Ok(vec![
User {
id: juniper::ID::from("1"),
name: "John Doe".to_string(),
email: "john@example.com".to_string(),
age: 30,
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
},
User {
id: juniper::ID::from("2"),
name: "Jane Smith".to_string(),
email: "jane@example.com".to_string(),
age: 25,
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
},
])
}
/// Get a post by ID
async fn post(id: juniper::ID, context: &DatabaseContext) -> FieldResult<Option<Post>> {
Ok(Some(Post {
id: id.clone(),
title: "Sample Post".to_string(),
content: "This is a sample post content.".to_string(),
author: User {
id: juniper::ID::from("1"),
name: "John Doe".to_string(),
email: "john@example.com".to_string(),
age: 30,
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
},
published: true,
created_at: chrono::Utc::now().to_rfc3339(),
}))
}
/// Get all posts
async fn posts(context: &DatabaseContext) -> FieldResult<Vec<Post>> {
Ok(vec![
Post {
id: juniper::ID::from("1"),
title: "First Post".to_string(),
content: "Content of the first post.".to_string(),
author: User {
id: juniper::ID::from("1"),
name: "John Doe".to_string(),
email: "john@example.com".to_string(),
age: 30,
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
},
published: true,
created_at: chrono::Utc::now().to_rfc3339(),
},
])
}
}
// Mutation root
struct MutationRoot;
#[juniper::graphql_object(Context = DatabaseContext)]
impl MutationRoot {
/// Create a new user
async fn create_user(
new_user: NewUser,
context: &DatabaseContext,
) -> FieldResult<User> {
// In a real app, save to database
Ok(User {
id: juniper::ID::from(uuid::Uuid::new_v4().to_string()),
name: new_user.name,
email: new_user.email,
age: new_user.age,
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
})
}
/// Create a new post
async fn create_post(
new_post: NewPost,
context: &DatabaseContext,
) -> FieldResult<Post> {
// In a real app, save to database
Ok(Post {
id: juniper::ID::from(uuid::Uuid::new_v4().to_string()),
title: new_post.title,
content: new_post.content,
author: User {
id: new_post.author_id,
name: "Author Name".to_string(),
email: "author@example.com".to_string(),
age: 30,
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
},
published: false,
created_at: chrono::Utc::now().to_rfc3339(),
})
}
/// Update a user
async fn update_user(
id: juniper::ID,
name: Option<String>,
email: Option<String>,
age: Option<i32>,
context: &DatabaseContext,
) -> FieldResult<Option<User>> {
// In a real app, update in database
Ok(Some(User {
id,
name: name.unwrap_or_else(|| "John Doe".to_string()),
email: email.unwrap_or_else(|| "john@example.com".to_string()),
age: age.unwrap_or(30),
posts: vec![],
created_at: chrono::Utc::now().to_rfc3339(),
}))
}
/// Delete a user
async fn delete_user(
id: juniper::ID,
context: &DatabaseContext,
) -> FieldResult<bool> {
// In a real app, delete from database
Ok(true) // Simulate successful deletion
}
}
type Schema = juniper::RootNode<'static, QueryRoot, MutationRoot, EmptySubscription<DatabaseContext>>;
fn create_advanced_schema() -> Schema {
Schema::new(QueryRoot, MutationRoot, EmptySubscription::new())
}
Integration with Oxidite Routing
Integrate GraphQL with Oxidite’s routing system:
use oxidite::prelude::*;
use std::sync::Arc;
// Enhanced GraphQL handler with proper request/response handling
async fn graphql_endpoint(
req: Request,
State(schema): State<Arc<Schema>>
) -> Result<Response> {
match req.method().as_str() {
"GET" => {
// Serve GraphQL Playground/GraphiQL in development
serve_graphql_playground()
}
"POST" => {
// Handle GraphQL query
handle_graphql_request(req, schema.as_ref()).await
}
_ => Err(Error::MethodNotAllowed),
}
}
async fn handle_graphql_request(req: Request, schema: &Schema) -> Result<Response> {
use http_body_util::BodyExt;
// Collect the request body
let body_bytes = req
.into_body()
.collect()
.await
.map_err(|e| Error::InternalServerError(e.to_string()))?
.to_bytes();
let body_str = String::from_utf8_lossy(&body_bytes);
// Parse GraphQL request
let gql_request: juniper::http::GraphQLRequest =
serde_json::from_str(&body_str)
.map_err(|e| Error::BadRequest(format!("Invalid GraphQL request: {}", e)))?;
// Execute the request
let context = DatabaseContext {};
let response = gql_request.execute(schema, &context).await;
// Serialize the GraphQL response; both data and errors travel in the JSON body
let json_response = serde_json::to_value(&response)
.map_err(|e| Error::InternalServerError(format!("Serialization error: {}", e)))?;
Ok(Response::json(json_response))
}
fn serve_graphql_playground() -> Result<Response> {
let html = r#"
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>GraphQL Playground</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/graphql-playground-react/build/static/css/index.css" />
<link rel="shortcut icon" href="https://cdn.jsdelivr.net/npm/graphql-playground-react/build/favicon.png" />
<script src="https://cdn.jsdelivr.net/npm/graphql-playground-react/build/static/js/middleware.js"></script>
</head>
<body>
<div id="root">
<style>
body {
background-color: rgb(23, 42, 58);
font-family: 'Open Sans', sans-serif;
height: 90vh;
margin: 0;
overflow: hidden;
width: 100vw;
}
#root {
height: 100%;
width: 100%;
}
.loading {
align-items: center;
display: flex;
justify-content: center;
height: 100%;
width: 100%;
}
.loading img {
animation: loadingAnimation 1s infinite alternate;
}
@keyframes loadingAnimation {
0% { opacity: 0.3; }
100% { opacity: 1; }
}
</style>
<div class="loading">
<img src='https://cdn.jsdelivr.net/npm/graphql-playground-react/build/logo.png' alt=''>
</div>
</div>
<script>
window.addEventListener('load', function (event) {
const root = document.getElementById('root');
const wsProto = location.protocol === 'https:' ? 'wss:' : 'ws:';
GraphQLPlayground.init(root, {
endpoint: location.href,
subscriptionsEndpoint: `${wsProto}//${location.host}${location.pathname}`
});
});
</script>
</body>
</html>
"#;
Ok(Response::html(html.to_string()))
}
// Initialize the application with GraphQL
#[tokio::main]
async fn main() -> Result<()> {
let schema = Arc::new(create_advanced_schema());
let mut router = Router::new();
// Add GraphQL endpoint
router.post("/graphql")
.with_state(schema.clone())
.handler(graphql_endpoint);
router.get("/graphql")
.with_state(schema)
.handler(graphql_endpoint);
Server::new(router)
.listen("127.0.0.1:3000".parse()?)
.await
}
Database Integration
Connect GraphQL resolvers to your database:
use oxidite::prelude::*;
use oxidite_db::Model;
use juniper::FieldResult;
use serde::{Deserialize, Serialize};
// Define models that match your GraphQL types
#[derive(Model, Serialize, Deserialize, Clone, juniper::GraphQLObject)]
#[model(table = "graphql_users")]
#[graphql(description = "A user in the system")]
pub struct GraphqlUser {
#[model(primary_key)]
pub id: i32,
#[model(not_null)]
pub name: String,
#[model(unique, not_null)]
pub email: String,
pub age: i32,
#[model(created_at)]
pub created_at: String,
}
#[derive(Model, Serialize, Deserialize, Clone, juniper::GraphQLObject)]
#[model(table = "graphql_posts")]
#[graphql(description = "A blog post")]
pub struct GraphqlPost {
#[model(primary_key)]
pub id: i32,
#[model(not_null)]
pub title: String,
#[model(not_null)]
pub content: String,
pub author_id: i32,
pub published: bool,
#[model(created_at)]
pub created_at: String,
}
// Enhanced context with database access
struct DatabaseContext {
// In a real app, this would contain the database connection pool
}
impl juniper::Context for DatabaseContext {}
// Query resolvers that use the database
struct DbQueryRoot;
#[juniper::graphql_object(Context = DatabaseContext)]
impl DbQueryRoot {
/// Get a user by ID
async fn user(id: i32, context: &DatabaseContext) -> FieldResult<Option<GraphqlUser>> {
// In a real app, fetch from database
let user = GraphqlUser::find_by_id(id).await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(user)
}
/// Get all users
async fn users(context: &DatabaseContext) -> FieldResult<Vec<GraphqlUser>> {
let users = GraphqlUser::find_all().await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(users)
}
/// Get a post by ID
async fn post(id: i32, context: &DatabaseContext) -> FieldResult<Option<GraphqlPost>> {
let post = GraphqlPost::find_by_id(id).await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(post)
}
/// Get posts by author
async fn posts_by_author(
author_id: i32,
context: &DatabaseContext
) -> FieldResult<Vec<GraphqlPost>> {
// Safe here because author_id is an i32; prefer bound parameters when interpolating strings
let posts = GraphqlPost::find_where(&format!("author_id = {}", author_id)).await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(posts)
}
}
// Mutation resolvers that modify the database
struct DbMutationRoot;
#[juniper::graphql_object(Context = DatabaseContext)]
impl DbMutationRoot {
/// Create a new user
async fn create_user(
name: String,
email: String,
age: i32,
context: &DatabaseContext,
) -> FieldResult<GraphqlUser> {
let user = GraphqlUser {
id: 0, // Will be auto-generated
name,
email,
age,
created_at: chrono::Utc::now().to_rfc3339(),
};
let saved_user = user.save().await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(saved_user)
}
/// Update a user
async fn update_user(
id: i32,
name: Option<String>,
email: Option<String>,
age: Option<i32>,
context: &DatabaseContext,
) -> FieldResult<Option<GraphqlUser>> {
if let Some(mut user) = GraphqlUser::find_by_id(id).await.map_err(|e| {
juniper::FieldError::new(e.to_string(), juniper::Value::null())
})? {
if let Some(new_name) = name {
user.name = new_name;
}
if let Some(new_email) = email {
user.email = new_email;
}
if let Some(new_age) = age {
user.age = new_age;
}
let updated_user = user.save().await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(Some(updated_user))
} else {
Ok(None)
}
}
/// Delete a user
async fn delete_user(
id: i32,
context: &DatabaseContext,
) -> FieldResult<bool> {
if let Some(user) = GraphqlUser::find_by_id(id).await.map_err(|e| {
juniper::FieldError::new(e.to_string(), juniper::Value::null())
})? {
user.delete().await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(true)
} else {
Ok(false)
}
}
}
type DbSchema = juniper::RootNode<'static, DbQueryRoot, DbMutationRoot, EmptySubscription<DatabaseContext>>;
fn create_db_schema() -> DbSchema {
DbSchema::new(DbQueryRoot, DbMutationRoot, EmptySubscription::new())
}
Authentication and Authorization
Secure your GraphQL endpoints:
use oxidite::prelude::*;
use juniper::FieldResult;
// Context with authentication info
struct AuthenticatedContext {
user: Option<GraphqlUser>,
}
impl juniper::Context for AuthenticatedContext {}
// Secured query root
struct SecuredQueryRoot;
#[juniper::graphql_object(Context = AuthenticatedContext)]
impl SecuredQueryRoot {
/// Get current user (requires authentication)
async fn me(context: &AuthenticatedContext) -> FieldResult<Option<GraphqlUser>> {
match &context.user {
Some(user) => Ok(Some(user.clone())),
None => Err(juniper::FieldError::new(
"Authentication required",
juniper::Value::null()
)),
}
}
/// Get users (requires admin role)
async fn users(context: &AuthenticatedContext) -> FieldResult<Vec<GraphqlUser>> {
match &context.user {
Some(user) if is_admin_user(user) => {
GraphqlUser::find_all().await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))
}
Some(_) => Err(juniper::FieldError::new(
"Admin role required",
juniper::Value::null()
)),
None => Err(juniper::FieldError::new(
"Authentication required",
juniper::Value::null()
)),
}
}
/// Get user by ID (public endpoint)
async fn user(id: i32, context: &AuthenticatedContext) -> FieldResult<Option<GraphqlUser>> {
// Anyone can view user profiles
GraphqlUser::find_by_id(id).await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))
}
}
// Secured mutation root
struct SecuredMutationRoot;
#[juniper::graphql_object(Context = AuthenticatedContext)]
impl SecuredMutationRoot {
/// Create post (authenticated users only)
async fn create_post(
title: String,
content: String,
context: &AuthenticatedContext,
) -> FieldResult<GraphqlPost> {
match &context.user {
Some(user) => {
let post = GraphqlPost {
id: 0,
title,
content,
author_id: user.id,
published: false,
created_at: chrono::Utc::now().to_rfc3339(),
};
post.save().await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))
}
None => Err(juniper::FieldError::new(
"Authentication required to create posts",
juniper::Value::null()
)),
}
}
/// Update own post (must be the author)
async fn update_post(
id: i32,
title: Option<String>,
content: Option<String>,
published: Option<bool>,
context: &AuthenticatedContext,
) -> FieldResult<Option<GraphqlPost>> {
match &context.user {
Some(current_user) => {
if let Some(mut post) = GraphqlPost::find_by_id(id).await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?
{
// Check if current user is the author
if post.author_id != current_user.id {
return Err(juniper::FieldError::new(
"Only the author can update this post",
juniper::Value::null()
));
}
if let Some(new_title) = title {
post.title = new_title;
}
if let Some(new_content) = content {
post.content = new_content;
}
if let Some(new_published) = published {
post.published = new_published;
}
let updated_post = post.save().await
.map_err(|e| juniper::FieldError::new(e.to_string(), juniper::Value::null()))?;
Ok(Some(updated_post))
} else {
Ok(None)
}
}
None => Err(juniper::FieldError::new(
"Authentication required to update posts",
juniper::Value::null()
)),
}
}
}
// Authentication middleware for GraphQL
async fn graphql_auth_middleware(
mut req: Request,
next: Next,
) -> Result<Response> {
// Extract authentication token from headers
let auth_header = req.headers()
.get("authorization")
.and_then(|hv| hv.to_str().ok());
let mut context = AuthenticatedContext { user: None };
if let Some(auth) = auth_header {
if auth.starts_with("Bearer ") {
let token = auth.trim_start_matches("Bearer ").trim();
// Verify token and get user
if let Ok(user_id) = verify_jwt_token(token).await {
// Fetch user from database
if let Ok(Some(user)) = GraphqlUser::find_by_id(user_id).await {
context.user = Some(user);
}
}
}
}
// Add context to request extensions for GraphQL handler
req.extensions_mut().insert(context);
next.run(req).await
}
async fn verify_jwt_token(_token: &str) -> std::result::Result<i32, String> {
// In a real app, verify the JWT token and return user ID
// This is a placeholder implementation
Ok(1)
}
fn is_admin_user(user: &GraphqlUser) -> bool {
// In a real app, check user roles from database
user.email == "admin@example.com"
}
type SecuredSchema = juniper::RootNode<'static, SecuredQueryRoot, SecuredMutationRoot, EmptySubscription<AuthenticatedContext>>;
fn create_secured_schema() -> SecuredSchema {
SecuredSchema::new(SecuredQueryRoot, SecuredMutationRoot, EmptySubscription::new())
}
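The Bearer-token parsing inside `graphql_auth_middleware` can be factored into a small standalone helper (a sketch; the scheme comparison is case-insensitive, as RFC 6750 allows):

```rust
/// Extract a bearer token from an Authorization header value.
/// Returns None for other schemes or an empty token.
fn extract_bearer_token(header: &str) -> Option<&str> {
    // Split "Bearer <token>" into scheme and the rest
    let (scheme, token) = header.split_once(' ')?;
    if scheme.eq_ignore_ascii_case("bearer") {
        let token = token.trim();
        if token.is_empty() { None } else { Some(token) }
    } else {
        None
    }
}

fn main() {
    assert_eq!(extract_bearer_token("Bearer abc.def.ghi"), Some("abc.def.ghi"));
    assert_eq!(extract_bearer_token("Basic dXNlcjpwYXNz"), None);
    assert_eq!(extract_bearer_token("Bearer "), None);
    println!("ok");
}
```

Keeping the extraction pure makes it trivial to unit-test without constructing a full `Request`.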
Subscriptions
Implement real-time GraphQL subscriptions:
use oxidite::prelude::*;
use futures::stream::Stream;
use tokio_stream::wrappers::UnboundedReceiverStream;
use serde::{Deserialize, Serialize};
// Define subscription types
#[derive(juniper::GraphQLObject)]
#[graphql(description = "A notification")]
struct Notification {
id: juniper::ID,
message: String,
user_id: juniper::ID,
created_at: String,
}
// Subscription root
struct SubscriptionRoot;
#[juniper::graphql_subscription(Context = AuthenticatedContext)]
impl SubscriptionRoot {
/// Subscribe to notifications for a specific user
async fn notifications(
&self,
user_id: juniper::ID,
) -> impl Stream<Item = Notification> {
// Create a channel for sending notifications
let (tx, rx) = tokio::sync::mpsc::unbounded_channel::<Notification>();
// Simulate sending notifications
let user_id_clone = user_id.clone();
tokio::spawn(async move {
let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(5));
for i in 1..=10 {
interval.tick().await;
let notification = Notification {
id: juniper::ID::from(format!("notif_{}", i)),
message: format!("Notification {} for user {}", i, user_id_clone),
user_id: user_id_clone.clone(),
created_at: chrono::Utc::now().to_rfc3339(),
};
if tx.send(notification).is_err() {
break; // Channel closed
}
}
});
UnboundedReceiverStream::new(rx)
}
}
type SubscriptionSchema = juniper::RootNode<'static, SecuredQueryRoot, SecuredMutationRoot, SubscriptionRoot>;
fn create_subscription_schema() -> SubscriptionSchema {
SubscriptionSchema::new(
SecuredQueryRoot,
SecuredMutationRoot,
SubscriptionRoot,
)
}
// WebSocket handler for subscriptions
async fn websocket_graphql_handler(
ws: oxidite_realtime::websocket::WebSocket
) -> Result<()> {
ws.on_message(|msg| async move {
match msg {
oxidite_realtime::websocket::Message::Text(text) => {
// Parse GraphQL subscription message
match serde_json::from_str::<SubscriptionMessage>(&text) {
Ok(sub_msg) => {
match sub_msg.r#type.as_str() {
"connection_init" => {
// Initialize connection
Ok(oxidite_realtime::websocket::Message::Text(
r#"{"type":"connection_ack"}"#.to_string()
))
}
"subscribe" => {
// Handle subscription request
// This would typically involve setting up a subscription
Ok(oxidite_realtime::websocket::Message::Text(
r#"{"type":"next","id":"1","payload":{"data":{"hello":"world"}}}"#.to_string()
))
}
"unsubscribe" => {
// Handle unsubscribe
Ok(oxidite_realtime::websocket::Message::Text(
r#"{"type":"complete","id":"1"}"#.to_string()
))
}
_ => Ok(oxidite_realtime::websocket::Message::Text(
r#"{"type":"error","payload":"Unknown message type"}"#.to_string()
))
}
}
Err(_) => Ok(oxidite_realtime::websocket::Message::Text(
r#"{"type":"error","payload":"Invalid message format"}"#.to_string()
)),
}
}
_ => Ok(msg), // Return other messages as-is
}
}).await?;
Ok(())
}
#[derive(Deserialize, Serialize)]
struct SubscriptionMessage {
r#type: String,
id: Option<String>,
payload: Option<serde_json::Value>,
}
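The handler above selects a reply purely from the message's `type` field, so the dispatch can be isolated as a plain function. The payloads are the same illustrative placeholders used above, not a full graphql-ws implementation:

```rust
/// Choose a reply frame for an incoming graphql-ws style message type.
/// A real server would run the subscription and stream `next` frames
/// until sending `complete`.
fn reply_for(message_type: &str) -> String {
    match message_type {
        "connection_init" => r#"{"type":"connection_ack"}"#.to_string(),
        "subscribe" => r#"{"type":"next","id":"1","payload":{"data":{}}}"#.to_string(),
        "unsubscribe" => r#"{"type":"complete","id":"1"}"#.to_string(),
        // Anything else is reported back as a protocol error
        other => format!(r#"{{"type":"error","payload":"Unknown message type: {}"}}"#, other),
    }
}

fn main() {
    assert_eq!(reply_for("connection_init"), r#"{"type":"connection_ack"}"#);
    assert!(reply_for("bogus").contains("error"));
    println!("ok");
}
```

Separating the dispatch from the WebSocket plumbing makes the protocol logic testable without opening a connection.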
Performance Optimization
Optimize GraphQL performance:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
// DataLoader pattern for efficient database access
struct DataLoader<T> {
cache: Arc<RwLock<HashMap<i32, T>>>,
batch_loader: Arc<dyn Fn(Vec<i32>) -> BoxFuture<Vec<T>> + Send + Sync>,
}
type BoxFuture<T> = std::pin::Pin<Box<dyn futures::Future<Output = T> + Send>>;
impl<T: Clone + Send + Sync + 'static> DataLoader<T> {
fn new<F, Fut>(loader: F) -> Self
where
F: Fn(Vec<i32>) -> Fut + Send + Sync + 'static,
Fut: futures::Future<Output = Vec<T>> + Send + 'static,
{
Self {
cache: Arc::new(RwLock::new(HashMap::new())),
batch_loader: Arc::new(move |keys| Box::pin(loader(keys))),
}
}
async fn load(&self, key: i32) -> Option<T> {
// Check cache first
{
let cache = self.cache.read().await;
if let Some(item) = cache.get(&key) {
return Some(item.clone());
}
}
// Load in a batch of one; a production loader would coalesce concurrent requests
let items = (self.batch_loader)(vec![key]).await;
let item = items.into_iter().next();
// Cache the result
if let Some(ref item) = item {
let mut cache = self.cache.write().await;
cache.insert(key, item.clone());
}
item
}
async fn load_many(&self, keys: Vec<i32>) -> Vec<T> {
let mut uncached_keys = Vec::new();
let mut results = Vec::new();
// Check cache for each key
{
let cache = self.cache.read().await;
for key in &keys {
if let Some(item) = cache.get(key) {
results.push(item.clone());
} else {
uncached_keys.push(*key);
}
}
}
// Load uncached items in batch
if !uncached_keys.is_empty() {
let loaded_items = (self.batch_loader)(uncached_keys.clone()).await;
// Add to cache and results
let mut cache = self.cache.write().await;
for (i, key) in uncached_keys.iter().enumerate() {
if i < loaded_items.len() {
let item = loaded_items[i].clone();
cache.insert(*key, item.clone());
results.push(item);
}
}
}
results
}
}
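Stripped of the async plumbing, the DataLoader pattern boils down to three steps: deduplicate the requested keys, serve what the cache already holds, and issue a single batch fetch for the misses. A synchronous sketch, with `fetch_users` standing in for a hypothetical batched database query:

```rust
use std::collections::{HashMap, HashSet};

struct UserLoader {
    cache: HashMap<i32, String>,
    batch_calls: usize, // how many times the "database" was hit
}

impl UserLoader {
    fn new() -> Self {
        Self { cache: HashMap::new(), batch_calls: 0 }
    }

    /// Stand-in for a single batched database query.
    fn fetch_users(&mut self, ids: &[i32]) -> Vec<(i32, String)> {
        self.batch_calls += 1;
        ids.iter().map(|&id| (id, format!("User {}", id))).collect()
    }

    fn load_many(&mut self, keys: &[i32]) -> Vec<String> {
        // Step 1 + 2: deduplicate keys and keep only the cache misses
        let misses: Vec<i32> = keys
            .iter()
            .copied()
            .collect::<HashSet<_>>()
            .into_iter()
            .filter(|k| !self.cache.contains_key(k))
            .collect();
        // Step 3: one batch fetch for all misses
        if !misses.is_empty() {
            for (id, user) in self.fetch_users(&misses) {
                self.cache.insert(id, user);
            }
        }
        keys.iter().map(|k| self.cache[k].clone()).collect()
    }
}

fn main() {
    let mut loader = UserLoader::new();
    let users = loader.load_many(&[1, 2, 1, 3]);
    assert_eq!(users.len(), 4);
    assert_eq!(loader.batch_calls, 1); // one batch, despite four lookups
    loader.load_many(&[2, 3]); // fully cached: no extra batch
    assert_eq!(loader.batch_calls, 1);
}
```

This is exactly what saves a GraphQL resolver from the N+1 problem: resolving N posts' authors costs one batch query rather than N single-row queries.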
// Context with data loaders
struct OptimizedContext {
user_loader: DataLoader<GraphqlUser>,
post_loader: DataLoader<GraphqlPost>,
}
impl juniper::Context for OptimizedContext {}
// Query root using data loaders
struct OptimizedQueryRoot;
#[juniper::graphql_object(Context = OptimizedContext)]
impl OptimizedQueryRoot {
/// Get user with optimized loading
async fn user(id: i32, context: &OptimizedContext) -> FieldResult<Option<GraphqlUser>> {
let user = context.user_loader.load(id).await;
Ok(user)
}
/// Get multiple users efficiently
async fn users(ids: Vec<i32>, context: &OptimizedContext) -> FieldResult<Vec<GraphqlUser>> {
let users = context.user_loader.load_many(ids).await;
Ok(users)
}
/// Get posts by author with optimized loading
async fn posts_by_author(
author_id: i32,
context: &OptimizedContext
) -> FieldResult<Vec<GraphqlPost>> {
// In a real app, you'd have a specialized loader for this
// For now, just return empty to satisfy the example
Ok(vec![])
}
}
// Schema with optimizations
type OptimizedSchema = juniper::RootNode<'static, OptimizedQueryRoot, SecuredMutationRoot, SubscriptionRoot>;
fn create_optimized_schema() -> OptimizedSchema {
let user_loader = DataLoader::new(|ids| {
Box::pin(async move {
// In a real app, batch fetch users from database
ids.into_iter()
.map(|id| GraphqlUser {
id,
name: format!("User {}", id),
email: format!("user{}@example.com", id),
age: 25,
created_at: chrono::Utc::now().to_rfc3339(),
})
.collect()
})
});
let post_loader = DataLoader::new(|ids| {
Box::pin(async move {
// In a real app, batch fetch posts from database
ids.into_iter()
.map(|id| GraphqlPost {
id,
title: format!("Post {}", id),
content: format!("Content of post {}", id),
author_id: 1,
published: true,
created_at: chrono::Utc::now().to_rfc3339(),
})
.collect()
})
});
// The loaders live in the per-request context; in a real app you would
// build an OptimizedContext like this for each request and pass it to
// schema execution rather than discarding it here.
let _context = OptimizedContext {
user_loader,
post_loader,
};
OptimizedSchema::new(
OptimizedQueryRoot,
SecuredMutationRoot,
SubscriptionRoot,
)
}
Testing GraphQL
Test your GraphQL endpoints:
use oxidite::prelude::*;
use oxidite_testing::TestServer;
#[cfg(test)]
mod graphql_tests {
use super::*;
#[tokio::test]
async fn test_graphql_query() {
let schema = Arc::new(create_advanced_schema());
let server = TestServer::new(move |router| {
router.post("/graphql")
.with_state(schema.clone())
.handler(graphql_endpoint);
}).await;
let query = r#"
{
users {
id
name
email
}
}
"#;
let response = server
.post("/graphql")
.json(&serde_json::json!({
"query": query
}))
.send()
.await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert!(json["data"]["users"].is_array());
}
#[tokio::test]
async fn test_graphql_mutation() {
let schema = Arc::new(create_advanced_schema());
let server = TestServer::new(move |router| {
router.post("/graphql")
.with_state(schema.clone())
.handler(graphql_endpoint);
}).await;
let mutation = r#"
mutation {
createUser(newUser: {name: "Test User", email: "test@example.com", age: 30}) {
id
name
email
age
}
}
"#;
let response = server
.post("/graphql")
.json(&serde_json::json!({
"query": mutation
}))
.send()
.await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert!(json["data"]["createUser"]["id"].is_string());
assert_eq!(json["data"]["createUser"]["name"], "Test User");
}
#[tokio::test]
async fn test_graphql_error_handling() {
let schema = Arc::new(create_advanced_schema());
let server = TestServer::new(move |router| {
router.post("/graphql")
.with_state(schema.clone())
.handler(graphql_endpoint);
}).await;
let invalid_query = r#"
{
invalidField
}
"#;
let response = server
.post("/graphql")
.json(&serde_json::json!({
"query": invalid_query
}))
.send()
.await;
assert_eq!(response.status(), 200); // GraphQL returns 200 even with errors
let json: serde_json::Value = response.json().await;
assert!(json["errors"].is_array());
assert!(!json["errors"].as_array().unwrap().is_empty());
}
#[tokio::test]
async fn test_graphql_authentication() {
// Test authenticated GraphQL endpoint
let schema = Arc::new(create_secured_schema());
let server = TestServer::new(move |router| {
router.post("/graphql")
.middleware(graphql_auth_middleware)
.with_state(schema.clone())
.handler(graphql_endpoint);
}).await;
let query = r#"
{
me {
id
name
email
}
}
"#;
// Request without authentication should fail
let response = server
.post("/graphql")
.json(&serde_json::json!({
"query": query
}))
.send()
.await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
// Should have an error about authentication being required
if let Some(errors) = json["errors"].as_array() {
assert!(!errors.is_empty());
}
// Request with authentication should succeed
let response = server
.post("/graphql")
.header("Authorization", "Bearer valid_token")
.json(&serde_json::json!({
"query": query
}))
.send()
.await;
assert_eq!(response.status(), 200);
}
}
Summary
GraphQL integration in Oxidite provides:
- Schema Definition: Define types and operations with Rust structs
- Query and Mutation Support: Handle data fetching and modifications
- Database Integration: Connect resolvers to your data models
- Authentication: Secure your GraphQL endpoints
- Subscriptions: Real-time data updates via WebSockets
- Performance Optimization: DataLoader pattern and caching
- Testing: Comprehensive testing utilities
- Error Handling: Proper GraphQL error responses
GraphQL offers a flexible alternative to REST APIs, allowing clients to request exactly the data they need while maintaining strong typing and introspection capabilities.
Subcrate Reference Overview
This section documents each Oxidite crate, when to use it, and the primary API entry points.
Core runtime crates
- `oxidite`: umbrella crate and prelude
- `oxidite-core`: router, request/response, extractors, server
- `oxidite-middleware`: reusable HTTP middleware layers
- `oxidite-config`: typed application configuration
- `oxidite-utils`: utility helpers (ids, strings, validation, dates)
Data and state crates
- `oxidite-db`: ORM and database abstraction
- `oxidite-macros`: derive macros (especially `Model`)
- `oxidite-cache`: memory/redis caching abstractions
- `oxidite-queue`: in-memory/redis/postgres job queues
- `oxidite-storage`: local + S3 file storage
Security and identity crates
- `oxidite-auth`: JWT, RBAC, sessions, OAuth helpers
- `oxidite-security`: crypto/hash/random/sanitization helpers
Web/API feature crates
- `oxidite-realtime`: websocket/sse/pubsub/event helpers
- `oxidite-template`: SSR templates + static file serving
- `oxidite-openapi`: OpenAPI spec and docs generation
- `oxidite-graphql`: GraphQL schema/handler utilities
- `oxidite-mail`: SMTP + message/attachment APIs
- `oxidite-plugin`: plugin loading and lifecycle hooks
Tooling crates
- `oxidite-cli`: project generation and developer commands
- `oxidite-testing`: test server/request/response helpers
Core Stack Crates
oxidite
Use when you want a single entry point for framework features.
Main exports:
- `oxidite_core::*` (feature-gated re-exports: `db`, `auth`, `queue`, `cache`, `realtime`, `template`, `mail`, `storage`, `security`, `utils`)
- `prelude` module
oxidite-core
Primary APIs:
- modules: `error`, `extract`, `request`, `response`, `router`, `server`, `tls`, `types`, `versioning`, `cookie`
- key re-exports: `Router`, `ServerRequest`, `Response`
- extractors: `Json`, `Path`, `Query`, `State`, `Form`, `Cookies`, `Body`
Typical scenario:
- create a `Router`
- register routes
- start the `Server`
oxidite-middleware
Main APIs:
- `LoggerLayer`
- `RequestIdLayer`
- `SecurityHeadersLayer`
- `CsrfLayer`
- `RateLimiter`
- `TimeoutMiddleware`
- `CacheLayer`
Use to compose middleware with `tower::ServiceBuilder`.
oxidite-config
Typed config structs:
- `Config`
- `AppConfig`
- `ServerConfig`
- `DatabaseConfig`
- `CacheConfig`
- `QueueConfig`
- `SecurityConfig`
Use for environment-aware startup configuration.
oxidite-utils
Main utility groups:
- date helpers
- id generation (`generate_id`, `generate_uuid`, `generate_short_id`, `generate_numeric_id`)
- string helpers (`slugify`, `truncate`, `capitalize`, `random_string`, `camel_case`, `snake_case`)
- input validation helpers
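To make the intended behavior of the string helpers concrete, here is a framework-free sketch of a `slugify`-style function in plain Rust. It illustrates the usual semantics (lowercase, non-alphanumerics collapsed into single hyphens), not the actual `oxidite-utils` implementation:

```rust
/// Hypothetical slugify: ASCII alphanumerics are kept and lowercased;
/// runs of any other characters collapse into a single hyphen, with no
/// leading or trailing hyphen.
fn slugify(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut pending_hyphen = false;
    for ch in input.chars() {
        if ch.is_ascii_alphanumeric() {
            if pending_hyphen && !out.is_empty() {
                out.push('-');
            }
            pending_hyphen = false;
            out.push(ch.to_ascii_lowercase());
        } else {
            pending_hyphen = true;
        }
    }
    out
}

fn main() {
    assert_eq!(slugify("Hello, Oxidite World!"), "hello-oxidite-world");
    assert_eq!(slugify("  Rust 2024  "), "rust-2024");
}
```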
Data and State Crates
oxidite-db
Key types:
- `DbPool`, `DbTransaction`
- `DatabaseType`, `PoolOptions`
- `Model` trait + `#[derive(Model)]`
- `ModelQuery`, `Pagination`, `SortDirection`, `QueryBuilder`
- `OrmError`, `OrmResult`
- relations: `HasMany`, `HasOne`, `BelongsTo`
- migrations: `Migration`, `MigrationManager`
Golden path:
- connect with `DbPool::connect` or `connect_with_options`
- derive `Model`
- query with `Model::query()` + typed filters/order/pagination
- use `with_transaction` for multi-step writes
oxidite-macros
Main macro:
- `#[derive(Model)]`
Attribute forms:
- `#[model(table = "...")]`
- supports validation attributes handled by the derive
Use this crate with oxidite-db to reduce model boilerplate while keeping compile-time diagnostics.
oxidite-cache
Main APIs:
- trait `Cache`
- `MemoryCache`
- `RedisCache`
- `NamespacedCache`
Use for caching read-heavy paths and invalidating by namespace/tag strategy.
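The namespace strategy can be illustrated with a minimal, dependency-free sketch: keeping a version counter per namespace makes invalidation O(1), because bumping the version orphans every key written under the old one. The `NamespacedCache` in `oxidite-cache` may work differently; this shows only the general idea:

```rust
use std::collections::HashMap;

/// Minimal namespaced cache: entries are stored under "ns:v<version>:key",
/// so bumping a namespace's version invalidates all of its entries at once.
struct VersionedCache {
    versions: HashMap<String, u64>,
    entries: HashMap<String, String>,
}

impl VersionedCache {
    fn new() -> Self {
        Self { versions: HashMap::new(), entries: HashMap::new() }
    }
    fn full_key(&self, ns: &str, key: &str) -> String {
        let v = self.versions.get(ns).copied().unwrap_or(0);
        format!("{ns}:v{v}:{key}")
    }
    fn put(&mut self, ns: &str, key: &str, value: &str) {
        let k = self.full_key(ns, key);
        self.entries.insert(k, value.to_string());
    }
    fn get(&self, ns: &str, key: &str) -> Option<&String> {
        self.entries.get(&self.full_key(ns, key))
    }
    /// Invalidate a whole namespace by bumping its version.
    fn invalidate(&mut self, ns: &str) {
        *self.versions.entry(ns.to_string()).or_insert(0) += 1;
    }
}

fn main() {
    let mut cache = VersionedCache::new();
    cache.put("users", "42", "Alice");
    assert_eq!(cache.get("users", "42").map(String::as_str), Some("Alice"));
    cache.invalidate("users");
    assert!(cache.get("users", "42").is_none());
}
```

In a real Redis-backed cache the stale entries are left to expire via TTL rather than being deleted eagerly.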
oxidite-queue
Main APIs:
- `Job`, `JobStatus`, `JobResult`
- `Queue`, `QueueBackend`, `MemoryBackend`, `RedisBackend`, `PostgresBackend`
- `Worker`
- `QueueStats`, `StatsTracker`
Use for background jobs with selectable backends.
oxidite-storage
Main APIs:
- trait `Storage`
- `LocalStorage`
- `S3Storage`
- `FileValidator`, `ValidationRules`
- `StoredFile`, `FileMetadata`
Use for user uploads and object storage abstraction.
Security and Identity Crates
oxidite-auth
Main modules/exports:
- password hashing: `PasswordHasher`, `hash_password`, `verify_password`
- JWT: `JwtManager`, `create_token`, `verify_token`, `Claims`
- middleware: `AuthMiddleware`
- RBAC: `Role`, `Permission`
- sessions: `Session`, `SessionStore`, `InMemorySessionStore`, `RedisSessionStore`, `SessionManager`
- session middleware: `SessionMiddleware`, `SessionLayer`
- OAuth helpers: `OAuth2Client`, `OAuth2Config`, `ProviderConfig`, `OAuth2Provider`
- authorization guards/services: `RequireRole`, `RequirePermission`, `AuthorizationService`
- API keys: `ApiKey`, `ApiKeyMiddleware`
- security flows: email verification, password reset, two-factor helpers
Error model: `AuthError`
oxidite-security
Main APIs:
- symmetric crypto: `encrypt`, `decrypt`, `AesKey`
- hashing/HMAC: `sha256`, `sha512`, `hmac_sha256`, `verify_hmac_sha256`
- secure randomness: `random_bytes`, `random_hex`, `secure_token`, `random_alphanumeric`, `random_range`, `try_random_range`
- sanitization: `sanitize_html`, `escape_html`, `strip_tags`
Use for cryptographic primitives and input sanitization utilities.
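As a point of reference for what an `escape_html`-style helper does, here is a dependency-free sketch covering the five characters that can break out of HTML text or attribute context (the actual `oxidite-security` implementation may handle more entities):

```rust
/// Minimal HTML escaping: replaces the characters that can terminate a
/// text node or attribute value and start new markup.
fn escape_html(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for ch in input.chars() {
        match ch {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#x27;"),
            _ => out.push(ch),
        }
    }
    out
}

fn main() {
    assert_eq!(
        escape_html("<script>alert('x')</script>"),
        "&lt;script&gt;alert(&#x27;x&#x27;)&lt;/script&gt;"
    );
}
```

Note that escaping is output-context-specific: this is safe for HTML body and quoted attributes, but URLs, JavaScript strings, and CSS each need their own encoding.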
Web/API Feature Crates
oxidite-realtime
Main modules/exports:
- SSE: `SseEvent`, `SseStream`, `SseConfig`
- pub/sub: `PubSub`, `Subscriber`, `Channel`
- event model: `Event`, `EventType`
- websocket: `WebSocketConnection`, `WebSocketManager`, `WsMessage`, `WebSocketError`
oxidite-template
Main APIs:
- `TemplateEngine`, `Context`, `Template`
- parser/renderer modules
- filters module
- static files: `StaticFiles`, `serve_static`, `static_handler`
oxidite-openapi
Main APIs:
- spec types: `OpenApiSpec`, `Info`, `Server`, `PathItem`, `Operation`, `Parameter`, `RequestBody`, `Response`, `Schema`, `Components`
- builders/helpers: `OpenApiBuilder`, `get_operation`, `post_operation`
- traits: `ToSchema`, `AutoDocs`
- docs renderer: `generate_docs_html`
oxidite-graphql
Main APIs:
- `GraphQLSchema`
- `Context`
- `ResolverExtension`, `ResolverRegistry`
- `GraphQLHandler`
- `create_handler()`
oxidite-mail
Main APIs:
- `Mailer`
- `Message`
- `SmtpTransport`, `SmtpConfig`
- `Attachment`
oxidite-plugin
Main APIs:
- plugin model: `Plugin`, `PluginInfo`, `PluginHook`, `HookResult`
- runtime: `PluginLoader`, `PluginManager`
- setup: `PluginConfig`, `create_manager`
Tooling Crates
oxidite-cli
CLI provides:
- project creation
- model/controller/middleware and additional generators
- migrations and seed management
- dev workflow helpers
Use this for the default developer workflow in Oxidite projects.
oxidite-testing
Main APIs:
- `TestRequest`
- `TestResponse`
- `TestServer`
- `test_router`
- async test macro re-export (`tokio::test`)
Use for unit/integration tests against routers/handlers with minimal setup.
Subcrate API Map
This page maps core public APIs so you can quickly find the right type, trait, or function.
oxidite-core
- modules: `error`, `extract`, `request`, `response`, `router`, `server`, `tls`, `types`, `versioning`, `cookie`
- common exports: `Error`, `Result`
- `Router`, `Handler`, `ServerRequest`, `Response`
- extractors: `FromRequest`, `Json`, `Path`, `Query`, `State`, `Form`, `Cookies`, `Body`
- versioning: `ApiVersion`, `VersionedRouter`
oxidite-db
- db types: `DatabaseType`, `PoolOptions`, `DbPool`, `DbTransaction`
- traits: `Database`, `Model`
- query types: `ModelQuery`, `QueryBuilder`, `Pagination`, `SortDirection`, `QueryValue`
- errors/results: `OrmError`, `OrmResult`, `DbResult`
- relations: `HasMany`, `HasOne`, `BelongsTo`
- migrations: `Migration`, `MigrationManager`
oxidite-auth
- password hashing: `PasswordHasher`, `hash_password`, `verify_password`
- JWT: `JwtManager`, `create_token`, `verify_token`, `Claims`
- middleware: `AuthMiddleware`, `SessionMiddleware`, `SessionLayer`, `ApiKeyMiddleware`
- sessions: `Session`, `SessionStore`, `InMemorySessionStore`, `RedisSessionStore`, `SessionManager`
- authorization: `Role`, `Permission`, `RequireRole`, `RequirePermission`, `AuthorizationService`
- OAuth: `OAuth2Client`, `OAuth2Config`, `ProviderConfig`, `OAuth2Provider`
- API keys: `ApiKey`
- errors: `AuthError`, `AuthResult`
oxidite-cache
- trait: `Cache`
- implementations: `MemoryCache`, `RedisCache`, `NamespacedCache`
- support types: `CacheStats`
- errors: `CacheError`, `CacheResult`
oxidite-queue
- queue/job: `Queue`, `QueueBackend`, `MemoryBackend`, `RedisBackend`, `PostgresBackend`
- job model: `Job`, `JobStatus`, `JobResult`
- workers/stats: `Worker`, `QueueStats`, `StatsTracker`
- errors: `QueueError`, `QueueResult`
oxidite-realtime
- SSE: `SseEvent`, `SseStream`, `SseConfig`
- pubsub: `PubSub`, `Subscriber`, `Channel`
- event: `Event`, `EventType`
- websocket: `WebSocketConnection`, `WebSocketManager`, `WsMessage`, `WebSocketError`
- errors: `RealtimeError`, `RealtimeResult`
oxidite-template
- rendering: `TemplateEngine`, `Template`, `Context`
- internals: `Parser`, `TemplateNode`, `Renderer`, `Filters`
- static serving: `StaticFiles`, `serve_static`, `static_handler`
- errors: `TemplateError`, `TemplateResult`
oxidite-storage
- trait: `Storage`
- backends: `LocalStorage`, `S3Storage`
- validation: `FileValidator`, `ValidationRules`
- metadata: `StoredFile`, `FileMetadata`
- errors: `StorageError`, `StorageResult`
oxidite-security
- crypto: `encrypt`, `decrypt`, `AesKey`
- hash/HMAC: `sha256`, `sha512`, `hmac_sha256`, `verify_hmac_sha256`
- random: `random_bytes`, `random_hex`, `secure_token`, `random_alphanumeric`, `random_range`, `try_random_range`
- sanitize: `sanitize_html`, `escape_html`, `strip_tags`
- errors: `SecurityError`, `SecurityResult`
oxidite-openapi
- spec types: `OpenApiSpec`, `Info`, `Server`, `PathItem`, `Operation`, `Parameter`, `RequestBody`, `Response`, `MediaType`, `Schema`, `Components`
- builders/helpers: `OpenApiBuilder`, `get_operation`, `post_operation`, `generate_docs_html`
- traits: `ToSchema`, `AutoDocs`
oxidite-graphql
- runtime: `GraphQLSchema`, `GraphQLHandler`, `create_handler`
- context/resolvers: `Context`, `ResolverExtension`, `ResolverRegistry`
oxidite-plugin
- plugin model: `Plugin`, `PluginInfo`, `PluginHook`, `HookResult`
- runtime: `PluginLoader`, `PluginManager`
- setup: `PluginConfig`, `create_manager`
oxidite-mail
- mail APIs: `Mailer`, `Message`, `Attachment`
- transport: `SmtpTransport`, `SmtpConfig`
- errors: `MailError`, `MailResult`
oxidite-config
- `Config`, `Environment`
- `AppConfig`, `ServerConfig`, `DatabaseConfig`, `CacheConfig`, `QueueConfig`, `SecurityConfig`
- errors: `ConfigError`
oxidite-testing
- `TestRequest`, `TestRequestError`
- `TestResponse`
- `TestServer`, `test_router`
- async test support: `tokio::test` re-export
oxidite-utils
- date utilities
- id utilities
- string utilities
- validation utilities
oxidite-cli
- command surface for project scaffolding, code generation, migrations, seeds, dev/runtime workflows.
Oxidite Complete Handbook: From Foundations to Production Systems
This is a full learning path to become production-ready with Oxidite.
Course outcomes
By the end of this course, learners can:
- build REST APIs and SSR apps with Oxidite
- design maintainable route/handler/service architecture
- use Oxidite ORM and raw SQL safely
- implement auth/session/authorization patterns
- run async workers, realtime events, caching, and storage
- test, observe, and deploy production services
Audience and prerequisites
Audience:
- Rust developers building web backends
- teams migrating from Express/FastAPI/Laravel-like stacks
Prerequisites:
- Rust ownership/borrowing basics
- async/await basics
- SQL and HTTP fundamentals
Course format
- 14 modules
- each module includes objectives, coding labs, and checkpoints
- capstone project delivered at the end
Module 1: Foundation
Topics:
- Oxidite architecture and crate ecosystem
- request lifecycle and middleware chain
- project setup and feature flags
Lab:
- create a new project and expose health + version endpoints
Module 2: Routing and Handlers
Topics:
- router composition
- path/query/body extraction
- response shaping and status codes
Lab:
- CRUD routes for a small resource with validation
Module 3: Error Handling and Diagnostics
Topics:
- typed domain errors
- HTTP status mapping
- error payload conventions
- structured logging fundamentals
Lab:
- implement a unified API error envelope and handler mapping
Module 4: Configuration and Environments
Topics:
- config loading
- environment-specific behavior
- secrets handling
Lab:
- local/dev/staging/prod config profile setup
Module 5: Database and ORM
Topics:
- `DbPool`, models, queries, pagination
- transactions and consistency boundaries
- relation loading and soft deletes
Lab:
- build users/posts/comments domain with list filters and pagination
Module 6: Migrations and Schema Evolution
Topics:
- migration workflow
- backward-compatible schema changes
- rollback strategy
Lab:
- add non-breaking schema migration + backfill job
Module 7: Authentication and Authorization
Topics:
- password hashing and JWT flows
- sessions and redis-backed sessions
- RBAC/PBAC strategy
Lab:
- secure admin/user route sets with role + ownership checks
Module 8: Caching and Performance
Topics:
- read-through caching
- invalidation strategy
- pagination/indexing patterns
Lab:
- cache expensive list endpoint with measurable speedup
Module 9: Jobs and Async Work
Topics:
- queue backends
- retries and dead-letter patterns
- idempotent handlers
Lab:
- email + notification job pipeline with retry policy
Module 10: Realtime and Event-Driven Flows
Topics:
- websocket and SSE usage
- pub/sub channels and room semantics
- event contract versioning
Lab:
- realtime leaderboard/notification stream
Module 11: File and Media Handling
Topics:
- local and S3 storage backends
- validation rules and upload security
Lab:
- secure file upload endpoint + metadata persistence
Module 12: API Documentation and Integrations
Topics:
- OpenAPI generation
- GraphQL integration patterns
- plugin architecture basics
Lab:
- expose OpenAPI docs + one GraphQL endpoint
Module 13: Testing Strategy
Topics:
- unit/integration/contract testing
- test fixtures and deterministic state
- migration and transaction tests
Lab:
- full test suite for one bounded domain
Module 14: Production Deployment and Operations
Topics:
- health checks and readiness probes
- rollout strategies and rollback plans
- telemetry and incident triage
Lab:
- deploy staged release and run synthetic smoke tests
Capstone project
Build a multi-tenant event platform with:
- auth + roles
- relational models
- queue workers
- realtime updates
- cached analytics endpoint
- OpenAPI docs
- full automated tests
Recommended pace
- intensive: 2-3 weeks full-time
- standard: 8-12 weeks part-time
Assessment rubric
- correctness (40%)
- code quality and architecture (25%)
- tests and observability (20%)
- performance and reliability (15%)
Instructor/mentor checklist
- review architecture before Module 5 and Module 10
- enforce typed errors and test requirements each module
- require capstone production-readiness checklist signoff
Certification criteria (optional)
- capstone passes all tests
- deployment checklist completed
- code review meets style and reliability gates
Migration and Adoption Guide
This chapter explains how to adopt Oxidite in existing systems without risky rewrites.
Adoption principles
- Prefer incremental migration over big-bang rewrites.
- Keep external API contracts stable while internals change.
- Preserve existing database schema first; refactor schema later.
- Keep raw SQL for critical paths where planner behavior matters.
- Add observability before moving production traffic.
Common migration patterns
1. Strangler Pattern (recommended)
Use a reverse proxy and route selected endpoints to Oxidite first.
- Keep current app as primary.
- Introduce Oxidite service behind the same domain.
- Move low-risk read endpoints first.
- Move write endpoints after parity tests.
- Decommission old routes gradually.
2. Domain-by-domain cutover
Move one domain at a time:
- auth
- users/profile
- feed/content
- payments
- notifications
This reduces blast radius and speeds rollback.
3. Data-first migration
When schema compatibility is the hardest part:
- connect Oxidite to the existing database
- port models with `#[derive(Model)]`
- keep complex SQL via raw query escape hatch
- replace ORM paths incrementally
Compatibility checklist
Before moving traffic:
- response JSON shape parity
- error status + message parity
- auth/session behavior parity
- idempotency parity for retries
- latency/error-rate baseline parity
Session and auth compatibility
When migrating from session-heavy systems:
- keep cookie names/flags (`Secure`, `HttpOnly`, `SameSite`) unchanged during transition
- keep redis key prefixes and TTL policy stable
- verify logout and token/session revocation paths
Realtime compatibility
When clients depend on stable event contracts:
- freeze room naming
- freeze event names
- freeze payload fields
- use an adapter bridge until all producers are migrated
Background jobs and events
- design workers as at-least-once consumers
- enforce idempotency at database boundary
- commit message offsets only after durable side effects
- dead-letter poison messages with context
Production rollout playbook
- Shadow read traffic
- Dual-write or compare mode (where safe)
- Weighted traffic shifting (1% -> 10% -> 25% -> 50% -> 100%)
- Hold periods with SLO checks between each stage
- Keep one-click rollback path
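Weighted traffic shifting is usually implemented with a stable hash of a request identifier, so a given user consistently lands on the same side while the percentage ramps up. A minimal sketch (the hash choice and 100-bucket scheme are illustrative, not an Oxidite API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Route a request to the new service if its stable bucket (0-99)
/// falls below the current rollout percentage.
fn routes_to_new(request_id: &str, rollout_percent: u64) -> bool {
    let mut hasher = DefaultHasher::new();
    request_id.hash(&mut hasher);
    (hasher.finish() % 100) < rollout_percent
}

fn main() {
    // At 0% nothing goes to the new service; at 100% everything does.
    assert!(!routes_to_new("user-42", 0));
    assert!(routes_to_new("user-42", 100));
    // A given id gets a consistent decision at a fixed percentage,
    // which keeps user sessions pinned to one implementation.
    assert_eq!(routes_to_new("user-42", 25), routes_to_new("user-42", 25));
}
```

In practice this decision lives at the reverse proxy or feature-flag layer rather than in application code, but the bucketing logic is the same.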
Signals you are ready for full cutover
- parity test suite is green across auth/data/realtime paths
- p95 and p99 latency are stable or improved
- no spike in 4xx/5xx error rates
- no data integrity drift in reconciliation checks
Socket.IO Bridge Adapter Guide
This guide shows how to keep existing Socket.IO clients while migrating backend APIs to Oxidite.
Target scenario
- Existing frontend depends on Socket.IO event names and room semantics.
- You want Oxidite to own business APIs without breaking realtime clients.
Recommended architecture
- Keep current Socket.IO edge process (Node) temporarily.
- Move domain logic/API routes to Oxidite.
- Publish realtime domain events from Oxidite to Redis/Kafka.
- Socket.IO edge consumes those events and emits unchanged client events.
Event contract freeze
Before migration, freeze:
- room naming (`user:{id}`, `ctf:{eventId}`, `team:{id}`)
- event names (`leaderboard:update`, `notification:new`, etc.)
- payload shape and nullable fields
Oxidite producer pattern
Use oxidite-realtime + queue/pubsub layer to emit canonical domain events.
use oxidite_realtime::{Event, EventType};
let event = Event::new(
EventType::Custom("leaderboard:update".into()),
serde_json::json!({"eventId": 42, "delta": 15})
);
Bridge consumer pattern
In bridge service:
- Consume Oxidite domain events.
- Map to legacy Socket.IO event names.
- Emit to existing rooms.
- Log unmapped events as warnings.
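The mapping step is a plain lookup table. A sketch of the translate-or-warn logic, using the event names frozen above (the canonical domain-event names on the left are hypothetical):

```rust
use std::collections::HashMap;

/// Map a canonical Oxidite domain event name to the legacy Socket.IO
/// event name. Unmapped events return None so the caller can log a
/// warning instead of silently dropping them.
fn to_legacy_event<'a>(
    mapping: &'a HashMap<&str, &str>,
    domain_event: &str,
) -> Option<&'a str> {
    mapping.get(domain_event).copied()
}

fn main() {
    let mut mapping = HashMap::new();
    mapping.insert("leaderboard.updated", "leaderboard:update");
    mapping.insert("notification.created", "notification:new");

    assert_eq!(
        to_legacy_event(&mapping, "leaderboard.updated"),
        Some("leaderboard:update")
    );
    // Unmapped event: bridge should emit a warning metric/log here.
    assert_eq!(to_legacy_event(&mapping, "unknown.event"), None);
}
```

Keeping the table in configuration rather than code makes it easy to audit that every frozen client event has exactly one producer during the transition.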
Backward compatibility checks
- Client contract tests for room/event/payload compatibility
- Replay test stream against staging clients
- Drop-rate and lag metrics on bridge consumer
Cutover plan
- Shadow mode: Oxidite emits but clients still served from legacy path.
- Dual emit: compare payloads from both paths.
- Flip write source to Oxidite.
- Remove legacy emitters after stable release window.
Sequelize -> Oxidite ORM Cookbook
This cookbook maps common Sequelize model patterns to `oxidite-db` + `#[derive(Model)]`.
Model mapping
Sequelize:
- `tableName` -> `#[model(table = "...")]`
- `timestamps: true` -> include `created_at: i64`, `updated_at: i64`
- `paranoid` / soft delete -> include `deleted_at: Option<i64>`
Oxidite example:
use oxidite_db::Model;
#[derive(Model, Debug, Clone)]
#[model(table = "ctf_events")]
pub struct CtfEvent {
pub id: i64,
pub title: String,
pub state: String,
pub created_at: i64,
pub updated_at: i64,
pub deleted_at: Option<i64>,
}
CRUD mapping
- `Model.create(...)` -> `MyModel::create(&db, model).await?`
- `Model.findByPk(id)` -> `MyModel::find_by_id(&db, id).await?`
- `instance.save()` -> `model.save_checked(&db).await?`
- `instance.destroy()` -> `model.delete(&db).await?`
Query mapping
- `where` -> `MyModel::query().filter_eq("col", value)`
- `order` -> `.order_by("created_at", SortDirection::Desc)`
- `limit`/`offset` -> `.paginate(Pagination::from_page(page, per_page)?)`
Associations
Use relation helpers in `oxidite_db::relations`:
- `HasMany`
- `HasOne`
- `BelongsTo`
Prefer eager loading helpers for N+1-sensitive paths.
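Eager loading boils down to collecting parent ids and issuing one batched query instead of one query per row. A framework-free sketch of the query-building half (table and column names are illustrative; the numeric ids here are safe to interpolate, but real code should use bind parameters for anything else):

```rust
/// Build one batched query for all children of the given parents,
/// replacing N per-parent queries with a single IN (...) query.
/// Numeric ids are interpolated directly; non-numeric keys must go
/// through bind parameters to avoid SQL injection.
fn batched_children_sql(columns: &str, table: &str, fk: &str, parent_ids: &[i64]) -> String {
    let ids: Vec<String> = parent_ids.iter().map(|id| id.to_string()).collect();
    format!("SELECT {columns} FROM {table} WHERE {fk} IN ({})", ids.join(", "))
}

fn main() {
    let sql = batched_children_sql("id, author_id, title", "posts", "author_id", &[1, 2, 3]);
    assert_eq!(
        sql,
        "SELECT id, author_id, title FROM posts WHERE author_id IN (1, 2, 3)"
    );
}
```

After fetching, rows are grouped by the foreign key in memory and attached to their parents, which is exactly what an eager-loading helper automates.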
Keep raw SQL where needed
For complex analytics SQL, preserve SQL and execute via db query APIs first, then optimize later.
Migration sequence
- Port models without changing table schema.
- Port read queries.
- Port writes with transaction tests.
- Port background-job database paths.
High-Throughput Postgres Analytics Patterns
Use this for leaderboard and heavy reporting endpoints.
Principles
- Keep hot paths in raw SQL if query planner quality matters.
- Minimize allocations in response shaping.
- Use explicit projections; avoid `SELECT *`.
Query shape recommendations
- Pre-aggregate with CTEs when combining solves/rank windows.
- Use covering indexes for filter + order columns.
- Use keyset pagination for deep pages.
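Keyset pagination replaces `OFFSET` with a `WHERE` clause on the last-seen row, so deep pages stay index-driven instead of scanning and discarding rows. A sketch of the cursor condition for a score-ordered leaderboard (table and column names are illustrative; numeric cursor values are safe to interpolate, otherwise use bind parameters):

```rust
/// First page vs. subsequent pages of a keyset-paginated leaderboard.
/// The row-value comparison (score, id) < (last_score, last_id) keeps
/// the ordering stable even when scores tie.
fn leaderboard_page_sql(cursor: Option<(i64, i64)>, page_size: u32) -> String {
    match cursor {
        None => format!(
            "SELECT id, score FROM leaderboard ORDER BY score DESC, id DESC LIMIT {page_size}"
        ),
        Some((last_score, last_id)) => format!(
            "SELECT id, score FROM leaderboard \
             WHERE (score, id) < ({last_score}, {last_id}) \
             ORDER BY score DESC, id DESC LIMIT {page_size}"
        ),
    }
}

fn main() {
    let first = leaderboard_page_sql(None, 50);
    assert!(first.ends_with("LIMIT 50"));
    // Cursor from the last row of the previous page.
    let next = leaderboard_page_sql(Some((1200, 17)), 50);
    assert!(next.contains("(score, id) < (1200, 17)"));
}
```

The `(score, id)` row-value comparison matches the composite `ORDER BY`, so Postgres can serve each page from the same covering index regardless of depth.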
Oxidite execution path
- Use `oxidite-db` query APIs for direct SQL execution.
- Map result rows into typed response structs.
- Add route-level timing + rows-scanned metrics.
Leaderboard example checklist
- index on `(event_id, score DESC, updated_at DESC)`
- index on submission facts `(event_id, user_id, solved_at)`
- immutable event snapshots where possible
Validation gates
- `EXPLAIN (ANALYZE, BUFFERS)` baseline before migration
- p95 latency comparison under representative load
- correctness checks on tie-break rules
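Tie-break correctness is easiest to verify by extracting the comparator and asserting on representative rows, independent of the SQL. A sketch assuming a common rule, higher score first, earlier last solve breaking ties (the rule itself is hypothetical and must match your SQL `ORDER BY`):

```rust
use std::cmp::Ordering;

#[derive(Debug)]
struct Row {
    user_id: i64,
    score: i64,
    last_solved_at: i64,
}

/// Hypothetical leaderboard ordering: higher score wins; on equal
/// score, the earlier last solve wins.
fn leaderboard_cmp(a: &Row, b: &Row) -> Ordering {
    b.score
        .cmp(&a.score)
        .then(a.last_solved_at.cmp(&b.last_solved_at))
}

fn main() {
    let mut rows = vec![
        Row { user_id: 1, score: 100, last_solved_at: 2_000 },
        Row { user_id: 2, score: 300, last_solved_at: 1_500 },
        Row { user_id: 3, score: 100, last_solved_at: 1_000 },
    ];
    rows.sort_by(leaderboard_cmp);
    let order: Vec<i64> = rows.iter().map(|r| r.user_id).collect();
    // Ties at score 100 are broken by the earlier solve (user 3).
    assert_eq!(order, vec![2, 3, 1]);
}
```

The same test fixtures can then be replayed against the real SQL query to confirm the in-code comparator and the database ordering never diverge.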
Redis Session Compatibility (Node -> Oxidite)
This guide keeps users logged in during migration.
Compatibility goals
- Preserve cookie name and signing behavior.
- Preserve Redis key namespace and TTL policy.
- Preserve session invalidation semantics.
Migration approach
- Run Oxidite and Node against the same Redis session store.
- Validate Oxidite middleware reads existing session records.
- Keep session payload backward-compatible through transition.
Checklist
- Same cookie attributes (`Secure`, `HttpOnly`, `SameSite`, `Path`, `Domain`)
- Same rotation/renewal logic
- Same logout and forced-revoke behavior
Rollout
- Start with read-only session validation endpoints.
- Enable write/update only after parity checks pass.
- Keep dual read tolerance for one release cycle.
Kafka Integration with Idempotent Consumers
Use this guide when migrating event workers from Node to Oxidite.
Design rules
- Process messages at-least-once.
- Make handlers idempotent.
- Commit offsets only after durable side effects.
Idempotency techniques
- Dedup table keyed by `event_id` or producer idempotency key.
- Transactional write pattern: business change + dedup marker together.
- Ignore duplicates as successful no-op.
Recommended worker flow
- Receive message.
- Validate schema/version.
- Begin DB transaction.
- Check dedup marker.
- Apply side effects if first-seen.
- Persist dedup marker.
- Commit transaction.
- Commit Kafka offset.
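The dedup check (steps 4-6) is the heart of the pattern. A framework-free sketch with an in-memory set standing in for the dedup table; in production the marker and the business side effect commit in the same database transaction, as described above:

```rust
use std::collections::HashSet;

/// In-memory stand-in for a dedup table keyed by event_id. In a real
/// worker the marker write and the side effect share one transaction.
struct Worker {
    seen: HashSet<String>,
    applied: Vec<String>,
}

impl Worker {
    fn new() -> Self {
        Self { seen: HashSet::new(), applied: Vec::new() }
    }

    /// Returns true if the side effect was applied, false for a
    /// duplicate delivery (still treated as a successful no-op, so the
    /// offset can be committed either way).
    fn handle(&mut self, event_id: &str, payload: &str) -> bool {
        if !self.seen.insert(event_id.to_string()) {
            return false; // duplicate: already processed, skip side effect
        }
        self.applied.push(payload.to_string());
        true
    }
}

fn main() {
    let mut worker = Worker::new();
    assert!(worker.handle("evt-1", "credit +10"));
    // At-least-once delivery: the same event arrives again after a rebalance.
    assert!(!worker.handle("evt-1", "credit +10"));
    assert_eq!(worker.applied.len(), 1); // side effect applied exactly once
}
```

Because duplicates resolve to a no-op, the consumer can safely re-process any message whose offset was not yet committed when a crash occurred.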
Failure handling
- Retry transient DB/network errors.
- Dead-letter poison messages with context.
- Expose lag/retry/dead-letter metrics.
Production Setup
Deploying Oxidite applications to production requires careful consideration of performance, security, monitoring, and reliability. This chapter covers everything you need to know to run Oxidite applications in production.
Overview
Production setup includes:
- Environment configuration
- Security hardening
- Performance optimization
- Monitoring and logging
- Deployment strategies
- Scaling considerations
- Backup and disaster recovery
Environment Configuration
Configure your application for production environments:
# config/production.toml
[server]
host = "0.0.0.0"
port = 80
workers = 4
timeout = 30
keep_alive = 75
tcp_nodelay = true
[database]
url = "${DATABASE_URL}"
pool_size = 20
timeout = 30
max_lifetime = 1800
idle_timeout = 600
[logging]
level = "info"
format = "json"
output = "stdout"
sentry_dsn = "${SENTRY_DSN}"
[cache]
backend = "redis"
url = "${REDIS_URL}"
ttl = 3600
[security]
cors_enabled = true
allowed_origins = ["https://yourdomain.com", "https://www.yourdomain.com"]
csrf_enabled = true
hsts_enabled = true
content_security_policy = "default-src 'self'; script-src 'self' 'unsafe-inline'"
rate_limiting = true
max_requests_per_minute = 100
[ssl]
enabled = true
cert_path = "/etc/ssl/certs/cert.pem"
key_path = "/etc/ssl/private/key.pem"
Environment Variables
Use environment variables for sensitive configuration:
# Production environment variables
export DATABASE_URL="postgresql://user:pass@prod-db:5432/app_prod"
export REDIS_URL="redis://prod-redis:6379"
export JWT_SECRET="long-random-string-here"
export ENCRYPTION_KEY="32-byte-encryption-key-here"
export SENTRY_DSN="https://key@sentry.io/project"
export SMTP_HOST="smtp.gmail.com"
export SMTP_USER="noreply@yourdomain.com"
export SMTP_PASS="smtp-password"
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
Configuration Loading
Load configuration dynamically:
use oxidite::prelude::*;
use config::{Config, ConfigError, Environment, File};
#[derive(serde::Deserialize, Clone)]
pub struct AppConfig {
pub server: ServerConfig,
pub database: DatabaseConfig,
pub logging: LoggingConfig,
pub security: SecurityConfig,
pub cache: CacheConfig,
}
#[derive(serde::Deserialize, Clone)]
pub struct ServerConfig {
pub host: String,
pub port: u16,
pub workers: usize,
pub timeout: u64,
}
#[derive(serde::Deserialize, Clone)]
pub struct DatabaseConfig {
pub url: String,
pub pool_size: u32,
pub timeout: u64,
}
#[derive(serde::Deserialize, Clone)]
pub struct LoggingConfig {
pub level: String,
pub format: String,
pub sentry_dsn: Option<String>,
}
#[derive(serde::Deserialize, Clone)]
pub struct SecurityConfig {
pub cors_enabled: bool,
pub allowed_origins: Vec<String>,
pub csrf_enabled: bool,
pub rate_limiting: bool,
pub max_requests_per_minute: u32,
}
#[derive(serde::Deserialize, Clone)]
pub struct CacheConfig {
pub backend: String,
pub url: String,
pub ttl: u64,
}
impl AppConfig {
pub fn from_env() -> Result<Self, ConfigError> {
let env = std::env::var("APP_ENV").unwrap_or_else(|_| "development".to_string());
let cfg = Config::builder()
.add_source(File::with_name("config/default"))
// Layer the environment-specific file (e.g. config/production) on top
.add_source(File::with_name(&format!("config/{}", env)).required(false))
// Environment variables with the APP_ prefix take highest precedence
.add_source(Environment::with_prefix("APP"));
cfg.build()?.try_deserialize()
}
}
// Initialize application with configuration
#[tokio::main]
async fn main() -> Result<()> {
let config = AppConfig::from_env()
.map_err(|e| Error::InternalServerError(format!("Configuration error: {}", e)))?;
// Initialize logging
init_logging(&config.logging).await?;
// Initialize database
init_database(&config.database).await?;
// Initialize cache
init_cache(&config.cache).await?;
// Create and run server
let router = create_routes(&config).await?;
let server = Server::new(router);
server.listen(format!("{}:{}", config.server.host, config.server.port).parse()?).await
}
async fn init_logging(config: &LoggingConfig) -> Result<()> {
// Build a JSON tracing subscriber; a RUST_LOG environment variable,
// if set, overrides the configured level.
use tracing_subscriber::{fmt, EnvFilter};
let filter = EnvFilter::try_from_default_env()
.unwrap_or_else(|_| EnvFilter::new(&config.level));
let subscriber = fmt()
.with_env_filter(filter)
.json()
.finish();
tracing::subscriber::set_global_default(subscriber)
.map_err(|e| Error::InternalServerError(format!("Logging setup error: {}", e)))?;
Ok(())
}
async fn init_database(config: &DatabaseConfig) -> Result<()> {
// Initialize database connection pool
println!("Connecting to database: {}", config.url);
Ok(())
}
async fn init_cache(config: &CacheConfig) -> Result<()> {
// Initialize cache backend
println!("Connecting to cache: {} ({})", config.url, config.backend);
Ok(())
}
async fn create_routes(_config: &AppConfig) -> Result<Router> {
let mut router = Router::new();
// Add routes
router.get("/", |_req| async { Ok(Response::text("Hello from production!".to_string())) });
Ok(router)
}
Security Hardening
Implement security best practices:
use oxidite::prelude::*;
// Security middleware
async fn security_middleware(req: Request, next: Next) -> Result<Response> {
// Add security headers
let mut response = next.run(req).await?;
// HTTP Strict Transport Security
response.headers_mut().insert(
"Strict-Transport-Security",
"max-age=31536000; includeSubDomains; preload".parse().unwrap()
);
// X-Frame-Options
response.headers_mut().insert(
"X-Frame-Options",
"SAMEORIGIN".parse().unwrap()
);
// X-Content-Type-Options
response.headers_mut().insert(
"X-Content-Type-Options",
"nosniff".parse().unwrap()
);
// X-XSS-Protection
response.headers_mut().insert(
"X-XSS-Protection",
"1; mode=block".parse().unwrap()
);
// Content Security Policy
response.headers_mut().insert(
"Content-Security-Policy",
"default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https:; connect-src 'self' https://*.sentry.io".parse().unwrap()
);
// Referrer Policy
response.headers_mut().insert(
"Referrer-Policy",
"strict-origin-when-cross-origin".parse().unwrap()
);
Ok(response)
}
// Input validation middleware
async fn input_validation_middleware(req: Request, next: Next) -> Result<Response> {
// Validate content length
if let Some(content_length) = req.headers().get("content-length") {
if let Ok(length_str) = content_length.to_str() {
if let Ok(length) = length_str.parse::<usize>() {
const MAX_BODY_SIZE: usize = 10 * 1024 * 1024; // 10MB
if length > MAX_BODY_SIZE {
return Err(Error::PayloadTooLarge);
}
}
}
}
// Sanitize input (simplified)
let mut req = req;
validate_request_body(&mut req).await?;
next.run(req).await
}
async fn validate_request_body(req: &mut Request) -> Result<()> {
// In a real implementation, this would validate and sanitize the request body
// Check for SQL injection patterns, XSS attempts, etc.
Ok(())
}
// Rate limiting middleware
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use std::time::{Duration, Instant};
#[derive(Clone)]
pub struct RateLimiter {
limits: Arc<RwLock<HashMap<String, Vec<Instant>>>>,
max_requests: u32,
window_duration: Duration,
}
impl RateLimiter {
pub fn new(max_requests: u32, window_seconds: u64) -> Self {
Self {
limits: Arc::new(RwLock::new(HashMap::new())),
max_requests,
window_duration: Duration::from_secs(window_seconds),
}
}
pub async fn is_allowed(&self, identifier: &str) -> bool {
let now = Instant::now();
let window_start = now - self.window_duration;
let mut limits = self.limits.write().await;
// Clean old requests
if let Some(times) = limits.get_mut(identifier) {
times.retain(|time| *time > window_start);
}
// Check limit and record the request under the same write lock
let times = limits.entry(identifier.to_string()).or_insert_with(Vec::new);
if times.len() < self.max_requests as usize {
    times.push(now);
    true
} else {
    false
}
}
}
async fn rate_limiting_middleware(
req: Request,
next: Next,
State(rate_limiter): State<Arc<RateLimiter>>
) -> Result<Response> {
let client_ip = req.headers()
.get("x-forwarded-for")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown")
.to_string();
if !rate_limiter.is_allowed(&client_ip).await {
return Err(Error::TooManyRequests);
}
next.run(req).await
}
Performance Optimization
Optimize your application for production performance:
use oxidite::prelude::*;
use std::sync::Arc;
// Connection pooling configuration
pub struct ConnectionPoolConfig {
pub min_connections: u32,
pub max_connections: u32,
pub acquire_timeout: std::time::Duration,
pub idle_timeout: std::time::Duration,
pub max_lifetime: std::time::Duration,
}
impl ConnectionPoolConfig {
pub fn production() -> Self {
Self {
min_connections: 5,
max_connections: 20,
acquire_timeout: std::time::Duration::from_secs(30),
idle_timeout: std::time::Duration::from_secs(600),
max_lifetime: std::time::Duration::from_secs(1800),
}
}
}
// Caching middleware
use std::collections::HashMap;
use tokio::sync::RwLock;
#[derive(Clone)]
pub struct CacheLayer {
store: Arc<RwLock<HashMap<String, CachedResponse>>>,
ttl: std::time::Duration,
}
#[derive(Clone)]
struct CachedResponse {
response: Response,
timestamp: std::time::Instant,
}
impl CacheLayer {
pub fn new(ttl_seconds: u64) -> Self {
Self {
store: Arc::new(RwLock::new(HashMap::new())),
ttl: std::time::Duration::from_secs(ttl_seconds),
}
}
pub async fn get(&self, key: &str) -> Option<Response> {
let cache = self.store.read().await;
if let Some(cached) = cache.get(key) {
if cached.timestamp.elapsed() < self.ttl {
Some(cached.response.clone())
} else {
None
}
} else {
None
}
}
pub async fn set(&self, key: String, response: Response) {
let mut cache = self.store.write().await;
cache.insert(key, CachedResponse {
response,
timestamp: std::time::Instant::now(),
});
}
}
// Caching middleware for GET requests
async fn caching_middleware(
req: Request,
next: Next,
State(cache): State<Arc<CacheLayer>>
) -> Result<Response> {
if req.method() == http::Method::GET {
let cache_key = format!("{}:{}", req.method(), req.uri());
// Try to get from cache
if let Some(cached_response) = cache.get(&cache_key).await {
return Ok(cached_response);
}
// Execute request
let response = next.run(req).await?;
// Cache the response if appropriate
if response.status().is_success() {
cache.set(cache_key, response.clone()).await;
}
Ok(response)
} else {
// For non-GET requests, bypass cache
next.run(req).await
}
}
// Compression middleware
// These crates would provide the actual encoders; the helpers below are
// stubs that only set headers
use brotli::enc::backward_references::BrotliEncoderParams;
use flate2::write::{GzEncoder, DeflateEncoder};
use flate2::Compression;
async fn compression_middleware(req: Request, next: Next) -> Result<Response> {
let accept_encoding = req.headers()
.get("accept-encoding")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("");
let response = next.run(req).await?;
// Only compress if response is large enough and client accepts compression
let body_size = get_body_size(&response);
if body_size > 1024 && response.status().is_success() {
let mut response = response;
if accept_encoding.contains("br") {
// Brotli compression
compress_response_br(&mut response).await?;
} else if accept_encoding.contains("gzip") {
// Gzip compression
compress_response_gzip(&mut response).await?;
} else if accept_encoding.contains("deflate") {
// Deflate compression
compress_response_deflate(&mut response).await?;
}
}
Ok(response)
}
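The `contains`-based check above ignores q-values, so a header like `gzip;q=0` would still be treated as acceptable. A more faithful negotiation helper can be sketched in plain std Rust (`preferred_encoding` is a hypothetical name, not an Oxidite API, and `identity`/`*` handling is omitted):

```rust
/// Pick the preferred encoding from an Accept-Encoding header value,
/// honoring q-values. Encodings with q=0 are excluded; the highest
/// q-value among the supported encodings wins.
fn preferred_encoding(header: &str, supported: &[&str]) -> Option<String> {
    let mut best: Option<(f32, String)> = None;
    for part in header.split(',') {
        let mut pieces = part.trim().split(';');
        let name = pieces.next()?.trim().to_ascii_lowercase();
        // Default quality is 1.0 unless an explicit q= parameter appears
        let q: f32 = pieces
            .find_map(|p| p.trim().strip_prefix("q="))
            .and_then(|v| v.parse().ok())
            .unwrap_or(1.0);
        if q > 0.0 && supported.contains(&name.as_str()) {
            if best.as_ref().map_or(true, |(bq, _)| q > *bq) {
                best = Some((q, name));
            }
        }
    }
    best.map(|(_, name)| name)
}

fn main() {
    let header = "gzip;q=0.8, br;q=1.0, deflate;q=0.5";
    assert_eq!(
        preferred_encoding(header, &["br", "gzip", "deflate"]),
        Some("br".to_string())
    );
    // br unsupported here, so the next-best supported encoding wins
    println!("{:?}", preferred_encoding(header, &["gzip", "deflate"]));
}
```

A middleware could call this once and branch on the result instead of three `contains` checks.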
fn get_body_size(response: &Response) -> usize {
    // Simplified placeholder: always returns 0, so compression is never
    // triggered here; a real implementation would inspect the body length
    0
}
async fn compress_response_br(response: &mut Response) -> Result<()> {
    // Placeholder: a real implementation must compress the body with the
    // brotli encoder before setting this header, or clients cannot decode it
    response.headers_mut().insert(
        "Content-Encoding",
        "br".parse().unwrap()
    );
    Ok(())
}
async fn compress_response_gzip(response: &mut Response) -> Result<()> {
    // Placeholder: compress the body with GzEncoder before setting the header
    response.headers_mut().insert(
        "Content-Encoding",
        "gzip".parse().unwrap()
    );
    Ok(())
}
async fn compress_response_deflate(response: &mut Response) -> Result<()> {
    // Placeholder: compress the body with DeflateEncoder before setting the header
    response.headers_mut().insert(
        "Content-Encoding",
        "deflate".parse().unwrap()
    );
    Ok(())
}
// Static file serving with caching
use std::path::Path;
use tokio::fs;
async fn static_file_handler(Path(file_path): Path<String>) -> Result<Response> {
// Validate path to prevent directory traversal
if file_path.contains("..") || file_path.starts_with('/') {
return Err(Error::BadRequest("Invalid file path".to_string()));
}
let full_path = format!("public/{}", file_path);
// Check if file exists
if !Path::new(&full_path).exists() {
return Err(Error::NotFound);
}
// Read file
let contents = fs::read(&full_path).await
.map_err(|e| Error::InternalServerError(format!("Failed to read file: {}", e)))?;
// Set appropriate content type
let content_type = get_content_type(&file_path);
// Note: from_utf8_lossy corrupts binary assets (images, fonts); use a
// byte-based response constructor for those in a real implementation
let mut response = Response::html(String::from_utf8_lossy(&contents).to_string());
response.headers_mut().insert(
    "Content-Type",
    content_type.parse().unwrap()
);
// Add caching headers
response.headers_mut().insert(
"Cache-Control",
"public, max-age=31536000".parse().unwrap() // 1 year
);
// Add ETag for cache validation
use sha2::{Sha256, Digest};
let mut hasher = Sha256::new();
hasher.update(&contents);
let hash = format!("{:x}", hasher.finalize());
response.headers_mut().insert(
"ETag",
format!("\"{}\"", hash).parse().unwrap()
);
Ok(response)
}
fn get_content_type(path: &str) -> &'static str {
match std::path::Path::new(path).extension().and_then(|ext| ext.to_str()) {
Some("html") => "text/html",
Some("css") => "text/css",
Some("js") => "application/javascript",
Some("json") => "application/json",
Some("png") => "image/png",
Some("jpg") | Some("jpeg") => "image/jpeg",
Some("gif") => "image/gif",
Some("svg") => "image/svg+xml",
Some("ico") => "image/x-icon",
Some("woff") => "font/woff",
Some("woff2") => "font/woff2",
Some("ttf") => "font/ttf",
Some("eot") => "application/vnd.ms-fontobject",
_ => "application/octet-stream",
}
}
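The static file handler above emits an ETag but never checks the client's `If-None-Match` header, so revalidation requests still get a full body. The comparison itself is simple string matching (a std-only sketch; weak validators are handled in a simplified way):

```rust
/// Return true when the client's If-None-Match header matches the current
/// ETag, i.e. the handler can answer 304 Not Modified without a body.
fn etag_matches(if_none_match: &str, etag: &str) -> bool {
    // "*" matches any current representation
    if if_none_match.trim() == "*" {
        return true;
    }
    // The header may carry a comma-separated list of tags; strip the
    // weak-validator prefix before comparing
    if_none_match
        .split(',')
        .map(|t| t.trim().trim_start_matches("W/"))
        .any(|t| t == etag)
}

fn main() {
    assert!(etag_matches("\"abc\", \"def\"", "\"def\""));
    assert!(etag_matches("W/\"abc\"", "\"abc\""));
    assert!(!etag_matches("\"abc\"", "\"xyz\""));
    println!("etag checks passed");
}
```

When it returns true, the handler can skip reading the file body entirely and respond with status 304.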
Monitoring and Logging
Implement comprehensive monitoring:
use oxidite::prelude::*;
use serde_json::json;
use std::sync::Arc;
// Structured logging middleware
async fn logging_middleware(req: Request, next: Next) -> Result<Response> {
let start = std::time::Instant::now();
let method = req.method().clone();
let uri = req.uri().clone();
let user_agent = req.headers()
.get("user-agent")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown")
.to_string();
let remote_addr = req.headers()
.get("x-forwarded-for")
.and_then(|hv| hv.to_str().ok())
.unwrap_or("unknown")
.to_string();
let response = next.run(req).await;
let duration = start.elapsed();
let log_entry = json!({
"timestamp": chrono::Utc::now().to_rfc3339(),
"level": "info",
"event": "http_request",
"method": method.to_string(),
"uri": uri.to_string(),
"user_agent": user_agent,
"remote_addr": remote_addr,
"status": response.as_ref().map(|r| r.status().as_u16()).unwrap_or(500),
"duration_ms": duration.as_millis(),
"service": "oxidite-app"
});
// Log to stdout in JSON format
println!("{}", log_entry);
response
}
// Error logging middleware
async fn error_logging_middleware(req: Request, next: Next) -> Result<Response> {
    // Capture request metadata before `req` is moved into `next.run`
    let method = req.method().to_string();
    let uri = req.uri().to_string();
    match next.run(req).await {
        Ok(response) => Ok(response),
        Err(error) => {
            let error_log = json!({
                "timestamp": chrono::Utc::now().to_rfc3339(),
                "level": "error",
                "event": "http_error",
                "error": error.to_string(),
                "error_type": error_type_name(&error),
                "method": method,
                "uri": uri,
                "service": "oxidite-app"
            });
            eprintln!("{}", error_log);
            Err(error)
        }
    }
}
fn error_type_name(error: &Error) -> &'static str {
    match error {
        Error::NotFound => "NotFound",
        Error::BadRequest(_) => "BadRequest",
        Error::Unauthorized(_) => "Unauthorized",
        Error::Forbidden => "Forbidden",
        Error::TooManyRequests => "TooManyRequests",
        Error::InternalServerError(_) => "InternalServerError",
        Error::Validation(_) => "Validation",
        Error::RateLimited => "RateLimited",
        _ => "Unknown",
    }
}
// Metrics collection
use std::sync::atomic::{AtomicU64, Ordering};
pub struct RequestMetrics {
pub total_requests: AtomicU64,
pub total_errors: AtomicU64,
pub total_2xx: AtomicU64,
pub total_3xx: AtomicU64,
pub total_4xx: AtomicU64,
pub total_5xx: AtomicU64,
}
impl RequestMetrics {
pub fn new() -> Self {
Self {
total_requests: AtomicU64::new(0),
total_errors: AtomicU64::new(0),
total_2xx: AtomicU64::new(0),
total_3xx: AtomicU64::new(0),
total_4xx: AtomicU64::new(0),
total_5xx: AtomicU64::new(0),
}
}
pub fn increment_request(&self, status_code: u16) {
self.total_requests.fetch_add(1, Ordering::SeqCst);
match status_code {
200..=299 => {
self.total_2xx.fetch_add(1, Ordering::SeqCst);
}
300..=399 => {
self.total_3xx.fetch_add(1, Ordering::SeqCst);
}
400..=499 => {
self.total_4xx.fetch_add(1, Ordering::SeqCst);
}
500..=599 => {
self.total_5xx.fetch_add(1, Ordering::SeqCst);
self.total_errors.fetch_add(1, Ordering::SeqCst);
}
_ => {}
}
}
pub fn get_stats(&self) -> MetricsSnapshot {
MetricsSnapshot {
total_requests: self.total_requests.load(Ordering::SeqCst),
total_errors: self.total_errors.load(Ordering::SeqCst),
total_2xx: self.total_2xx.load(Ordering::SeqCst),
total_3xx: self.total_3xx.load(Ordering::SeqCst),
total_4xx: self.total_4xx.load(Ordering::SeqCst),
total_5xx: self.total_5xx.load(Ordering::SeqCst),
}
}
}
pub struct MetricsSnapshot {
pub total_requests: u64,
pub total_errors: u64,
pub total_2xx: u64,
pub total_3xx: u64,
pub total_4xx: u64,
pub total_5xx: u64,
}
// Metrics endpoint
async fn metrics_endpoint(State(metrics): State<Arc<RequestMetrics>>) -> Result<Response> {
let stats = metrics.get_stats();
let metrics_json = json!({
"uptime": get_uptime(),
"requests": {
"total": stats.total_requests,
"2xx": stats.total_2xx,
"3xx": stats.total_3xx,
"4xx": stats.total_4xx,
"5xx": stats.total_5xx,
},
"errors": {
"total": stats.total_errors,
"rate": if stats.total_requests > 0 {
(stats.total_errors as f64 / stats.total_requests as f64) * 100.0
} else {
0.0
}
},
"health": "healthy"
});
Ok(Response::json(metrics_json))
}
fn get_uptime() -> String {
// Calculate application uptime
// This would typically be tracked from application start time
"0h 0m 0s".to_string()
}
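`get_uptime` above returns a placeholder; given an application start `Instant` stored at boot, the elapsed time can be formatted into the same "Xh Ym Zs" shape (illustrative helper, not an Oxidite API):

```rust
use std::time::Duration;

/// Format an elapsed duration as "Xh Ym Zs", matching the placeholder
/// string returned by get_uptime above.
fn format_uptime(elapsed: Duration) -> String {
    let secs = elapsed.as_secs();
    format!("{}h {}m {}s", secs / 3600, (secs % 3600) / 60, secs % 60)
}

fn main() {
    // 3725 seconds is one hour, two minutes, five seconds
    assert_eq!(format_uptime(Duration::from_secs(3_725)), "1h 2m 5s");
    println!("{}", format_uptime(Duration::from_secs(3_725)));
}
```

In practice the start `Instant` would live in shared application state and `get_uptime` would call `format_uptime(start.elapsed())`.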
Deployment Strategies
Deploy your application with various strategies:
# Dockerfile for production deployment
FROM rust:1.92 as builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/my_app /usr/local/bin/my_app
EXPOSE 80
CMD ["my_app"]
# docker-compose.yml for production
version: '3.8'
services:
app:
build: .
ports:
- "80:80"
environment:
- APP_ENV=production
- DATABASE_URL=postgresql://user:pass@db:5432/app_prod
- REDIS_URL=redis://redis:6379
depends_on:
- db
- redis
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
db:
image: postgres:15
environment:
POSTGRES_DB: app_prod
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
redis:
image: redis:7-alpine
restart: unless-stopped
volumes:
postgres_data:
# Kubernetes deployment example
apiVersion: apps/v1
kind: Deployment
metadata:
name: oxidite-app
spec:
replicas: 3
selector:
matchLabels:
app: oxidite-app
template:
metadata:
labels:
app: oxidite-app
spec:
containers:
- name: app
image: my-org/oxidite-app:latest
ports:
- containerPort: 80
env:
- name: APP_ENV
value: "production"
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: db-secret
key: url
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: oxidite-app-service
spec:
selector:
app: oxidite-app
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
Health Checks
Implement health check endpoints:
use oxidite::prelude::*;
// Health check endpoint
async fn health_check(_req: Request) -> Result<Response> {
// Perform health checks
let db_healthy = check_database_health().await;
let cache_healthy = check_cache_health().await;
let disk_space_ok = check_disk_space().await;
let healthy = db_healthy && cache_healthy && disk_space_ok;
let status = if healthy { "healthy" } else { "unhealthy" };
let health_response = serde_json::json!({
"status": status,
"checks": {
"database": db_healthy,
"cache": cache_healthy,
"disk_space": disk_space_ok
},
"timestamp": chrono::Utc::now().to_rfc3339()
});
if healthy {
    Ok(Response::json(health_response))
} else {
    // Returning 200 with an "unhealthy" payload suits dashboards; for load
    // balancer probes, prefer responding 503 so traffic is routed away
    Ok(Response::json(health_response))
}
}
async fn readiness_check(_req: Request) -> Result<Response> {
// Readiness check - is the app ready to serve traffic?
let ready = check_readiness_conditions().await;
if ready {
Ok(Response::ok())
} else {
Err(Error::ServiceUnavailable("Application not ready".to_string()))
}
}
async fn liveness_check(_req: Request) -> Result<Response> {
// Liveness check - is the app alive?
// Usually just a simple response to indicate the process is running
Ok(Response::ok())
}
async fn check_database_health() -> bool {
// Check database connectivity
// In a real app, this would make a simple query
true
}
async fn check_cache_health() -> bool {
// Check cache connectivity
// In a real app, this would ping the cache
true
}
async fn check_disk_space() -> bool {
// Check available disk space
// In a real app, this would check actual disk usage
true
}
async fn check_readiness_conditions() -> bool {
// Check if all prerequisites are met
check_database_health().await && check_cache_health().await
}
Backup and Recovery
Implement backup strategies:
use oxidite::prelude::*;
use tokio::fs;
use std::path::Path;
// Backup handler (protect these endpoints with admin authentication in production)
async fn backup_handler(_req: Request) -> Result<Response> {
// Trigger a backup
let backup_result = create_backup().await;
match backup_result {
Ok(backup_info) => Ok(Response::json(serde_json::json!({
"status": "success",
"backup": backup_info
}))),
Err(e) => Err(Error::InternalServerError(format!("Backup failed: {}", e))),
}
}
async fn create_backup() -> Result<BackupInfo> {
// Create a database backup
let timestamp = chrono::Utc::now().format("%Y%m%d_%H%M%S").to_string();
let backup_filename = format!("backup_{}.sql", timestamp);
let backup_path = format!("./backups/{}", backup_filename);
// Ensure backup directory exists
fs::create_dir_all("./backups").await?;
// In a real app, this would export the database
// For example: pg_dump for PostgreSQL
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await; // Simulate backup process
// Create a dummy backup file for the example
fs::write(&backup_path, "/* Database backup content */").await?;
Ok(BackupInfo {
filename: backup_filename,
path: backup_path,
size: 1024, // Size in bytes
created_at: chrono::Utc::now().to_rfc3339(),
})
}
#[derive(serde::Serialize)]
struct BackupInfo {
filename: String,
path: String,
size: u64,
created_at: String,
}
// Restore handler
async fn restore_handler(
Path(backup_file): Path<String>,
_req: Request
) -> Result<Response> {
    // Reject path traversal in the user-supplied filename
    if backup_file.contains("..") || backup_file.contains('/') {
        return Err(Error::BadRequest("Invalid backup filename".to_string()));
    }
    let backup_path = format!("./backups/{}", backup_file);
    if !Path::new(&backup_path).exists() {
        return Err(Error::NotFound);
    }
// In a real app, this would restore the database from the backup
let restore_result = restore_from_backup(&backup_path).await;
match restore_result {
Ok(_) => Ok(Response::json(serde_json::json!({
"status": "success",
"message": "Restore completed successfully"
}))),
Err(e) => Err(Error::InternalServerError(format!("Restore failed: {}", e))),
}
}
async fn restore_from_backup(_backup_path: &str) -> Result<()> {
// In a real app, this would restore the database
// For example: psql to import a PostgreSQL dump
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await; // Simulate restore process
Ok(())
}
// List backups
async fn list_backups(_req: Request) -> Result<Response> {
let mut backups = Vec::new();
// Scan backup directory
let mut entries = fs::read_dir("./backups").await?;
while let Some(entry) = entries.next_entry().await? {
let path = entry.path();
if path.extension().and_then(|ext| ext.to_str()) == Some("sql") {
if let Some(filename) = path.file_name().and_then(|name| name.to_str()) {
let metadata = entry.metadata().await?;
// Convert the creation time to RFC 3339 so entries sort chronologically
let created_at = metadata.created().ok()
    .map(|t| chrono::DateTime::<chrono::Utc>::from(t).to_rfc3339())
    .unwrap_or_default();
backups.push(serde_json::json!({
    "filename": filename,
    "size": metadata.len(),
    "created_at": created_at
}));
}
}
}
// Sort by creation time (newest first)
backups.sort_by(|a, b| {
b["created_at"].as_str().cmp(&a["created_at"].as_str())
});
Ok(Response::json(serde_json::json!({
"backups": backups,
"count": backups.len()
})))
}
Scaling Considerations
Design for horizontal scaling:
use oxidite::prelude::*;
use std::sync::Arc;
// Horizontal scaling considerations
pub struct ScalableAppState {
pub app_id: String,
pub instance_id: String,
pub cluster_nodes: Vec<String>,
pub shared_cache: Arc<dyn CacheProvider>,
pub shared_database: Arc<dyn DatabaseProvider>,
}
// Cache provider trait for pluggable cache backends
pub trait CacheProvider: Send + Sync {
fn get(&self, key: &str) -> Option<String>;
fn set(&self, key: String, value: String, ttl: std::time::Duration) -> Result<()>;
fn delete(&self, key: &str) -> Result<()>;
fn clear(&self) -> Result<()>;
}
// Database provider trait for pluggable database backends
pub trait DatabaseProvider: Send + Sync {
fn query(&self, sql: &str) -> Result<Vec<serde_json::Value>>;
fn execute(&self, sql: &str) -> Result<u64>;
fn transaction<F, R>(&self, f: F) -> Result<R>
where
F: FnOnce(&dyn Transaction) -> Result<R>;
}
pub trait Transaction {
fn query(&self, sql: &str) -> Result<Vec<serde_json::Value>>;
fn execute(&self, sql: &str) -> Result<u64>;
}
// Distributed lock for coordination between instances
pub trait DistributedLock: Send + Sync {
fn acquire(&self, key: &str, ttl: std::time::Duration) -> Result<LockGuard>;
}
pub struct LockGuard;
impl Drop for LockGuard {
fn drop(&mut self) {
// Release lock automatically
}
}
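The acquire/guard pattern above can be illustrated with a single-process stand-in (std-only sketch; `LocalLock` is a hypothetical name, and a production `DistributedLock` would use something like Redis `SET NX PX` or a database advisory lock instead):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// In-process stand-in for a distributed lock: each key maps to an
/// expiry instant, and an expired entry is treated as free.
struct LocalLock {
    held: Mutex<HashMap<String, Instant>>,
}

impl LocalLock {
    fn new() -> Self {
        Self { held: Mutex::new(HashMap::new()) }
    }

    /// Try to take the lock; returns false if another holder's TTL
    /// has not yet expired.
    fn acquire(&self, key: &str, ttl: Duration) -> bool {
        let mut held = self.held.lock().unwrap();
        let now = Instant::now();
        let still_held = held.get(key).map_or(false, |expiry| *expiry > now);
        if still_held {
            false
        } else {
            held.insert(key.to_string(), now + ttl);
            true
        }
    }

    fn release(&self, key: &str) {
        self.held.lock().unwrap().remove(key);
    }
}

fn main() {
    let lock = LocalLock::new();
    assert!(lock.acquire("critical_section", Duration::from_secs(30)));
    assert!(!lock.acquire("critical_section", Duration::from_secs(30)));
    lock.release("critical_section");
    assert!(lock.acquire("critical_section", Duration::from_secs(30)));
    println!("lock semantics ok");
}
```

The TTL matters in the distributed case: if an instance crashes while holding the lock, the expiry guarantees other instances eventually proceed.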
// Example of scaling-aware handler
async fn distributed_handler(
_req: Request,
State(lock): State<Arc<dyn DistributedLock>>
) -> Result<Response> {
// Acquire a distributed lock for the critical section; the guard
// releases the lock when it goes out of scope (see Drop above)
let _guard = lock.acquire("critical_section", std::time::Duration::from_secs(30))?;
// Perform critical operation
let result = perform_critical_operation().await?;
Ok(Response::json(result))
}
async fn perform_critical_operation() -> Result<serde_json::Value> {
// Critical operation that should only run on one instance at a time
Ok(serde_json::json!({ "status": "completed" }))
}
// Load balancing considerations
async fn load_balancer_health_check(_req: Request) -> Result<Response> {
// Return light-weight health check for load balancers
Ok(Response::text("OK".to_string()))
}
// Instance-specific information
async fn instance_info(
State(app_state): State<Arc<ScalableAppState>>
) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"instance_id": app_state.instance_id,
"app_id": app_state.app_id,
"cluster_size": app_state.cluster_nodes.len(),
"timestamp": chrono::Utc::now().to_rfc3339()
})))
}
Summary
Production setup for Oxidite applications requires attention to:
- Environment Configuration: Proper configuration loading and environment variables
- Security Hardening: Headers, validation, rate limiting, and input sanitization
- Performance Optimization: Caching, compression, and connection pooling
- Monitoring: Logging, metrics, and health checks
- Deployment: Containerization, orchestration, and scaling
- Backup and Recovery: Regular backups and restore procedures
- Scaling: Horizontal scaling with shared resources
Following these practices ensures your Oxidite applications are secure, performant, and reliable in production environments.
Docker Deployment Guide
This chapter covers Docker-based development and production deployment for Oxidite apps.
Goals
- reproducible local dev environments
- predictable production images
- safe rollout and rollback using containers
1. Basic Dockerfile (single-stage)
FROM rust:1.89-bookworm
WORKDIR /app
COPY . .
RUN cargo build --release
EXPOSE 8080
CMD ["./target/release/example-project"]
Use this for quick experiments only. It produces large images.
2. Recommended multi-stage Dockerfile
# Build stage
FROM rust:1.89-bookworm AS builder
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
COPY src ./src
COPY templates ./templates
COPY public ./public
COPY oxidite.toml ./
RUN cargo build --release
# Runtime stage
FROM debian:bookworm-slim
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/example-project /app/example-project
COPY --from=builder /app/templates /app/templates
COPY --from=builder /app/public /app/public
COPY --from=builder /app/oxidite.toml /app/oxidite.toml
EXPOSE 8080
CMD ["/app/example-project"]
3. .dockerignore baseline
target
.git
.github
.DS_Store
*.log
.env
4. Docker Compose for local development
services:
app:
build: .
image: oxidite-example:dev
ports:
- "8080:8080"
environment:
RUST_LOG: info
volumes:
- ./templates:/app/templates:ro
- ./public:/app/public:ro
Run:
docker compose up --build
5. Health checks
Expose an app health endpoint and add container health check:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 15s
timeout: 3s
retries: 5
6. Production hardening checklist
- run as non-root user
- pin base image tags
- set CPU/memory limits
- configure restart policy
- forward logs to centralized collector
- externalize secrets (do not bake into image)
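The "run as non-root user" and "pin base image tags" items can be sketched as a runtime-stage fragment (user and binary names are illustrative, following the multi-stage Dockerfile above):

```dockerfile
# Runtime stage hardened to run as an unprivileged user
FROM debian:bookworm-slim
RUN useradd --system --no-create-home --shell /usr/sbin/nologin appuser
WORKDIR /app
COPY --from=builder /app/target/release/example-project /app/example-project
# Drop root before the entrypoint; port 8080 needs no extra privileges
USER appuser
EXPOSE 8080
CMD ["/app/example-project"]
```

Binding to a port above 1024 is what makes dropping root painless here; ports below 1024 would need extra capabilities.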
7. Performance considerations
- compile in release mode
- keep runtime image slim
- keep static assets separate when possible (CDN/reverse proxy)
- prefer immutable container images per release
8. Deployment patterns
- single VM with Docker Compose
- Kubernetes deployment + service + ingress
- blue/green or canary rollouts
9. Troubleshooting
- app exits immediately: verify binary path and execute permissions
- 404 for templates/static: verify copied paths and working directory
- slow startup: check image size and cold storage pulls
- TLS issues: terminate TLS at ingress/reverse proxy first
10. Suggested CI pipeline
- cargo check --workspace
- cargo test --workspace
- build image
- run container smoke test (curl /health)
- push image
- deploy with rollback metadata
Performance
Performance optimization is crucial for delivering fast, responsive Oxidite applications. This chapter covers various techniques and strategies to optimize your application’s performance.
Overview
Performance optimization includes:
- Request handling optimization
- Database query optimization
- Caching strategies
- Memory management
- Concurrency and parallelism
- Network optimizations
- Profiling and monitoring
Request Handling Optimization
Optimize how your application handles incoming requests:
use oxidite::prelude::*;
use std::sync::Arc;
// Efficient request handler with minimal allocations
async fn optimized_handler(_req: Request) -> Result<Response> {
// Pre-allocate the JSON body to avoid incremental reallocations
let mut response_data = String::with_capacity(1024);
response_data.push_str("{\"message\":\"Hello, World!\",\"timestamp\":\"");
response_data.push_str(&chrono::Utc::now().to_rfc3339());
response_data.push_str("\"}");
// Parse once so Response::json emits an object rather than a quoted string
let body: serde_json::Value = serde_json::from_str(&response_data)
.map_err(|e| Error::InternalServerError(format!("JSON build error: {}", e)))?;
Ok(Response::json(body))
}
}
// Lazy evaluation for expensive operations
async fn lazy_evaluation_handler(req: Request) -> Result<Response> {
// Only perform expensive operation if needed
let include_details = req.uri().query()
.map(|q| q.contains("details=true"))
.unwrap_or(false);
let mut response = serde_json::json!({
"simple": "data"
});
if include_details {
// Only execute expensive operation when necessary
let expensive_data = expensive_computation().await;
response["expensive_data"] = expensive_data;
}
Ok(Response::json(response))
}
async fn expensive_computation() -> serde_json::Value {
// Simulate expensive computation
tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;
serde_json::json!({ "computed": true, "value": 42 })
}
// Request preprocessing middleware
async fn preprocessing_middleware(req: Request, next: Next) -> Result<Response> {
// Parse and validate request early
if req.method() == http::Method::POST || req.method() == http::Method::PUT {
// Check content length before processing
if let Some(content_length) = req.headers().get("content-length") {
if let Ok(length_str) = content_length.to_str() {
if let Ok(length) = length_str.parse::<usize>() {
const MAX_SIZE: usize = 10 * 1024 * 1024; // 10MB
if length > MAX_SIZE {
return Err(Error::PayloadTooLarge);
}
}
}
}
}
next.run(req).await
}
Database Query Optimization
Optimize database interactions:
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
// Optimized model with proper indexing hints
#[derive(Model, Serialize, Deserialize)]
#[model(table = "optimized_users")]
pub struct OptimizedUser {
#[model(primary_key)]
pub id: i32,
#[model(unique, not_null, indexed)] // Add index hint
pub email: String,
#[model(not_null, indexed)] // Add index hint
pub name: String,
#[model(indexed)] // Add index hint
pub created_at: String,
}
// Batch operations for better performance
impl OptimizedUser {
pub async fn create_batch(users: Vec<Self>) -> Result<Vec<Self>> {
// In a real implementation, this would use bulk insert
let mut results = Vec::with_capacity(users.len());
for mut user in users {
// Simulate batch insert
user.id = rand::random::<i32>(); // Simulate auto-increment
results.push(user);
}
Ok(results)
}
pub async fn find_by_ids(ids: &[i32]) -> Result<Vec<Self>> {
// Use IN clause instead of multiple individual queries
let id_list: String = ids.iter()
.map(|id| id.to_string())
.collect::<Vec<_>>()
.join(",");
// In a real implementation, this would execute:
// SELECT * FROM optimized_users WHERE id IN (...)
Ok(vec![]) // Placeholder
}
pub async fn find_with_pagination(page: u32, per_page: u32) -> Result<(Vec<Self>, u32)> {
// Implement efficient pagination
let offset = (page - 1) * per_page;
// In a real implementation, this would execute:
// SELECT * FROM optimized_users LIMIT per_page OFFSET offset
let users = vec![]; // Placeholder
let total_count = 100; // Placeholder
Ok((users, total_count))
}
pub async fn update_batch(updates: Vec<(i32, String)>) -> Result<u32> {
// Batch update implementation
let mut affected_rows = 0;
for (_id, _name) in updates {
// In a real implementation, this would execute:
// UPDATE optimized_users SET name = ? WHERE id = ?
affected_rows += 1; // Placeholder
}
Ok(affected_rows)
}
}
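The offset arithmetic used by `find_with_pagination` can be isolated into a small helper (illustrative sketch; `pagination` is not an Oxidite API):

```rust
/// Compute the SQL OFFSET and the total page count for 1-based pages.
fn pagination(page: u32, per_page: u32, total: u32) -> (u32, u32) {
    // Clamp page to at least 1 so page 0 does not underflow
    let offset = (page.max(1) - 1) * per_page;
    // Ceiling division: 100 rows at 30 per page is 4 pages
    let pages = total.div_ceil(per_page.max(1));
    (offset, pages)
}

fn main() {
    assert_eq!(pagination(1, 30, 100), (0, 4));
    assert_eq!(pagination(4, 30, 100), (90, 4));
    println!("{:?}", pagination(4, 30, 100));
}
```

Keyset pagination (filtering on an indexed column instead of OFFSET) scales better for deep pages, since OFFSET forces the database to skip rows one by one.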
// Connection pooling optimization
use std::sync::Arc;
use tokio::sync::Semaphore;
#[derive(Clone)]
pub struct OptimizedDbPool {
semaphore: Arc<Semaphore>,
connections: Vec<Arc<dyn DatabaseConnection>>,
}
pub trait DatabaseConnection: Send + Sync {
fn execute(&self, query: &str) -> Result<()>;
fn query(&self, query: &str) -> Result<Vec<serde_json::Value>>;
}
impl OptimizedDbPool {
pub fn new(max_connections: usize) -> Self {
Self {
semaphore: Arc::new(Semaphore::new(max_connections)),
connections: Vec::new(), // In a real implementation, populate with actual connections
}
}
pub async fn with_connection<F, R>(&self, operation: F) -> Result<R>
where
F: FnOnce(&dyn DatabaseConnection) -> Result<R>,
{
let _permit = self.semaphore.acquire().await
.map_err(|_| Error::InternalServerError("Connection pool error".to_string()))?;
// In a real implementation, lease a connection and execute the operation
// This is a simplified example
let conn = self.get_connection()?;
operation(conn.as_ref())
}
fn get_connection(&self) -> Result<Arc<dyn DatabaseConnection>> {
// In a real implementation, return an available connection
Err(Error::InternalServerError("Not implemented".to_string()))
}
}
// Query optimization with prepared statements
pub struct PreparedStatement {
query: String,
param_types: Vec<DbType>,
}
#[derive(Debug)]
pub enum DbType {
Integer,
Text,
Boolean,
Timestamp,
}
impl PreparedStatement {
pub fn new(query: &str) -> Self {
// Parse query to identify parameter types
Self {
query: query.to_string(),
param_types: vec![], // In a real implementation, parse parameter types
}
}
pub async fn execute(&self, params: &[&dyn ToSql]) -> Result<()> {
// Execute prepared statement with parameters
// This would use the actual database driver
Ok(())
}
}
pub trait ToSql {
fn to_sql(&self) -> String;
}
Caching Strategies
Implement effective caching:
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{RwLock, Mutex};
use std::time::{Duration, Instant};
// In-memory cache with TTL
#[derive(Clone)]
pub struct InMemoryCache {
store: Arc<RwLock<HashMap<String, CacheEntry>>>,
capacity: usize,
default_ttl: Duration,
}
#[derive(Clone)]
struct CacheEntry {
value: String,
timestamp: Instant,
ttl: Duration,
}
impl InMemoryCache {
pub fn new(capacity: usize, default_ttl: Duration) -> Self {
Self {
store: Arc::new(RwLock::new(HashMap::new())),
capacity,
default_ttl,
}
}
pub async fn get(&self, key: &str) -> Option<String> {
let store = self.store.read().await;
if let Some(entry) = store.get(key) {
if entry.timestamp.elapsed() < entry.ttl {
Some(entry.value.clone())
} else {
// Entry expired, will be cleaned up later
None
}
} else {
None
}
}
pub async fn set(&self, key: String, value: String, ttl: Option<Duration>) -> Result<()> {
let ttl = ttl.unwrap_or(self.default_ttl);
let mut store = self.store.write().await;
// Clean up expired entries if capacity is exceeded
if store.len() >= self.capacity {
store.retain(|_, entry| entry.timestamp.elapsed() < entry.ttl);
}
store.insert(key, CacheEntry {
value,
timestamp: Instant::now(),
ttl,
});
Ok(())
}
pub async fn delete(&self, key: &str) -> Result<bool> {
let mut store = self.store.write().await;
Ok(store.remove(key).is_some())
}
pub async fn clear_expired(&self) -> Result<usize> {
let mut store = self.store.write().await;
let mut removed_count = 0;
store.retain(|_, entry| {
if entry.timestamp.elapsed() >= entry.ttl {
removed_count += 1;
false
} else {
true
}
});
Ok(removed_count)
}
}
// Redis-like cache implementation
pub struct RedisCache {
client: Arc<MockRedisClient>, // In a real implementation, use actual Redis client
}
struct MockRedisClient;
impl MockRedisClient {
pub async fn get(&self, _key: &str) -> Option<String> {
Some("cached_value".to_string()) // Placeholder
}
pub async fn set(&self, _key: &str, _value: &str, _ttl: Duration) -> Result<()> {
Ok(()) // Placeholder
}
pub async fn del(&self, _key: &str) -> Result<bool> {
Ok(true) // Placeholder
}
}
impl RedisCache {
pub fn new() -> Self {
Self {
client: Arc::new(MockRedisClient),
}
}
pub async fn get(&self, key: &str) -> Result<Option<String>> {
Ok(self.client.get(key).await)
}
pub async fn set(&self, key: &str, value: &str, ttl: Duration) -> Result<()> {
self.client.set(key, value, ttl).await
}
pub async fn delete(&self, key: &str) -> Result<bool> {
self.client.del(key).await
}
}
// Cache middleware
async fn caching_middleware(
req: Request,
next: Next,
State(cache): State<Arc<InMemoryCache>>
) -> Result<Response> {
if req.method() != http::Method::GET {
// Only cache GET requests
return next.run(req).await;
}
let cache_key = format!("response_{}_{}", req.method(), req.uri());
// Try to get from cache
if let Some(cached_response) = cache.get(&cache_key).await {
return Ok(Response::html(cached_response));
}
// Execute request
let response = next.run(req).await?;
// Cache successful responses
if response.status().is_success() {
// In a real implementation, serialize the response body for storage
cache.set(cache_key, "cached_response".to_string(), Some(Duration::from_secs(300))).await?;
}
Ok(response)
}
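One subtlety in the middleware above: the cache key embeds the raw URI, so logically identical URLs with reordered query parameters (`?a=1&b=2` vs `?b=2&a=1`) occupy separate cache entries. A hypothetical helper that normalizes the key by sorting the parameters; `path` and `query` are plain strings here, but in the middleware they would come from `req.uri()`:

```rust
// Sort query parameters so logically identical URLs share one cache entry
fn normalized_cache_key(method: &str, path: &str, query: &str) -> String {
    let mut pairs: Vec<&str> = query.split('&').filter(|p| !p.is_empty()).collect();
    pairs.sort_unstable();
    format!("response_{}_{}?{}", method, path, pairs.join("&"))
}
```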
// Cache-aside pattern implementation
pub struct CachedRepository {
cache: Arc<InMemoryCache>,
db_pool: OptimizedDbPool,
}
impl CachedRepository {
pub fn new(cache: Arc<InMemoryCache>, db_pool: OptimizedDbPool) -> Self {
Self { cache, db_pool }
}
pub async fn get_user(&self, id: i32) -> Result<Option<OptimizedUser>> {
let cache_key = format!("user_{}", id);
// Try cache first
if let Some(cached) = self.cache.get(&cache_key).await {
return Ok(serde_json::from_str(&cached).ok());
}
// Cache miss, query database
let user = self.db_pool.with_connection(|conn| {
// Execute SELECT * FROM users WHERE id = ?
Ok(None::<OptimizedUser>) // Placeholder
}).await?;
// Cache the result if found
if let Some(ref user) = user {
if let Ok(serialized) = serde_json::to_string(user) {
self.cache.set(cache_key, serialized, Some(Duration::from_secs(600))).await?;
}
}
Ok(user)
}
pub async fn invalidate_user_cache(&self, id: i32) -> Result<()> {
let cache_key = format!("user_{}", id);
self.cache.delete(&cache_key).await?;
Ok(())
}
}
Memory Management
Optimize memory usage:
use oxidite::prelude::*;
use std::sync::Arc;
use tokio::sync::Mutex;
// Efficient data structures
pub struct EfficientDataStructures;
impl EfficientDataStructures {
// Use SmallVec for small collections that may grow
pub fn use_small_collections() -> Result<()> {
use smallvec::SmallVec;
// Stack-allocated for small arrays, heap-allocated for larger ones
let mut small_vec: SmallVec<[u32; 4]> = SmallVec::new();
small_vec.push(1);
small_vec.push(2);
small_vec.push(3);
small_vec.push(4);
// If we add a 5th element, it moves to heap allocation
Ok(())
}
// Use String instead of &str when ownership is needed
pub fn efficient_string_handling() -> Result<()> {
let mut buffer = String::with_capacity(1024); // Pre-allocate
// Efficient string building
buffer.push_str("Hello");
buffer.push(' ');
buffer.push_str("World");
// Avoid unnecessary clones
let shared_string = Arc::new(buffer);
Ok(())
}
// Use Cow (Clone on Write) for flexible string handling
pub fn cow_example(input: &str) -> std::borrow::Cow<str> {
if input.contains("transform") {
std::borrow::Cow::Owned(input.replace("transform", "optimized"))
} else {
std::borrow::Cow::Borrowed(input)
}
}
// Use interned strings for repeated values
pub fn interned_strings_example() -> Result<()> {
use std::collections::HashMap;
// For repeated string values, consider interning
let mut string_interner = HashMap::new();
string_interner.insert("status_active", 1);
string_interner.insert("status_inactive", 2);
Ok(())
}
}
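The `HashMap` above only maps labels to numeric codes. An interner proper hands back a shared allocation, so repeated values point at the same heap data. A minimal sketch (the `Interner` type is illustrative, not an Oxidite API):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Minimal string interner: repeated values share one heap allocation
#[derive(Default)]
struct Interner {
    strings: HashMap<String, Arc<str>>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> Arc<str> {
        if let Some(existing) = self.strings.get(s) {
            return Arc::clone(existing);
        }
        let arc: Arc<str> = Arc::from(s);
        self.strings.insert(s.to_string(), Arc::clone(&arc));
        arc
    }
}
```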
// Memory pool for frequently allocated objects
pub struct ObjectPool<T> {
objects: Arc<Mutex<Vec<T>>>,
factory: Box<dyn Fn() -> T + Send + Sync>,
}
impl<T: Send + 'static> ObjectPool<T> {
pub fn new(factory: Box<dyn Fn() -> T + Send + Sync>, initial_size: usize) -> Self {
let mut objects = Vec::with_capacity(initial_size);
for _ in 0..initial_size {
objects.push(factory());
}
Self {
objects: Arc::new(Mutex::new(objects)),
factory,
}
}
pub async fn get(&self) -> PooledObject<T> {
let mut objects = self.objects.lock().await;
if let Some(obj) = objects.pop() {
PooledObject {
obj: Some(obj),
pool: self.objects.clone(),
}
} else {
// Create new object if pool is empty
PooledObject {
obj: Some((self.factory)()),
pool: self.objects.clone(),
}
}
}
}
pub struct PooledObject<T> {
obj: Option<T>,
pool: Arc<Mutex<Vec<T>>>,
}
impl<T> std::ops::Deref for PooledObject<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
self.obj.as_ref().unwrap()
}
}
impl<T> std::ops::DerefMut for PooledObject<T> {
fn deref_mut(&mut self) -> &mut Self::Target {
self.obj.as_mut().unwrap()
}
}
impl<T> Drop for PooledObject<T> {
fn drop(&mut self) {
if let Some(obj) = self.obj.take() {
// Return the object to the pool. Note: tokio's blocking_lock panics when
// called from within an async context, so use try_lock here; under
// contention the object is simply dropped and recreated by the factory
if let Ok(mut pool) = self.pool.try_lock() {
pool.push(obj);
}
}
}
}
// Memory-efficient JSON handling
use serde_json;
pub struct EfficientJsonHandler;
impl EfficientJsonHandler {
pub async fn handle_large_json(req: Request) -> Result<Response> {
// For large JSON payloads, process incrementally
let body_bytes = hyper::body::to_bytes(req.into_body()).await
.map_err(|e| Error::InternalServerError(e.to_string()))?;
// Parse JSON efficiently
let parsed: serde_json::Value = serde_json::from_slice(&body_bytes)
.map_err(|e| Error::BadRequest(e.to_string()))?;
// Process only needed fields to avoid memory overhead
let result = Self::process_needed_fields(&parsed);
Ok(Response::json(result))
}
pub fn process_needed_fields(value: &serde_json::Value) -> serde_json::Value {
match value {
serde_json::Value::Object(map) => {
// Only extract needed fields
let mut result = serde_json::Map::new();
if let Some(id) = map.get("id") {
result.insert("id".to_string(), id.clone());
}
if let Some(name) = map.get("name") {
result.insert("name".to_string(), name.clone());
}
serde_json::Value::Object(result)
}
_ => value.clone(),
}
}
}
Concurrency and Parallelism
Optimize concurrent operations:
use oxidite::prelude::*;
use tokio::task;
use std::sync::Arc;
// Parallel request processing
pub struct ParallelProcessor;
impl ParallelProcessor {
pub async fn process_requests_in_parallel(requests: Vec<Request>) -> Result<Vec<Response>> {
let mut handles = Vec::new();
for request in requests {
let handle = task::spawn(async move {
// Process each request in parallel
process_single_request(request).await
});
handles.push(handle);
}
let mut responses = Vec::with_capacity(handles.len());
for handle in handles {
match handle.await {
Ok(response_result) => {
if let Ok(response) = response_result {
responses.push(response);
}
}
Err(e) => {
// Handle task panic
eprintln!("Task failed: {}", e);
}
}
}
Ok(responses)
}
pub async fn process_data_streams(data_chunks: Vec<Vec<u8>>) -> Result<Vec<Vec<u8>>> {
// Process data chunks in parallel
let handles: Vec<_> = data_chunks
.into_iter()
.map(|chunk| {
task::spawn(async move {
process_chunk(chunk).await
})
})
.collect();
let mut results = Vec::new();
for handle in handles {
if let Ok(processed_chunk) = handle.await {
if let Ok(chunk) = processed_chunk {
results.push(chunk);
}
}
}
Ok(results)
}
}
async fn process_single_request(_req: Request) -> Result<Response> {
// Simulate request processing
Ok(Response::ok())
}
async fn process_chunk(chunk: Vec<u8>) -> Result<Vec<u8>> {
// Simulate chunk processing
Ok(chunk)
}
// Semaphore for controlling concurrent operations
use tokio::sync::Semaphore;
pub struct RateLimitedProcessor {
semaphore: Arc<Semaphore>,
}
impl RateLimitedProcessor {
pub fn new(concurrent_limit: usize) -> Self {
Self {
semaphore: Arc::new(Semaphore::new(concurrent_limit)),
}
}
pub async fn process_with_limit<F, R>(&self, operation: F) -> Result<R>
where
F: std::future::Future<Output = Result<R>>,
{
let _permit = self.semaphore.acquire().await
.map_err(|_| Error::InternalServerError("Semaphore error".to_string()))?;
operation.await
}
pub async fn process_batch_with_limit<T, F, R>(
&self,
items: Vec<T>,
processor: impl Fn(T) -> F + Clone + Send + 'static,
) -> Result<Vec<R>>
where
T: Send + 'static,
F: std::future::Future<Output = Result<R>> + Send,
R: Send + 'static,
{
let mut handles = Vec::new();
for item in items {
let semaphore = self.semaphore.clone();
// Each spawned task needs its own clone of the processor
let processor = processor.clone();
let future = async move {
let _permit = semaphore.acquire().await
.map_err(|_| Error::InternalServerError("Semaphore error".to_string()))?;
processor(item).await
};
handles.push(task::spawn(future));
}
let mut results = Vec::new();
for handle in handles {
match handle.await {
Ok(result) => {
if let Ok(val) = result {
results.push(val);
}
}
Err(e) => {
eprintln!("Task failed: {}", e);
}
}
}
Ok(results)
}
}
// Async/await best practices
pub struct AsyncBestPractices;
impl AsyncBestPractices {
// Use join! for independent operations
pub async fn parallel_independent_operations() -> Result<()> {
use tokio::join;
let user_future = fetch_user_data();
let product_future = fetch_product_data();
let order_future = fetch_order_data();
let (user_result, product_result, order_result) = join!(user_future, product_future, order_future);
// Process results
let _user = user_result?;
let _product = product_result?;
let _order = order_result?;
Ok(())
}
// Use select! for racing operations
pub async fn racing_operations() -> Result<String> {
use tokio::select;
select! {
result = fetch_from_primary_db() => {
Ok(result?)
}
result = fetch_from_backup_db() => {
// select! returns whichever future completes first; this races the
// backup against the primary rather than failing over on error
eprintln!("Backup DB responded first");
Ok(result?)
}
_ = tokio::time::sleep(tokio::time::Duration::from_secs(5)) => {
Err(Error::Timeout)
}
}
}
// Use spawn_blocking for CPU-intensive work
pub async fn cpu_intensive_work() -> Result<()> {
let result = task::spawn_blocking(|| {
// CPU-intensive work that shouldn't block the async runtime
perform_cpu_intensive_calculation()
}).await
.map_err(|e| Error::InternalServerError(e.to_string()))?;
// Process result
let _processed = result;
Ok(())
}
}
fn perform_cpu_intensive_calculation() -> String {
// Simulate CPU-intensive work
(0..1000).fold(0, |acc, x| acc + x * x).to_string()
}
async fn fetch_user_data() -> Result<String> { Ok("user_data".to_string()) }
async fn fetch_product_data() -> Result<String> { Ok("product_data".to_string()) }
async fn fetch_order_data() -> Result<String> { Ok("order_data".to_string()) }
async fn fetch_from_primary_db() -> Result<String> { Ok("primary_data".to_string()) }
async fn fetch_from_backup_db() -> Result<String> { Ok("backup_data".to_string()) }
Network Optimizations
Optimize network performance:
use oxidite::prelude::*;
// HTTP/2 and HTTP/3 optimizations
pub struct HttpOptimizations;
impl HttpOptimizations {
// Enable HTTP/2 server push (when available)
pub fn configure_http2_server() -> Result<()> {
// In a real implementation, configure HTTP/2 settings
// Enable server push, header compression, multiplexing
Ok(())
}
// Connection reuse and keep-alive
pub fn configure_keep_alive() -> Result<()> {
// In a real implementation, configure connection pooling
// Set appropriate keep-alive timeouts
Ok(())
}
// Enable compression
pub fn enable_compression() -> Result<()> {
// In a real implementation, enable gzip/brotli compression
// Set appropriate compression levels
Ok(())
}
}
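Compression should only be applied when the client advertises support for it. A simplified sketch of content-coding negotiation that parses the Accept-Encoding header and honors q-values (q=0 disables an encoding); real negotiation also handles `*`, brotli, and ordering by q-value, and the helper name is hypothetical:

```rust
// Decide whether to gzip a response based on the Accept-Encoding header
fn client_accepts_gzip(accept_encoding: &str) -> bool {
    accept_encoding.split(',').any(|part| {
        let mut pieces = part.trim().split(';');
        let coding = pieces.next().unwrap_or("").trim();
        if !coding.eq_ignore_ascii_case("gzip") {
            return false;
        }
        // Default quality is 1.0 when no q parameter is present
        for param in pieces {
            if let Some(q) = param.trim().strip_prefix("q=") {
                return q.trim().parse::<f32>().map(|v| v > 0.0).unwrap_or(false);
            }
        }
        true
    })
}
```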
// Response streaming for large data
async fn stream_large_data(_req: Request) -> Result<Response> {
use futures::stream;
// Create a stream of data chunks
let chunks: Vec<Result<bytes::Bytes, std::io::Error>> = vec![
Ok(bytes::Bytes::from("Line 1\n")),
Ok(bytes::Bytes::from("Line 2\n")),
Ok(bytes::Bytes::from("Line 3\n")),
];
let _stream = stream::iter(chunks);
// Convert stream to response
// In a real implementation, this would create a streaming response
Ok(Response::text("Streaming response".to_string()))
}
// Chunked transfer encoding for large responses
async fn chunked_response(_req: Request) -> Result<Response> {
// For responses that are built incrementally
// Builder methods consume and return the builder, so chain the calls
let _response_builder = Response::builder()
.header("Transfer-Encoding", "chunked");
// In a real implementation, this would return a chunked response
Ok(Response::text("Chunked response content".to_string()))
}
// CDN-friendly headers
async fn cdn_optimized_response(_req: Request) -> Result<Response> {
let mut response = Response::html("<h1>Hello World</h1>");
// Add CDN-friendly headers
response.headers_mut().insert(
"Cache-Control",
"public, max-age=3600".parse().unwrap() // Cache for 1 hour
);
response.headers_mut().insert(
"Vary",
"Accept-Encoding".parse().unwrap() // Important for compression
);
// Add ETag for validation
use sha2::{Sha256, Digest};
let mut hasher = Sha256::new();
hasher.update("<h1>Hello World</h1>");
let hash = format!("{:x}", hasher.finalize());
response.headers_mut().insert(
"ETag",
format!("\"{}\"", hash).parse().unwrap()
);
Ok(response)
}
// Optimized static file serving
use tokio::fs::File;
use tokio::io::AsyncReadExt;
async fn optimized_static_file_handler(Path(file_path): Path<String>) -> Result<Response> {
// Validate path to prevent directory traversal
if file_path.contains("..") || file_path.starts_with('/') {
return Err(Error::BadRequest("Invalid file path".to_string()));
}
let full_path = format!("public/{}", file_path);
// Check if file exists
if !std::path::Path::new(&full_path).exists() {
return Err(Error::NotFound);
}
// Open file
let mut file = File::open(&full_path).await
.map_err(|e| Error::InternalServerError(format!("Failed to open file: {}", e)))?;
// Get file metadata
let metadata = file.metadata().await
.map_err(|e| Error::InternalServerError(format!("Failed to get metadata: {}", e)))?;
// Read file content
let mut contents = vec![0; metadata.len() as usize];
file.read_exact(&mut contents).await
.map_err(|e| Error::InternalServerError(format!("Failed to read file: {}", e)))?;
// Set appropriate content type
let content_type = get_content_type(&file_path);
// Note: Response::html is only safe for text assets; binary files such as
// images and fonts need a bytes-based response body in a real implementation
let mut response = Response::html(String::from_utf8_lossy(&contents).to_string());
response.headers_mut().insert(
"Content-Type",
content_type.parse().unwrap()
);
// Add caching headers
response.headers_mut().insert(
"Cache-Control",
"public, max-age=31536000".parse().unwrap() // 1 year
);
// Add ETag
use sha2::{Sha256, Digest};
let mut hasher = Sha256::new();
hasher.update(&contents);
let hash = format!("{:x}", hasher.finalize());
response.headers_mut().insert(
"ETag",
format!("\"{}\"", hash).parse().unwrap()
);
Ok(response)
}
fn get_content_type(path: &str) -> &'static str {
match std::path::Path::new(path).extension().and_then(|ext| ext.to_str()) {
Some("html") => "text/html",
Some("css") => "text/css",
Some("js") => "application/javascript",
Some("json") => "application/json",
Some("png") => "image/png",
Some("jpg") | Some("jpeg") => "image/jpeg",
Some("gif") => "image/gif",
Some("svg") => "image/svg+xml",
Some("ico") => "image/x-icon",
Some("woff") => "font/woff",
Some("woff2") => "font/woff2",
Some("ttf") => "font/ttf",
Some("eot") => "application/vnd.ms-fontobject",
_ => "application/octet-stream",
}
}
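The ETag set above only pays off when conditional requests are honored: if the client's If-None-Match header already matches, the server can reply 304 Not Modified with no body. A simplified sketch of that check (the helper is hypothetical; weak validators like `W/"..."` and the `*` wildcard are not handled):

```rust
// Return true when the client's cached copy is still fresh, i.e. its
// If-None-Match header contains the ETag we just computed
fn is_etag_fresh(if_none_match: Option<&str>, etag: &str) -> bool {
    match if_none_match {
        Some(header) => header.split(',').any(|candidate| candidate.trim() == etag),
        None => false,
    }
}
```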
Profiling and Monitoring
Monitor and profile your application:
use oxidite::prelude::*;
use std::sync::Arc;
use tokio::time::{Duration, Instant};
// Performance monitoring middleware
async fn performance_monitoring_middleware(req: Request, next: Next) -> Result<Response> {
let start_time = Instant::now();
let method = req.method().clone();
let uri = req.uri().clone();
let response = next.run(req).await;
let elapsed = start_time.elapsed();
// Log performance metrics
log_performance_metrics(&method, &uri, elapsed, response.as_ref().map(|r| r.status().as_u16()).unwrap_or(500));
response
}
fn log_performance_metrics(method: &http::Method, uri: &http::Uri, elapsed: Duration, status_code: u16) {
println!(
"PERFORMANCE - {} {} - {}ms - Status: {}",
method,
uri.path(),
elapsed.as_millis(),
status_code
);
// In a real implementation, send to metrics collection system
// like Prometheus, DataDog, etc.
}
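Per-request logging is a start, but averages hide tail latency; production monitoring typically aggregates samples into percentiles such as p95 and p99. A small nearest-rank sketch over collected durations (illustrative, not part of Oxidite's metrics API):

```rust
use std::time::Duration;

// Nearest-rank percentile; `p` is a fraction in 0.0..=1.0 (0.95 for p95).
// Sorts the sample buffer in place.
fn percentile(samples: &mut [Duration], p: f64) -> Option<Duration> {
    if samples.is_empty() {
        return None;
    }
    samples.sort_unstable();
    let rank = ((p * samples.len() as f64).ceil() as usize).max(1) - 1;
    samples.get(rank.min(samples.len() - 1)).copied()
}
```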
// Request tracing middleware
async fn request_tracing_middleware(req: Request, next: Next) -> Result<Response> {
let trace_id = uuid::Uuid::new_v4().to_string();
let span_id = uuid::Uuid::new_v4().to_string();
// Add trace headers to response
let mut response = next.run(req).await?;
response.headers_mut().insert(
"X-Trace-ID",
trace_id.parse().unwrap()
);
response.headers_mut().insert(
"X-Span-ID",
span_id.parse().unwrap()
);
Ok(response)
}
// Memory usage monitoring
use sysinfo::{System, SystemExt, ProcessExt};
pub struct MemoryMonitor;
impl MemoryMonitor {
pub fn get_memory_usage() -> MemoryUsage {
let mut system = System::new_all();
system.refresh_all();
MemoryUsage {
total_memory: system.total_memory(),
used_memory: system.used_memory(),
free_memory: system.free_memory(),
}
}
pub fn get_process_memory_usage() -> ProcessMemoryUsage {
let mut system = System::new_all();
system.refresh_processes();
if let Some(process) = system.process(sysinfo::get_current_pid().expect("Failed to get PID")) {
ProcessMemoryUsage {
memory: process.memory(),
virtual_memory: process.virtual_memory(),
}
} else {
ProcessMemoryUsage {
memory: 0,
virtual_memory: 0,
}
}
}
}
pub struct MemoryUsage {
pub total_memory: u64,
pub used_memory: u64,
pub free_memory: u64,
}
pub struct ProcessMemoryUsage {
pub memory: u64,
pub virtual_memory: u64,
}
// Performance metrics endpoint
async fn performance_metrics_endpoint(_req: Request) -> Result<Response> {
let memory_usage = MemoryMonitor::get_memory_usage();
let process_memory = MemoryMonitor::get_process_memory_usage();
let metrics = serde_json::json!({
"memory": {
"total": memory_usage.total_memory,
"used": memory_usage.used_memory,
"free": memory_usage.free_memory,
"process_used": process_memory.memory,
"process_virtual": process_memory.virtual_memory
},
"timestamp": chrono::Utc::now().to_rfc3339()
});
Ok(Response::json(metrics))
}
// Slow query detection
pub struct QueryProfiler {
slow_query_threshold: Duration,
}
impl QueryProfiler {
pub fn new(slow_query_threshold_ms: u64) -> Self {
Self {
slow_query_threshold: Duration::from_millis(slow_query_threshold_ms),
}
}
pub async fn execute_with_profiling<F, R>(&self, query_name: &str, operation: F) -> Result<R>
where
F: std::future::Future<Output = Result<R>>,
{
let start = Instant::now();
let result = operation.await;
let elapsed = start.elapsed();
if elapsed > self.slow_query_threshold {
eprintln!(
"SLOW QUERY WARNING - {}: {:?}ms",
query_name,
elapsed.as_millis()
);
}
result
}
}
// Benchmark utilities
pub struct Benchmarker;
impl Benchmarker {
pub async fn benchmark<F, R>(name: &str, iterations: usize, operation: F) -> BenchmarkResult
where
F: Fn() -> R,
{
let start = Instant::now();
for _ in 0..iterations {
let _result = operation();
}
let elapsed = start.elapsed();
let avg_time = elapsed / iterations as u32;
println!(
"BENCHMARK - {}: {} iterations in {:?} (avg: {:?} per iteration)",
name, iterations, elapsed, avg_time
);
BenchmarkResult {
name: name.to_string(),
iterations,
total_time: elapsed,
avg_time,
}
}
}
pub struct BenchmarkResult {
pub name: String,
pub iterations: usize,
pub total_time: Duration,
pub avg_time: Duration,
}
// Example benchmark usage
async fn run_benchmarks() -> Result<()> {
// Benchmark different operations
Benchmarker::benchmark("string_concatenation", 10000, || {
let mut s = String::new();
s.push_str("hello");
s.push(' ');
s.push_str("world");
s
}).await;
Benchmarker::benchmark("vector_creation", 10000, || {
let mut v = Vec::with_capacity(10);
for i in 0..10 {
v.push(i);
}
v
}).await;
Ok(())
}
Summary
Performance optimization in Oxidite applications involves:
- Request Handling: Efficient request processing and middleware
- Database Optimization: Query optimization, connection pooling, batching
- Caching: In-memory and distributed caching strategies
- Memory Management: Efficient data structures and allocation
- Concurrency: Proper use of async/await and parallel processing
- Network Optimization: HTTP/2, compression, and streaming
- Profiling: Monitoring and measuring performance metrics
Following these optimization techniques will help you build fast, efficient Oxidite applications that can handle high loads while maintaining responsiveness.
PDF Export Guide
To produce a long-form PDF handbook from this book:
- Build HTML:
mdbook build docs/book
- Convert to PDF using your preferred engine.
Example with wkhtmltopdf:
wkhtmltopdf docs/book/book/index.html Oxidite-Complete-Handbook.pdf
For full-book output quality, use print styles and combine chapters in order.
Notes for very large manuals
- Split by sections if a single file becomes too large for your PDF engine.
- Keep image assets local in the book directory.
- Use consistent heading hierarchy so the table of contents is generated correctly.
Appendix: Common Patterns and Recipes
This appendix contains common patterns, recipes, and solutions to frequently encountered scenarios when building applications with Oxidite.
Request Data Extraction Patterns
Extracting Multiple Types of Data from One Request
use oxidite::prelude::*;
use serde::Deserialize;
#[derive(Deserialize)]
struct SearchParams {
q: String,
page: Option<u32>,
limit: Option<u32>,
}
#[derive(Deserialize)]
struct SearchPayload {
query: String,
filters: Option<serde_json::Value>,
}
// Handler that extracts path, query, and JSON body
async fn advanced_search(
Path(category): Path<String>,
Query(params): Query<SearchParams>,
Json(payload): Json<SearchPayload>,
) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"category": category,
"search_params": params,
"payload": payload,
"message": "Advanced search executed"
})))
}
Working with Cookies
use oxidite::prelude::*;
async fn handle_cookies(cookies: Cookies) -> Result<Response> {
let session_id = cookies.get("session_id");
let theme = cookies.get("theme").unwrap_or("light");
let mut response_data = serde_json::json!({
"theme": theme,
"has_session": session_id.is_some()
});
if let Some(sid) = session_id {
response_data["session_id"] = serde_json::Value::String(sid.to_string());
}
Ok(Response::json(response_data))
}
Response Patterns
Conditional Responses
use oxidite::prelude::*;
async fn conditional_response(query: Query<serde_json::Value>) -> Result<Response> {
let format = query.0.get("format")
.and_then(|v| v.as_str())
.unwrap_or("json");
match format {
"html" => Ok(Response::html("<h1>HTML Response</h1>".to_string())),
"text" => Ok(Response::text("Text Response".to_string())),
_ => Ok(Response::json(serde_json::json!({ "message": "JSON Response" }))),
}
}
Streaming Large Data
use oxidite::prelude::*;
use futures::stream::{self, StreamExt};
use http_body_util::{BodyExt, StreamBody};
use hyper::body::Frame;
use bytes::Bytes;
async fn stream_large_data(_req: Request) -> Result<Response> {
// Create a stream of data chunks
let chunks = vec![
"data-chunk-1",
"data-chunk-2",
"data-chunk-3",
"data-chunk-4",
];
let stream = stream::iter(chunks.into_iter().map(|chunk| {
Ok::<_, hyper::Error>(Frame::data(Bytes::from(chunk)))
}));
let body = StreamBody::new(stream);
let response = hyper::Response::builder()
.status(http::StatusCode::OK)
.header(hyper::header::CONTENT_TYPE, "text/plain")
.body(body.boxed())
.map_err(|e| Error::InternalServerError(e.to_string()))?;
Ok(response)
}
Error Handling Patterns
Custom Error Responses
use oxidite::prelude::*;
// Create a custom error response
fn custom_error_response(message: &str, status: u16) -> Response {
Response::json(serde_json::json!({
"error": message,
"status": status
}))
}
async fn custom_error_handler(_req: Request) -> Result<Response> {
// Simulate a validation error
let is_valid = false;
if !is_valid {
let error_response = custom_error_response("Validation failed", 422);
return Ok(error_response);
}
Ok(Response::json(serde_json::json!({ "status": "success" })))
}
Error Recovery Pattern
use oxidite::prelude::*;
async fn recoverable_operation(_req: Request) -> Result<Response> {
// Attempt operation that might fail
let result = some_risky_operation().await;
match result {
Ok(data) => Ok(Response::json(data)),
Err(_) => {
// Return a fallback response instead of error
Ok(Response::json(serde_json::json!({
"warning": "Using cached data",
"data": get_cached_data()
})))
}
}
}
async fn some_risky_operation() -> Result<serde_json::Value, Box<dyn std::error::Error>> {
// Simulate an operation that might fail
Err("Operation failed".into())
}
fn get_cached_data() -> serde_json::Value {
serde_json::json!({ "cached": true, "data": "fallback" })
}
Middleware Patterns
Authentication Middleware
use oxidite::prelude::*;
async fn auth_middleware(req: Request, next: Next) -> Result<Response> {
// Check for auth token in headers
let auth_header = req.headers().get("authorization")
.and_then(|hv| hv.to_str().ok());
match auth_header {
Some(token) if token.starts_with("Bearer ") => {
// Validate token (simplified)
let token = token.trim_start_matches("Bearer ");
if validate_token(token) {
// Add user info to request extensions
let mut req = req;
req.extensions_mut().insert(CurrentUser { id: 1, role: "user".to_string() });
next.run(req).await
} else {
Err(Error::Unauthorized("Invalid token".to_string()))
}
}
_ => Err(Error::Unauthorized("Missing or invalid token".to_string()))
}
}
fn validate_token(_token: &str) -> bool {
// In a real app, validate against your auth system
true
}
#[derive(Clone)]
struct CurrentUser {
id: u32,
role: String,
}
async fn protected_route(user: CurrentUser) -> Result<Response> {
Ok(Response::json(serde_json::json!({
"message": "Access granted",
"user_id": user.id,
"role": user.role
})))
}
Database Patterns
Repository Pattern
use oxidite::prelude::*;
use serde::Deserialize;
// Simplified repository pattern
struct UserRepository;
impl UserRepository {
async fn find_by_id(&self, id: u32) -> Result<Option<User>, Error> {
// In a real app, query your database
if id == 1 {
Ok(Some(User {
id,
name: "Alice".to_string(),
email: "alice@example.com".to_string(),
}))
} else {
Ok(None)
}
}
async fn find_all(&self, _limit: u32, _offset: u32) -> Result<Vec<User>, Error> {
// In a real app, apply LIMIT/OFFSET in the database query
Ok(vec![
User {
id: 1,
name: "Alice".to_string(),
email: "alice@example.com".to_string(),
},
User {
id: 2,
name: "Bob".to_string(),
email: "bob@example.com".to_string(),
},
])
}
}
#[derive(serde::Serialize, Clone)]
struct User {
id: u32,
name: String,
email: String,
}
async fn get_user(
Path(user_id): Path<u32>,
State(repo): State<std::sync::Arc<UserRepository>>
) -> Result<Response> {
match repo.find_by_id(user_id).await? {
Some(user) => Ok(Response::json(serde_json::json!(user))),
None => Err(Error::NotFound),
}
}
async fn get_users(
Query(params): Query<PageParams>,
State(repo): State<std::sync::Arc<UserRepository>>
) -> Result<Response> {
let users = repo.find_all(params.limit.unwrap_or(10), params.offset.unwrap_or(0)).await?;
Ok(Response::json(serde_json::json!(users)))
}
#[derive(Deserialize)]
struct PageParams {
limit: Option<u32>,
offset: Option<u32>,
}
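Client-supplied pagination should never be trusted as-is: an unbounded `limit` lets a single request pull the whole table. A small hypothetical helper that clamps `PageParams` values to safe bounds before they reach the repository (the defaults chosen here are illustrative):

```rust
// Clamp pagination so one request cannot ask for an unbounded page size
fn sanitize_page(limit: Option<u32>, offset: Option<u32>) -> (u32, u32) {
    const DEFAULT_LIMIT: u32 = 10;
    const MAX_LIMIT: u32 = 100;
    let limit = limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT);
    (limit, offset.unwrap_or(0))
}
```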
Configuration Patterns
Environment-Based Configuration
use oxidite::prelude::*;
use serde::{Deserialize, Serialize};
// Serialize is derived so the config can be rendered as JSON below
#[derive(Deserialize, Serialize, Clone)]
struct AppConfig {
database_url: String,
server_port: u16,
debug_mode: bool,
}
impl Default for AppConfig {
fn default() -> Self {
Self {
database_url: std::env::var("DATABASE_URL").unwrap_or("sqlite::memory:".to_string()),
server_port: std::env::var("PORT")
.unwrap_or("3000".to_string())
.parse()
.unwrap_or(3000),
debug_mode: std::env::var("DEBUG").unwrap_or("false".to_string()) == "true",
}
}
}
async fn config_endpoint(State(config): State<AppConfig>) -> Result<Response> {
// Careful: avoid echoing secrets such as database_url in production
Ok(Response::json(serde_json::json!(config)))
}
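The `DEBUG` check above only accepts the literal string "true", so common spellings like "1" or "TRUE" silently disable debug mode. A more forgiving parse (the helper is hypothetical; pass it the result of `std::env::var(...).ok().as_deref()`):

```rust
// Accept common truthy spellings of boolean environment variables
fn env_flag(value: Option<&str>) -> bool {
    matches!(
        value.map(|v| v.trim().to_ascii_lowercase()).as_deref(),
        Some("1") | Some("true") | Some("yes") | Some("on")
    )
}
```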
Testing Patterns
Unit Testing Handlers
#[cfg(test)]
mod tests {
use super::*;
use oxidite_testing::TestClient;
use tokio;
#[tokio::test]
async fn test_home_route() {
let mut router = Router::new();
router.get("/", home);
let client = TestClient::new(router);
let response = client.get("/").send().await;
assert_eq!(response.status(), 200);
let body = response.text().await;
assert!(body.contains("Welcome to Oxidite!"));
}
#[tokio::test]
async fn test_api_route() {
let mut router = Router::new();
router.get("/api/hello", api_hello);
let client = TestClient::new(router);
let response = client.get("/api/hello").send().await;
assert_eq!(response.status(), 200);
let json: serde_json::Value = response.json().await;
assert_eq!(json["message"], "Hello from API");
}
}
Performance Patterns
Caching with Memoization
use oxidite::prelude::*;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::{SystemTime, UNIX_EPOCH};
// Simple in-memory cache
struct SimpleCache {
data: Arc<Mutex<HashMap<String, (serde_json::Value, u64)>>>,
ttl_seconds: u64,
}
impl SimpleCache {
fn new(ttl_seconds: u64) -> Self {
Self {
data: Arc::new(Mutex::new(HashMap::new())),
ttl_seconds,
}
}
fn get(&self, key: &str) -> Option<serde_json::Value> {
let data = self.data.lock().unwrap();
if let Some((value, timestamp)) = data.get(key) {
let current_time = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
if current_time - timestamp < self.ttl_seconds {
Some(value.clone())
} else {
// Entry expired
drop(data);
self.remove(key);
None
}
} else {
None
}
}
fn set(&self, key: String, value: serde_json::Value) {
let mut data = self.data.lock().unwrap();
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
data.insert(key, (value, timestamp));
}
fn remove(&self, key: &str) {
let mut data = self.data.lock().unwrap();
data.remove(key);
}
}
async fn cached_computation(
State(cache): State<Arc<SimpleCache>>,
Path(computation_type): Path<String>
) -> Result<Response> {
let cache_key = format!("computation_{}", computation_type);
// Check cache first
if let Some(cached_result) = cache.get(&cache_key) {
return Ok(Response::json(serde_json::json!({
"result": cached_result,
"from_cache": true
})));
}
// Perform expensive computation
let result = perform_expensive_computation(&computation_type).await;
// Cache the result
cache.set(cache_key, result.clone());
Ok(Response::json(serde_json::json!({
"result": result,
"from_cache": false
})))
}
async fn perform_expensive_computation(_input: &str) -> serde_json::Value {
// Simulate expensive computation
serde_json::json!({ "computed": true, "value": 42 })
}
Common Anti-Patterns to Avoid
Blocking Operations in Async Context
Don’t do this:
// BAD: This blocks the async runtime
async fn bad_handler(_req: Request) -> Result<Response> {
let result = std::process::Command::new("slow_command").output().unwrap();
Ok(Response::text(format!("{:?}", result)))
}
Do this instead:
// GOOD: Use spawn_blocking for CPU-intensive operations
use tokio::task;
async fn good_handler(_req: Request) -> Result<Response> {
let result = task::spawn_blocking(|| {
std::process::Command::new("slow_command").output().unwrap()
}).await.map_err(|e| Error::InternalServerError(e.to_string()))?;
Ok(Response::text(format!("{:?}", result)))
}
Improper Error Handling
Don’t do this:
// BAD: Converting errors to strings loses context
async fn bad_error_handling(_req: Request) -> Result<Response> {
let data = some_operation().await.map_err(|e| Error::InternalServerError(e.to_string()))?;
Ok(Response::json(data))
}
Do this instead:
// GOOD: Preserve error types when possible
async fn good_error_handling(_req: Request) -> Result<Response> {
let data = some_operation().await?;
Ok(Response::json(data))
}
async fn some_operation() -> Result<serde_json::Value, Error> {
// Return specific error types that map to appropriate HTTP statuses
Err(Error::NotFound)
}
This appendix provides practical patterns for common scenarios you’ll encounter when building Oxidite applications. Use these as starting points for your own implementations, adapting them to your specific needs.