One annotation to distribute work across a fleet. Built-in failover, load balancing, and everything a server needs — no orchestration layer required.
# work.mpl
@cluster pub fn add() -> Int do
1 + 1
end
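Calling a @cluster function looks like any other call. A hypothetical sketch of a call site — the IO.print helper is an assumed name, not confirmed Mesh API; Node.start_from_env() is taken from the boot example further down this page:

```mesh
# Hypothetical call site for the @cluster function above.
fn main() do
  let _ = Node.start_from_env()  # join the cluster first
  let sum = add()                # the runtime picks a node and handles failover
  IO.print(sum)                  # assumed print helper
end
```

The point of the sketch: there is no job-queue API at the call site — distribution is decided by the annotation, not by the caller.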
# api/router.mpl
from Api.Todos import handle_get_todo, handle_list_todos
pub fn build_router() do
HTTP.router()
|> HTTP.on_get("/todos", HTTP.clustered(handle_list_todos))
|> HTTP.on_get("/todos/:id", HTTP.clustered(handle_get_todo))
end

The primitives for building reliable, scalable server infrastructure are built into the language itself.
Declare clustered work at the source with @cluster, wrap the routes you want distributed in HTTP.clustered(...), and let the runtime handle placement and failover. No manual submit/status plumbing in app code.
from Api.Health import handle_health
from Api.Todos import handle_create_todo, handle_get_todo, handle_list_todos
pub fn build_router() do
HTTP.router()
|> HTTP.on_get("/health", handle_health)
|> HTTP.on_get("/todos", HTTP.clustered(handle_list_todos))
|> HTTP.on_get("/todos/:id", HTTP.clustered(handle_get_todo))
|> HTTP.on_post("/todos", handle_create_todo)
end

Boot through Node.start_from_env() and let the runtime handle discovery, promotion, and recovery across the same app code. No package-owned cluster control plane, no external coordinator.
fn main() do
let _ = Node.start_from_env()
HTTP.serve(build_router(), 8080)
end
# same app on every node
# primary executes the work
# standby mirrors and resumes if the primary disappears

HTTP, PostgreSQL, SQLite, WebSockets, migrations, rate limiting, background workers — all in the standard library. No package hunting, no driver setup, no glue code. Ship a production server with zero external dependencies.
fn main() do
let pool = Postgres.open(Env.get("DATABASE_URL"))
HTTP.serve(HTTP.router()
|> HTTP.on_get("/users", fn(req) do
let users = Repo.all(pool, Query.select(User)
|> Query.where(fn(u) do u.active end)
|> Query.limit(100))
HTTP.response(200, Json.encode(users))
end)
# WebSockets on the same server, same process
|> HTTP.on_websocket("/live", fn(ws) do
let _ = Ws.send(ws, Json.encode({status: "connected"}))
Ws.loop(ws)
end), 8080)
end

A visual monitoring layer is coming, built directly into Mesh — no Prometheus, no Grafana, no sidecar containers. See your nodes, watch actors spawn and die, and observe data flowing across your cluster in real time.
See your entire infrastructure as it operates — built in.
Benchmarked on dedicated Fly.io machines — 2 vCPU, 4 GB RAM, same region, private network. No synthetic benchmark games.
The meaningful comparison is Mesh vs Elixir. They share the same actor model — Mesh gets you 2.3× the throughput at less than half the latency, compiled to a native binary.
GET /text — Isolated VMs, 100 concurrent connections
Fly.io performance-2x · 100 connections · 30s warmup + 5×30s runs · Run 1 excluded · Full methodology →
Distribution as a first-class language feature changes how you build servers.
Static types, native speed
Mesh shares Elixir's actor model and let-it-crash philosophy, adding static type inference. No runtime type surprises, no Dialyzer setup, and code compiles to native binaries instead of running on the BEAM VM.
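A small sketch of what the inference claim means in practice, reusing only syntax shown elsewhere on this page — the String annotation and the compile-time rejection are assumptions about how a typical inferred type system would behave here:

```mesh
pub fn add() -> Int do
  1 + 1
end

fn main() do
  let n = add()            # inferred as Int — no annotation needed
  # let s: String = add()  # assumed to be rejected at compile time, not at runtime
end
```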
Distribution built-in
Go's goroutines are fast but distribution still requires Redis, queues, or external systems. In Mesh, @cluster and HTTP.clustered(...) are built in — failover, load balancing, and exactly-once semantics with zero infrastructure.
True multi-node concurrency
Node.js is single-threaded and requires cluster modules, Redis, and worker_threads just to scale. Mesh has native multi-core actors, multi-node distribution, and type safety without the TypeScript toolchain overhead.
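To picture the contrast, a hypothetical sketch of spawning an actor in Mesh — Actor.spawn, Actor.send, and IO.print are illustrative names, not confirmed Mesh API; the claim being pictured is that actors run across all cores with no cluster module or Redis in between:

```mesh
# Hypothetical actor sketch; names are assumptions.
fn main() do
  let logger = Actor.spawn(fn(msg) do
    IO.print(msg)          # each actor drains its mailbox on any available core
  end)
  Actor.send(logger, "hello")
end
```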
Install Mesh in seconds. Works on macOS, Linux, and Windows.
curl -sSf https://meshlang.dev/install.sh | sh