Mesh
Now in development — v14.0

The language built for distributed systems.

One annotation to distribute work across a fleet. Built-in failover, load balancing, and everything a server needs — no orchestration layer required.

clustered starter
# work.mpl
@cluster pub fn add() -> Int do
  1 + 1
end

# api/router.mpl
from Api.Todos import handle_get_todo, handle_list_todos

pub fn build_router() do
  HTTP.router()
    |> HTTP.on_get("/todos", HTTP.clustered(handle_list_todos))
    |> HTTP.on_get("/todos/:id", HTTP.clustered(handle_get_todo))
end
Auto-Failover — runtime-owned recovery, no manual retries
LLVM Native — compiled binaries, no VM overhead
Type-Safe — full Hindley-Milner inference
Batteries Included — HTTP, Postgres, WS; full server stdlib
Features

Distributed systems, simplified

The primitives for building reliable, scalable server infrastructure are built into the language itself.

01
Distribution

Cluster-Native Distribution

Mark a function with @cluster to keep distributed work in ordinary source code, wrap the routes you want with HTTP.clustered(...), and let the runtime handle placement and failover. No manual submit/status plumbing in app code.

api/router.mpl
from Api.Health import handle_health
from Api.Todos import handle_create_todo, handle_get_todo, handle_list_todos

pub fn build_router() do
  HTTP.router()
    |> HTTP.on_get("/health", handle_health)
    |> HTTP.on_get("/todos", HTTP.clustered(handle_list_todos))
    |> HTTP.on_get("/todos/:id", HTTP.clustered(handle_get_todo))
    |> HTTP.on_post("/todos", handle_create_todo)
end
02
Resilience

Runtime-Owned Failover

Boot through Node.start_from_env() and let the runtime handle discovery, promotion, and recovery — the same app code runs on every node. No package-owned cluster control plane, no external coordinator.

main.mpl
fn main() do
  let _ = Node.start_from_env()
  HTTP.serve(build_router(), 8080)
end

# same app on every node
# primary executes the work
# standby mirrors and resumes if the primary disappears
03
Batteries Included

Full Server Stdlib

HTTP, PostgreSQL, SQLite, WebSockets, migrations, rate limiting, background workers — all in the standard library. No package hunting, no driver setup, no glue code. Ship a production server with zero external dependencies.

server.mpl
fn main() do
  let pool = Postgres.open(Env.get("DATABASE_URL"))

  HTTP.serve(HTTP.router()
    |> HTTP.on_get("/users", fn(req) do
      let users = Repo.all(pool, Query.select(User)
        |> Query.where(fn(u) do u.active end)
        |> Query.limit(100))
      HTTP.response(200, Json.encode(users))
    end)
    # WebSockets on the same server, same process
    |> HTTP.on_websocket("/live", fn(ws) do
      let _ = Ws.send(ws, Json.encode({status: "connected"}))
      Ws.loop(ws)
    end), 8080)
end
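The rest of the stdlib follows the same pattern. A hedged sketch — the Migrate, Worker, RateLimit, and Email module names and signatures below are illustrative assumptions, not documented Mesh API:

jobs.mpl
# illustrative sketch: Migrate/Worker/RateLimit/Email names are assumptions
fn main() do
  let pool = Postgres.open(Env.get("DATABASE_URL"))

  # apply pending schema migrations at boot
  let _ = Migrate.up(pool)

  # background worker: drain queued emails off the request path
  let _ = Worker.start("emails", fn(job) do
    Email.send(job.to, job.body)
  end)

  # rate-limit a route before serving it (10 requests/sec, hypothetical signature)
  HTTP.serve(HTTP.router()
    |> HTTP.on_post("/signup", RateLimit.wrap(handle_signup, 10)), 8080)
end

The point is the shape, not the exact names: migrations, workers, and limits live in the same process and the same language as the routes they protect.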
04
Coming Soon

Built-in Observatory

A visual monitoring layer built directly into Mesh — no Prometheus, no Grafana, no sidecar containers. See your nodes, watch actors spawn and die, and observe data flowing across your cluster in real time.

mesh observatory
Coming Soon
[Preview: primary node-1 · worker node-2 · worker node-3]
Node Health · Actor Traces · Live Data Flow · Request Waterfall · Actor Mailboxes · Cluster Topology

See your entire infrastructure as it operates — built in.

Real-world benchmarks

Native speed, expressive as Elixir

Benchmarked on dedicated Fly.io machines — 2 vCPU, 4 GB RAM, same region, private network. No synthetic games.


The meaningful number is Mesh vs Elixir. They share the same actor model — Mesh gets you 2.3× the throughput at less than half the latency, compiled to a native binary.

Requests per second

GET /text — Isolated VMs, 100 concurrent connections

Rust
46,244
Go
30,306
Mesh
29,108
Elixir
12,441

Fly.io performance-2x · 100 connections · 30s warmup + 5×30s runs · Run 1 excluded · Full methodology →

Comparison

Why Mesh?

Distribution as a first-class language feature changes how you build servers.

Closest architecture
vs Elixir

Static types, native speed

Mesh shares Elixir's actor model and let-it-crash philosophy, adding static type inference. No runtime type surprises, no Dialyzer setup, and code compiles to native binaries instead of running on the BEAM VM.

Type inference · Native binaries · Same actor model
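A sketch of what full inference means in practice — hypothetical snippet; the List.fold name and the compiler behavior shown in the comment are illustrative assumptions:

types.mpl
# no annotations: the whole signature is inferred as List(Int) -> Int
pub fn total(prices) do
  List.fold(prices, 0, fn(acc, p) do acc + p end)
end

# total("oops")  # rejected at compile time, not at 3 a.m. in production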
vs Go

Distribution built-in

Go's goroutines are fast, but distribution still requires Redis, queues, or external systems. In Mesh, @cluster and HTTP.clustered(...) are built in — failover, load balancing, and exactly-once semantics with zero infrastructure.

@cluster decorator · Auto-failover · No external queue
vs Node.js

True multi-node concurrency

Node.js is single-threaded and requires cluster modules, Redis, and worker_threads just to scale. Mesh has native multi-core actors, multi-node distribution, and type safety without the TypeScript toolchain overhead.

True parallelism · Native distribution · No build step
One command to install

Start building your distributed system

Install Mesh in seconds. Works on macOS, Linux, and Windows.

$ curl -sSf https://meshlang.dev/install.sh | sh