
What is Rivet?

Rivet Actors are long-running, lightweight processes designed for stateful workloads. State lives in-memory with automatic persistence. Create one per agent, per session, or per user — with built-in workflows, queues, and scheduling.

Backend

// AI SDK imports; `actor` is provided by the RivetKit library
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = actor({
  // In-memory, persisted state for the actor
  state: { messages: [] as Message[] },

  // Long-running actor process
  run: async (c) => {
    // Process incoming messages from the queue
    for await (const msg of c.queue.iter()) {
      c.state.messages.push({ role: "user", content: msg.body.text });
      const response = streamText({ model: openai("gpt-5"), messages: c.state.messages });

      // Stream realtime events to all connected clients
      for await (const delta of response.textStream) {
        c.broadcast("token", delta);
      }

      c.state.messages.push({ role: "assistant", content: await response.text });
    }
  },
});

Client (frontend or backend)

// Connect to an actor
const agent = client.agent.getOrCreate("agent-123").connect();

// Listen for realtime events
agent.on("token", delta => process.stdout.write(delta));

// Send message to actor
await agent.queue.send({ text: "how many r's in strawberry?" });

Features

One Actor per agent, per session, per user — state, storage, and networking included.

Rivet provides:

  • In-memory state — Co-located with compute for instant reads and writes. Persist with SQLite or BYO database.
  • Runs indefinitely, sleeps when idle — Stays alive as long as it has work, hibernates automatically when idle, and wakes with its state intact.
  • Scales infinitely, scales to zero — Supports bursty workloads and is cost-efficient.
  • Global edge network — Deploy close to your users or in specific legal jurisdictions without complexity.

Actors support:

  • WebSockets — Real-time bidirectional streaming built in.
  • Workflows — Multi-step operations with automatic retries.
  • Queues — Durable message queues for reliable async processing.
  • Scheduling — Timers and cron jobs within your actor.

Use Cases

One primitive that adapts to agents, workflows, collaboration, and more.

  • AI Agent — Each agent runs as its own actor with persistent context, memory, and the ability to schedule tool calls.
  • Sandbox Orchestration — Coordinate sandbox sessions, queue work, and schedule cleanup in one long-lived actor per workspace.
  • Workflows — Multi-step operations with automatic retries, scheduling, and durable state across steps.
  • Collaborative Documents — Real-time collaborative editing where each document is an actor broadcasting changes to all connected users.
  • Per-Tenant Database — One actor per tenant with low-latency in-memory reads and durable tenant data persistence.
  • Chat — One actor per room or conversation with in-memory state, persistent history, and realtime delivery.

How Actors Compare

Rivet Actors vs. Traditional Infrastructure

Metric               Rivet Actor   Kubernetes Pod      Virtual Machine
Cold start           ~20ms         ~6s                 ~30s
Memory per instance  ~0.6KB        ~50MB               ~512MB
Idle cost            $0            ~$85/mo (cluster)   ~$5/mo
Horizontal scale     Infinite      ~5k nodes           Manual
Multi-region         Global edge   1 region            1 region

State

Metric        Rivet Actor   Redis   Postgres
Read latency  0ms           ~1ms    ~5ms

Benchmark details & methodology

Cold Start

  • Rivet Actor (~20ms): Includes durable state initialization, not just a process spawn; the actor is created without a key, so no cross-region locking is involved. Measured with Node.js and FoundationDB.
  • Kubernetes Pod (~6s): Node.js 24 Alpine image (56MB compressed) on AWS EKS with a pre-provisioned m5.large node. Breakdown: ~1s image pull and extraction, ~3-4s scheduling and container runtime setup, ~1s container start.
  • Virtual Machine (~30s): AWS EC2 t3.nano from launch to SSH-ready, using an Amazon Linux 2 AMI. t3.nano is the smallest available EC2 instance (512MB RAM).
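As a rough illustration of the methodology, a bare process-level cold start can be timed like this. This is a sketch, not the benchmark itself: it captures only a process spawn, not the container scheduling in the pod figure or the durable state init included in the Rivet figure.

```typescript
// Time how long it takes to spawn a no-op Node.js process and wait for
// it to exit. This is the smallest possible "cold start" on a warm machine.
import { spawnSync } from "node:child_process";

const start = Date.now();
// process.execPath is the current Node.js binary; "-e ''" runs an empty script.
const result = spawnSync(process.execPath, ["-e", ""]);
const elapsedMs = Date.now() - start;

console.log(`spawned and exited in ~${elapsedMs}ms (exit code ${result.status})`);
```

Even this bare spawn typically lands in the tens of milliseconds, which is why the ~6s pod figure is dominated by image pull, scheduling, and runtime setup rather than the process itself.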

Memory Per Instance

  • Rivet Actor (~0.6KB): RSS delta divided by actor count, measured by spawning 10,000 actors in Node.js v24 on Linux x86.
  • Kubernetes Pod (~50MB): Minimum idle Node.js container on Linux x86: Node.js v24 runtime (~43MB RSS), containerd-shim (~3MB), pause container (~1MB), and kubelet per-pod tracking (~2MB).
  • Virtual Machine (~512MB): AWS EC2 t3.nano, the smallest available EC2 instance with 512MB allocated memory.
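The RSS-delta methodology described above can be sketched in plain Node.js. Here an ordinary object stands in for an actor, so the absolute number will differ from the ~0.6KB figure; the point is the measurement technique, not the result.

```typescript
// Measure RSS before and after allocating many lightweight instances,
// then attribute the growth evenly across them (RSS delta / count).
const COUNT = 10_000;

const rssBefore = process.memoryUsage().rss;

// Stand-in workload: keep references so nothing is garbage-collected.
const instances: { state: { messages: unknown[] } }[] = [];
for (let i = 0; i < COUNT; i++) {
  instances.push({ state: { messages: [] } });
}

const rssAfter = process.memoryUsage().rss;
const bytesPerInstance = (rssAfter - rssBefore) / COUNT;
console.log(`~${bytesPerInstance.toFixed(0)} bytes per instance`);
```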

Read Latency

  • Rivet Actor (0ms): State is read from co-located SQLite/KV storage on the same machine as the actor, with no network round-trip.
  • Redis (~1ms): AWS ElastiCache Redis (cache.t3.micro) in the same availability zone as the application.
  • Postgres (~5ms): AWS RDS PostgreSQL (db.t3.micro) in the same availability zone as the application.
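A quick illustration of why co-located state reads as "0ms": accessing in-memory state is a plain property read, orders of magnitude below the ~1-5ms network round-trips quoted for Redis and Postgres. This is an illustrative timing, not one of the benchmarks above.

```typescript
// Time a single in-memory state read with a nanosecond-resolution clock.
const state = { counter: 41 };

const start = process.hrtime.bigint();
const value = state.counter + 1; // the "read"
const elapsedNs = Number(process.hrtime.bigint() - start);

console.log(`read ${value} in ${elapsedNs}ns`);
```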

Idle Cost

  • Rivet Actor ($0): Assumes Rivet Actors running on a serverless platform. Actors scale to zero with no idle infrastructure costs. Traditional container deployments may incur idle costs.
  • Virtual Machine (~$5/mo): AWS EC2 t3.nano ($0.0052/hr compute + $1.60/mo for 20GB gp3 storage) running 24/7. t3.nano is the smallest available EC2 instance (512MB RAM).
  • Kubernetes Cluster (~$85/mo): AWS EKS control plane ($73/mo) plus a single t3.nano worker node with 20GB gp3 storage, running 24/7.
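The VM figure follows directly from the prices quoted above; a back-of-the-envelope check, assuming the common 730-hour billing month (the cluster total here comes out slightly under ~$85/mo, which likely reflects additional overhead not itemized in the bullet):

```typescript
// Recompute the idle-cost figures from the prices quoted in this section.
const hoursPerMonth = 730;       // common billing-month approximation
const t3NanoHourly = 0.0052;     // $/hr compute, from the VM bullet
const gp3Storage = 1.6;          // $/mo for 20GB gp3, from the VM bullet
const eksControlPlane = 73;      // $/mo, from the cluster bullet

const vmMonthly = t3NanoHourly * hoursPerMonth + gp3Storage;
console.log(`VM: ~$${vmMonthly.toFixed(2)}/mo`);      // ~$5.40/mo, i.e. "~$5/mo"

const clusterMonthly = eksControlPlane + vmMonthly;
console.log(`Cluster: ~$${clusterMonthly.toFixed(2)}/mo`);
```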

Horizontal Scale

  • Rivet Actors (Infinite): Scale linearly by adding nodes with no single cluster size limit.
  • Kubernetes (~5k nodes): Officially supports clusters of up to 5,000 nodes per the Kubernetes scalability documentation.

Multi-Region

  • Rivet (Global edge network): Automatically spawns actors near your users and handles routing across regions.

Built-In Observability

Powerful debugging and monitoring tools from local development to production at scale.

Rivet Inspector

  • SQLite Viewer — Browse and query your actor's SQLite database in real-time
  • Workflow State — Inspect workflow progress, steps, and retries as they execute
  • Event Monitoring — Track every state change and action as it happens
  • REPL — Call actions, subscribe to events, and interact directly with your code

Deployment Options

RivetKit is a library. Connect it to Rivet Cloud or self-host when you need scaling, fault tolerance, and observability.

Just a Library

Install a package and run locally. No servers, no infrastructure. Actors run in your process during development.

Get started →

Self-Host

Single Rust binary or Docker container. Works with Postgres, file system, or FoundationDB.

docker run -p 6420:6420 rivetdev/engine

Self-hosting documentation →

Rivet Cloud

Fully managed. Global edge network. Connects to your existing cloud — Vercel, Railway, AWS, wherever you already deploy.

Sign up →

Open source, permissively licensed — Self-hosting matters for enterprise deployments, cloud portability, and avoiding vendor lock-in. Apache 2.0 means you own your infrastructure. View on GitHub →

Getting Started

Integrations

Serverless, containers, or your own servers — Rivet Actors work with your existing infrastructure, frameworks, and tools.

Infrastructure: Vercel, Railway, AWS, Docker

Frameworks: React, Next.js, Hono, Express, Elysia, tRPC

Runtimes: Node.js, Bun, Deno, Cloudflare Workers

Tools: Vitest, Pino, AI SDK, OpenAPI, AsyncAPI

Request an integration →

Projects in This Repository

Project              Description
RivetKit TypeScript  Client & server library for building actors
RivetKit Rust        Rust client (experimental)
RivetKit Python      Python client (experimental)
Rivet Engine         Rust orchestration engine
Pegboard             Actor orchestrator & networking
Gasoline             Durable execution engine
Guard                Traffic routing proxy
Epoxy                Multi-region KV store (EPaxos)
Dashboard            Inspector for debugging actors
Website              Source for rivet.dev
Documentation        Source for rivet.dev/docs

Community

License

Apache 2.0