


This guide will help you migrate data to Convex from Postgres on any provider (Supabase, Neon, RDS, etc.).
Exporting your Postgres data
Dumping to a file using psql
If you have relatively small amounts of data (less than a few gigabytes), you can likely just dump your data to a file from psql and import it into Convex.
First, use psql to connect to your Postgres instance, then dump the people table as JSONL (one JSON object per line).
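A minimal sketch of such a dump command, assuming a people table and a placeholder connection string (substitute your own):

```shell
# Dump the people table as JSONL: one JSON object per row.
# -t suppresses headers/footers; -A disables column alignment.
psql "postgres://user:password@localhost:5432/mydb" -t -A \
  -c "SELECT row_to_json(people) FROM people" > people.jsonl
```

row_to_json serializes each row as a JSON object, so the unaligned, tuples-only output is already valid JSONL.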
Adding it to Convex
If you already have a Convex project, simply cd into that project's directory. If you don't have one yet, create a new project first.
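One way to create a fresh project is Convex's scaffolding command (requires Node.js):

```shell
# Scaffold a new Convex project in the current directory
npm create convex@latest
```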
Import the data
Once you are in the project directory, you can import your JSONL file into your Convex deployment.
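Using the Convex CLI, and assuming your dump is in a file named people.jsonl:

```shell
# Import the JSONL dump into a table named "people" in your dev deployment
npx convex import --table people people.jsonl
```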
Take a look at your new data in the dashboard
Define your schema
You can add a Convex schema to your project by following the guide at https://docs.convex.dev/database/schemas#writing-schemas.
Tip: there is a button in the dashboard to generate this automatically based on your data.
It might end up looking like this:
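For a people table, a sketch might look like the following (the name and email fields are illustrative; yours will mirror your Postgres columns):

```typescript
// convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  people: defineTable({
    // Illustrative fields; adjust to match your Postgres columns.
    name: v.string(),
    email: v.string(),
    // The original Postgres primary key, kept around for lookups.
    id: v.number(),
  }),
});
```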
Your favorite LLM tools (e.g. Cursor) can be rather helpful here for generating a schema. You can dump the Postgres schema from psql with \d people and feed that to a prompt to generate the Convex schema.
Reconnect your table relationships
If you have a field referring to documents in other tables, the idiomatic Convex way to model it is to use v.id("people") as the field's validator. Note: this field can't be an arbitrary string; it must be the person._id value.
You're welcome to keep using your other ID with an index on that field, but you will likely benefit from using v.id, for instance via runtime validation that ensures the IDs point to the correct table.
Example schema:
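A sketch using illustrative users and posts tables, matching the walkthrough that follows:

```typescript
// convex/schema.ts — posts reference users by Convex document ID
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  users: defineTable({
    name: v.string(),
  }),
  posts: defineTable({
    title: v.string(),
    // v.id("users") guarantees this is a users document's _id.
    authorId: v.id("users"),
  }).index("by_author", ["authorId"]),
});
```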
Migrating relationships to Convex IDs generally looks like:
- Import the data with existing primary keys (e.g. an id field).
- Add an index on that field to look it up by id.
- Update relationship fields (e.g. an authorId: v.string() field) to be v.union(v.string(), v.id("users")). At this point your schema looks like this:
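A sketch of the transitional schema at this step (title, name, and the index name are illustrative):

```typescript
// convex/schema.ts — transitional state during the migration
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  users: defineTable({
    name: v.string(),
    // Legacy Postgres primary key, indexed for lookups.
    id: v.string(),
  }).index("by_postgres_id", ["id"]),
  posts: defineTable({
    title: v.string(),
    // Accepts both old string IDs and new Convex IDs while migrating.
    authorId: v.union(v.string(), v.id("users")),
  }),
});
```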
- Write a function to get the document by either ID:
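One possible sketch of that helper, assuming a users table with a by_postgres_id index on the legacy id field (the function and index names are assumptions):

```typescript
// convex/model/users.ts — resolve a user from either ID form
import { QueryCtx } from "../_generated/server";
import { Doc, Id } from "../_generated/dataModel";

export async function getUserByEitherId(
  ctx: QueryCtx,
  idOrPostgresId: string | Id<"users">
): Promise<Doc<"users"> | null> {
  // Try interpreting the value as a Convex document ID first.
  const normalized = ctx.db.normalizeId("users", idOrPostgresId);
  if (normalized !== null) {
    return await ctx.db.get(normalized);
  }
  // Otherwise, fall back to the legacy Postgres primary key.
  return await ctx.db
    .query("users")
    .withIndex("by_postgres_id", (q) => q.eq("id", idOrPostgresId))
    .unique();
}
```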
- Write a migration to walk the table and update the authorId references. See the migration docs for setting up and running migrations.
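A hand-rolled sketch of such a migration as a paginated internal mutation (the migration tooling in the Convex docs handles batching and resumption for you; the file name and index name here are assumptions):

```typescript
// convex/migrations.ts — backfill authorId from legacy IDs to Convex IDs
import { internalMutation } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

export const backfillAuthorIds = internalMutation({
  args: { cursor: v.union(v.string(), v.null()) },
  handler: async (ctx, args) => {
    const batch = await ctx.db
      .query("posts")
      .paginate({ cursor: args.cursor, numItems: 100 });
    for (const post of batch.page) {
      // Skip posts already migrated to a Convex ID.
      if (ctx.db.normalizeId("users", post.authorId) !== null) continue;
      const user = await ctx.db
        .query("users")
        .withIndex("by_postgres_id", (q) => q.eq("id", post.authorId))
        .unique();
      if (user !== null) {
        await ctx.db.patch(post._id, { authorId: user._id });
      }
    }
    if (!batch.isDone) {
      // Schedule the next batch so each mutation stays small.
      await ctx.scheduler.runAfter(0, internal.migrations.backfillAuthorIds, {
        cursor: batch.continueCursor,
      });
    }
  },
});
```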
- Update your schema to use the v.id validator. Note: thanks to schema validation, this will only successfully deploy once all blog posts have an authorId that is a valid Convex ID. Unless you disable schema validation, the deployed Convex schema is guaranteed to match the data at rest!
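The final step's schema might look like this (a sketch continuing the illustrative users/posts example):

```typescript
// convex/schema.ts — final state after the migration completes
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  users: defineTable({
    name: v.string(),
    id: v.string(), // legacy Postgres key; can be dropped later
  }).index("by_postgres_id", ["id"]),
  posts: defineTable({
    title: v.string(),
    // Now strictly a Convex document ID.
    authorId: v.id("users"),
  }),
});
```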
Alternative: Streaming import using Airbyte
If you have a larger amount of data, or want to continuously stream data, you can try the Airbyte integration. Check out more details at https://www.convex.dev/can-do/airbyte.