Migrating from PlanetScale to Neon.tech

Following PlanetScale's recent announcement that they are discontinuing their free plan, I, like many others, found myself reevaluating my options. As a developer who relied heavily on PlanetScale for my hobby projects, the news initially left me disheartened. After exploring alternatives, however, I stumbled upon Neon, which offers a nice free tier for its Postgres hosting. In this blog post, I'll walk you through my personal experience of moving from PlanetScale to Neon.tech.

I picked Neon.tech because they have a free tier, and their "Launch" plan currently starts at $19, which is reasonable for hobby projects that need to scale up. Another option would be CockroachDB.

Connection strings

Get your PlanetScale connection string either from your app's .env or by creating a new password in the PlanetScale dashboard.

PlanetScale

mysql://<USER>:<PASSWORD>@aws.connect.psdb.cloud/mydb?sslaccept=strict

Neon

If you don't have an account yet, sign up on neon.tech and create your database. If possible, use the same database name that you used on PlanetScale.

postgresql://<USER>:<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require

Install pgloader

GitHub - dimitri/pgloader: Migrate to PostgreSQL in a single command!

Neither installing pgloader via brew on my M1 Mac nor via apt on Ubuntu 22.04.4 LTS worked for me, so I had to build pgloader from source.

git clone https://github.com/dimitri/pgloader.git
apt-get install sbcl unzip libsqlite3-dev make curl gawk freetds-dev libzip-dev
cd pgloader
make pgloader
./build/bin/pgloader --version

Create a pgloader config file

Prepare the Neon connection string for pgloader

For pgloader we have to use a slightly modified connection string for our target database on Neon.
Instead of user:password@..., use user:endpoint=<ENDPOINT>;password@..., where <ENDPOINT> is the endpoint ID, i.e. the first part of your Neon hostname.

postgresql://<USER>:endpoint=ep-super-paper-12345;<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require
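The rewrite is mechanical, so it can be sketched as a tiny helper (the function name and example credentials are mine, not from Neon's docs; the endpoint ID is just the first label of the Neon hostname):

```python
# Sketch: turn a standard Neon connection string into the
# user:endpoint=<ID>;password form that pgloader needs.
def pgloader_neon_url(url: str) -> str:
    scheme, rest = url.split("://", 1)
    creds, host_part = rest.split("@", 1)     # credentials vs. host/db/query
    user, password = creds.split(":", 1)
    endpoint_id = host_part.split(".", 1)[0]  # e.g. "ep-super-paper-12345"
    return f"{scheme}://{user}:endpoint={endpoint_id};{password}@{host_part}"

src = "postgresql://me:secret@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require"
print(pgloader_neon_url(src))
# postgresql://me:endpoint=ep-super-paper-12345;secret@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require
```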

Prepare the PlanetScale connection string for pgloader

pgloader also expects different query params in the connection string. Change ?sslaccept=strict to ?sslmode=require in your PlanetScale connection string.

mysql://<USER>:<PASSWORD>@aws.connect.psdb.cloud/mydb?sslmode=require

I have two branches on PlanetScale, main and dev. For my main branch I created a file config-main.load with the following content.

load database
	from mysql://<USER>:<PASSWORD>@aws.connect.psdb.cloud/mydb?sslmode=require
	into postgresql://<USER>:endpoint=ep-super-paper-12345;<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require
	with quote identifiers, include drop, create tables, create indexes
	alter schema 'mydb' rename to 'public'
;

Make sure your file ends with a semicolon (;).

Repeat this for your other branches. For my dev branch I created a file config-dev.load with the connection strings for my dev branch.

Note

with quote identifiers, include drop, create tables, create indexes makes our migration a lot easier. quote identifiers preserves the case of the original column and table names. For example, one of my tables is named PasswordResetToken in Prisma; it has the same name in PlanetScale, and its columns are named in camelCase, like userId. The other flags are optional.

Not using with quote identifiers would change all table and column names to lowercase, which would require changes in our Prisma schema.
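To illustrate what we'd be avoiding: if the tables arrived lowercased, the Prisma models would need explicit mappings roughly like this (a sketch, not my actual schema):

```prisma
model PasswordResetToken {
  id     String @id
  userId String @map("userid")   // column name arrived lowercased

  @@map("passwordresettoken")    // table name arrived lowercased
}
```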

Migrate the data

Now we can migrate the data from PlanetScale to Neon by running pgloader with our config file.

Watch out! This drops all existing data in your target tables, so make sure your target is correct and back up any existing data on your Neon database.

~/code/pgloader/build/bin/pgloader config-main.load

Check data after migration

Go to the Neon Console, select the branch we migrated, and look through the data.

Update schema.prisma

Thanks to our with quote identifiers migration, we don't have to set any @@map mappings for the table names. The only thing we have to change is the provider:

datasource db {
	provider = "postgresql"
	url = env("DATABASE_URL")
	relationMode = "prisma"
}

Update the connection string in your app

Update your .env.local or .env with the new connection string from neon.

DATABASE_URL='postgresql://<USER>:endpoint=ep-super-paper-12345;<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require'
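As an optional sanity check, a few assertions on the new string can catch a missed TLS parameter or a stale URL before the app connects (the fallback URL below is illustrative):

```python
# Quick, illustrative sanity check for the new DATABASE_URL.
import os

url = os.environ.get(
    "DATABASE_URL",
    "postgresql://me:endpoint=ep-super-paper-12345;secret"
    "@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require",
)
assert url.startswith("postgresql://"), "expected a postgresql:// URL"
assert "sslmode=require" in url, "Neon connections require TLS"
assert "neon.tech" in url, "still pointing at the old database?"
print("DATABASE_URL looks plausible")
```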

Don't forget to also update the connection string in your CI pipelines (GitHub secrets or similar) once you've migrated the data.

Baseline your schema

PlanetScale uses some custom handling for Prisma migrations, and I didn't find a way to export those. On Neon.tech, Prisma will create a _prisma_migrations table where all migrations and their hashes are stored, so Prisma knows which migrations have been run.

If you don't supply explicit types in your Prisma schema, Prisma will infer the types from the database. So we have to baseline our schema to make sure Prisma uses the correct types.

pnpm prisma db pull

This will update your schema.prisma with the "real" types currently used in your database. For example, an id column was varchar(191) on PlanetScale, as that is the default database type for Prisma's String type on MySQL. The default on Postgres would be text.
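Concretely, after the pull an id field will carry the MySQL-era type annotation, roughly like this (sketch):

```prisma
model User {
  // As pulled after migration (MySQL's default mapping for String):
  id String @id @db.VarChar(191)

  // Goal after cleanup: Prisma's Postgres default, plain text:
  // id String @id
}
```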

We can now step by step remove the custom mappings and types from our schema, and run migrations to make sure everything still works, without losing any data.

Create a baseline migration folder. This will hold the create table statements for all our tables.

mkdir -p prisma/migrations/0_init

Then generate an SQL script from the pulled schema.prisma.

pnpm prisma migrate diff \
--from-empty \
--to-schema-datamodel prisma/schema.prisma \
--script > prisma/migrations/0_init/migration.sql

For whatever reason my migration.sql file had these two lines at the top, presumably pnpm's script banner leaking into the redirected output. Since they aren't valid SQL and would break our migration, I removed them.

> prakta@0.0.0 prisma /home/aspieslechner/code/prakta
> dotenv -c -- prisma "migrate" "diff" "--from-empty" "--to-schema-datamodel" "prisma/schema.prisma" "--script"
...

That's the state our database is already in, so we don't have to run this migration. But we do have to tell Prisma that the database is already in this state.

prisma migrate resolve --applied 0_init

Removing all the custom map: ... and @db... annotations from our schema and running pnpm prisma migrate dev resulted in Prisma wanting to drop all tables, which would mean losing all data. Instead, I split the changes into smaller migrations.

First I removed all @db.VarChar(191) and then created a new migration with this small change.

prisma migrate dev --name remove-varchar

Then I removed @db.Timestamptz(6) and created another migration.

I repeated this for the custom map annotations until I ended up with the schema.prisma I had before moving the data from PlanetScale to Neon. This should leave you with a database that uses the recommended types, without any lost data.

Since this is the real "baseline" of our schema, I opted to delete all migrations again and create another baseline from the state of the migrated database.

I deleted all rows from my db's _prisma_migrations table.

DELETE FROM _prisma_migrations;

Then I removed the migrations folder locally.

rm -rf prisma/migrations

And created a new baseline migration, just like we did before:

mkdir -p prisma/migrations/0_init
pnpm prisma migrate diff \
--from-empty \
--to-schema-datamodel prisma/schema.prisma \
--script > prisma/migrations/0_init/migration.sql
pnpm prisma migrate resolve --applied 0_init

It's a bit hacky and you might be better off just creating a new db and migrating your data there, but this worked for me.
