
Comment by yen223

5 hours ago

Databases tend to be "stickier" than other parts of any large software system, largely because database migrations are costly. You can't just tear down an old database and build a new one; you have to figure out how to move all that data across too.

The consequence is that things in database-land tend to move slower than in other types of software. This, I think, is the major reason why we still use SQL.

Have postgres updates actually been requiring users to do migrations? Or is this just a fear that something will go wrong?

  • From what I can gather, yes[1]:

    A dump/reload of the database or use of the pg_upgrade application is required for major upgrades.

    Now, it seems one can mostly use pg_upgrade, which only rewrites the system tables[2], so it's fairly quick. But if the on-disk format has changed, it seems you're forced to dump and reload.

    At work we've mainly been using SQLAnywhere, which would just disable new functionality for databases using an old on-disk format. So upgrading major versions has usually been fairly painless and quick.

    [1]: https://www.postgresql.org/support/versioning/

    [2]: https://www.postgresql.org/docs/current/pgupgrade.html

  • Well, if it's self-hosted you have to do it yourself. You can either back up your databases from the old version and restore them into the new version once it's installed, or you can use pg_upgrade to upgrade/copy an old version's data directory to the new version. Both routes are sketched below.

    I don't think this is done automatically when you simply install a new postgres version, but I'm not certain of that.
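    For concreteness, here's a minimal sketch of both routes for a hypothetical 15-to-16 upgrade; the paths and version numbers are placeholders, not what any particular distro actually uses:

        # Route 1: pg_upgrade. Run with --check first for a dry-run
        # compatibility report; drop --check for the real upgrade,
        # with both servers stopped.
        pg_upgrade \
          --old-datadir=/var/lib/postgresql/15/data \
          --new-datadir=/var/lib/postgresql/16/data \
          --old-bindir=/usr/lib/postgresql/15/bin \
          --new-bindir=/usr/lib/postgresql/16/bin \
          --check

        # Adding --link uses hard links instead of copying the data files,
        # which is much faster but ties the new cluster to the old data
        # directory.

        # Route 2: dump and reload, needed if the on-disk format changed.
        pg_dumpall > backup.sql        # run against the old server
        psql -f backup.sql postgres    # restore into the new server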

What's the SQL alternative?

  • JSON - er, JSON-based document storage: documents with unique identifiers, and the ability to define and set schemas for the JSON, and ... we're back to a relational database.
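    To make the punchline concrete: stock Postgres already closes that loop. A minimal sketch, with made-up database/table names and a deliberately crude one-key "schema":

        psql mydb <<'SQL'
        -- Documents with unique identifiers...
        CREATE TABLE docs (
            id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),  -- built in since PG 13
            body jsonb NOT NULL,
            -- ...and a schema defined over the JSON: every document needs a title.
            CHECK (body ? 'title')
        );
        -- GIN index so arbitrary keys inside the documents stay queryable.
        CREATE INDEX docs_body_idx ON docs USING gin (body);
        SQL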