Comment by Apreche

8 hours ago

Because upgrading is a lot of work and is riskier than upgrading other software.

I find that upgrading PostgreSQL itself is really easy.

Testing all the apps that use it, not so much.

Seems like a massive design fail if they can't maintain backwards compatibility and provide a safe, low-friction upgrade process.

  • I think it's more about avoiding downtime (I just upgraded a Postgres instance with 1 TB of data from v11 to v16 and didn't notice any breaking changes). In an ideal world, every client of the DB should be able to handle the case where the DB is down and patiently wait for it to come back before carrying on with its job. But in my experience that's rarely the case: there is always at least one microservice running somewhere in the cloud that everybody forgot about, which will just crash if the DB is down, and that can mean losing data (see the sketch below for the kind of retry behavior I mean).
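
    To make that concrete, here is a minimal sketch of the "patiently wait for the DB to come back" idea: a connection helper that retries with capped exponential backoff instead of crashing. It assumes psycopg2 as the driver; the DSN and the backoff limits are illustrative, not anything from the comment above.

    ```python
    import time
    import psycopg2

    DSN = "dbname=app user=app host=db.internal"  # hypothetical connection string

    def connect_with_retry(dsn: str, max_wait: float = 60.0):
        """Keep retrying until the database accepts connections again."""
        delay = 1.0
        while True:
            try:
                return psycopg2.connect(dsn)
            except psycopg2.OperationalError:
                # DB is unreachable (e.g. mid-upgrade): back off and try again
                # instead of letting the service crash and drop work.
                time.sleep(delay)
                delay = min(delay * 2, max_wait)

    conn = connect_with_retry(DSN)
    ```

    The same pattern applies to reconnecting after a dropped connection mid-query; the point is that a brief DB outage becomes a pause rather than a crash.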