If you don’t mind me asking, how does Postgres fix this? Do they have a more sophisticated locking mechanism, or maybe a copy offline until it’s ready kind of a system?
> This might cause the whole table to be locked and copied in SQL
That was indeed the case in the past, but not so much anymore except for certain situations. MySQL, for example, has had support for in-place table alterations for a while [1]. I've used it in production and it works very well IME.
“might cause”, yes. But at least with Postgres that only happens if you add a default value to the new column. You can add the default value in your application instead, just like you would with your average NoSQL DB.
Then, when load on your system is low, you can backfill the rows in batches with the default value, and eventually remove the application-side default.
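A minimal sketch of that two-step pattern (add the column without a default, then backfill in small batches), using Python's stdlib sqlite3 as a stand-in for a real server; the `users`/`city` names and batch size are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",), ("c",)])

# Step 1: add the column with NO default -- cheap, no table rewrite.
conn.execute("ALTER TABLE users ADD COLUMN city TEXT")

# Step 2: backfill in small batches (during low load), so no single
# long-running UPDATE holds locks on the whole table.
BATCH = 2
while True:
    cur = conn.execute(
        "UPDATE users SET city = 'unknown' "
        "WHERE id IN (SELECT id FROM users WHERE city IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE city IS NULL"
).fetchone()[0]
print(remaining)  # 0 once the backfill is complete
```

On a real server you'd run each batch in its own transaction and sleep between batches; the loop structure is the same.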
But if you don't control the client, you now have to deal with client-side migration and server-side database version management. If you later create a v3, you have to decide whether to keep v1->v2->v3 migration code or v1->v2 and v1->v3. Also, reporting.
Can’t you do this concurrently these days? Also, managing the schema in the database was the least of my issues when deleting/adding fields. You still need to make sure your clients are resilient against null or missing fields in the responses, and that your scheduled jobs don’t query data which isn’t there anymore. The point is, not having to alter the DB schema doesn’t make that big a difference: you still need to make sure your system overall is built in a way that allows smooth data migrations.
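The "resilient clients" part can be as simple as never assuming a field exists. A hedged sketch (the `display_city` helper and field names are invented for illustration):

```python
def display_city(user_row: dict) -> str:
    # .get() tolerates both a missing key (client hasn't been migrated,
    # or the column doesn't exist yet) and an explicit None (column
    # added but this row not yet backfilled).
    return user_row.get("city") or "unknown"

print(display_city({"id": 1, "name": "a", "city": "Berlin"}))  # Berlin
print(display_city({"id": 2, "name": "b", "city": None}))      # unknown
print(display_city({"id": 3, "name": "c"}))                    # unknown
```

The same discipline is needed whether the store is SQL or NoSQL, which is the point being made above.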
Say your users table looked like this:
And now you want to add their city: this might cause the whole table to be locked and copied in a SQL database, while it might need no action at all in a NoSQL DB.
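For illustration only (the actual table from the comment isn't shown, so the names here are guesses), the difference in Postgres terms looks roughly like this:

```sql
CREATE TABLE users (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

-- Adding the column alone is a cheap metadata change:
ALTER TABLE users ADD COLUMN city TEXT;

-- Adding it WITH a default used to force a full table rewrite
-- (before Postgres 11; since then, non-volatile defaults avoid it):
ALTER TABLE users ADD COLUMN city TEXT DEFAULT 'unknown';
```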