There were edge cases in PostgreSQL’s SERIALIZABLE isolation level, which is supposed to ensure that concurrent transactions behave as if they had been executed one at a time.
Specifically, if you ran a transaction as SERIALIZABLE, there was a very small chance that you might not see rows inserted by another transaction that committed before yours in the serial order. Many applications don’t need this level of transaction isolation, but for those that do, it’s somewhat scary to know this was lurking under the bed.
Every implementation of a “bank” system where you keep track of deposits and withdrawals is a use case for SERIALIZABLE, and this bug means a double spend could happen: the next transaction might not see that an earlier transaction had already drained the account’s balance, for example.
The bank example is useful, because it tends to elicit the right thinking for people, but banking has a long history of eventual consistency.
For the vast majority of the history of banking, local branches (a very loose term here, e.g. a family member of the guy you know in your hometown, rather than an actual physical establishment) would operate on local knowledge only. Consistency was achieved only through a regular reconciliation process.
Even in more modern, digital times, banks depend on large batch processes and reconciliation processes.
I'd say MOST non-trivial applications require SERIALIZABLE. Every time an app does `BEGIN; SELECT ... WHERE ...; INSERT/UPDATE; COMMIT` it needs SERIALIZABLE, because it's the only level that catches cases where a concurrent transaction adds rows such that the SELECT's result set changes, and the subsequent INSERT/UPDATE should therefore have been done with different values.
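To make the failure mode concrete, here's a minimal in-memory simulation (not PostgreSQL, all names illustrative) of why that SELECT-then-INSERT pattern breaks below SERIALIZABLE: each transaction reads its own snapshot, decides based on what it saw, and inserts. The inserts are disjoint, so snapshot isolation sees no write-write conflict and accepts both:

```python
# Toy model of snapshot isolation: each transaction gets a private
# snapshot, and writes from both are merged at "commit" because they
# touch different rows (no write-write conflict to detect).

def run_concurrently(initial_rows, tx_a, tx_b):
    snap_a = list(initial_rows)  # tx A's snapshot
    snap_b = list(initial_rows)  # tx B's snapshot
    return initial_rows + tx_a(snap_a) + tx_b(snap_b)

# Application invariant: at most one worker claims the 'backup' task.
def claim_task(snapshot):
    # BEGIN; SELECT ... WHERE task = 'backup'; if absent, INSERT; COMMIT
    if not any(r["task"] == "backup" for r in snapshot):
        return [{"task": "backup"}]
    return []

final = run_concurrently([], claim_task, claim_task)
# Both transactions saw an empty result set, so both inserted a row:
assert len(final) == 2  # invariant violated; SERIALIZABLE would abort one
```

This is exactly write skew: each transaction's write invalidates the other's read, which only a SERIALIZABLE implementation (or an explicit lock / unique constraint) will catch.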
The common bank example as I understand it doesn't require SERIALIZABLE, only snapshot isolation: if two transactions both drain the source balance, the one that commits last will fail, because its snapshot no longer matches the current state.
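A toy first-committer-wins check (illustrative names, not a real database) sketches why the UPDATE-based version is safe under snapshot isolation: both transactions write the *same* row, so the conflict is detectable at commit:

```python
# Snapshot isolation with first-committer-wins on a single account row:
# a commit is rejected if the row changed since the writer's snapshot.

class WriteConflict(Exception):
    pass

def commit(store, read_version, new_balance):
    if store["version"] != read_version:
        raise WriteConflict  # row changed under us -> abort
    store["balance"] = new_balance
    store["version"] += 1

account = {"balance": 100, "version": 0}
v = account["version"]        # both transactions snapshot version 0

commit(account, v, 0)         # tx A drains the balance and commits
aborted = False
try:
    commit(account, v, 0)     # tx B's snapshot is now stale
except WriteConflict:
    aborted = True

assert account["balance"] == 0 and aborted  # no double spend
```

The key difference from the write-skew case: here both writers touch the same row, which is the one kind of conflict snapshot isolation does detect.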
If you’re UPDATE-ing a balance on some account table, yes. If you’re using a ledger and calculating balances from it (which you SHOULD), then SERIALIZABLE is needed.
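An illustrative ledger sketch (in-memory, not real SQL) shows why: the balance is `SUM(amount)` over ledger rows, and a withdrawal is a new row. The two inserts are different rows, so snapshot isolation sees no write-write conflict and lets both withdrawals commit:

```python
# Ledger model: balance = sum of signed amounts; a withdrawal appends
# a negative row after checking the balance in the reader's snapshot.

def withdraw(snapshot, amount):
    # SELECT SUM(amount) FROM ledger WHERE account = ...;
    balance = sum(snapshot)
    if balance >= amount:
        return [-amount]  # INSERT INTO ledger (amount) VALUES (-amount)
    return []

ledger = [100]                    # one deposit of 100
snap_a = list(ledger)             # tx A's snapshot
snap_b = list(ledger)             # tx B's concurrent snapshot

ledger += withdraw(snap_a, 100)   # tx A: sees balance 100, inserts -100
ledger += withdraw(snap_b, 100)   # tx B: also saw 100, also inserts -100

assert sum(ledger) == -100        # double spend; SERIALIZABLE aborts one
```

Same write-skew shape as before: disjoint inserts invalidate each other's aggregate read, which is exactly what SERIALIZABLE's predicate/dependency tracking exists to catch.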
It's an isolation issue, but if you're coming from Mongo I'd broadly guess it's not one you're going to trigger. Also, look at their other analyses... they're very detailed and upfront about serializability issues in a lot of huge databases/datastores.
Noteworthy:
"In most respects, PostgreSQL behaved as expected: both read uncommitted and read committed prevent write skew and aborted reads."
Technically it does. You can ask for read uncommitted mode, though you'll just get read committed mode. This is correct because you're getting the minimal guarantees that you asked for. The SQL standard allows this.
The default isolation level is read committed mode, whereas the bug in question only affected applications that use serializable mode. You have to ask for serializable mode explicitly; if you're not, then you cannot possibly be affected by the bug. (Perhaps you should consider using a higher isolation level, but that would be equally true with or without this bug.)
- Yes, but it wasn't the only one; it was a succession of problems. (Why did I use Mongo in the first place, on a transaction-heavy call-center database? Because the customer forced it: it was the only thing he knew.)
- No, just a single huge instance, managed on Azure.