This is incredibly database-specific. In Postgres random PKs are bad. But in distributed databases like Cockroach, Google Cloud Datastore, and Spanner it is the opposite - monotonic PKs are bad. You want to distribute load across the keyspace so you avoid hot shards.
In Google Cloud Bigtable we had the issue that our domain's primary key was a sequential integer autogenerated by another app. So we just reversed it, and it distributed automatically quite nicely.
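The key-reversal trick can be sketched like this (a minimal sketch assuming decimal-digit reversal of a zero-padded ID; the original could just as well have reversed bits or bytes, and the padding width here is an assumption):

```python
def reverse_key(seq_id: int, width: int = 10) -> str:
    """Reverse the zero-padded decimal digits of a sequential ID.

    Consecutive IDs then differ in their leading characters, so they
    spread across the keyspace instead of piling onto one hot tablet.
    """
    return str(seq_id).zfill(width)[::-1]

# Consecutive inputs land far apart in sorted key order:
print(reverse_key(100041))  # "1400010000"
print(reverse_key(100042))  # "2400010000"
```

Note the zero-padding: without a fixed width, the reversed keys of IDs with different digit counts wouldn't sort consistently.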
It is, although you can have sharded PostgreSQL, in which case I agree with your assessment that you want random PKs to distribute them.
It's workload-specific, too. If you want to list ranges of them by PK, then of course random isn't going to work. But then you've got competing tensions: listing a range wants the things you list to be on the same shard, but focusing a workload on one shard undermines horizontal scale. So you've got to decide what you care about (or do something more elaborate).
It's also application-specific. If you have a workload that's write-heavy, has temporal skew, and is highly concurrent, but rarely creates new records, you're probably better off with a random PK, even in PG.
Even in a distributed database you want increasing (even if not monotonic) keys since the underlying b-tree or whatever will very likely behave badly for entirely random data.
UUIDv7 is very useful for these scenarios since:

A: a hash or modulus of the key will be practically random, due to the lower bits being random or pseudo-random (i.e. it distributes well between nodes), and

B: the first bits are a timestamp, hence sortable, so the underlying storage on each node won't go bananas.
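Both properties fall out of the RFC 9562 bit layout. Here's a minimal UUIDv7 sketch (an illustration, not a production generator — the field packing follows the RFC, but `uuid7` is our own hypothetical helper, since the stdlib `uuid` module doesn't ship one in most Python versions):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Minimal RFC 9562 UUIDv7: a 48-bit millisecond timestamp up
    front (sortable prefix), version/variant bits, and 74 random
    bits in the tail (hashes/moduli distribute well)."""
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF       # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (ts_ms & ((1 << 48) - 1)) << 80  # timestamp in the top 48 bits
    value |= 0x7 << 76                       # version 7
    value |= rand_a << 64
    value |= 0b10 << 62                      # RFC 4122 variant
    value |= rand_b
    return uuid.UUID(int=value)

a, b = uuid7(), uuid7()
# The 6-byte timestamp prefix is non-decreasing, so IDs sort by creation time:
assert a.bytes[:6] <= b.bytes[:6]
```

So a modulus over the full value behaves randomly (tail bits dominate), while a range scan over recent rows stays clustered (prefix bits dominate).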
I wouldn't say it's incredibly database-specific; it's more database-type-specific. For most general-purpose, non-sharded databases, random key values can be a problem, as they lead to excess fragmentation in b-trees and similar structures.
As long as the key has sufficient entropy (i.e. is not a monotonic sequential int), the keyspace should be evenly distributed, correct? So UUIDv4 and later, ULID, KSUID, and possibly Snowflake IDs should be fine for the sake of even distribution of the hashes.
I think they address this in the article when they say that this advice is specific to monolithic applications, but I may be misremembering (I skimmed).
I'm not making any claims at all, I was just adding context from my recollection of the article that appeared to be missing from the conversation.
Edit: What the article said:
> The kinds of web applications I’m thinking of with this post are monolithic web apps, with Postgres as their primary OLTP database.
So you are correct that this does not disqualify distributed databases.
100%. You can use rendezvous hashing to determine the shard(s). The hash of a sequence should be randomly distributed, since flipping even the LSB of the input should, on average, flip about 50% of the output bits (the avalanche effect).