It's funny how this comment chain is about how names stick to ideas in somewhat arbitrary ways, and you are using "Elon" to explain a personal policy for information grooming.
Yes! A typical use case is to efficiently implement ORDER BY LIMIT N in SQL databases in a way that doesn’t require sorting the entire column just to get those first N items.
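To make that concrete, here is a minimal sketch (not any database's actual implementation) of the idea: keep a bounded max-heap of the N smallest values seen so far, so you never sort more than N items at a time.

```go
// Sketch of answering "ORDER BY val LIMIT n" without sorting everything:
// maintain a max-heap holding the n smallest values seen so far.
package main

import (
	"container/heap"
	"fmt"
)

// maxHeap is a max-heap of ints (largest value at the root).
type maxHeap []int

func (h maxHeap) Len() int           { return len(h) }
func (h maxHeap) Less(i, j int) bool { return h[i] > h[j] }
func (h maxHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *maxHeap) Push(x any)        { *h = append(*h, x.(int)) }
func (h *maxHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// topN returns the n smallest values of vals in ascending order.
func topN(vals []int, n int) []int {
	h := &maxHeap{}
	for _, v := range vals {
		if h.Len() < n {
			heap.Push(h, v)
		} else if v < (*h)[0] {
			// v beats the current worst of our best n; replace the root.
			(*h)[0] = v
			heap.Fix(h, 0)
		}
	}
	out := make([]int, 0, n)
	for h.Len() > 0 {
		out = append(out, heap.Pop(h).(int))
	}
	// Pop yields largest-first; reverse for ascending order.
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return out
}

func main() {
	fmt.Println(topN([]int{9, 3, 7, 1, 8, 2, 6}, 3)) // [1 2 3]
}
```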
I assume this Go code runs in the client, since Postgres doesn't support Go server-side. Why would client-side ordering be faster than doing it in the database?
Author here. Agree 100%! It's often what didn't work that is omitted. But there's so much juice in failed experiments — it's important to share with others.
Our Go ULID package has millisecond precision + monotonic random bytes for disambiguation while preserving ordering within the same millisecond. https://github.com/oklog/ulid
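Basic usage looks roughly like this (taken from the README; exact import path and signatures may differ between versions):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"

	"github.com/oklog/ulid/v2"
)

func main() {
	t := time.Now()
	// Monotonic entropy: IDs generated within the same millisecond
	// still sort in generation order.
	entropy := ulid.Monotonic(rand.New(rand.NewSource(t.UnixNano())), 0)
	id := ulid.MustNew(ulid.Timestamp(t), entropy)
	fmt.Println(id)
}
```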
For a variety of reasons this is incredibly difficult. Functions with side effects can turn a SELECT into a write, so it's not only UPDATE/DELETE and the like that have to go to the primary.
It's a lot easier for your application to know what a write is: just establish connections to two separate poolers (or hosts on the same pooler) and direct reads and writes appropriately.
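A minimal sketch of that application-level split (DSNs and driver are assumptions, any Postgres driver works): one pool aimed at the primary for writes, one at a replica or read-only pooler endpoint for reads, and the caller picks the pool.

```go
package main

import (
	"context"
	"database/sql"

	_ "github.com/jackc/pgx/v5/stdlib" // assumed driver; registers "pgx"
)

type DB struct {
	writer *sql.DB // connections to the primary (or its pooler)
	reader *sql.DB // connections to a replica (or a read-only pooler endpoint)
}

func Open(writerDSN, readerDSN string) (*DB, error) {
	w, err := sql.Open("pgx", writerDSN)
	if err != nil {
		return nil, err
	}
	r, err := sql.Open("pgx", readerDSN)
	if err != nil {
		return nil, err
	}
	return &DB{writer: w, reader: r}, nil
}

// The application, not the pooler, decides where each statement goes.
func (db *DB) CreateUser(ctx context.Context, name string) error {
	_, err := db.writer.ExecContext(ctx, "INSERT INTO users(name) VALUES($1)", name)
	return err
}

func (db *DB) CountUsers(ctx context.Context) (int, error) {
	var n int
	err := db.reader.QueryRowContext(ctx, "SELECT count(*) FROM users").Scan(&n)
	return n, err
}
```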
There's already a working piece of the libpq protocol for this: target_session_attrs. The problem with target_session_attrs is that it just takes too long to discover the new primary after a failover. We want to fix this within Odyssey.
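For reference, target_session_attrs is driven from the connection string: the client walks the listed hosts and keeps the one that currently accepts writes. A rough sketch with hypothetical hosts (assuming the driver, e.g. pgx, understands libpq-style multi-host URLs and target_session_attrs):

```go
package main

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func main() {
	// Try db1 and db2; connect only to whichever reports itself writable.
	dsn := "postgres://app@db1.example.com:5432,db2.example.com:5432/app" +
		"?target_session_attrs=read-write"
	conn, err := pgx.Connect(context.Background(), dsn)
	if err != nil {
		panic(err)
	}
	defer conn.Close(context.Background())
	fmt.Println("connected to the current primary")
	// After a failover this only helps once the client reconnects and
	// re-probes the hosts, which is the discovery delay mentioned above.
}
```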
Aren't the goals of t-digest a little bit different?
t-digest aims for a bounded size and an error proportional to q(1-q), so under load it gives up accuracy for quantiles in the middle of the distribution. This algorithm seems to provide a uniform error bound for all quantiles, at the cost of a small but unbounded size.
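A rough numeric illustration of the two error models (eps is an arbitrary accuracy parameter, not a figure from either paper): a q(1-q)-proportional bound is loosest at the median and tightest at the tails, while a uniform bound is the same everywhere.

```go
package main

import "fmt"

func main() {
	const eps = 0.01
	for _, q := range []float64{0.01, 0.25, 0.5, 0.75, 0.99} {
		// t-digest-style bound vs. a flat bound at the same eps.
		fmt.Printf("q=%.2f  proportional ~%.5f  uniform ~%.5f\n",
			q, eps*q*(1-q), eps)
	}
}
```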