I would say there are times when doing math with a primary key is useful (say, fetching the Nth record or thereabouts), but if you're exposing it in an API, I'd say you never want a primary key projected in the first place.
A primary key is almost an implementation detail: the key an API exposes for something is just one of many things that might point to it, might need to change, and might someday need a different representation (so don't make it your primary key).
I also tell people to just start at the bottom of the primary key space (when choosing monotonic IDs), but so many engineers complain that they don't like how the numbers look. Many of them have then had to deal with the migration a few years later, so... enjoy that, I guess.
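To make the "keep the primary key internal" point concrete, here's a minimal sketch. Everything here (the `User` record, `to_api`, the hex UUID as public ID) is a made-up illustration, not anyone's actual schema: the integer primary key stays private to the database, and the API only ever projects an opaque identifier.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class User:
    pk: int                 # internal primary key: free to change, renumber, or migrate
    name: str
    # opaque, stable identifier that the outside world sees
    public_id: str = field(default_factory=lambda: uuid.uuid4().hex)


def to_api(user: User) -> dict:
    # project only the public identifier; the primary key never leaves the system
    return {"id": user.public_id, "name": user.name}
```

If you ever need to re-key the table (sharding, merging databases, switching ID schemes), nothing a client holds has to change.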
Your claude.md (or equivalent) is the best way to teach them. At the end of any non-trivial coding session, I'll ask it to propose edits/additions to that file based on both the functional changes and the process we followed to get there.
That's not the end of the story, though. LLMs don't learn, but you can provide them with a "handbook" that they read in every time you start a new conversation with them. While it might take a human months or years to learn what's in that handbook, the LLM digests it in seconds. Yes, you have to keep feeding it the handbook every time you start from a clean slate, and it might have taken you months to get that handbook into the complete state it's in. But maybe that's not so bad.
The good thing about this process is that such a handbook functions as documentation for humans too, if properly written.
Claude is actually quite good at reading project documentation and code comments and acting on them. So it's also useful for encouraging project authors to write such documentation.
I'm now old enough that I need such breadcrumbs around the code anyway; I won't remember why I did things without them.
I see many people learning to use chatbots as practical tools without understanding the process that produces their output. They don't anticipate how that output will be shaped by every detail of their request. This is an attempt to bridge that conceptual gap.
This article is talking about single-writer, single-reader storage. I think it's correct in that context. Most of the hairy problems with caches don't come up until you're multi-writer, multi-reader.
I don't think there's a confident upper bound. I just don't see why it's self-evident that the upper bound is beyond anything we've ever seen in human history.
DSQL is genuinely serverless (much more so than "Aurora Serverless"), but it's a very long way from vanilla Postgres. Think of it more like a SQL version of DynamoDB.
It's a decent protocol, but it has shortcomings. I'd expect most future use cases for that kind of thing to reach for a content-defined chunking algorithm tuned towards their common file formats and sizes.
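For anyone unfamiliar with content-defined chunking, here's a toy sketch of the idea: split on positions where a rolling hash of the last few bytes hits a target pattern, so chunk boundaries follow content rather than fixed offsets (an insert near the start doesn't shift every later boundary). The window size, mask, and min/max sizes below are arbitrary placeholders, exactly the knobs you'd tune per file format; real systems use tuned algorithms like FastCDC, not this polynomial toy.

```python
WINDOW = 16               # rolling-hash window in bytes (arbitrary)
MASK = (1 << 11) - 1      # boundary when low 11 hash bits are zero (~2 KiB avg)
MIN_SIZE, MAX_SIZE = 256, 8192  # clamp chunk sizes (arbitrary)
M32 = 0xFFFFFFFF


def chunks(data: bytes):
    """Yield content-defined chunks of data; concatenation reproduces data."""
    start, h = 0, 0
    for i, b in enumerate(data):
        # polynomial rolling hash over the last WINDOW bytes, mod 2**32
        h = ((h << 1) + b) & M32
        if i >= WINDOW:
            # drop the byte that just left the window (its weight is 2**WINDOW)
            h = (h - (data[i - WINDOW] << WINDOW)) & M32
        size = i - start + 1
        # cut on a hash match once past MIN_SIZE, or force a cut at MAX_SIZE
        if (size >= MIN_SIZE and (h & MASK) == 0) or size >= MAX_SIZE:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]  # trailing remainder (may be shorter than MIN_SIZE)
```

Boundaries depend only on the local byte window, which is what makes dedup and sync protocols robust to insertions.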