It sounds like you're talking mostly about CRUD OLTP systems. Amazon in 1998 didn't actually have very many of those!
Consider instead what 1998 systems engineering looks like in the context of a Big Data OLAP data-warehouse (one where having denormalized replicas of it per service would cost multiples of your company's entire infra budget), where different services are built to either:
1. consume various reporting facilities of the same shared data warehouse, adding layers of authentication, caching, API shaping, etc., and then expose different APIs for other services to call. Think: BI; usage-based-billing reporting for invoice generation; etc.
2. abstract away Change Data Capture ETL of property tables from partners' smaller, more CRUD-y databases into your big shared data warehouse (think: product catalogues from book publishers), where the service owns an internal queue for robust, at-least-once, idempotent upserts into append-only DW tables.
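The at-least-once-plus-idempotent combination in case 2 is the load-bearing trick: redeliveries from the queue must collapse into no-ops. A minimal sketch of the idea, using SQLite as a stand-in for the DW and a plain list as the internal queue (all names here are hypothetical, not anyone's actual schema):

```python
import sqlite3

# Sketch: a CDC consumer draining an internal queue with at-least-once
# delivery. Idempotency comes from keying rows on (business key, version),
# so a redelivered change event upserts onto an existing row and is a no-op.

def make_dw(conn: sqlite3.Connection) -> None:
    # Append-only DW table: one row per (isbn, version); rows are never updated.
    conn.execute("""
        CREATE TABLE catalogue (
            isbn    TEXT NOT NULL,
            version INTEGER NOT NULL,
            title   TEXT NOT NULL,
            PRIMARY KEY (isbn, version)
        )
    """)

def apply_change(conn: sqlite3.Connection, change: dict) -> None:
    # INSERT OR IGNORE makes redelivery harmless: the (isbn, version)
    # row already exists, so the duplicate insert does nothing.
    conn.execute(
        "INSERT OR IGNORE INTO catalogue (isbn, version, title) VALUES (?, ?, ?)",
        (change["isbn"], change["version"], change["title"]),
    )

def drain(conn: sqlite3.Connection, queue: list[dict]) -> None:
    for change in queue:
        apply_change(conn, change)
    conn.commit()

conn = sqlite3.connect(":memory:")
make_dw(conn)
# At-least-once delivery: the same change event shows up twice.
queue = [
    {"isbn": "978-0", "version": 1, "title": "Dune"},
    {"isbn": "978-0", "version": 1, "title": "Dune"},  # redelivery
    {"isbn": "978-0", "version": 2, "title": "Dune (2nd ed.)"},
]
drain(conn, queue)
rows = conn.execute(
    "SELECT isbn, version, title FROM catalogue ORDER BY version"
).fetchall()
print(rows)  # duplicate delivery collapsed; both versions kept
```

The point is that correctness lives in the table's primary key, not in the queue: you never need exactly-once delivery if replays are structurally harmless.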
At scale, an e-commerce storefront is more like banking (everything is CQRS; all data needs to be available in the same place so that realtime(!) use-cases can be built on top of joining gobs of different tables together) than it is like a forum or an issue-tracker.
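To make the "everything is CQRS" claim concrete, here's a toy sketch (hypothetical names throughout): writes go through commands that append events, and a projection folds those events into a denormalized read model that queries hit directly, with no joins at read time.

```python
# Minimal CQRS sketch: the write side appends events to a log; the read
# side is a denormalized projection kept eagerly up to date, so queries
# are single lookups rather than joins.

events: list[tuple[str, dict]] = []   # the write side's event log
read_model: dict[str, int] = {}       # projection: units sold per SKU

def handle_command(sku: str, qty: int) -> None:
    # Command side: validate, then append an event and project it.
    if qty <= 0:
        raise ValueError("qty must be positive")
    event = ("item_sold", {"sku": sku, "qty": qty})
    events.append(event)
    project(event)

def project(event: tuple[str, dict]) -> None:
    # Query side: fold the event into the read model.
    kind, body = event
    if kind == "item_sold":
        read_model[body["sku"]] = read_model.get(body["sku"], 0) + body["qty"]

handle_command("B000-DUNE", 2)
handle_command("B000-DUNE", 3)
print(read_model["B000-DUNE"])  # 5
```

In a real storefront the projection would be a materialized table fed asynchronously, but the shape is the same: one write path, many purpose-built read paths.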
There's a reason Amazon was the company to define the Dynamo architecture: their DW got so big it couldn't live on any one vertically-scaled cluster, so they had to transpose it all into a denormalized, horizontally-sharded key-value store (and do all the joins at query time) to keep those Big Data use-cases going!
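"Joins at query time" in a KV world means something very different from SQL: each former table becomes a keyspace, and a join becomes a fan-out of point reads stitched together in application code. A sketch with a dict standing in for the KV store (keys and fields are illustrative, not any real schema):

```python
# Sketch of the "joins move into the application" pattern: three
# keyspaces in one key-value store, and what would have been a
# three-way relational join done as three point lookups.

kv: dict[str, dict] = {
    "order:1001": {"customer_id": "c1", "sku": "B000-DUNE", "qty": 2},
    "customer:c1": {"name": "Paul"},
    "product:B000-DUNE": {"title": "Dune", "price_cents": 999},
}

def get_order_view(order_id: str) -> dict:
    # Three O(1) point reads replace one SQL join; the "join logic"
    # (which keys to follow) now lives in the service, not the database.
    order = kv[f"order:{order_id}"]
    customer = kv[f"customer:{order['customer_id']}"]
    product = kv[f"product:{order['sku']}"]
    return {
        "customer": customer["name"],
        "title": product["title"],
        "total_cents": product["price_cents"] * order["qty"],
    }

view = get_order_view("1001")
print(view)  # {'customer': 'Paul', 'title': 'Dune', 'total_cents': 1998}
```

The trade is explicit: you give up ad-hoc joins and get keyspaces that can each shard horizontally without coordinating with the others.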