Maybe this post wasn't the right one for your comment, hence the downvotes.
But I find it intriguing. Do you mean architecting software so that LLMs can modify and extend it? Having more of the overall picture in one place (shallow monoliths) and lots of helper functions and modules to keep code length down? I.e., optimising for the input and output context windows?
LLMs are very good at first order coding. So, writing a function, either from scratch or by composing functions given their names/definitions. When you start to ask it to do second or higher order coding (crossing service boundaries, deep code structures, recursive functions) it falls over pretty hard. Additionally, you have to consider the time it takes an engineer to populate the context when using the LLM and the time it takes them to verify the output.
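To make "composing functions given their names/definitions" concrete, here's a made-up sketch (the function names and bodies are purely illustrative, not from any real codebase): the model is shown only the signatures and asked to write the glue.

    # Hypothetical example: an LLM given only the names/signatures of
    # fetch_user and format_address can reliably write shipping_label.
    def fetch_user(user_id: int) -> dict:
        # Stand-in implementation so the sketch runs.
        return {"name": "Ada", "address": {"street": "1 Main St", "city": "Springfield"}}

    def format_address(address: dict) -> str:
        return f"{address['street']}, {address['city']}"

    # The kind of first-order glue that LLMs handle well: no service
    # boundaries, no deep structure, just straightforward composition.
    def shipping_label(user_id: int) -> str:
        user = fetch_user(user_id)
        return f"{user['name']} -- {format_address(user['address'])}"

    print(shipping_label(42))

Second-order work (changing fetch_user's contract across services, or restructuring how these modules relate) is where it tends to fall over.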
LLMs can unlock incredible development velocity. For things like creating utility or helper functions and their unit tests at the same time, an engineer using an LLM will easily 10x an equally skilled engineer not using an LLM. The key is to architect your system so that as much of it as possible can be treated this way, while not making it indecipherable for humans.
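As a hypothetical illustration of that "helper plus its tests in one pass" pattern (the helper here is invented for the example, not taken from the thread):

    # A small, self-contained utility and its unit tests, the sort of
    # prompt-sized unit an LLM can produce in one shot and an engineer
    # can verify in seconds.
    import unittest

    def chunk(items: list, size: int) -> list:
        """Split a list into consecutive chunks of at most `size` elements."""
        if size <= 0:
            raise ValueError("size must be positive")
        return [items[i:i + size] for i in range(0, len(items), size)]

    class ChunkTests(unittest.TestCase):
        def test_even_split(self):
            self.assertEqual(chunk([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

        def test_remainder(self):
            self.assertEqual(chunk([1, 2, 3], 2), [[1, 2], [3]])

        def test_empty(self):
            self.assertEqual(chunk([], 3), [])

        def test_bad_size(self):
            with self.assertRaises(ValueError):
                chunk([1], 0)

    if __name__ == "__main__":
        unittest.main()

The verification cost is low precisely because the function is first-order and the tests ship alongside it.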
This is a temporary constraint. Soon the maintenance programmers will use an AI to tell them what the code says.
The AI might not reliably be able to do that unless it is in the same "family" of AIs that wrote the code. In other words, analogous to the situation today where choice of programming language has strategic consequences, choice of AI "family" with which to start a project will tend to have strategic consequences.