I'd call it the principle of invariance of compost piles: no matter how long a compost pile is stirred or soaked, what comes out of it is still compost.
I must remind you that large language models are not designed to perform arithmetic calculations, nor have they been trained to do so. They are trained to recognize patterns in large amounts of text data and to generate responses based on that learned information. While they may not be able to perform certain specific tasks, they can still provide useful information and insights across a wide range of applications. Judging the quality of *language* models by their inability to do basic math is completely unfair.
A model that stumbles on simple math,
Lacks the skill, it's on the wrong path.
Bound by its training, it mimics and squawks,
Stochastic parrot, in its nature it's locked.
As true parrots learn, this one falls short,
Foundational limits no lesson can thwart.
To grow and adapt, a new training must come,
For only through learning can mastery be won.