At least for the first point: C has been used extensively with non-uniform storage. Back in the DOS days we had memory models (small, large, huge, etc...), and today it's used to program all sorts of small microcontrollers. A common one I occasionally use is AVR, which has distinct address spaces for code and data memory - which means the function to print a string variable is very different from the one used to print a string constant. This makes programs rather ugly, but things generally work.
As for your parallelism idea... well, every computer so far has a fixed number of execution units; even your latest 16384-core GPU still has every core perform sequential operations. And that's roughly what C's model is: it programs execution units. And it definitely hasn't stopped designers from innovating - completely different execution models like FPGAs exist, and there is constant innovation in programming languages.
> At least for the first point: C has been used extensively with non-uniform storage
And the results are awful. You are confusing doing something with doing it well. The fact that plenty of people cook frozen pizza at home doesn't make frozen pizza a good pizza.
> And it definitely hasn't stopped designers from innovating
And this is where you are absolutely wrong. We have hardware designs twisted beyond belief only so that they remain usable with C's concept of a computer, while obviously simpler and more robust solutions are discarded as non-viable. Just look at the examples I gave: CUDA developers had to write their own compiler to work around the lack of necessary tools in C. We also got OpenMP and MPI because C sucks so much at parallelism that the language has to be extended to deal with it.
And it isn't some sort of hindsight, where at the time of writing things like different memory providers were inconceivable. Ada came out with the concept of non-uniform memory access baked in. Similarly, Ada came out with the concept of concurrency baked in. It was already obvious then that these are essential bits of systems programming.
C was written by people who were lazy, uninterested in learning from their peers, and overly self-confident. And now we've got this huge pile of legacy trash that's very hard to replace, and people like you who are so used to it that they resist its removal.
You are very confidently making some wild statements that seem to rest on the assumption that just because something isn't specified in a given place, it couldn't be specified somewhere else. That assumption is wrong.