> Because... adding levels below C and conventional assembler still leaves C exactly as many levels below "high level" language as it was before. And if there's a "true low level language" for today, I'd like to hear about it. The same sorts of programmers use C as when it was a low level language, and the declaration doesn't even give any context, doesn't even bother to say "anymore". Yeah, I'm sick of it.
Not really. For many purposes, C is not any more low-level than a supposedly "higher level" language. 20 years ago one could argue that it made sense to choose C over Java for high-performance code because C exposed the low-level performance characteristics that you cared about. More concretely, you could be confident that a small change to C code would not result in a program with radically different performance characteristics, in a way that you couldn't be for Java. Today that's not true: when writing high-performance C code you have to be very aware of, say, cache line aliasing, or whether a given piece of code is vectorisable, even though these things are completely invisible in your code and a seemingly insignificant change can make all the difference. So to a large extent writing high-performance C code today is the same kind of programming experience (heavily dependent on empirical profiling, actively counterintuitive in a lot of areas) as writing high-performance Java, and choosing to write a program with extreme performance requirements in C rather than Java because it's easier to control performance in C is likely to be the wrong tradeoff.
C has aged better than Java, though. While Java still pretty much assumes that a memory access is cheap relative to CPU performance, as it was in the mid-90s, C at least gives you enough control over memory layout to adapt a program to the growing CPU/memory performance gap with some minor changes.
In Java and other high-level languages which hide memory management, you're almost entirely at the mercy of the VM.
IMHO "complete control over memory layout and allocation" is what separates a low-level language from a high-level language now, not how close the language-level instructions are to the machine-level instructions.
What popular "high level" languages might we consider? Scanning various lists, you have bytecode-based languages (Java, the .NET languages, etc.), the various scripting languages (Python, Ruby, Perl, etc.), languages compiled to an existing VM (Scala, Elixir, etc.), and extensions to C (C++, Objective-C). It seems all of those either build on the C memory model with extensions, or run on a VM that doesn't provide the same control over memory allocation as C does.
But the argument in this thread is about whether something is eventually lower-level than C, right? C++, Objective-C, D and friends are "high-low": they provide higher-level structure on top of the basic C model. In most conceptions that puts them higher than C, but we can put them at the same level if we want; hence the "high-low" usage, which is common, I didn't invent it.
Basically, the flat memory model that C assumes is also what the optimization facilities in these other languages ultimately grant you. Modern CPUs emulate this model and deviate from it in two ways: some memory accesses take longer than others, and the hardware has bugs. But neither of these things is a reason for the programmer not to use this model normally; it's a reason to be aware, add "hints", choose modes, etc. (though it's better if the OS does that).
And maybe different hardware could expose a different, more overt sort of memory. BUT the C programming language is actually not a bad way to manipulate mixed memory, so multiple memory types wouldn't particularly imply "ha, no more C now". A lot of this is cache, though, and programmers manipulating cache directly seems like a mistake most of the time. But GPUs? Nothing about GPUs implies no more C (see CUDA; OpenGL, C++? fine).
.NET-based languages include C++ as well, and .NET has had AOT compilation to native code in multiple forms for ages.
The latest versions of C# and F# also make use of the MSIL capabilities used by C++ on .NET.
Then if we move beyond those, into AOT-compiled languages with systems programming capabilities still in use in some form: D, Swift, FreePascal, RemObjects Pascal, Delphi, Ada, Modula-3, Active Oberon, ATS, Rust, Common Lisp, NEWP, PL/S, structured Basic dialects; more could be listed if we went into more obscure languages.
C is neither the genesis of systems programming, nor did it provide anything that wasn't already available elsewhere, other than easier ways to shoot yourself in the foot.
It is literally impossible to write any reasonable high-performance software in Java. (Yes, I've worked with Java devs who thought they had written high-performance software, but they had no point of reference.) This is due, among other things, to the way modern CPUs implement caching, and the way Java completely disregards this by requiring objects to be graphs of randomly allocated blobs of memory. A language that allows for locality of access can easily be an order of magnitude faster, and with some care, two orders of magnitude.
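To make the layout difference concrete, here's a small sketch (hypothetical `Point` class and names): an array of Java objects is an array of pointers to separately allocated header-plus-fields blobs, while parallel primitive arrays give you the contiguous layout a C struct array would. The arithmetic is identical; only the memory layout differs.

```java
class Point {
    double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
}

class Locality {
    public static void main(String[] args) {
        int n = 1000;

        // Idiomatic Java: n separately allocated objects; iterating chases
        // a pointer per element, with no locality guarantee from the JVM.
        Point[] objects = new Point[n];
        for (int i = 0; i < n; i++) objects[i] = new Point(i, 2.0 * i);

        // Data-oriented alternative: two contiguous primitive arrays,
        // roughly the layout a C struct-of-arrays would give you.
        double[] xs = new double[n], ys = new double[n];
        for (int i = 0; i < n; i++) { xs[i] = i; ys[i] = 2.0 * i; }

        double s1 = 0, s2 = 0;
        for (Point p : objects) s1 += p.x + p.y;
        for (int i = 0; i < n; i++) s2 += xs[i] + ys[i];
        System.out.println(s1 == s2); // same numbers, very different layout
    }
}
```

On a large working set, the second loop walks memory sequentially and is the one the prefetcher and cache lines reward.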
It's "literally possible" to do this with Unsafe, and has been for a long time. You get a block of memory and its base address, then you put things in it.
Just because it's not "idiomatic Java style" doesn't mean it's not Java. You might do this for the parts that really need hand-tuned performance, then rely on the JVM/ecosystem for the parts that don't need it.
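For what it's worth, a minimal sketch of the Unsafe approach. Note that `sun.misc.Unsafe` is an unsupported API (newer JDKs warn about it and are moving to restrict it), so treat this as illustrative rather than recommended:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

class UnsafeDemo {
    public static void main(String[] args) throws Exception {
        // The singleton is private; grabbing it reflectively is the usual trick.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // One flat 16-byte block: no object headers, no GC, layout is yours.
        long base = unsafe.allocateMemory(16);
        unsafe.putLong(base, 42L);
        unsafe.putLong(base + 8, 7L);
        System.out.println(unsafe.getLong(base) + unsafe.getLong(base + 8)); // prints 49

        unsafe.freeMemory(base); // manual lifetime, exactly like C's free()
    }
}
```

The supported successors to this pattern are `ByteBuffer.allocateDirect` and, more recently, the Foreign Memory API, which give similar off-heap control without the reflection hack.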
Java forces you to use profiling; at least with C you can see the exact instructions your compiler outputs. Missing the fancy vector instructions? Modify your code until you can guarantee it's vectorized. With Java you are at the mercy of the JVM to do the right thing at runtime.
Not that I disagree with what you're saying, but I thought you'd find it interesting: you can dump the JIT assembly from the HotSpot JVM pretty readily to make sure things like inlining are happening as you'd expect.
You can also view the entire compiler internals in a visual way using IGV (the Ideal Graph Visualizer). You can actually get much better insight into how your code is getting compiled on a JVM like GraalVM than with a C compiler.
However, I will admit that this is very obscure knowledge.
The instructions no longer tell the whole story though. Maybe you can tell whether your code is vectorised, but you can't tell whether your data is coming out of main memory or L1 cache, and to a first approximation that's the only thing that matters for your program's performance.
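A rough sketch of the point (names and sizes are illustrative): the loop below compiles to the same instructions in both runs; only the order of the indices, and therefore the memory-access pattern, differs. The instructions can't tell you which run hits L1 and which one goes to main memory.

```java
import java.util.Random;

class CacheDemo {
    // Identical loop body for both runs: same instructions,
    // only the access pattern over 'data' changes.
    static long sum(int[] data, int[] idx) {
        long s = 0;
        for (int i : idx) s += data[i];
        return s;
    }

    public static void main(String[] args) {
        int n = 1 << 22; // ~16 MB of ints, much larger than typical caches
        int[] data = new int[n];
        int[] seq = new int[n];
        int[] rnd = new int[n];
        for (int i = 0; i < n; i++) { data[i] = i; seq[i] = i; rnd[i] = i; }

        // Fisher-Yates shuffle to defeat the hardware prefetcher.
        Random r = new Random(42);
        for (int i = n - 1; i > 0; i--) {
            int j = r.nextInt(i + 1);
            int t = rnd[i]; rnd[i] = rnd[j]; rnd[j] = t;
        }

        long t0 = System.nanoTime();
        long a = sum(data, seq);
        long t1 = System.nanoTime();
        long b = sum(data, rnd);
        long t2 = System.nanoTime();

        // Same answer either way; the shuffled walk is typically several
        // times slower because it misses cache on nearly every access.
        System.out.println(a == b);
        System.out.println("sequential " + (t1 - t0) / 1_000_000 + " ms, "
                + "shuffled " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

The exact ratio depends on the machine, but the gap is invisible in both the source and the generated assembly, which is the parent's point.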