This seems to be missing the point. Sometimes users see error messages. Sometimes they're good, sometimes they're bad; and yeah, software engineers should endeavor to make error behaviors graceful, but of all the not-perfect things in this world, error handling is one of the least perfect, so users do, unfortunately, encounter ungraceful errors.
In that case (and even sometimes in the more "graceful" cases), we don't always expect the user to know what an error message means.
> Reading C++ for dummies even though I had untreated ADHD and couldn’t sit still long enough to get much past std::cout.
You may have lucked out. I also didn't get terribly far in that book, but I thought it was fairly weird when I tried to read it, and after majoring in CS in college and eventually reading some very good books on programming, I believe I was entirely justified in not liking that one.
Like a lot of blog posts, this reads as a premise worth exploring, but without any critical exploration of that premise.
Yes, "inevitabilism" is a thing, both in tech and in politics. But, crucially, it's not always wrong! Other comments have pointed out examples, such as the internet in the 90s. But when considering new cultural and technological developments that seem like a glimpse of the future, how do we know if they're an inevitability or not?
The post says:
> what I’m most certain of is that we have choices about what our future should look like, and how we choose to use machines to build it.
To me, that sounds like mere wishful thinking. Yeah, sometimes society can turn back the tide of harmful developments; for instance, the ozone layer is well on its way to complete recovery. Other times, even when public opinion is mixed, such as with bitcoin, the technology does become quite successful, but doesn't seem to become quite as ubiquitous as its most fervent adherents expect. So how do we know which category LLM usage falls into? I don't know the answer, because I think it's a difficult thing to know in advance.
If 20% of people really think they'd be better off as factory workers, that's actually kind of a lot. Can you imagine if 20% of the working population really did work in factories? That's an enormous number.
The "right way" would be CPU vendors to support that standard. But I have thought about running a 64bits RISC-V interpreter on x86_64 (Mr Bellard, ffmpeg, tinycc, etc, wrote a risc-v emulator which could be as a based for that), and that in the kernel. Basically, you would have RISC-V assembly for x86_64 arch: at least, RISC-V here would be stellar more robust and stable that all the features creeps we have in the linux kernel because of the never ending gcc extensions addition and latest ISO C tantrums...
So the "right way" is to replace all hardware with new hardware, and the second-best solution is for CISC systems to emulate a specific RISC architecture? And you think this will be more maintainable, performant, etc? Do you have even a shred of evidence that this makes any sense at all, beyond "RISC is a good standard"?
It would be a slow process. Of course you would still need RISC-V support in current compilers for legacy software.
RISC-V is a modern, "good enough" ISA with a "good balance"; everything is a trade-off, hence "perfection" does not make any sense. What is really different with RISC-V: it is already here, moving forward, and is worldwide PI lockfree (unlike x86_64 and arm). Of course, without extremely performant implementations (micro-architecture and silicon process) across the board (server/mobile/workstation/"embedded"), it will probably fail.
And I do believe we could get a very good middle ground with very-high-level language interpreters (python/lua/etc.) coded directly in RISC-V assembly.
And I am thinking about RISC-V... as a computer language with some compilerS (not JIT). I may investigate how much out-of-the-box-thinking and disruptive this is, hopefully soon enough.
Nobody said anything about RISC-V being "perfect" or not. The problem isn't how good RISC-V is or isn't; it's that your desire for software to target one and only one type of hardware just doesn't make any sense. That's not how computers have ever worked.
By the way, what do you mean by "PI lockfree"? Googling "ISA PI lockfree" just leads me to...another hacker news thread where you're arguing that RISC-V should replace everything.
Anyway, yes, please do "investigate how much out-of-the-box-thinking and disruptive this is" before continuing to have these inane arguments.
Any good Vim-emulator extension has macro support. VSCode also has an extension that lets you run the actual neovim server to manage your text buffer.
The settings GUI in VSCode is just an auto-generated layer over raw JSON files. You can even configure it to skip the GUI and open the JSON files directly when you open settings.
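If I remember right (double-check the exact key in your VSCode version), the relevant entry in settings.json is along these lines:

    {
        // Open the raw JSON editor instead of the generated settings UI.
        "workbench.settings.editor": "json"
    }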
> Blocking I/O executed on another thread, with a callback to execute when done, becomes async I/O (from the user's PoV).
That's not what we're talking about when we discuss languages with async I/O, though. That's just bog-standard synchronous I/O with multithreading.
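To make the distinction concrete, here's a rough sketch of the pattern the parent describes, in Python purely for illustration (read_file and on_done are made-up names): the read still blocks, it just blocks a worker thread, and a callback fires when it finishes.

    from concurrent.futures import ThreadPoolExecutor

    def read_file(path):
        # A plain blocking read -- it ties up whichever worker thread runs it.
        with open(path, "rb") as f:
            return f.read()

    def on_done(future):
        # Runs once the blocking read completes.
        print(f"got {len(future.result())} bytes")

    executor = ThreadPoolExecutor(max_workers=4)
    future = executor.submit(read_file, "/etc/hostname")
    future.add_done_callback(on_done)   # "async" only from the caller's point of view
    executor.shutdown(wait=True)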
> The read/write operations are still potentially blocking, so for efficiency you need multiple threads.
That doesn't actually follow. The entire point of language-level async I/O is to be able to continue doing other work while waiting for the kernel to finish an I/O operation, without spawning a new OS thread just for this purpose.
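Compare that with actual async I/O, where a single thread keeps many operations in flight and the waiting is delegated to the kernel (epoll/kqueue/IOCP and friends under the hood). A rough asyncio sketch, assuming the example.* hosts are reachable, just to show the shape:

    import asyncio

    async def fetch_status(host):
        # Connecting and reading suspend this coroutine instead of blocking a thread.
        reader, writer = await asyncio.open_connection(host, 80)
        writer.write(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        await writer.drain()
        status = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return host, status.decode().strip()

    async def main():
        # Three requests in flight at once, all on a single OS thread.
        hosts = ["example.com", "example.org", "example.net"]
        for host, status in await asyncio.gather(*(fetch_status(h) for h in hosts)):
            print(host, status)

    asyncio.run(main())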