My recollection is that ASIC resistance involves using lots of scratchpad memory and mixing multiple hashing algorithms, so that a dedicated chip would have to spend a lot of silicon and/or bottleneck hard on external RAM. I think the same would hurt FPGAs too.
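A toy sketch of what I mean (my own illustration, not any particular coin's algorithm): fill a big scratchpad, then do data-dependent reads that bounce around it while alternating hash primitives. All names and sizes here are made up for illustration.

```python
import hashlib

SCRATCHPAD_WORDS = 1 << 16          # toy size; real designs use far larger pads

def toy_memory_hard_hash(data: bytes, rounds: int = 100_000) -> bytes:
    # 1. Fill the scratchpad deterministically from the input.
    state = hashlib.sha256(data).digest()
    pad = []
    for _ in range(SCRATCHPAD_WORDS):
        state = hashlib.sha256(state).digest()
        pad.append(state[:16])

    # 2. Data-dependent walk: each read location depends on the running state,
    #    so there is nothing to prefetch -- the chip either holds the whole pad
    #    in fast memory or stalls on external RAM.  Alternating two primitives
    #    per round makes a single fixed-function pipeline less attractive.
    acc = state
    for i in range(rounds):
        idx = int.from_bytes(acc[:4], "little") % SCRATCHPAD_WORDS
        h = hashlib.sha256 if i % 2 == 0 else hashlib.blake2s
        acc = h(acc + pad[idx]).digest()
        pad[idx] = acc[:16]         # write back so the pad keeps evolving
    return acc
```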
I'm pretty sure it's mathematically guaranteed (by a counting/pigeonhole argument) that any lossless compressor has to be bad at compressing something. You can't compress data below its entropy, so totally random bytes (where entropy ≈ size) will, with high probability, not compress at all, unless identifiable patterns appear by sheer coincidence. Once the compressor has established that the data is incompressible, the least bad option is to signal the decompressor to reproduce the data verbatim, without any compression. But the compressor increases the size of the data by however it encodes that signal. Therefore there is always some input that causes a compressor to produce a larger output, even if only by a minuscule amount.
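A minimal sketch of that "escape to verbatim" idea, with a made-up one-byte header and a deliberately naive RLE standing in for the real compressor; on random input the output is almost always exactly one byte longer than the input:

```python
import os

def compress(data: bytes) -> bytes:
    # Deliberately naive run-length encoding: (run length, byte) pairs.
    rle = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        rle += bytes([run, data[i]])
        i += run
    if len(rle) < len(data):
        return b"\x01" + bytes(rle)   # header 0x01: RLE-compressed payload
    return b"\x00" + data             # header 0x00: stored verbatim (+1 byte)

def decompress(blob: bytes) -> bytes:
    if blob[0] == 0x00:
        return blob[1:]
    out = bytearray()
    payload = blob[1:]
    for j in range(0, len(payload), 2):
        out += bytes([payload[j + 1]]) * payload[j]
    return bytes(out)

blob = os.urandom(1000)              # random bytes: essentially incompressible
packed = compress(blob)
assert decompress(packed) == blob
print(len(blob), "->", len(packed))  # almost certainly prints 1000 -> 1001
```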
> What I can imagine is a purpose-built CPU that would make the JIT's job a lot easier and faster than compiling for x86 or ARM. Such a machine wouldn't execute raw Java bytecode, rather, something a tiny bit more low-level.
This is approximately exactly what Azul Systems did: a bog-standard RISC with hardware GC barriers and transactional memory. Cliff Click gave an excellent talk on it [0] and makes your argument around 20:14.
In the early 1990s, HP had a product called “SoftPC” that was used to emulate x86 on PA-RISC. IIRC, however, this was an OEM product written externally. My recollection of how it worked was similar to what is described in the Dynamo paper. I’m wondering if HP bought the technology and whether Dynamo was a later iteration of it? Essentially, it was a tracing JIT. Regardless, all these ideas ended up morphing into Rosetta (versions 1 and 2), though as I understand it, Rosetta also uses a couple hardware hooks to speed up some cases that would be slow if just performed in software.
That wasn’t an HP product. It was written by Insignia Solutions and ran on multiple platforms.
I had it on my Mac LC II in 1992. It barely ran well enough to run older DOS IDEs for college. Later I bought an accelerator (40 MHz 68030) and it ran better.
IIRC, I had that on my Atari ST as well, and it very slowly booted DOS 3.3 and a few basic programs... enough for me to use Turbo C or Watcom C to compile a basic .c program to display a .pcx file.
> but JavaScript totally missed the boat on efficient compile-ability, which is the most interesting thing about Self
That's leaning heavily on hindsight, though: the creators of Self didn't think it would run fast, until it did [0]. The HOPL paper on Self [1] spends many words recounting the challenge of making Self fast.
[0] This is arguably a stronger claim than what appears in HOPL; I think it's from a talk by Dave Ungar, I'd have to check.
> And at the time, we thought it was impossible to make this language run efficiently, because it did all these things that were more abstract than languages of the time ...
If you want vectors, use vectors; Elisp has them as primitives too. (I don't mean to suggest you don't know that, but still, you can just use vectors.)
I was writing from the perspective of a Lisp implementer, so it's not a question of whether I can just use vectors, but of how existing code represents a sequence; it's part of the runtime's job to make that as fast as possible.
From what I see in existing ELisp code (at least in the Emacs codebase), the idiomatic representation of sequences (fixed-size or not) is cons lists. That's not surprising: Emacs vectors are fixed-size, which makes them inflexible and suitable only for a few things. This matters because, if you want Emacs to compete with VSCode on performance, you eventually end up comparing how idiomatic code performs. Granted, how cons lists affect performance in real-world ELisp code remains unknown because it hasn't been benchmarked, but exposing the internals of idiomatic lists as conses does pose challenges for an implementer aiming for further optimizations.
Emacs Lisp vectors being fixed-size seems like an easily fixable problem. More functions can be defined to do useful things with vectors, including resizing mutations. If it is important to keep the existing type produced by make-vector fixed-size, a separate growable variant can be introduced, with the resizing functions blowing up if applied to the fixed-size type.
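For what it's worth, here's a neutral sketch (in Python rather than Elisp, just to show the shape of the idea) of the usual way a growable variant gets layered on top of fixed-size vectors: a fill count plus a backing buffer that is reallocated with doubling. The names are invented for illustration and aren't Emacs APIs.

```python
class GrowableVector:
    def __init__(self, capacity: int = 4):
        self._buf = [None] * capacity   # stand-in for a fixed-size vector
        self._len = 0                   # fill count

    def push(self, item):
        if self._len == len(self._buf):
            # Allocate a new fixed-size vector twice as large and copy over;
            # doubling gives amortized O(1) appends despite fixed-size storage.
            new_buf = [None] * (2 * len(self._buf))
            new_buf[:self._len] = self._buf
            self._buf = new_buf
        self._buf[self._len] = item
        self._len += 1

    def __getitem__(self, i):
        if not 0 <= i < self._len:
            raise IndexError(i)
        return self._buf[i]

    def __len__(self):
        return self._len

v = GrowableVector()
for i in range(10):
    v.push(i * i)
print(len(v), v[9])   # 10 81
```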
The LC-3 has pretty odd addressing modes - in particular, you can do a doubly indirect load through a PC-relative word in the middle (LDI). But you still have to synthesize subtraction from negation, and negation from NOT followed by ADD d,d,#1. (I suppose NOT d,s = XOR d,s,#-1 would be a better use of the limited instruction encoding space too.)
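In case it helps, a quick check of the identities behind those sequences, in 16-bit wraparound arithmetic like the LC-3's (plain Python, not LC-3 code):

```python
MASK = 0xFFFF

def not16(x):       # LC-3 NOT, equivalently XOR with all-ones
    return (x ^ MASK) & MASK

def neg16(x):       # negation = NOT then ADD #1 (two's complement)
    return (not16(x) + 1) & MASK

def sub16(a, b):    # subtraction = add the negation
    return (a + neg16(b)) & MASK

for a, b in [(5, 3), (0, 1), (0x8000, 0x7FFF), (1234, 4321)]:
    assert sub16(a, b) == (a - b) & MASK
```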