It runs, but it would be very slow on actual hardware.
I tried it on a cycle-accurate emulator of a TRS-80 Model I with the Omikron CP/M mapper. Most Z-80 machines of the time ran at 4 MHz, but the TRS-80 was only 1.77 MHz.
1. Type "GUESS", get question prompt.
2. User types: "Are you an animal?", ENTER key
3. Wait 25 seconds
4. Program prints "N"
5. Wait 20 seconds
6. Program prints "O"
7. Wait 23 seconds
8. Program prints linefeed, returns to question prompt
Total time to return a 2-character answer to the user's question: about 1 min 9 sec. I bet a longer answer would take proportionally longer.
"The wonder isn't that it does it well, it's a wonder it does it at all."
Though it'll still be kinda slow on a Model I, I've written Z-80 code for the network evaluation that's about 9 times faster. I imagine the pull request will end up in the main depot, but for now you can find it at https://github.com/gp48k/z80ai
I think I can do a little bit better; maybe 10% faster.
Well, I was pessimistic. I just pushed an update that slightly more than doubles the execution speed, with a PR to the main depot pending. It's very close to 20 times faster than the original.
The version with the external 5 x 64K memory is definitely in the Osborne-1 disk drawer.
But the Osborne doesn't work. It was so heavily modified that it will never work again: the external memory was partially visible in the memory space, and the display driver was upgraded to 80 columns.
Now that I think about it, it might be possible for it to work in both configurations. The external memory was kinda bulky and not really suitable for a portable computer.
It would have been wasteful to keep the main memory empty. Or maybe it was reserved for compiled functions?
Wildberger has always been this way. Way back in 2007, Mark Chu-Carroll's "Good Math, Bad Math" highlighted Wildberger: "This isn’t the typical wankish crackpottery, but rather a deep and interesting bit of crackpottery." In brief, Wildberger is clearly educated, but also clearly rejects axioms that mathematicians accepted long ago (in this case, infinite sets):
An interesting point about Bresenham's algorithm is made by David Schmenk (dschmenk) on his "Bresen-Span" page:
"Take note that the algorithm can be viewed as the long division of delta-major/delta-minor. The error term is really the running remainder, and every step results in a pixel along the major axis until the division completes with a remainder. The division restarts by moving along the minor axis and adding the dividend back in to the running remainder (error term). This is a bit of a simplification, but the concept is that the long division will only result in two integral spans of pixels, depending on the value of the running remainder (error term). We will take this in to account to write a routine that outputs spans based on the two span lengths: a short-span and a long-span."
In other code, dschmenk does use DDA for anti-aliased lines.
The nice thing about 70s-80s computer magazines (and even some books) on archive.org is the relative lack of copyright concern: they're just out there without sign-on and checkout protection. Especially the ones for the 8-bit machines. You can find almost all the old magazines for those machines freely available, and no copyright concerns when people upload more. Even though it's still 50 years before they're public domain, in the computer world they're just "too old to worry about."
With one exception: there are absolutely no old issues of the Apple II magazine "Call A.P.P.L.E." (Apple PugetSound Program Library Exchange) anywhere online. The reason is that the group decided to keep the business going. The only place you can get those old issues is the official callapple.org website, for the price of a subscription. Too bad, because there are old issues I'd love to read.
Don Lancaster (outside of Apple) did that. In fact, he ignored the Mac and connected a LaserWriter directly to his Apple II, and programmed in straight PostScript. Used that language the rest of his life. All the PDFs on his site were hand-crafted.
I don't know enough music to tell if this is insightful, or just neat pattern-matching.
A few months ago, mathematician John Baez had a series on the mathematics of various temperaments and keys. He certainly knows his math, but he also knows music, thanks to being a member of a rather famous musical family. (More math in the second link.)
One of the first C compilers for CP/M was BDS C. Its claim to fame was that it compiled the source in memory, so at least that part was nice and fast.
Certainly compared to Whitesmiths C for CP/M, and not just for the $700 price vs. $150 for BDS C. Whitesmiths was real, official C, direct from P. J. Plauger and V6 Unix. But each compile went through many, many, many passes on the poor floppy (including a pseudo-assembly for the 8080 called "A-Natural" that was then translated to real assembly). Everybody complained that while it was very professional, it just took too long to get through the edit-compile cycle.
The contemporary BYTE recommendation was to develop and iterate with BDS C, then recompile with Whitesmiths at the end to squeeze out the best performance.
Which, come to think of it, maps quite naturally onto the combined JIT/AOT toolchains most compiled languages have 40 years later.
Pity that only Visual C++ seems to come anywhere close to Energize C++ and VisualAge for C++ v4 for that kind of incremental development experience; Live++ and ROOT aren't that widespread.
D has a similar approach: use dmd for development, ldc or gdc for release.
As other commenters have said, C didn't actually generate fast programs for 8-bit processors, or even 16-bit processors for a long time. C is a poor fit for most of them, so assembly language was the only way to go.
A contemporary source is the opinionated "DTACK Grounded" newsletter from 1981-1985: http://www.easy68k.com/paulrsm/dg/ Hal Hardenbergh raved about the fast 68000 chip and its wonderfully easy assembly, but lamented that everyone switched to "portable" Pascal and C to write 16-bit programs, so they seemed even slower than the 8-bit ones. His favorite example was a direct comparison: Lotus 1-2-3, written in 8088 assembly, vs. Context MBA, with the same features but written in Pascal for portability. 1-2-3 was MUCH faster than Context on the PC, and no one remembers Context today. Or the $16,000 Unix-based AT&T workstation whose floating-point benchmarks were beaten by a $69 VIC-20. (Obviously due to the C-written runtime, which even followed the C standard of promoting all single-precision calculations to double, so single was no faster!)
His opinion of C was "slightly disguised PDP-11 assembly". Not too bad a fit for the 68000, but a terrible one for the 8088 or Z80.