
This is very much in line with what Alan Kay says about current chip architectures compared to what was available when he was coming up in the field. He often talks about the Burroughs machines and how much more advanced they were compared to our current CPUs, and laments that, for all the gains Moore's law has given us, we have lost incredible amounts of speed through architectures aimed solely at C.

One anecdote he likes to use compares the speed of Smalltalk running on the Xerox Alto with Smalltalk running on a current CPU that is 50,000x faster than the Alto. He notes that benchmarks run on both systems come out only 50x faster on the modern one, and claims this means we've lost a factor of 1000x in efficiency purely on the basis of using inferior architectures (inferior, at least, if your target language isn't C).

Part of me is thankful for the relentless push of x86 and the speed gains realized, but another part of me really regrets that all of the crazy architectures of the '70s and '80s have been lost.



The 1000x figure is probably an overstatement, as is the 50,000x figure.

The Alto's main memory had a cycle time of about 850 nsec, and could transfer 2 16-bit words per cycle: http://www.computer-refuge.org/bitsavers/pdf/xerox/parc/tech....

This gives a main memory bandwidth of roughly 5 MB/sec. A top-end single CPU system today has probably 25 GB/sec available to it, a factor of 5,000 more. Moreover, much of that is achieved through optimizing burst reads--actual sustained random access throughput is going to be much lower and the delta much less.

Given modern implementation techniques, the actual efficiency loss is probably on the order of 10x rather than 1000x. And much of it is the result of the memory wall, which has been driven by DRAM physics rather than micro-architecture. Doing a couple of memory lookups to support dynamic dispatch is a hell of a lot more expensive, relative to an ALU operation, these days than it was 30 years ago.


Kay is greatly exaggerating those figures, and tends to blame problems with the modern software stack on the hardware.

Dan Ingalls gave a talk in 2005 about the history of Smalltalk implementations in which he mentioned the Xerox NoteTaker. The NoteTaker was a PC powered by the 8086, and according to Ingalls executed Smalltalk VM bytecode at twice the speed of the Alto. Here is the link to the talk: http://www.youtube.com/watch?v=pACoq7r6KVI#t=42m50s and here is my analysis with more details on the specs and economics of the NoteTaker: http://carcaddar.blogspot.com/2012/01/personal-computer-youv...



