> Computation has overwhelming dependence on the performance of its physical substrate (by various metrics, including but not limited to: cpu speed, memory size, persistent storage size, persistent storage speed, network bandwidth, network scope, display size, display resolution

This was clearly true in 01970, but it's mostly false today.

It's still true today for LLMs and, say, photorealistic VR. But what I'm doing right now is typing ASCII text into an HTML form that I will then submit, adding my comment to a persistent database where you and others can read it later. The main differences between this and a guestbook CGI 30 years ago, or maybe even a dialup BBS 40 years ago, have very little to do with the performance of the physical substrate. What I'm doing has more in common with the People's Computer Company's Community Memory 55 years ago (?), using teletypes and an SDS 940, than with LLMs and GPU raytracing.

Sometime around 01990 the crucial limiting factor in computer usefulness went from being the performance of the physical substrate to being the programmer's imagination. This happened earlier for some applications than for others; livestreaming videogames probably requires a computer from 02010 or later, or special-purpose hardware to handle the video data.

Screensavers and demoscene prods used to be attempts to push the limits of what that physical substrate could do. When I saw Future Crew's "Unreal", on a 50MHz(?) 80486, around 01993, I had never seen a computer display anything like that before. I couldn't believe it was even possible. XScreensaver contains a museum of screensavers from this period, which displayed things normally beyond the computer's ability. But, in 01998, my office computer was a dual-processor 200MHz Pentium Pro, and it had a screensaver that displayed fullscreen high-resolution clips from a Star Trek movie.

From then on, a computer screen could display literally anything the human eye could see, as long as it was prerendered. The dependence on the physical substrate had been severed. As Zombocom says, the only limit was your imagination. The demoscene retreated into retrocomputing and sizecoding compos, replaced by Shockwave, Flash, and HTML, which freed nontechnical users to materialize their imaginings.

The same thing had happened with still 2-D monochrome graphics in the 01980s; that was the desktop publishing revolution. Before that, you had to learn to program to make graphics on a computer, and the graphics were strongly constrained by the physical substrate. But once the physical substrate was good enough, further improvements didn't open up any new possible expressions. You can print the same things on a LaserWriter from 01985 that you can print on the latest black-and-white laser printer. The dependence on the physical substrate has been severed.

For things you can do with ASCII text without an LLM, the cut happened even earlier. That's why we still format our mail with RFC-822, our equations with TeX, and in some cases our code with Emacs, all of whose original physical substrate was a PDP-10.

Most things people do with computers today, and in particular the most important things, are things people (fewer of them) were already doing with computers in nearly the same way 30 years ago, when the physical substrate was very different: 300 times slower, with 300 times less memory, and a much smaller network.

Except, maybe, mass emotional manipulation, doomscrolling, LLMs, mass surveillance, and streaming video.

A different reason to study the history of computing, though, is the sense in which your claim is true.

Perceptrons were investigated in the 01950s and largely abandoned after Minsky & Papert's book, then experienced some revival as "neural networks" in the 80s. In the 90s the US Postal Service deployed them to recognize handwritten addresses on snailmail envelopes. (A friend of mine who worked on the project told me that they discovered by serendipity that decreasing the learning rate over time was critical.) Dr. Dobb's hosted a programming contest for handwriting recognition; one entry used a neural network, but was disqualified for running too slowly, though it did best on the test data they had the patience to run it on. But in the early 21st century, connectionist theories of AI were far outside the mainstream; they were only a matter of the history of computation, although a friend of mine, in 02005 or so, explained to me how ConvNets worked and that they were the state-of-the-art OCR algorithm at the time.
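
(For the curious: the learning-rate trick my friend mentioned is just a schedule that shrinks the step size as training proceeds. Here's a minimal sketch in Python of that idea applied to a plain perceptron; the names, the 1/(1+kt) decay, and the toy data are my own assumptions for illustration, not anything from the USPS project.)

    import random

    def train_perceptron(data, epochs=50, lr0=0.5, decay=0.01):
        # data: list of (features, label) pairs with label in {-1, +1}
        n = len(data[0][0])
        w = [0.0] * n
        b = 0.0
        step = 0
        for _ in range(epochs):
            random.shuffle(data)
            for x, y in data:
                # the "decrease the learning rate over time" trick
                lr = lr0 / (1.0 + decay * step)
                step += 1
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if y * activation <= 0:  # misclassified: nudge weights toward the right side
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    # toy usage: learn the boundary x0 + x1 > 0 from random points
    data = []
    for _ in range(200):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        data.append((x, 1 if x[0] + x[1] > 0 else -1))
    w, b = train_perceptron(data)
    print(w, b)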

Then ImageNet changed everything, and now we're writing production code with agentic LLMs.

Many things that people have tried before that didn't work at the time, limited by the physical substrate, might work now.



Reconsider this post after playing with an HDR10 480Hz monitor.


One question: why are you prepending a zero to every single year?


Usually it's because of an initiative by the Long Now Foundation that is meant, among other things, to raise awareness of their 10,000-year clock and what it stands for.

See https://longnow.org/ideas/long-now-years-five-digit-dates-an...


And I, semi-seriously, say that it's only the "Medium Now", because they prepend only one zero.

But I hate it. It makes most readers stumble over the dates, and it does so to grind an axe that is completely unrelated to the topic at hand.


Everyone hates it.


There are always a few trolls who complain about how other people spell their words, wear their hair, format their dates, choose their clothes, etc. Usually on HN these complaints get flagged into oblivion pretty quickly. But most people don't care, preferring to read about things like the interaction between technological development and digital art forms rather than about complaints over whether "aluminum" should actually be spelled "alumium".


I would rather apply the term "few trolls", if it is to be used at all, to the people pushing an agenda for a thing happening 8,000 years from now.



