drbig's comments | Hacker News

As someone who grew up with Amiga... I find it amazing these boards still keep coming (X1000, X5000... anyone?) - they have always been insanely expensive for specs that are decade(s) old, all in the name of... really no idea what beyond "we can".

Or in other words: I wonder what would have happened if all that time, money and effort had gone into, say, AROS[1] and/or emulation. I can imagine still using AmIRC and HippoPlayer if I could run them like any other software on Linux.

1: https://aros.sourceforge.io/introduction/


Yeah, the lack of support for off-the-shelf hardware has been the doom of the Amiga revival since day one.

The refusal to port AmigaOS to anything but dead or near-dead architectures has always amazed me -- had the resources been spent on AROS instead, we'd have a usable modern ecosystem by now, rather than multiple different options that fall back on various ways of running 30-year-old binaries.

All I want is a decent modern version of YAM and universal ARexx/datatype support! :(


> Yeah, the lack of support for off-the-shelf hardware has been the doom of the Amiga revival since day one.

The Amiga was not just an OS. It was all about the custom chips that added such interesting and powerful capabilities to an otherwise unspectacular 68000. When combined with the OS, it created a system that was truly ahead of its time.

But I don't see the appeal of AmigaOS on modern hardware. Most Amiga fans are more interested in the games and demos that didn't use the OS, and used the blitter/copper etc directly.

And if you just want a faster Amiga, the PiStorm is pretty cool.


There have been firebrands attempting a revival of the Amiga since the mid-90s, and back then it was a question of making a new and modern platform -- not watching demos.

Today is a different matter, of course. Personally, emulation is more than enough for me.


There's an AROS runtime for GNU/Linux and others.

Spoiler alert: this is about a NES game ;-) Pretty cool still, especially if one's into reverse engineering.

Instruction pipelining and this are exactly why I wish we still had the time to go back to "it is exactly as it is": think the 6502, or any architecture that does not pretend/map/table/proxy/ring-away anything.

That, but a hell lot of it with fast interconnect!

... one can always dream.


The article is essentially describing virtual memory (with enhancements) which predates the 6502 by a decade or so.


IMO it's not even quite right in its description. The first picture that describes virtual memory shows all processes as occupying the same "logical" address space with the page table just mapping pages in the "logical" address space to physical addresses one-to-one. In reality (at least in all VM systems I know of) each process has its own independent virtual address space.
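A quick way to see the independent-address-space part for yourself on Linux (my own toy example, nothing from the article): fork a process and have both sides print the address and value of the same local variable. The virtual address comes out identical in both, yet each process sees its own contents, because the page tables map that address to different physical pages per process.

    /* toy demo: same virtual address, different contents per process */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int value = 1;
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return EXIT_FAILURE; }

        if (pid == 0) {            /* child */
            value = 42;            /* copy-on-write gives the child its own page */
            printf("child : &value=%p value=%d\n", (void *)&value, value);
            return EXIT_SUCCESS;
        }

        wait(NULL);                /* let the child finish first */
        printf("parent: &value=%p value=%d\n", (void *)&value, value);
        return EXIT_SUCCESS;
    }

Both lines print the same %p, but the parent still sees 1 while the child saw 42.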


The point is that we should acknowledge that those "cheats" came with their reasons and that they did improve performance, etc. But they also came with a cost (Meltdown, Spectre, anyone?) and fundamentally introduced _complexities_, which, at today's level of manufacturing and the end of Moore's law, may not be the best tradeoffs.

I'm just expressing the general sentiment of distaste for piling stuff upon stuff and holding it together with duct tape, without ever stepping back and looking at what we have (or at least should have) learnt, and where we are today in the technological stack.


Do you want to throw out out-of-order execution and pipelining while you're at it, too?

I'm semi-serious: there are actually modern processor designs that put this burden on the programmer (or rather on their fancy compiler / code generator) in order to keep the silicon simple. See e.g. https://en.wikipedia.org/wiki/Groq#Language_Processing_Unit


I'm curious how this dream is superior to where we are? Yes, things are more complex. But it isn't like this complexity didn't buy us anything. Quite the contrary.


> ...buy us anything.

Totally depends on who "us" is and isn't, what problem is being solved, etc. In the aggregate, the trade-off has clearly been beneficial to most people. If what you want to do got traded away, well, you can still dream.


Right, but that was kind of my question? What is better about not having a lot of these things?

That is, phrasing it as a dream makes it sound like you imagine it would be better somehow. What would be better?


Think about using a modern x86-64 CPU core to run one process with no operating system. Know exactly what is in cache memory. Know exactly what deadlines you can meet, and guarantee that.

It's quite a different thing from running a general-purpose OS that multiplexes each core among multiple processes, with a hardware-walked page table, TLB, etc.

Obviously you know what you prefer for your laptop.

As we get more and more cores, perhaps the system designs that have evolved may head back toward that simplicity somewhat? Anything above x% CPU usage gets its own isolated, uninterrupted core(s)? Uses low-cost IPC? Hard to speculate with any real confidence.
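For what it's worth, the building blocks for that sort of thing already exist on Linux. A minimal sketch of the "hot process gets its own core" idea (the core number is an arbitrary placeholder, and real isolation would also want kernel options like isolcpus=/nohz_full= to keep the scheduler and timer ticks off that core):

    /* sketch: pin this process to CPU 3 and stay there */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);                                   /* arbitrary example core */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) { /* 0 = this process */
            perror("sched_setaffinity");
            return 1;
        }

        printf("restricted to CPU 3, currently on CPU %d\n", sched_getcpu());
        /* latency-sensitive work would go here */
        return 0;
    }

Note that sched_setaffinity() only keeps the scheduler from migrating the process; keeping interrupts and other housekeeping off the core is what the kernel-side options above are for.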


I just don't know that I see it running any better for the vast majority of processes that I could imagine running on it. Was literally just transcoding some video, playing a podcast, and browsing the web. Would this be any better?

I think that is largely my qualm with the dream. The only way this really works is if we had never gone with preemptive multitasking, it seems? And that just doesn't seem like a win.

You do have me curious to know if things really do automatically get pinned to a CPU when they're above a usage threshold. I know that was talked about some; did we actually start doing that?


> Was literally just transcoding some video, playing a podcast, and browsing the web.

Yeah that's the perfect use case for current system design. Nobody sane wants to turn that case into an embedded system running a single process with hard deadline guarantees. Your laptop may not be ideal for controlling a couple of tonnes of steel at high speed, for example. Start thinking about how you would design for that and you'll see the point (whether you want to agree or not).


Apologies, almost missed that you had commented here.

I confess I assumed writing controllers for a couple of tonnes of steel at high speed would not use the same system design as a higher level computer would? In particular, I would not expect most embedded applications to use virtual memory? Is that no longer the case?


"Hard Real Time" is the magic phrase to go as deep as you want to.


This isn't really answering my question. Have they started using virtual memory in hard real time applications? Just generally searching the term confirms that they are still seen as not compatible.


In addition to search engines, you can learn a great deal about all sorts of things using an LLM. This works well enough if you don't want to pay. They are very patient and you can go as deep as you want. https://duckduckgo.com/?q=DuckDuckGo+AI+Chat&ia=chat&duckai=...


Things would be simpler, more predictable and tractable.

For example, real-time guarantees (hard time constraints on how long a particular type of event will take to process) would be easier to provide.


But why do we think that? The complexity would almost certainly still exist; it would just now be up a layer, with no guarantees that you could hit the same performance characteristics that we are able to hit today.

Put another way, if that would truly be a better place, what is stopping people from building it today?


Performance wouldn’t be the same, and that’s why nobody is manufacturing it. The industry prefers living with higher complexity when it yields better performance. That doesn’t mean that some people like in this thread wouldn’t prefer if things were more simple, even at the price of significantly lower performance.

> The complexity would almost certainly still exist.

That doesn’t follow. A lot of the complexity is purely to achieve the performance we have.


I'm used to people arguing for simpler setups because the belief is that they could make them more performant. This was specifically the push for RISC back in the day, no?

To that end, I was assuming the idea would be that we think we could have faster systems if we didn't have this stuff. If that is not the assumption, I'm curious what the appeal is?


That’s certainly not the assumption here. The appeal is, as I said, that the systems would be more predictable and tractable, instead of being a tarpit of complexity. It would be easier to reason about them, and about their runtime characteristics. Side-channel attacks wouldn’t be a thing, or at least not as much. Nowadays it’s rather difficult to reason about the runtime characteristics of code on modern CPUs, about what exactly will be going on behind the scenes. More often than not, you have to resort to testing how specific scenarios will behave, rather than being able to predict the general case.
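To make the "you have to resort to testing" point concrete, here's a tiny sketch (mine, purely illustrative): the same loop, the same number of loads, timed once over a buffer that fits in cache and once over one that doesn't. On a simple in-order machine with flat memory you could compute the cost per iteration from the datasheet; on a modern CPU the honest answer is "run it and see", and the numbers shift with prefetchers, cache sizes and whatever else is going on behind the scenes.

    /* illustrative: same work, very different timings depending on cache fit */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double walk(volatile char *buf, size_t len, size_t reads) {
        struct timespec t0, t1;
        size_t i = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t n = 0; n < reads; n++) {
            (void)buf[i];
            i = (i + 64) % len;          /* hop one cache line at a time */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        size_t small = 16 * 1024;             /* comfortably inside L1/L2 */
        size_t big   = 256 * 1024 * 1024;     /* well past a typical LLC */
        size_t reads = 100 * 1000 * 1000;
        char *a = calloc(small, 1), *b = calloc(big, 1);
        if (!a || !b) return 1;

        printf("small buffer: %.3f s\n", walk(a, small, reads));
        printf("large buffer: %.3f s\n", walk(b, big, reads));
        free(a); free(b);
        return 0;
    }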


I guess I don't know that I understand why you would dream of this, though? Just go out and program on some simpler systems? Retro computing makes the rounds a lot and is perfectly doable.


But why?


False alarm.


Very clever, and playable! Thanks.


> It's no wonder UE5 games have the reputation of being poorly optimized

Care to exemplify?

I find UE games to be not only the most optimized, but also capable of running everywhere. Take X-COM, which I can play on my 14-year-old Linux laptop with its i915 excuse-for-a-gfx-card, whereas Unity stuff doesn't work here at all, and on my Windows gaming rig it always makes everything red-hot without even approaching the quality and fidelity of UE games.

To me UE is like SolidWorks, whereas Unity is like FreeCAD... Which I guess is actually very close to what the differences are :-)

Or is this "reputation of being poorly optimized" only specific to UE version 5 (as compared to older versions of UE, perhaps)?


The reputation of being poorly optimized only applies to version 5; UE was rather respected before the wave of terribly performing UE5 AAA games came out and tanked its reputation.

It also has a terrible reputation because a bunch of the visual effects have a hard dependency on temporal anti-aliasing, which is a form of AA which typically results in a blurry-looking picture with ghosting as soon as anything is moving.


Funnily enough a lot of those "poor performing" UE games were actually UE4 still, not UE5.


Let's be real. UE5 is a marketing term for a .x version of UE4 that broke a bunch of the rendering pipeline, such that they needed an excuse to force devs to deal with the changes.


The reputation is specific to UE5. UE3 used to have such a reputation as well. UE5 introduced new systems that are not compatible with the traditional ones, and these systems, especially if used poorly, tank performance. It's not uncommon for UE5 games to run poorly even on the most expensive Nvidia GPU, with AI upscaling being a requirement.


Thanks for the replies! Will note the UE5 specificity.


This being the UK... Perhaps better to establish The Office of Permission, then ban everything except requesting a permission from the new office... And thus create a whole permission economy, putting the `Great` back in Great Britain... /s


But how do we prevent children accessing the office of permission?


It doesn't affect us, so why complain? We can always just get a visa to watch porn.


> No amount of expressive design will beat basic functionality.

...I am very afraid this will sacrifice a lot of (basic) functionality in the name of looking different.

May only hope there will be options to "tone it down".


> If I understand correctly, the critique here is that is that LLMs cannot generate new knowledge, and/or that they cannot remember it.

> The former is false, and the latter is kind of true -- the network does not update itself yet, unfortunately, but we work around it with careful manipulation of the context.

Any and all examples of where an LLM generated "new knowledge" will be greatly appreciated. And the quotes are because I'm willing to start with the lowest bar of what "new" and "knowledge" mean when combined.


They are fundamentally mathematical models which extrapolate from data points, and occasionally they will extrapolate in a way that is consistent with reality, i.e. they will approximate uncharted territory with reasonable accuracy.

Of course, being able to tell the difference (both for the human and the machine) is the real trick!

Reasoning seems to be a case where the model uncovers what, to some degree, it already "knows".

Conversely, some experimental models (e.g. Meta's work with Concepts) shift that compute to train time, i.e. spend more compute per training token. Either way, they're mining "more meaning" out of the data by "working harder".

This is one area where I see that synthetic data could have a big advantage. Training the next gen of LLMs on the results of the previous generation's thinking would mean that you "cache" that thinking -- it doesn't need to start from scratch every time, so it could solve problems more efficiently, and (given the same resources) it would be able to go further.

Of course, the problem here is that most reasoning is dogshit, and you'd need to first build a system smart enough to pick out the good stuff...

---

It occurs to me now that you rather hoped for a concrete example. The ones that come to mind involve drawing parallels between seemingly unrelated things. On some level, things are the same shape.

I argue that noticing such a connection, such a pattern, and naming it, constitutes new and useful knowledge. This is something I spend a lot of time doing (mostly for my own amusement!), and I've found that LLMs are surprisingly good at it. They can use known patterns to coherently describe previously unnamed ones.

In other words, they map concepts onto other concepts in ways that haven't been done before. What I'm referring to here is, I will prompt the LLM with some such query, and it will "get it", in ways I wasn't expecting. The real trick would be to get it to do that on its own, i.e. without me prompting it (or, with current tech, find a way to get it to prompt itself that produces similar results... and then feed that into some kind of Novelty+Coherence filtering system, i.e. the "real trick" again... :).

A specific example eludes me now, but it's usually a matter of "X is actually a special case of Y", or "how does X map onto Y". It's pretty good at mapping the territory. It's not "creating new territory" by doing that, it's just pointing out things that "have always been there, but nobody has looked at before", if that makes sense.


...In due time he decomposes, leaving the skeleton, which will last much, much longer. Just like everybody else.

I think Jorge would appreciate that.

As a skeptic of the "organized" part of any "organized religion", I had and have deep respect for both Francis and John Paul II. May only keep fingers crossed that the next choice will also be of that sort.

