I think you're missing the point: using a filesystem API to work with things that aren't naturally anything like filesystems can get perverse. And standard filesystems are a pretty unnatural way to lay out information anyway, given that they force everything into a tree structure.
This is what I was trying to get at. A lot of the data I deal with consists of directed, cyclic graphs. Actually, I personally think most data sets we care about are really directed graphs of some kind, but we've gotten so used to thinking of them as trees that we force the metaphor too far. I mean, file systems are an excellent example of a thing we actually want to be a graph but have forced into being a tree. Because otherwise, why would we ever have invented symlinks?
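To make the symlink point concrete, here's a tiny sketch in OCaml (using the unix library; the directory names are made up for the example) of how a single symlink is enough to turn the directory "tree" into a cyclic graph:

    (* Sketch only: needs the unix library (e.g. ocamlfind ocamlopt -package unix).
       Creates demo/ and a symlink inside it that points back at demo's parent,
       so a naive recursive traversal of demo/ would loop: demo/up/demo/up/... *)
    let () =
      Unix.mkdir "demo" 0o755;
      Unix.symlink ".." "demo/up";          (* "demo/up" -> ".." *)
      print_endline (Unix.readlink "demo/up")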
There's a bunch of literature about accessing graphs through tree lenses. I'm not sure exactly what you're looking for.
SQL certainly forces you to look at graphs as trees. Do you have a specific interface you're trying to access? If you're trying to use a graph database, why mention APIs and SQL?
I just assumed they wanted to interface with existing json over http apis rather than write their own code. The sibling of my previous comment addresses that concern.
Operating systems are where device drivers live. It sounds awfully impractical to develop alternatives at this stage. I think OP is right.
I think OSes should just freeze all their features right now. Does anyone remember all the weird churn in the world of Linux, where (i) KDE changed from version 3 to 4, which broke everyone's KDE completely unnecessarily, (ii) GNOME changed from version 2 to 3, which did the same, and (iii) Ubuntu decided to change its desktop environment away from GNOME for no reason, and then changed it back a few years later? When all was said and done, nothing substantive really got done.
So stop changing things at the OS level. Only make conservative changes which don't break the APIs and UIs. Time to feature-freeze, and work on the layers above. If the upper layers take over the work of the lower layers, then over time the lower layers can get silently replaced.
I have never had so much negative feedback and ad-hom attacks on HN as for that story, I think. :-D
Short version, the chronology goes like this:
2004: Ubuntu ships the first more-or-less consumer-quality desktop Linux that is 100% free of charge. No paid version. It uses the current best-of-breed FOSS components: GNOME 2, Mozilla, and OpenOffice.
By 2006 Ubuntu 6.06 "Dapper Drake" comes out, the first LTS. It is catching on a bit.
Fedora Core 6 and RHEL 4 are also getting established, and both use GNOME 2. Every major distro offers GNOME 2, even KDE-centric ones like SUSE. Paid distros like Mandriva and SUSE are starting to get into some trouble -- why pay when Ubuntu does the job?
Even Solaris uses GNOME 2.
2006-2007: MS is getting worried and starts talking about suing. It doesn't know who to sue yet, so it just makes deliberately vague noises, like claiming the Linux desktop infringes about 235 of its patents -- without ever saying which ones.
This is visibly true if you are 35-40 years old: if you remember desktop GUI OSes before 1995, they were all over the place. Most had desktop drive icons. Most had a global menu bar at the top. This is because most copied MacOS. Windows was an ugly mess and only lunatics copied that. (Enter the Open Group with Motif.)
But then came Win95. Huge hit.
After 1995, every GUI gets a taskbar with buttons for running apps -- even window managers like Fvwm95 and, soon after, IceWM. QNX Neutrino looks like it. OS/2 Warp 4 looks like it. Everyone copies it.
Around the time NT 4 is out and Win98 is taking shape, both KDE and GNOME get going and copy the Win9x look and feel. Xfce dumps its CDE look and feel, goes FOSS, and becomes a Win95 copy.
MS had a case. Everyone had copied them. MS is not stupid and it's been sued lots of times. You betcha it patented everything and kept the receipts. The only problem it has is: who does it sue?
RH says no. GNOME 3 says "oh noes, our industry-leading GUI is, er, yeah, stale, it's stagnant, it's not changing, so what we're gonna do is rip it up and start again! With no taskbar and no hierarchical start menu and no menu bars in windows and no OK and CANCEL buttons at the bottom" -- and all the other things they can identify that come from Win9x.
GNOME is mainly sponsored by Red Hat.
Canonical tries to get involved; RH says fsck off. It can't use KDE, that's visibly a ripoff. Ditto Xfce, Enlightenment, etc. LXDE doesn't exist yet.
So it does its own thing based on the Netbook Launcher. If it daren't imitate Windows then what's the leading other candidate? This Mac OS X thing is taking off. It has borrowed some stuff from Windows like Cmd+Tab and Fast User Switching and stuff and got away with it. Let's do that, then.
SUSE just wearily says "OK, how much? Where do we sign?"
RISC OS had a recognizable task bar around 1987, so 2006-2007 is just long enough for any patent on that concept to definitely expire. This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
Yes, the Icon Bar is prior art, but there are a few problems with that.
1. It directly inspired the NeXTstep Dock.
This is unprovable after so long, but the strong suspicion is that the Dock inspired Windows 4 "Chicago" (later Windows 95) -- MS definitely knew of NeXT, but probably never heard of Acorn.
So it's 2nd hand inspiration.
2. The Dock isn't a taskbar either.
3. What the prior art may be doesn't matter unless Acorn asserted it, which AFAIK it didn't, as it no longer existed by the time of the legal threats. Nobody else did either.
4. The product development of Win95 is well documented and you can see WIP versions, get them from the Internet Archive and run them, or just peruse screenshot galleries.
The odd thing is that the early development versions look less like the Dock or Icon Bar than later ones. It's not a direct copy: it's convergent evolution. If they'd copied, they would have got there a lot sooner, and it would be more similar than it is.
> so 2006-2007 is just long enough for any patent on that concept to definitely expire.
RISC OS as Arthur: 1987
NeXTstep 0.8 demo: 1988
Windows "Chicago" test builds: 1993, 5Y later, well inside a 20Y patent lifespan
Win95 release: 8Y later
KDE first release: 1998
GNOME first release: 1999
The chronology doesn't add up, IMHO.
> This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
You're missing a different point here.
Buttons at the bottom date back to at least the Lisa.
The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top. Just as GEM, circa 1985, made its menu bar drop-down instead of pull-down (menus open on mouseover, not on click), and AmigaOS, around the same time, made them appear and open only on a right-click -- both in attempts to avoid getting sued by Apple.
> The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top.
Well, the buttons in the titlebar at the top are reminiscent of old Windows CE dialog boxes, so I guess they're not really original either! What both Unity and GNOME 3 look like to me is an honest attempt to get an early lead in "convergence" with mobile, touch-based interfaces. They first came up in the netbook era, when getting Linux to run out of the box on a market-leading small-screen, perhaps touch-based device was quite easy - a kind of ease we're only now getting back to, in fact.
I just found out that there's a 1D cellular automaton called Rule 54 that is conjectured to be Turing complete, but for which there isn't yet a proof.
I think Gemini (an LLM) and I are in agreement that the proof will likely be found by a neuro-symbolic AI. As evidence for this, see AlphaEvolve and the agents that achieved IMO gold.
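For anyone who hasn't played with it, here is a minimal sketch of iterating Rule 54 (in OCaml; the width, the number of generations, and the wraparound boundary are arbitrary choices of mine, and of course this says nothing about the Turing-completeness question):

    (* Rule 54: the new cell is bit (4*left + 2*centre + 1*right) of 54. *)
    let rule = 54

    (* One step on a finite row with wraparound at the edges. *)
    let step row =
      let n = Array.length row in
      Array.init n (fun i ->
        let l = row.((i + n - 1) mod n)
        and c = row.(i)
        and r = row.((i + 1) mod n) in
        (rule lsr (4 * l + 2 * c + r)) land 1)

    let () =
      (* Start from a single live cell in the middle and print 30 generations. *)
      let width = 63 in
      let row = ref (Array.init width (fun i -> if i = width / 2 then 1 else 0)) in
      for _ = 1 to 30 do
        Array.iter (fun c -> print_char (if c = 1 then '#' else '.')) !row;
        print_newline ();
        row := step !row
      done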
A hard reboot is where the power goes all the way off, a soft reboot is where it doesn't. A fork bomb makes it very hard / impossible to trigger a soft reboot, forcing you to do a hard reboot.
As an extra sting, a hard reboot can be damaging if the software and hardware don't handle power interruption correctly, which was much more likely in the '90s.
To clarify a little - a hard reboot in this case is not performed by issuing a shutdown power off command, which is safe, but by pulling the plug from the wall (the worst shutdown) or holding the switch down (quite bad).
Oh, "once upon a time" I did set emacs to be my login "shell" and emacs can call other binaries via execve, handle sub processes, etc. Worked as expected.
Getting a Linux or Unix system to boot without a proper shell would be another complication, so a system completely without a shell? I expect that to be neither easy nor useful.
ML languages have a "types, modules, types-of-modules, and functors" approach to ad-hoc polymorphism. It's a bit strange compared to what other languages do. I am wondering whether it's ever been seen outside of SML and OCaml.
For JSON deserialisation, you would declare a module-type called "JSON-deserialiser", and you would define a bunch of modules of that module-type.
The unusual thing is that a JSON-deserialiser would no longer be tied to a type (read: type, not module-type). Types in ML-like languages don't have any structure attached to them at all. I suppose you can now define many different JSON-deserialisers for the same type?
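Here's a minimal OCaml sketch of what I mean; the toy json type and all the module names are made up for illustration, not taken from any real library:

    (* A toy JSON value, standing in for whatever a real library provides. *)
    type json = Null | Int of int | Str of string

    (* The "JSON-deserialiser" module type: it names a type t and a decoder. *)
    module type JSON_DESERIALISER = sig
      type t
      val of_json : json -> t option    (* decoding may fail *)
    end

    (* Two different deserialisers for the *same* type (int): one reads a
       JSON number, the other parses a numeric string. The deserialiser is
       a module, not something tied one-to-one to the type. *)
    module Int_decoder : JSON_DESERIALISER with type t = int = struct
      type t = int
      let of_json = function Int n -> Some n | _ -> None
    end

    module Int_of_string_decoder : JSON_DESERIALISER with type t = int = struct
      type t = int
      let of_json = function Str s -> int_of_string_opt s | _ -> None
    end

    (* A functor: from any deserialiser, build one for an optional field. *)
    module Optional (D : JSON_DESERIALISER) :
      JSON_DESERIALISER with type t = D.t option = struct
      type t = D.t option
      let of_json = function
        | Null -> Some None
        | j -> Option.map Option.some (D.of_json j)
    end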
The problem is unclear. I think you have a labelled graph G = (V, E) with labels c : V -> R, where each node in V is a triple (L, R, S): L is the sequence of weights on the left, R is the sequence of weights on the right, and S is the set of weights that have been taken off. Let W be the full set of weights, and define c(L, R, S) to be the centre of mass. Introduce an undirected edge e = {(L, R, S), (L', R', S')} between (L, R, S) and (L', R', S') if either (i) (L', R', S') results from taking the first weight off L and adding it to S, (ii) (L', R', S') results from taking the first weight off R and adding it to S, (iii) (L', R', S') results from taking a weight from W and adding it to L, or (iv) (L', R', S') results from taking a weight from W and adding it to R.
There is a starting node (L_0, R_0, {}) and an ending node ({}, {}, W), i.e. the ending node has L = R = {} and every weight taken off.
I think you're trying to find a path from the starting node to the ending node that minimises the maximum of |c(L_n, R_n, S_n)| over the nodes (L_n, R_n, S_n) along the path.
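If that's the right reading, it's a bottleneck shortest-path problem, so a Dijkstra variant works: a path's "length" is the maximum node cost along it rather than a sum. A sketch in OCaml, assuming whoever models the actual weight-moving problem supplies the state type, the neighbour relation, and the cost |c| (all of those are placeholders here):

    (* Generic minimax-path search over the graph described above.
       S.cost s is meant to be |c(L, R, S)| for the node s; S.neighbours
       lists the states reachable by one move. *)
    module Search (S : sig
      type state
      val compare : state -> state -> int
      val neighbours : state -> state list
      val cost : state -> float
    end) = struct
      module M = Map.Make (struct type t = S.state let compare = S.compare end)

      (* Returns the best achievable bottleneck, or None if the goal is
         unreachable. *)
      let minimax ~start ~is_goal =
        let rec loop settled frontier =
          (* pick the frontier state whose bottleneck-so-far is smallest *)
          let pick =
            M.fold (fun s b acc ->
                match acc with
                | Some (_, b') when b' <= b -> acc
                | _ -> Some (s, b))
              frontier None
          in
          match pick with
          | None -> None
          | Some (s, b) when is_goal s -> Some b
          | Some (s, b) ->
            let settled = M.add s b settled in
            let frontier = M.remove s frontier in
            let frontier =
              List.fold_left (fun fr s' ->
                  if M.mem s' settled then fr
                  else
                    let b' = max b (S.cost s') in
                    match M.find_opt s' fr with
                    | Some old when old <= b' -> fr
                    | _ -> M.add s' b' fr)
                frontier (S.neighbours s)
            in
            loop settled frontier
        in
        loop M.empty (M.singleton start (S.cost start))
    end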
> To me, this sounds like when we first went to the moon, and people were sure we'd be on Mars by the end of the 80's.
Unlike space colonisation, there are immediate economic rewards from producing even modest improvements in AI models. As such, we should expect much faster progress in AI than space colonisation.
But it could still turn out the same way, for all we know. I just think that's unlikely.
The minerals in the asteroid belt are estimated to be worth in the $100s of quintillions. I would say that’s a decent economic incentive to develop space exploration (not necessarily colonization, but it may make it easier).
A lie. Opinion polls suggest 85% of Greenlanders oppose the territory joining the US.
> As Americans
Would you bloody stop? Most of us here aren't Americans.