Considering that these CEOs are talking about replacing all skilled and unskilled labor under them with LLMs, I don't see why they can't be replaced too. In reality, LLMs are overhyped. Even Grok says it straight - LLMs are probability models built on condensed human knowledge that decide what the next word or letter should be. Original thought isn't their forte.
(Surprisingly though, that's enough for them to recognize that you're a human. Their models can identify your complex thought progression in your prompts - no matter how robotic your language is.)
The REAL problem here is the hideous narrative some of these CEOs spin. They swing the LLMs around to convince everyone that they are replaceable, thereby crashing the value of the job market and increasing their own profits. At the same time, they project themselves as some sort of super-intelligent divine beings with special abilities without which the world will not progress, while in reality they maintain an exclusive club of wealthy connections that they guard jealously by ruining the opportunities of others (the proverbial 'burning the ladder behind them'). They use their PR resources to paint a larger-than-life image that hides the extreme destruction they leave behind in the pursuit of wealth - like hiding a hideous odor with bucketfuls of perfume. These two problems are two sides of the same coin that expose their duplicity and deception.
PS: I have to say that this doesn't apply to all CEOs. There are plenty of skilled CEOs, especially founders, who play a huge role in setting the company up. Here I'm talking about the stereotypical cosmopolitan bunch that comes to mind when we hear that word - the ones who have no qualms about destroying the world for their enjoyment and look down upon normal people as if they're just fodder.
> The first part works because otherwise reusable rockets wouldn't have been invented (or maybe they'd have been invented 20 years later).
I do not want to take credit away from SpaceX for what they achieved. It sure is complex. But it's also possible to give someone excess credit by denying others what they are due. I don't know which part of 'reusable rockets' you are talking about - the reusable engines and hardware, or the VTOL technology. But none of that was 'invented' by SpaceX. NASA had been doing it for decades, but never had enough funding to put it all together. On reusable hardware and engines, the Space Shuttle Orbiter is an obvious example - the manned upper stage of a rocket that entered orbit and was reused multiple times over decades. SpaceX doesn't yet have an upper stage that has done that. The only Starship among the 9 to even survive reentry never entered orbit in the first place. Now for the 'reusable engine': do you need a better example than the RS-25/SSME of that same orbiter? Now let's talk about VTOL rockets. Weren't the Apollo LMs able to land and take off vertically back in the 1960s? NASA also had the 'Delta Clipper' experiment in the 1990s that did more or less the same thing as SpaceX's Grasshopper and Starship SN15 - 'propulsive hops', multiple times. Another innovation at SpaceX is the full-flow staged combustion cycle used in the Raptor engine. To date, it is the only full-flow staged combustion engine to have operated in space. But both NASA and the USSR had tested such engines on the ground. Similarly, Starship's silica heat tiles are entirely of NASA heritage - something they never seem to mention in their live telecasts.
I see people berating NASA while comparing them with SpaceX. How much of a coincidence is it that the technologies used by SpaceX fall squarely within NASA's expertise? The real engineers at SpaceX wouldn't deny those links. Many of them were veterans who worked with NASA to develop them. And that's fine. But it's very uncharitable to not credit NASA at all. The real question right now is: how many of those veterans are left at SpaceX, improving these things? Meanwhile, unlike SpaceX, NASA didn't keep getting government money no matter how many times it failed. NASA would find its funding cut every time it looked like it was about to achieve something.
> It's the same as Steve Jobs, the Android guys were still making prototypes with keyboards until they saw the all screen interface of the iPhone.
Two things that cannot be denied about Steve Jobs are that he had an impeccable aesthetic sense and the larger-than-life image needed to market his products. But nothing seen in the iPhone was new even in 2007. Full capacitive touch screens, multi-touch technology, etc. were already on the market in niche devices like PDAs. The technology just wasn't advanced enough back then to bring it all together. Steve Jobs had the team and the resources needed to do it for the first time. But he didn't invent any of those. Again, this is not to take away credit from Jobs for his leadership.
> Sometimes it requires a single individual pushing their will through an organization to get things done, and sometimes that requires lying.
This is the part I have a problem with. All the work done by others is just neglected. All the damage done by these people is also neglected. You have no idea how many new ideas from their rivals they drive into oblivion, so as to retain their image. Leaders are a cog in the machine - just like everyone else working with them to generate the value. But this sort of hero worship, which neglects everyone else and the leaders' transgressions, is a net negative for the human race. They aren't some sort of divine magical beings.
I understand the issue with all the devices. But what about the rest of the things that depend on these electronics, especially DRAM? Automotive, aircraft, marine vessels, ATC, shipping coordination, traffic signalling, rail signalling, industrial control systems, public utility (power, water, sewage, etc.) control systems, transmission grid control systems, HVAC and environment control systems, weather monitoring networks, disaster alerting and management systems, ticketing systems, e-commerce backbones, scheduling and rostering systems, network backbones, entertainment media distribution systems, defense systems, and I don't know what else. Don't they all require DRAM? What will happen to all of them?
Industrial microcontrollers and power electronics use older process nodes, mostly >=45nm. These customers aren’t competing for wafers from the same fabs as bleeding edge memory and TPUs.
Okay, but what about the rest? The ones that aren't embedded in some way and use industrial-grade PCs/control stations? Or ones with large buffers, like network routers? I'm also wondering about the supply of the alternate nodes and older technologies. Will the manufacturers keep those lines running? Was it Micron that abandoned the entire retail market in favor of supplying the hyperscalers?
> The ones that aren't embedded in some way and use industrial-grade PCs/control stations? Or ones with large buffers, like network routers?
Not sure if they require DDR5, but the AI crisis just caused DDR5 prices to rise, demand then spilled over to DDR4, and that's why DDR4 got more expensive too.
> I'm also wondering about the supply of the alternate nodes and older technologies.
I suppose these might be Chinese companies, though there may be some European/American ones too (not sure). But if things continue, demand is going to strain them as well, and they might increase their prices too.
> Was it Micron that abandoned the entire retail market in favor of supplying the hyperscalers?
That might be the case only for the infotainment system, but there are usually many other ECUs in an EV. The ADAS ECUs carry similar amounts of memory to an iPhone or the infotainment system. Telematics is usually also a relatively complex one, though with smaller amounts of memory.
Then you have around 3-5 other midsized ECUs with relatively high memory sizes, or at least enough to require MMUs and to run more complex operating systems supporting typical AUTOSAR stacks.
And then you have all the small size ECUs controlling all small individual actuators.
But also all complex sensors like radars, cameras, lidars carry some amounts of relevant memory.
I still think your point is valid, though. The amount of expensive RAM isn't orders of magnitude different from an iPhone's. But cars also carry lots of low-speed, automotive-grade memory in all the ECUs distributed throughout the vehicle.
Okay, accepted. But are you sure that the supply won't be a problem as well? I mean, even if these products use different process nodes than the hyperscalers, will the DRAM manufacturers even keep those nodes running for these industries?
What will probably happen is that the resale/second-hand market for these parts will grow.
> will the DRAM manufacturers even keep those nodes running for these industries?
Some will, some might not. In my opinion, the longevity of these brands will depend on whether they keep selling RAM to average consumers and consumer brands, so I guess we might see new competition, or more market share going to the fab companies beyond the main three in this industry.
I am sure that some company will align 100% with consumers, but the problem, as I see it, is that they wouldn't be able to produce enough for consumers in the first place, so prices might still rise.
And those prices will most likely be paid by you in one form or another. It would be interesting to see how long the companies that buy DRAM from these suppliers, or build datacenters, or do anything RAM-intensive, will hold their prices up; perhaps they'll eat the loss short term, similar to what we saw some companies do during the Trump tariffs.
Self-hosted FOSS apps are probably the best push towards computing freedom and privacy today. But I wish that the self-hosting community moved towards a truly distributed architecture, instead of trying to mimic the paradigms of corporate centralized software. This is not meant as a criticism of the current self-hosted architecture or the apps. But I wish the community focused on a different set of features that suit home computing conditions more closely:
1. Peer-to-peer decentralization like BitTorrent, instead of the client-server model. Local web UIs (like Transmission's web UI) may be served locally (either host-only or LAN-only) as frontends for these apps. Consider this the 'last-mile connectivity', if you will.
2. Applications are resistant to outages. Obviously, home servers can't be expected to be always online. The service may even be running on your regular desktop. But you shouldn't lose the utility of the service just because it goes offline. A great example of this is email: servers can wait up to 2 days for the destination server to show up before declaring a delivery failure, and even rejections are handled with retries minutes later. (See the retry sketch after this list.)
3. The applications should be able to deal with dynamic IPs and NATs. We will probably need a cryptographic identity mechanism and a way to translate that into a connection to the correct end node. But most of these technologies exist today.
4. E2E encrypted and redundant storage and distribution servers for data that must absolutely be online all the time. Nostr relays seem like a good example.
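To make point 2 concrete, here's a minimal, hypothetical Rust sketch of email-style store-and-forward delivery: keep retrying with a growing backoff instead of failing the moment the peer is offline. The `try_deliver` closure is a placeholder for whatever transport (Iroh, libp2p, a Nostr relay) an actual app would use; none of these names come from the original discussion.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Outcome of one delivery attempt to a peer that may currently be offline.
enum Delivery {
    Accepted,
    PeerOffline,
}

/// Email-style store-and-forward: retry with growing delays instead of failing
/// the moment the destination is unreachable, and only give up after ~2 days
/// (roughly what SMTP servers do).
fn deliver_with_patience(mut try_deliver: impl FnMut() -> Delivery) -> bool {
    let mut delay = Duration::from_secs(60);
    let give_up_after = Duration::from_secs(2 * 24 * 60 * 60);
    let mut waited = Duration::ZERO;

    loop {
        if let Delivery::Accepted = try_deliver() {
            return true;
        }
        if waited >= give_up_after {
            return false; // declare a delivery failure, as mail servers do
        }
        sleep(delay);
        waited += delay;
        // Exponential backoff, capped at 4 hours between attempts.
        delay = (delay * 2).min(Duration::from_secs(4 * 60 * 60));
    }
}

fn main() {
    // Simulate a peer that only comes back online on the third attempt.
    let mut attempts = 0;
    let delivered = deliver_with_patience(|| {
        attempts += 1;
        if attempts < 3 { Delivery::PeerOffline } else { Delivery::Accepted }
    });
    println!("delivered: {delivered}");
}
```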
The Solid and Nostr projects embody many of these ideas already. It just needs a bit more polish to feel natural and intuitive. One way to do it is to have a local daemon that acts as a gateway, cache and web-ui to external data.
Yeah, I have been planning to try out Iroh sometime soon. However, what I explained will take a whole lot of planning on top of Iroh. I also don't want to replicate what others have already achieved. It would be best if something could be built on top of those. Let's see how it goes.
> Sounds like you want a k3s based homelab and then connect it all with Tailscale or Netbird.
I apologize if it was confusing. I was suggesting the exact opposite. It's not about how to build a mini enterprise cluster. It's about how to change the service infrastructure to suit the small computers we usually find at homes, without any modifications. I'm suggesting a more fundamental change.
> I have reliable electricity and internet at home, though.
It isn't too bad where I'm at, either. But sadly, that isn't the practical situation elsewhere. We need to treat power and connectivity as random and intermittent.
> You could argue that Plex, MinIO or Mattermost is being enshittified, but definitely not self hosting as a whole.
That's probably not how you should interpret it. Self hosting as a whole is still a vastly better option. But if there is a significant enough public movement towards it, you can expect it to be targeted for enshittification too. The incidents related to Plex, MinIO and Mattermost should be taken as warning signals about what this may escalate into in the future. Here are the possible problems I foresee.
1. The situation with Plex, MinIO and Mattermost can be expected to happen more frequently. After a limit, the pain of frequent migration will become untenable. MinIO is a great example: even the crowd on HN hadn't considered an alternative until then. Some of us learned about Garage, RustFS and Ceph S3 for the first time, and we were debating each of their pros and cons. It's telling that that discussion was so lengthy.
2. There is a gradual nudge to move everything to the cloud and then monetize it. Mandatory online account for Win11, monetization of GH self-hosted runner (now suspended after backlash, I think) and cloudification of MS Office are good examples. You can expect a similar attempt on self hosted applications. Of course, most of our self-hosted software is currently open source. But if these big companies decide to embrace, extend and extinguish it, I'm not sure that the market will be prudent enough to stick with the FOSS options. Half of HN was fighting me a few days back when I suggested that we should strive to push the market towards serviceable modular hardware.
3. FOSS projects developed under companies are always at a higher risk of being hijacked or going rogue. To be clear, I'm not against that model. For example, I'm happy with Zulip's development and monetization model - ethical, generous and not too pushy. But Mattermost shows where that can go wrong. Sure, they are open source. But there are practical difficulties in easily overriding such issues.
4. At one time, we were expecting small form-factor headless computers (plug computers [1]) like the SheevaPlug and FreedomBox to become ubiquitous. That should still be an option, though I'm not sure where it's headed, given the current RAM situation. But even if they make a comeback, it's very likely that OEMs will lock them down like smartphones today and make it difficult for you to exercise your choice of servers, if not outright restrict it. (If anybody wants to argue that normal people will never consider it, remember how smartphones were before the iPhone. We had the BlackBerry, used only by a niche crowd.)
What I understood is that the author is hoarding them for the future - not because there is any need for it right now. You could argue that it's too much RAM even at the end of the server's useful lifetime. But who knows? What if he ends up running a few dozen services on it by then?
Honestly, the problem that they're preparing for isn't any of our fault. This is inflicted upon the world by some very twisted business models, incentives and priorities. It's hard to predict how it will all end up. Perhaps the market will be flooded with tons of RAM that will have to be transplanted onto proper DIMM modules. Or perhaps we might be scavenging the e-waste junkyards for every last RAM IC we can find - in which case, his choice would be correct.
When we were space constrained, we built smaller. When we were block constrained, we built SSDs. When we were graphics constrained, we built GPUs. Now that we're memory constrained, we'll see some advancements in this area as well. 1 TB RAM chips are right around the corner.
The problem is the economic incentive. In all the prior cases you mentioned, their commercial interests aligned with our own - at least in part. This time, however, I'm worried that they aren't concerned about burning down at least part of the world economy, since their bottom line won't suffer for it.
For example, Micron didn't think about any alternatives for the consumer retail market. They just dumped it entirely.
The economics behind this isn't rocket science. Micron left the retail market because it is more profitable for them to supply exclusively to the hyperscalers. Not because they can't supply the retail market or because it wasn't profitable. What makes you think that any other manufacturer is going to take a different decision? Why would they choose a market that offers less than the biggest bidder?
Someone will fill the void. Another company will sell RAM to the retail market if the big players won't. More manufacturing capacity will open up. It's not like RAM is becoming extinct. Someone will tool up and produce it for the retail market, just as other brands have done in other sectors when the market shows a void.
Last I checked I could still buy Crucial RAM chips. In time, maybe it's Kingston. Or maybe Gigastone.
I doubt that anybody truly knows Rust. And this is aggravated by the fact that features keep getting added. But here are two simple strategies that I found very effective in keeping us ahead of the curve.
1. Always keep the language reference with you. It's absolutely not a replacement for a good introductory textbook. But it's an unusually effective resource for anybody who has crossed that milestone. It's very effective in spontaneously uncovering new language features and in refining your understanding of the language semantics.
What we need to do is refer to it occasionally, even for constructs we're familiar with - for loops, for example. I wish it were available as automatic popups in code editors.
2. Use clippy, the linter. I don't have much to add here. Your code will work without it. But for some reason, clippy is an impeccable tutor in idiomatic Rust. And you get the advantage that it stays in sync with the latest language features, so it's yet another way to keep yourself automatically updated with them. (A small example of the kind of nudge it gives is sketched below.)
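As a hypothetical illustration (not from the original comment), here's the sort of nudge clippy gives: indexing a slice inside a range loop trips the `needless_range_loop` lint, which points you toward the iterator version.

```rust
// What a newcomer (or a C programmer) might write first; clippy flags the loop
// with `clippy::needless_range_loop` and suggests iterating over the slice.
fn sum_of_squares(values: &[i64]) -> i64 {
    let mut total = 0;
    for i in 0..values.len() {
        total += values[i] * values[i];
    }
    total
}

// The idiomatic version clippy nudges you toward.
fn sum_of_squares_idiomatic(values: &[i64]) -> i64 {
    values.iter().map(|v| v * v).sum()
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(sum_of_squares(&v), sum_of_squares_idiomatic(&v));
    println!("both return {}", sum_of_squares(&v));
}
```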
I feel like other languages also have the issue of complexity and changing over time. I doubt I know all of C++ post-C++14, for example (even though that is my day job). Keeping up with all the things they throw into Python's standard library is also near impossible unless you write Python every day.
Rust has an unusually short release cycle, but each release tends to have fewer things in it. So it probably works out to about the same number of new features per year as Python or C++.
But sure, C moves slower (and is smaller to begin with). If that is what you want to compare against. But all the languages I work with on a daily basis (C++, Python and Rust) are sprawling.
I don't have enough experience to speak about other languages in depth, but as I understand it Haskell for example has a lot of extensions. And the typescript/node ecosystem seems to move crazy fast and require a ton of different moving pieces to get anything done (especially when it comes to the build system with bundlers, minifiers and what not).
Languages should be small, not large. I find that every language I've ever used that tries to throw everything and the kitchen sink at you eventually deteriorates into a mess, and that long-term instability spills over into the projects built on that language. You should be able to take a 10-year-old codebase, compile it and run it. Backwards compatibility is an absolute non-negotiable for programming languages, and if you disagree with that you are building toys, not production-grade systems.
I'm not sure what this is arguing against here. Anyone who follows Rust knows that it's relatively modest when it comes to adding new features; most of the "features" that get added to Rust are either new stdlib APIs or just streamlining existing features so that they're less restrictive/easier to use. And Rust has a fantastic backwards compatibility story.
I had C++, Python and Ruby in mind, but yes, GP also mentioned Rust in the list of 'sprawling' languages, and they are probably right about that: Rust started as a 'better C replacement' but now it is trying to dominate every programming niche (and - in my opinion - not being very successful, because niche languages exist for a reason; it is much easier to specialize than to generalize).
I wasn't particularly commenting on Rust's backward compatibility story so if you're not sure what I was arguing about then why did you feel the need to defend Rust from accusations that weren't made in the first place?
Egad, no. This is how you get C++, whose core tenet seems to be “someone used this once in 1994 so we can never change it”.
Even adding a new keyword will break some code out there that used that as a variable name or something. Perfect backward compatibility means you can never improve anything, ever, lest it causes someone a nonzero amount of porting effort.
No, you get C++ because you're Bjarne Stroustrup and trying to get people to sign on to the C++ bandwagon (A better C! Where have I heard that before?) and so you add every feature they ask for in the hope that that will drive adoption. And you call it object oriented (even if it really isn't) because that's the buzz-word du-jour. Just like Async today.
I’ll accept that, too. But from the outside it seems like they do that by finding bizarre, previously invalid syntax and making that the way to spell the new feature. `foo[]#>££{23}` to say “use implicit parallelism on big endian machines with non-power-of-2 word sizes”? Let’s do it!
Yes. At the very least, features should carry a lot of weight and be orthogonal to other features. When I was young I used to pride myself on knowing all the ins and outs of modern C++, but over time I realized that needing to be a “language lawyer” was a design shortcoming.
All that being said I’ve never seen the functionality of Rust’s borrow checker reduced to a simpler set of orthogonal features and it’s not clear that’s even possible.
If you want or have to build a large program, something must be large, be it the language, its standard library, third party code, or code you write.
I think it’s best if it is one of the first two, as that makes it easier to add third party code to your code, and will require less effort to bring newcomers up to speed w.r.t. the code. As an example, take strings. C doesn’t really have them as a basic type, so third party libraries all invent their own, requiring those using them to add glue code.
That’s why standard libraries and, to a lesser extent, languages, tend to grow.
Ideally that’s with backwards compatibility, but there’s a tension between moving fast and not making mistakes, so sometimes, errors are made, and APIs ‘have’ to be deprecated or removed.
It's a balance thing. You can't make a language without any features, but you can be too small ('Brainfuck') and you can definitely be too large ('C++'). There is a happy medium in there somewhere and the lack of a string type was perceived as a major shortcoming of C, but then again, if you realize that they didn't even have structs in the predecessor to C (even though plenty of languages at the time did have similar constructs) they got enough of it right that it ended up taking off.
C and personal computing hit their stride at roughly the same time; your choices were (if you didn't feel like spending a fortune) assembly, C, Pascal and BASIC for most systems that mere mortals could afford. BASIC was terribly slow, Pascal and C a good match, and assembler only for those with absolutely iron discipline. Which of the two won out (C or Pascal) was a toss-up; Pascal had its own quirks, and it was mostly a matter of which of the two won in terms of critical mass. Some people still swear by Pascal (and usually that makes them Delphi programmers, which will be around until the end, because the code for the heat-death of the universe was written in it).
For me it was Mark Williams C that clinched it, excellent documentation, good UNIX (and later Posix) compatibility and whatever I wrote on the ST could usually be easily ported to the PC. And once that critical mass took over there was really no looking back, it was C or bust. But mistakes were made, and we're paying the price for that in many ways. Ironically, C enabled the internet to come into existence and the internet then exposed mercilessly all of the inherent flaws in C.
I suspect the problem is that every feature makes it possible for an entire class of algorithms to be implemented much more efficiently and/or clearly with a small extension to the language.
Many people encounter these algorithms after many other people have written large libraries and codebases. It’s much easier to slightly extend the language than start over or (if possible) implement the algorithm in an ugly way that uses existing features. But enough extensions (and glue to handle when they overlap) and even a language which was initially designed to be simple, is no longer.
e.g., Go used to be much simpler. But in particular, lack of generics kept coming up as a pain point in many projects. Now Go has generics, but arguably isn’t simple anymore.
Haskell's user-facing language gets compiled down to Haskell "core", which is what the language actually can do. So any new language feature gets a sanity check when that first transformation is written.
George Orwell showed us that small languages constrain our thinking.
A small language but with the ability to extend it (like Lisp) is probably the sweet spot, but lol look at what you have actually achieved - your own dialect that you have to reinvent for each project - also which other people have had to reinvent time after time.
Let languages and thought be large, but only use what is needed.
I can take anything I wrote in C since ~1982 or so and throw it at a modern C compiler and it will probably work, I may have to set some flags but that's about it. I won't have to hunt up a compiler from that era, so the codebase remains unchanged, which increases the chances that I'm not going to introduce new bugs (though the old ones will likely remain).
If I try the same with a python project that I wrote less than five years ago I'm very, very lucky if I don't end up with a broken system by the time all of the conflicts are resolved. For a while we had Anaconda which solved all of the pain points but it too seems to suffer from dependency hell now.
George Orwell was a writer of English books, not a programmer and whatever he showed us he definitely did not show us that small programming languages constrain our thinking. That's just a very strange link to make, programming languages are not easily compared with the languages that humans use.
What you could say is that a programming language's 'expressivity' is a major factor in how efficiently ideas can be expressed in that language. If you take that to an extreme (APL) you end up with executable line noise. If you take it to the other extreme you end up with some of the worst of Java (widget factory factories). There are a lot of good choices to be found in the middle.
Rust does. You have editions to do breaking changes at the surface level. But that is per crate (library) and you can mix and match crates with different editions freely.
They do reserve the right to make breaking changes for security fixes, soundness fixes and inference changes (i.e. you may need to add an explicit type that was previously inferred but is now ambiguous). These are quite rare and usually quite small.
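As a hypothetical illustration of how editions absorb surface-level breakage: `async` became a keyword in the 2018 edition, but a crate that stays on `edition = "2015"` in its Cargo.toml keeps compiling with `async` as an ordinary identifier, and when it migrates it can keep the name via a raw identifier.

```rust
// In an edition-2015 crate this would still be legal as a plain identifier:
//     fn async(x: u32) -> u32 { x + 1 }
//
// After moving the crate to edition 2018 or later, the same function keeps its
// name by being spelled as a raw identifier; the crate can still be used
// alongside crates on other editions.
fn r#async(x: u32) -> u32 {
    x + 1
}

fn main() {
    println!("{}", r#async(41)); // prints 42
}
```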
I'd normally agree that what you say is good enough in practice, but I question whether it meets GP's "absolute non-negotiable" standards. That specific wording is the reason I asked the question in the first place; it seemed to me that there was some standard that apparently wasn't being met and I was wondering where exactly the bar was.
Ada does. It has been through 5 editions so far and backwards compatibility is always maintained except for some small things that are documented and usually easy to update.
I'd normally be inclined to agree that minor things are probably good enough, but "absolute non-negotiable" is a rather strong wording, and I think small things technically violate a facial reading, at least.
On the other hand, I did find what I think are the relevant docs [0] while looking more into things, so I got to learn something!
> except for some small things that are documented
I can't think of any established language that doesn't fit that exact criteria.
The last major language breakage I'm aware of was either the .Net 2 to 3 or Python 2 to 3 changes (not sure which came first). Otherwise, pretty much every language that makes a break will make it in a small fashion that's well documented.
Java rules here. You can take any Java 1.0 (1995) codebase and compile it as-is on a recent JDK. Moreover, you can also use any ancient compiled Java library and link it to modern Java app. Java source and bytecode backward compatibility is fantastic.
Java is very good here, but (and it's not totally Java's fault) it did expose internal APIs to the userbase, which has caused a decent amount of heartburn. If your old codebase has a route to `sun.misc.Unsafe` then you'll have more of a headache making an upgrade.
Anyone that's been around for a while and dealt with the 8->9 transition has been bit here. 11->17 wasn't without a few hiccups. 17->21 and 21->25 have been uneventful.
Can confirm; my team spent the past 9 months upgrading an application JDK 8 -> 17, and there were breaking changes even after we got it compiling + running
Java has had some breaking changes (e.g., [0, 1]), though in practice I have to say my experience tends to agree and I've been fortunate enough to never run into issues.
It's probably borderline due to the opt-in mechanism, but Go did make a technically backwards-incompatible change to how its for loops work in 1.22 [0].
PHP has had breaking changes [1].
Ruby has had breaking changes [2] (at the very least under "Compatibility issues")
Not entirely sure whether this counts, but ECMAScript has had breaking changes [3].
The interesting thing about Go's loopvar change is that nobody was able to demonstrate any real-world code that it broke (*1), while several examples were found of real-world code (often tests) that it fixed (*2). Nevertheless, they gated it behind go.mod specifying a go version >= 1.22, which I personally think is overly conservative.
*1: A great many examples of synthetic code were contrived to argue against the change, but none of them ever corresponded to Go code anyone would actually write organically, and an extensive period of investigation turned up nothing
*2: As in, the original behavior of the code was actually incorrect, but this wasn't discovered until after the loopvar change caused e.g. some tests to fail, prompting manual review of the relevant code; as a tangent, this raises the question of how often tests just conform to the code rather than the other way around
You certainly won't find me arguing against that change, and the conservatism is why I called it borderline. The only reason I bring it up is because of the "absolute non-negotiable" bit, which I took to probably indicate a very exacting standard lest it include most widespread languages anyways.
Yes, I think it's also a good example of how "absolute" backwards compatibility is not necessarily a good thing. Not only was the old loopvar behavior probably the biggest noob trap in Go (*), it turned out not to be what anyone writing Go code in the wild actually wanted, even people experienced with the language. Everyone seems to have: a) assumed it always worked the way it does now, b) written code that wasn't sensitive to it in the first place, or c) worked around it but never benefitted from it.
*: strongest competitor for "biggest noob trap" IMO is using defer in a loop/thinking defer is block scoped
Strongly agree there. IMO breaking backwards compatibility is a tradeoff like any other, and the flexibility non-hardline stances give you is handy for real-world situations.
I'd normally agree with you in practice, but since "close enough" seems likely to cover most mainstream languages in use today I figured "absolute non-negotiable" probably was intended to mean a stricter standard.
C# for instance isn't such a "small language", it has grown, but code from older versions, that does not use the newer features will almost always compile and work as before.
The thing is that "most of them" seems incongruous with a demand for "absolute non-negotiable" backwards compatibility. If not for that particular wording I probably wouldn't have said anything.
> Note that I think parent may have been rhetorically asking, or asking with heavy sarcasm.
Probably neither. It is what you ask when you read the guile manual. Scheme documentation in general is surprisingly bad, considering how simple it is compared to a complex language like Rust for instance. Books like SICP are good for the academically inclined, but are too verbose for anyone learning scheme for a specific purpose like scripting.
> The crux of the matter is that even if one values upgradability and repairability, neither is a frequent need for practically anybody.
Judging reparability and serviceability the same way as you do with other features is absurd, to put it charitably! It is one feature that you rarely use, but brings you huge value when you do use it. You don't realize how much savings we used to extract by progressively upgrading the same desktop PC for two to three generations instead of throwing away the whole PC and buying a new one each time. This dismissal of the feature is bizarrely shortsighted.
> The reality may be that wanting a laptop that's well rounded and competent across the board AND repairable+upgradable is akin to having your cake and eating it too, but that doesn't stop people from wanting it anyway.
I talked about this just two days ago. Unlike how you project it, that ideal is entirely feasible if there was enough investment and a large enough market. Instead, OEMs inflict the opposite on the consumers who take it all in without pushing back. These companies choose and spread suboptimal designs that suit their interests and then insist that it is the only viable way forward. It's absurd that consumers also repeat that falsehood.
> You don't realize how much savings we used to extract by progressively upgrading the same desktop PC for two to three generations instead of throwing away the whole PC and buying a new one each time. This dismissal of the feature is bizarrely shortsighted.
The main things I keep long term are the drives and power supply, and those can be kept on most laptops too.
In the medium term I get a lot of use out of separately upgrading CPU and GPU, but most Frameworks can't do that. The 16 gets half a point in that category because the options are still very limited.
A Framework lets me keep the same screen which is cool. And it lets me keep the same chassis which is not as beneficial if it's not a particularly good chassis.
If I'm generous, the extra flexibility in a Framework would save me $200 every 5-8 years. Which leaves me in the hole, further if I'm less generous.
I hope they reach a scale where they can price things better, and I'm willing to pay some extra for what they do, but not as much as they currently charge. Looking at Framework's site I can get the same specs as the author for $1800. Lenovo offers a model with a worse screen but otherwise the same specs for $600. Gigabyte has a fully matching model plus bonus GPU for $1150, and for half of November it was on sale for $1000. And if you want an RTX 5070 then Framework is $2500 and Gigabyte is $1350.
> If I'm generous, the extra flexibility in a Framework would save me $200 every 5-8 years. Which leaves me in the hole, further if I'm less generous.
I think this statement heavily underestimates the value of a repairable/user-serviceable computer.
The value proposition of user serviceable equipment is the same as the value proposition for open source for software. It gives you the FREEDOM and the ABILITY to make the changes you want to make IF you want to make them.
But as it is with open source software, most users are never going to be directly editing the code for postgres, Linux, or any of the other 1000s of open source software that they use on a daily basis - but IF they choose to do so, they can.
> The value proposition of user serviceable equipment is the same as the value proposition for open source for software. It gives you the FREEDOM and the ABILITY to make the changes you want to make IF you want to make them.
This is true to an extent, but I think that's greatly overselling it when phrased that way.
90% of my customization is either during the initial purchase, or it's a RAM/drive upgrade, and I don't need Framework for that. It's only a small portion of customization I lose out on. And in some ways I actually have more ability to customize outside of Framework, for example they only offer two GPU models.
That is my point. Most users - such as yourself, will not make use of the freedom a Framework device provides but there are others who will directly benefit from it. And that freedom is essential.
To use a vehicle analogy - it is the same as getting a car which has parts you can opt to change/replace. Most people may not even be able to do an oil change but this "feature" is nonetheless a VERY important one to have.
My point was that even for people that benefit, the benefit is greatly reduced.
Let's dig in to why it's useful to be able to replace parts on a car. If we analogize the extra flexibility of the Framework to being able to replace all these parts in the engine bay, that sounds really cool, until you realize there are no third party options for the core components and Framework only makes a couple versions. It's still useful in a few circumstances, but it's not this massive unlock of freedom. You can't have a fully customized engine, and the best way to get an engine tailored to your tastes is to abandon the weak after-the-fact customization and go find something that you like from the start.
Even to a user that really values freedom, Framework doesn't properly deliver at this point in time. The Framework freedom is so restricted that in most ways you get more freedom by considering all the non-soldered-RAM laptops from other brands as valid options too.
Edit: And I don't mean this as an indictment of their small company, they're trying, but right now the impact is limited in many ways.
> A Framework lets me keep the same screen which is cool
Probably the last thing I'd want to keep. Screen technology still moves forward at a decent pace. Screens are disposable, backlights fade over time, pixels get stuck, screen burn-in.
The only universal thing I can think of about machines I've upgraded over the years (not laptops, of course) are cases, power supplies, CPU coolers, and as long as the form factor hasn't changed/there hasn't been significant progress, HDD.
Everything else goes with the system. New CPU meant new socket, which also meant new RAM. Need to get rid of that old video card, of course.
I think a major clarification is in order here. I'm not talking about just Framework here. If anything, the problems with Framework are the direct result of the absolutely stupid industry-wide product design culture and market tastes. You can see all the major open-ish hardware designers grappling with similar issues - Pinephone, System76, Librem... I will explain later why that is. But here is the point - we need a major shift in both the product design culture and the (non-existent) consumer culture.
Back in the days of modular desktop PCs (which are still alive, but barely holding on and slowly fading away), about a couple of decades ago, there would have been immediate and sharp backlash if any hardware manufacturer pulled the tricks that they do today - soldered-on RAM modules, thermoplastic glue instead of screws, riveted keyboards, irreplaceable ICs that are paired using crypto, permanently locked firmware, etc. That would have shaken their sales enough for them to care. Right now, these 'features' lead to short-lived hardware (because any broken part means everything has to be thrown out), landfills full of e-waste, frequent new purchases, etc. It does nothing good for anyone or the ecosystem, except filling the pockets of trillion-dollar MNCs.
The advantage of such consumer pressure is that you'd have a vibrant spare parts market with many more choices. Many people here are complaining about how poor the spare parts market is. Had consumer choice leaned more towards modularity and reusability, that problem wouldn't even have arisen. It wouldn't be just Framework manufacturing such things. In fact, you wouldn't even have to settle on a brand name for the laptop as a whole.

Another point is that you're still thinking about a laptop as a unit, instead of as a collection of parts. That would change if the industry spent more resources and effort on it. It doesn't have to be bulky as you imagine, either. Hardware interfaces, housings and fasteners would have evolved into more compact, universal and standard forms, much like how a dozen different ports were replaced by USB. Right now, you're thinking about how to transplant parts from your old laptop to the new one. Instead, you could swap parts of a laptop one at a time. Currently, the CPU and GPU cannot be swapped like in a desktop PC; you have to make do with replacing the whole motherboard. But has anybody demanded replaceable CPUs and GPUs for these? Why are those precluded?
Now, about why Framework, System76, Librem, Pinephone, etc. have problems making such devices: the choices they get are abysmally small. The OEMs and component manufacturers (mostly from China) have created a supply-chain system built on huge-scale exclusive contracts. It's simply too hard to get a fully compatible chipset without signing an NDA that effectively ruins your chances of making open or modular hardware. Those companies are doing an impressive job at making this hardware with what they have.
You may want to dismiss me as too idealistic, dreaming about what could be instead of dealing with what is. But let me point out why we never catch a break. The tech community takes an obstinate and imprudent 'all or nothing' approach to everything: 'Framework is not good because it's too costly, the modules are not good enough, the GPU cannot be replaced, yada, yada.' Nobody is willing to settle for anything less than perfect. But you need to realize that you are not in the bargaining position here - you don't hold the cards. Your choices are dictated by someone else who is more resourceful and patient, making short-term compromises and playing the long game of shaping the market and reaping insane profits at the end. The only way to get your way is for everyone to unite and show even more resolve and patience in demanding what you want. That means putting up with some inconveniences for now. But everyone will be rewarded in the end with the perfection you demand.
>about a couple of decades ago, there would have been immediate and sharp backlash if any hardware manufacturer pulled the tricks that they do today - soldered-on RAM modules, thermoplastic glue instead of screws, riveted keyboards, irreplaceable ICs...
That's when this trend started, with Apple's MacBook Pro leading the way, winding up as one of the best-selling consumer laptop brands by targeting incoming college freshmen and their grandparents, focusing on cosmetic appeal over dollar cost for performance.
Most buyers don't even know what CPU model their laptop contains, let alone understand the difference between faster or slower processors from different generations. It will always be a tiny segment of the market that appreciates the value of Framework's features.
PCs are the odd ones, all other 8 and 16 bit home computers were vertically integrated, most expansions were done via external buses connected into one of the sides, usually the back or right side.
With the race for thin margins at any cost, if anything thanks to Apple, OEMs realised that going back to Spectrum, C64, Amiga, Atari ST style hardware designs paid off in their bank accounts.
My point was that soldered RAM and lack of upgradeable components didn't inspire much of a backlash back then. It led to Apple dominating the higher end of the consumer laptop market.
Oh yeah I didn't know that one. I do know Logitech has some ultra-thin ones too though. Very good keyboards too. They'd do nice in a laptop as well.
I'd very gladly sacrifice thinness for a decent keyboard. The Thinkpads had an OK compromise for a while but since the Thinkpad T14s gen2 or so they have been horrible as well. My old T490s was still serviceable.
One space that I don't think I've seen explored is building a laptop around a tiny, ultra-low-power passively cooled SoC board that can easily fit beside the keyboard instead of under it in a 12"-16" chassis and saves space that'd otherwise need to be dedicated to cooling. That'd buy a substantial amount of Z-budget for a quality keyboard without blowing up chassis thickness.
Naturally this laptop wouldn't be suited for some types of work due to lack of horsepower, but there's always tradeoffs somewhere.
We're kinda there already. Most recent laptops I've seen have a tiny motherboard not even taking up the whole width of the device. Under the keyboard there's usually the battery.
Don't forget a significant part of the weight has to be towards the front edge so you can tilt the screen back without flipping the whole laptop. Some of my cheaper atom based laptops (with tiny motherboard and batteries) even have a metal bar in there for that purpose.
Right, but my idea was to do something like shove the mainboard up into the bezel above the keyboard and battery into the palm rest, with nothing sitting under the keyboard except maybe ribbon cables. That’d get you a laptop with a thickness of under an inch that still has a keyboard that’s not compromised and keeps weight shifted to the front. It’d simplify repairs to some degree too since there’d be very little stacking.
You are just repeating the same unpopular, debunked arguments that the industry makes out of thin air. Why does anybody have to know the internals of a system to get the advantages of repairability and serviceability? What were independent service personnel for? Did everyone know how to open and repair watches, cars, refrigerators, etc.? Did that stop them from getting the benefit?
I always enjoy how Thinkpad bros have been badmouthing MacBooks for two decades, when those have had the best battery life, screen, hinge, case, bluetooth, fan noise and other amenities during all of that time. They were the first to have WiFi.
Apple figured out pretty soon that a laptop doesn't need to be a dragster or M1 Abrams, it needs to be a Volvo.
If you're gaining an advantage by changing RAM from sockets to soldered joints, it's probably time to change the system design altogether. It's better to put the DRAM on the same IC/SoC as the processor - on a dedicated die if necessary. Any additional memory can then be added as socketed RAM modules. Those will certainly be slower, but they can be treated as another memory layer, kind of like Optane memory (without the persistence) or NUMA. You'd still get a significant speedup because a portion of the DRAM is colocated with the CPU.
This also adds to the core philosophy that I'm trying to push. Modularity and serviceability doesn't necessarily mean sacrificing performance, compactness or security. That's a myth that's too prevalent in the industry.
There is a new system design; it's called LPCAMM. And Framework would have used it in the Desktop, but those CPUs have some flaw that makes them incompatible with full-speed LPCAMM.
Moving the memory even closer doesn't have all that much advantage. And having both super-close RAM and sockets is a waste of die space on all those I/O channels. One or the other can fit all the needs of any particular CPU.
Framework has stated that it asked AMD if there were any way to make the RAM on Ryzen AI Max APUs (like used in the Framework Desktop) socketed, and AMD said no due to the stability hit that’d entail — the physical distance from the CPU that’d be required with RAM sockets reduces signal integrity too much for it to function.
Which is weird. The entire point of [LP]CAMM[2] is to be able to make that work.
The Framework Desktop clocks the memory at 8000 MT/s. That's well within the limits of the interface. Something is flawed or omitted in those CPUs if they can't handle it.
Lamenting market taste and the resulting mass market designs is basically yelling at clouds.
Simple fact is that most people have different priorities than the “make everything upgradable” crowd would like. That’s not going to change. Why would 90% of the market “unite” with 10% who want a totally different set of tradeoffs?
It’s like asking that all car buyers unite and demand manual transmissions in every car. I love manual cars, but I recognize most people do not want that for most of their driving. So why would the majority demand this feature that they don’t actually want, and which would not be a better experience for most?
I was expecting this reply here. But it's still the same old excuse to do nothing. It's as if we deserve nothing better than what the companies impose upon us. That's such a defeatist stance.
> Different companies "impose" different tradeoffs upon us.
The "different tradeoffs" those companies offer us are a lie. There are other tradeoffs they won't ever explore. But I won't explain it anymore because I did that practically in every single comment of mine in this thread. Just ignoring it and repeating this trope is hardly a counterargument.
> Pick what you like, but expect to pay a premium for a less popular choice.
The argument about choices is also a lie. They don't exist because the market is a heavily captured and manipulated one. You might as well wait for Santa Claus to deliver it instead. This is again something that's repeatedly ignored. We're just arguing in cycles here.
There are a lot of missing choices in the modular, serviceable and repairable market - which is why you see so many little complaints in this thread about a company that's sincerely attempting to offer and improve modular options. It's not that there's no demand for it. But the majority of consumers de-incentivize such products out of the market by following the hype and choosing the harmful options.
At least, the majority of the consumers can be forgiven for their ignorance about those tradeoffs. But that's something that the knowledgeable and expert population can solve. The others respect their opinion. But instead of pushing for the common good, they consistently show apathy. It really isn't that big of a deal. The experts have to be more honest and vocal about their own specialities, and the situation will gradually improve. People have rallied and achieved much harder goals.
But the really frustrating aspect is that some people actively sabotage the commons. At this point, I don't believe that the tech influencers are being honest about the interests they serve. And equally bad are the misguided defeatist arguments raised against advocacy for the commons. I really don't understand the motivation behind such excessively cynical takes.
And yet, you completely ignore the possibility that someone could value portability, lightness or even looks of the device far above any points you hold very dear.
I get it, all that you say I would agree on regarding my stationary hardware.
On the go, I have very different demands. And the hardware sellers are not stupid, they know what sells.
> It’s like asking that all car buyers unite and demand manual transmissions in every car. I love manual cars, but I recognize most people do not want that for most of their driving. So why would the majority demand this feature that they don’t actually want, and which would not be a better experience for most?
Um that's like the status quo in Europe lol. We all drive manual here. it's not that unlikely. Automatics are the exception here (and you must learn to drive manual otherwise you get a restricted license)
> I talked about this just two days ago. Unlike how you project it, that ideal is entirely feasible if there was enough investment and a large enough market. Instead, OEMs inflict the opposite on the consumers who take it all in without pushing back. These companies choose and spread suboptimal designs that suit their interests and then insist that it is the only viable way forward. It's absurd that consumers also repeat that falsehood.
Talk is cheap. Reality is a better indicator of what is and isn’t feasible, and it’s not like there haven’t been many attempts towards that ideal, but for whatever reason, Apple’s model is the desirable one, for most.
I've seen it from Netflix, Steam, and several others. People simply love having all their eggs in one basket, and will stubbornly support it long past the state it starts to exploit them. They support security over freedom every time, consistently.
It's a bit crude, but it's also why I'm not surprised AI is catching on so quickly. People will happily outsource their ability to "think" if the product is convincing enough to them. We already spent the last decade or two trying to maximize the dopamine hits from social media. Now there's a tech that can (pretend to) understand your individualized needs? Ready to answer your beck and call and never make you feel bad?
Not as cool as the VR pod dystopia, but I guess I overestimated how much stimulation humanity needed to reject itself.
> People simply love having all their eggs in one basket
It's more accurate to say that people don't like having twelve different interfaces that all do the same thing.
The proper way to do this is, of course, to have a single interface (i.e. a user agent) that interfaces with multiple services using a standard protocol. But every proprietary service wants you to use their app, and that's the thing people hate.
But the services are being dumb, because everyone except for the largest incumbent is better off to give the people what they want. The one that wins is the one with the largest network effect, which means you're either the biggest already or you're better off to implement a standard along with everyone else who isn't the biggest so that in combination you have the biggest network, since otherwise you won't and then you lose.
Yeah, that's a more generous way to put it. People are fine with the illusion of one basket. That's pretty much how any large website works.
The ideal would be for users to choose their front end and have backends hook into it via protocols. Aka RSS feeds or Email (to some extent). But the allure of being vertically integrated is too great, and users will rarely question it.
>But the services are being dumb, because everyone except for the largest incumbent is better off to give the people what they want.
Yup, agreed. At this point, it's really an issue regulation can fix. Before it's too late.
Even more unimaginative dismissals are not what I wish to debate. I have already explained why this argument is disingenuous at best. Apple's model isn't the best. It just appears so because these companies never put significant effort into better alternatives and the consumers never demanded it. I keep trying to point this out - this is a repeated misdirection tactic employed by these companies and their fans.
I don't think this was understood charitably. The point of the parent is that in practice, when it comes time to update one part, you'll also want to update all or most of the others. So, in practice, you will not see any of these savings.
The potential savings may be significant, but for most people, it may be the case that the actual savings are unlikely. A modular, upgradable laptop may be a niche product for people who want to upgrade each part more frequently, not less.
> I don't think this was understood charitably. The point of the parent is that in practice, when it comes time to update one part, you'll also want to update all or most of the others. So, in practice, you will not see any of these savings.
It's frustrating to have to repeat the same point again and again. That is not correct at all. I have done exactly that in practice. What you call "in practice" refers only to the deliberately crippled and limited options available in the market today.
> The potential savings may be significant, but for most people, it may be the case that the actual savings are unlikely. A modular, upgradable laptop may be a niche product for people who want to upgrade each part more frequently, not less.
Completely disagree. That's not what's seen in practice.
> You don't realize how much savings we used to extract by progressively upgrading the same desktop PC for two to three generations instead of throwing away the whole PC and buying a new one each time
Do you actually realise any savings doing that? Pretty sure I never have.
Typically by the time I get around to upgrading, they've changed both the CPU socket and the RAM, so I need a whole new motherboard. And I certainly don't trust a 5-year-old PSU to run a higher-watt load at that point. So most of the time all I'm reusing is the case and maybe a couple of auxiliary SSDs (which aren't a major part of the cost)...
Aren't a major part of the cost yet :) Actually, upgradeable components were a priority back when motherboards and CPUs were too expensive to upgrade; it was RAM and SSDs that got swapped out...
Soon (actually, already now) it's the inverse: RAM, SSDs, high-speed networking, consumer GPUs, and anything else that needs a modest amount of DRAM.
AMD sockets last nearly a decade, and power supplies come with up to 13-year (or longer) warranties. It's just that it can be difficult to stay the course for the sheer amount of time it would take for you to realize those savings.
> power supplies come with up to 13-year (or longer) warranties
Unfortunately, those warranties don't tend to cover the rest of your components. If the PSU happens to take out a motherboard or GPU as it dies, you're up the proverbial creek.
Having had a couple of older PSUs die spectacularly, I'm not risking re-using a ~$100 component on the off-chance it fries ~$500 of brand-new motherboard/GPU/etc. post-upgrade.
Why do people think newer components are more reliable? Is it the same thinking that says newer cars are more reliable? Newer computers? (The answer to all is no.)
Clean the dust out of your PC once a year. It'll last longer than it has any right to.
> Why do people think newer components are more reliable? Is it the same thinking that says newer cars are more reliable?
I'm not making any statement about newer models being more reliable, I'm saying that electronic components age, and hence the risk of failure goes up over time.
If you buy the exact same model of power supply, but one that is manufactured 5 years later, it will (statistically) be more reliable than the unit that's already been in use for 5 years.
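To put rough numbers on that aging intuition, here's a toy Weibull wear-out model (the shape and scale parameters are illustrative assumptions, not real PSU failure data):

    import math

    # Weibull survival with shape > 1, i.e. the failure rate rises with age.
    # Parameters below are assumed for illustration only.
    shape, scale_years = 2.0, 15.0
    survival = lambda t: math.exp(-((t / scale_years) ** shape))

    # Probability of getting through the next 5 years...
    new_unit  = survival(5)                  # ...starting from age 0
    aged_unit = survival(10) / survival(5)   # ...given 5 years already on the clock
    print(f"new: {new_unit:.0%}, five-year-old: {aged_unit:.0%}")

Under those made-up parameters the factory-fresh unit clears the next five years about 89% of the time versus roughly 72% for the already-aged one: same model, different remaining odds.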
Isn’t “reuse the PSU” kind of a tempting trap? I thought it was: a cheap part that can take down the rest of your expensive system. I thought the advice was to get a new one with each build…
A quality PSU can often last 10 years and multiple builds. Quality in this case just means "has things like over-voltage protection, proper wiring included, decent caps, and decent voltage regulation", not "was really expensive". E.g. the $140 850 W Seasonic Focus tier is quality in this regard, the $80 no-name 850 W PSU is what people warn about, and the $400 Seasonic Prime Titanium-rated PSU is mostly for those scrutinizing VRM designs or wattage limits on the cables to the GPU for their overclocking goals.
It's common to upgrade your PSU anyway, though, since part wattages only seem to go up over the years (particularly on the +12 V rails), or because you want to cycle out the old system completely for reuse/resale. Generic advice (since most people buy cheap no-name PSUs and upgrade rarely) might be to replace it just to be on the safe side in every situation... but if you know you have a quality PSU, or you like to upgrade your build every other CPU generation, then swapping out the PSU every time is likely a waste.
I've been on the same PSU for, I think, 13 years now; it's currently running my Ryzen 7 3700X and RTX 4070 desktop. I suppose if it's not a great-quality PSU, or it's already suspected of causing issues, then replacing it is a good idea.
If the PSU is that crappy, then yes. But these things are supposed to come with over-voltage protection, current limiters, resettable fuses, etc. at the output. Even bad ones are not supposed to cascade their failure to the rest of the system.
But let's think of a better option. What if all spare parts came with an expiry date and a service schedule? On top of giving us a baseline for retiring the part, the manufacturer would also be forced to divulge an indirect quality score (useful lifetime) and compete with others on it. If this sounds too fantastic, we sort of had this in operation half a century ago. I don't think a lot of people remember that era.
Oh neat, I was not aware. In the 70’s then, computer parts came with an expiration date? I wonder why they stopped, was it a tradition inherited from car, radio, or appliance parts, or something, where the idea of a wear-part is (or at least was) somewhat more developed?
> Oh right he's not motivated by money he just wants to make the world better, right.
No. The reason is the same as why his Optimus robots will take over all menial jobs from us, work hard to earn money for us, eliminate poverty forever and leave us to do whatever we want.
I'm puzzled by your top level comment getting downvoted. If people can't recognize the motives or they're that much into hero worship, I say that we're in for a long winter.
I get puzzled by HN all the time. I never in my life could've imagined that the pull of the "successful billionaire genius" could be so strong with so many people.