For who? Regular people are quite famously not clamouring for more AI features in software. A Siri that is not so stupendously dumb would be nice, but I doubt it would even be a consideration for the vast majority of people choosing a phone.
I will never as long as I live understand the argument that AI development is more fun. If you want to argue that you’re more capable or whatever, fine. I disagree but I don’t have any data to disprove you.
But saying that AI development is more fun because you don’t have to “wrestle the computer” is, to me, the same as saying you’re really into painting but you’re not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.
> I will never as long as I live understand the argument that AI development is more fun.
Some people find software architecture and systems thinking more fun than coding. Some people find conducting more fun than playing an instrument. It's not too mysterious.
I am one of those, but that's why I went into the ops side of things and not dev, although the two sides have been merging for a while now and even though I deal with infrastructure, I do so by writing code.
I don't mind ops code though. I dislike building software as in products, or user-facing apps but I don't mind glue code and scripting/automation.
Don't ask me to do leetcode though, I'll fail and hate the experience the entire time.
That is accurate for me. I have never enjoyed coding puzzles or Advent of Code. My version of that is diving into systems and software when they break. That is fun... as long as my job or income is not on the line.
AI lets you pick the parts you want to focus on and streamline the parts you don't. I get zero joy out of wrestling build tools or figuring out deploy scripts to get what I've built out onto a server. In the past side projects would stall out because the few hours per week I had would get consumed by all of the side stuff. Now I take care of that using AI and focus on the parts I want to write.
Also just... boring code. Like, I'm probably more anti-AI than most, but even I'll acknowledge it's nice to just be like "hey, this array of objects I have, I need it sorted by this property" and just have it work. Or how to load strings from exotic character encodings. Or dozens of other bitchy little issues that drag software dev speed down, don't help me "grow" as a developer, and/or are not interesting to solve, full stop.
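For concreteness, here is a minimal sketch of the kind of "boring code" chores that comment describes, in Python for illustration (the record names and the choice of Shift JIS as the "exotic" encoding are made up for the example):

```python
# Two classic "boring code" chores: sort a list of objects by a
# property, and round-trip a string through a legacy character encoding.

records = [
    {"name": "beta", "size": 3},
    {"name": "alpha", "size": 7},
    {"name": "gamma", "size": 1},
]

# Sort the list of dicts by the "size" property.
by_size = sorted(records, key=lambda r: r["size"])
print([r["name"] for r in by_size])  # ['gamma', 'beta', 'alpha']

# Decode bytes that arrived in a legacy encoding (here: Shift JIS).
raw = "こんにちは".encode("shift_jis")
print(raw.decode("shift_jis"))  # こんにちは
```

Trivial to write, but exactly the kind of thing where looking up the codec name or the `key=` idiom is friction rather than growth.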
I love this job but I can absolutely get people saying that AI helps them not "fight" the computer.
I've always believed that the dozens of 'bitchy little issues' are the means to grow as a developer.
Once you've done it, you'll hopefully never have to do it again (or at worst it will be a derivative). Over time you'll have a collection of 'how to do stuff'.
I think this is the path to growth. Letting an LLM do it for you is equivalent to it solving a hard leetcode problem for you. You're not really taxing your brain.
My point is that it's tempting, even irresistible (based on other comments in this thread), to move from basic attribute sorting, to basic CRUD, SQL queries, CSS/Tailwind, and TypeScript error resolution, and then to using it for Dijkstra, because why not? It's so nice.
Then we're just puppetmasters pulling the strings (which some think is the way the industry is going).
> I get zero joy out of wrestling build tools or figuring out deploy scripts to get what I've built out onto a server.
And for me (and other ops folks here I'd presume), that is the fun part. Sad, from my career perspective, that it's getting farmed out to AI, but I am glad it helps you with your side projects.
Yeah, that is one of my main uses for AI: getting the build stuff and scripts out of the way so that I can focus on the application code. That and brainstorming.
In both cases, it works because I can mostly detect when the output is bullshit. I'm just a little bit scared, though, that it will stop working if I rely too much on it, because I might lose the brain muscles I need to detect said bullshit.
I'm super interested to know how juniors get along. I have dealt with build systems for decades, and half the time it's just using Google or Stack Overflow to get past something quickly, or manually troubleshooting deps. Now I automate that entirely. And for code, I know what's good or not; I check its output and have it redo anything that doesn't pass my known standards. That makes using it so much easier. The article is so on point.
Seems some people I know who really like AI aren't particularly good with their editors. Lots of AI zealots use the "learn your tools" when they are very slow with their editors. I'm sure that's not true across the board, but the sentiment that it's not worth it to get really advanced with your editor has been pretty prevalent for a very long time.
I don't care if you use AI but leave me alone. I'm plenty fast without it and enjoy the process this author callously calls "wrestling with computers."
Of course this isn't going to help with the whole "making me fast at things I don't know" but that's another can of worms.
Agreed but it's not even that. I worked with someone who was insanely fast with RubyMine. All the shortcuts were second-nature and shit just flew onto the screen. So generally, the argument would be to be really good with whatever editor you have. It also goes outside of that, like snippets are still very viable for lots of boilerplate.
At the same time, one of the best developers I worked with was a two-finger typist who had to look at the keyboard. But again, I don't care if you're going to use AI (well, that's not entirely true, but I'm not going to get into it); it's this article's tone of "you should learn it" that I take issue with.
Isn't that because the "best in class" LLM-enabled IDE or dev environment keeps changing every few months, or even more frequently if you're price conscious and want to use the most powerful coding models?
~20 years working in tech for me, mostly big companies, and I’ve never been more miserable. I also can’t stop myself from using Claude Code and the like.
I think it’s a bit like a gambling addiction. I’m riding high the few times it pays off, but most of the time it feels like it’s just on the edge of paying off (working) and surely the next prompt will push it over the edge.
It feels like this to me too, whenever I give it a try. It's like a button you can push to spend 20 minutes and have a 50/50 chance of either solving the problem with effortless magic, or painfully wasting your time and learning nothing. But it feels like we all need to try and use it anyway just in case we're going to be obsolete without it somehow.
If it doesn't work, it's an annoyance and you have to argue with it.
If it does work, it's one more case where maybe with the right MCP plumbing and/or a slightly better model you might not be needed as part of this process.
Feels a bit lose-lose.
> I also can’t stop myself from using Claude Code and the like.
just.. uninstall it? i've removed all ai tooling from both personal+work devices and highly recommend it. there's no temptation to 'quickly pull up $app just to see' if it doesn't exist
It’s become a core expectation at work now (Meta). If I’m not actively using it, then I’ll be significantly dinged in performance reviews. Honestly, it’s forced me to consider going to work elsewhere.
It does _feel_ like the value and happiness will come some versions down the road when I can actually focus on orchestration, and not just bang my head on the table. That’s the main thing that keeps me from just removing it all in personal projects.
Is this coming from the hypothesis / prior that coding agents are a net negative and those who use them really are akin to gambling addicts that are just fooling themselves?
The OP is right and I feel this a lot: when Claude pulls me into a rabbit hole, convinces me it knows where to go, and then just constantly falls flat on its face, and we waste several hours together, with a lot of all-caps prompts from me towards the end. These sessions drag on in exactly the way he mentions: "maybe it's just a prompt away from working."
But I would never delete CC, because there are plenty of other instances where it works excellently and accelerates things quite a lot. And yes, I know we see a lot of "coding agents are getting worse!" and "the METR study proves all you AI sycophants are deluding yourselves!" I understand where these come from, and agree with some of the points they raise, but honestly: my own personal perception (which I'd argue is pretty well backed up by benchmarks, and by Claude's own product data which we don't see -- I doubt they would roll out a launch without at least one A/B test) is that coding agents are getting much better, and that because coding is a verifiable domain, the "we're running out of data!" problems just aren't relevant here. The same way AlphaGo got superhuman, so will these coding agents; it's just a matter of when, and I use them today because they are already useful to me.
no, this is coming from the fact OP states they are miserable. that is unsustainable. at the end of the day the more productive setup is the one that keeps you happy and in your chair long term, as you'll produce nothing if you are burnt out.
This is definitely the feeling I get. Sometimes it works so amazingly well that I think "oh, maybe the hype was right all along; have I become the old guy yelling at Claude?" But other times it fails spectacularly, adds a really nasty bug which everyone misses for a month, or can't even find a file I can find by searching.
I am also now experimenting with my own version of opencode and I change models a lot, and it helps me learn how each model fails at different tasks, and it also helps me figure out the most cost effective model for each task. I may have spent too much time on this.
> is more fun because you don’t have to “wrestle the computer”
Indeed, of all the possible things to say!
AI "development" /is/ wrestling the computer. It is the opposite of the old-fashioned kind of development where the computer does exactly what you told it to. To get an AI to actually do what I want and nothing else is an incredibly painful, repetitive, confrontational process.
I think you're looking at it from the wrong angle. Wrestling the computer is stuff like figuring out how to recite the right incantation so Gradle will do a multi-platform fat bundle, and then migrate to the next major Gradle version. Unless you have a very specific set of kinks, tasks like these will make you want to quit your career in computers and pick up trash on the highway instead.
You very likely have some of these toil problems in your own corner of software engineering, and it can absolutely be liberating to stop having to think about the ape and the jungle when all you care about is the banana.
Now we have to figure out how to recite the right incantation to Claude to get it to recite the right incantation to Gradle, in an exchange redolent of "guess the verb" from old Adventure games. Best case if you get it wrong: nothing happens. Worst case: a grue will eat you.
Sanchez's Law of Abstraction applies. You haven't abstracted anything away, just added more shit to the pile.
There’s no incantation though. You ask Claude in whatever terms feel right to you to do the thing, say "update the gradle config to build a multi platform jar", and it does make it happen.
The hard part of software engineering, and indeed many other pursuits, is working out what it is you actually need to happen and articulating that clearly enough for another entity to follow your instructions.
Using English, with all its inherent ambiguity, to attempt to communicate with an alien (charitably) mind very much does /not/ make this task any easier if the thing you need to accomplish is of any complexity at all.
> Using English, with all its inherent ambiguity, to attempt to communicate with an alien (charitably) mind very much does /not/ make this task any easier if the thing you need to accomplish is of any complexity at all.
This just isn't the case.
English can communicate very simply a set of "if... then..." statements, and an LLM can convert them to whatever stupid config language I'm dealing with today.
I just don't care if Cloudflare's wrangler.toml uses emojis to express cases or AWS's CloudFormation requires some Shakespearean sonnet to express the dependencies in whatever the format of the day is.
Or don't get me started on trying to work out which Pulumi Google module I'm supposed to use for this service. Ergh.
I can express very clearly what I want, let an LLM translate it, then inspect the config and go "oh, that's how you do that".
It's great, and is radically easier than working through some docs written by a person who knows what they are doing and assumed you do too.
Expressing "I want to build a Java app in a single file that I can execute on Windows, MacOS, and Linux" is absolutely straightforward and non-ambiguous in English, whereas it requires really a lot of arcane wizardry in build tooling languages to achieve the desired result.
Claude will understand and carry out this fairly complex task just fine, so I doubt you have actually worked with it yet.
No, it is not. What you are doing is not too different from asking a remote dev hired on [insert freelance platform here] to make an app, then entering a cycle of testing the generated app and giving feedback. That is not wrestling the computer.
Fun is a feeling, so one can't really have an argument that something is fun or not - that would be a category error no?
You've got a good analogy there though, because many great and/or famous painters have used teams of apprentices to produce the work that bears their (the famous artist's) name.
I'm reminded also of chefs and sous-chefs, and of Harlan Mills's famous "chief surgeon plus assistants" model of software development (https://en.wikipedia.org/wiki/Chief_programmer_team). The difference in our present moment, of course, is that the "assistants" are mechanical ones.
(As for how fun this is or isn't - personally I can't tell yet. I don't enjoy the writing part as much - I'd rather write code than write prompts - but then, I also don't enjoy writing grunt code / boilerplate, and there's less of that now; and I don't enjoy having to learn the tedious details of some tech I'm not actually interested in just to get an auxiliary feature that I want, and there's orders of magnitude less of that now; and then there are the projects and programs that simply would never exist at all without this new mechanical help in the earliest stages, and that's fun. It's a lot of variables to add up, and it's all in flux. Like the French Revolution, it's too soon to tell! - https://quoteinvestigator.com/2025/04/02/early-tell/)
i like what software can do, i don't like writing it
i can try to give the benefit of the doubt to people saying they don't see improvements (and assume there's just a communication breakdown)
i've personally built three poc tools that proved my ideas didn't work, and then tossed the poc tools. i've had those ideas since i knew how to program, i just didn't have the time and energy to see them through.
I like this. I'm going to see if my boss will go for me changing my title from Solutions Architect to Solutions Commissioner. I'll insist people refer to me as "Commissioner ajcp"
Plenty of people will tell you that they enjoy solving business problems.
Well, I'll have to take their word for it that they're passionate about maximizing shareholder value by improving key performance indicators, I know I personally didn't sign up for being in meetings all day to leverage cross functional synergies with the goal of increasing user retention in sales funnels, or something along those lines.
I'm not passionate about either that or mandatory HR training videos.
> I will never as long as I live understand the argument that AI development is more fun
AI is more fun for programmers that should've gone into management instead, and prefer having to explain things in painstaking detail in text, rather than use code. In other words, AI is for people that don't like programming that much.
Why would you even automate the most fun part of this job? As a freelance consultant, I'd rather have a machine to automate the whole boring business side so I could just sit in front of my computer and write stuff with my own hands.
It’s more like saying you love painting, but you’re glad you no longer have to hike into the wilderness, crush minerals, boil oils, and invent pigments from scratch before you can put brush to canvas.
If you have to work in a language or framework with a lot of arbitrary-seeming features, ugly or opaque translation layers, or a lot of boiler-plate, then I absolutely understand the sentiment.
Programming a system at a low-level from scratch is fun. Getting CSS to look right under a bunch of edge cases - I won't judge that programmer too harshly for consulting the text machine.
This is especially true considering it's these shallow but trivia-dominated tasks which are the least fun and also which LLMs are the most effective at accomplishing.
I don't care about technology for what it is. I care about it for what it can do. If I can achieve what I want by just using plain English, I'm going to achieve what I want faster and more thoroughly enjoy the process. Just my two cents.
People have given most of the answers, but here's another one: at work, when I write code, I spend a lot of time designing it, making good interfaces, good tests, etc. It gives me joy to carefully craft it.
At home, I never had the time or will to be as thorough. Too many other things to do in life. Pre-LLMs, most of my personal scripts were just messy.
One of the nice things with LLM assisted coding is that it almost always:
1. Gives my program a nice interface/UI
2. Puts good print/log statements
3. Writes tests (although this is hit or miss).
Most of the time it does it without being asked.
And it turns out, these are motivation multipliers. When developing something, if it gives me good logs, and has a good UI, I'm more likely to spend time developing it further. Hence, coding is now more joyful.
There are some things that I think become more fun, like dealing with anything you don't actually find interesting. I recently made a quick little web app and I hand-coded the core logic since I find that part fun. But I vibe-coded the front-end, especially the CSS, because I just don't find that stuff very interesting and I'm less picky about the visuals. Letting the AI just do the boring parts for me made the overall project more fun I think.
But it’s not really an argument, it’s a statement about feelings. Some people really do prefer coding with AI. Is that so strange? Often we’ve spent 1 or 2 decades writing code and in our day jobs we don’t see a lot of novel problems come our way at the level of the code anymore (architecture, sure, still often novel). We’ve seen N number of variations on the same thing; we are good at doing them but get little joy in doing them for the N + 1th time. We find typing out api serializers and for loops and function definitions and yaml boilerplate just a little boring, if we already have a picture in our head of what we want to do. And we like being able to build faster and ship to our users without spending hours and extra brain cycles dealing with low-level complexity that could be avoided.
I am happy to accept that some people still prefer to write out their code by hand… that’s ok? Keep doing it if you want! But I would gently suggest you ask yourself why you are so offended by people that would prefer to automate much of that, because you seem to be offended. Or am I misreading?
And hey, I still enjoy solving interesting problems with code. I did advent of code this year with no LLM assistance and it was great fun. But most professional software development doesn’t have that novelty value where you get to think about algorithms and combinatorical puzzles and graphs and so on.
Before anyone says it, sure, there is a discussion to be had about AI code quality and the negative effects of all this. A bad engineer can use it to ship slop to production. Nobody is denying that. But I think that’s a separate set of questions.
Finally, I’m not sure painting is the best analogy. Most of us are not creating works of high art here. It’s a job, making things for people to use, more akin to building houses than painting the Sistine Chapel. Please don’t sneer at us if we enjoy finding ways to put up our drywall quicker.
You're never really wrestling the computer. You're typically wrestling with the design choices and technological debt of decisions that were, in hindsight, bad ones. And it's always in hindsight; at the time, those decisions always seemed smart.
With the rise of frameworks and abstractions, who is actually doing anything with actual computation?
Most of the time it's wasting time learning some bs framework or implementing some other poorly designed system that some engineer that no longer works at the company created. In fact the entire industry is basically just one poorly designed system with technological debt that grows increasingly burdensome year by year.
It's very rarely about actual programming or actual computation or even "engineering". But usually just one giant kludge pile.
I have been developing for 30 years professionally and 10 years before that as a hobbyist or in school
Development is solely to exchange labor for money.
I haven’t written a single line of code “for fun” since 1992. I did it for my degree between 1992-1996 while having fun in college and after that depending on my stage in life, dating, hanging out with friends, teaching fitness classes and doing monthly charity races with friends, spending time with my wife and (step)kids, and now enjoying traveling with my wife and friends, and still exercising
And the thing I don't get about how excited people are: what LLMs really do is change software development from coding to code review, which is the part of software development that is universally hated.
I understand this sentiment, but it is a lot of fun for me. I want to make a real thing that does something, and I didn't get into programming for the love of it; I got into it as a means to an end.
It's like the article's point: we don't do assembly anymore and no one considers gcc to be controversial and no one today says "if you think gcc is fun I will never understand you, real programming is assembly, that's the fun part"
You are doing different things and exercising different skillsets when you use agents. People enjoy different aspects of programming, of building. My job is easier, I'm not sad about that I am very grateful.
Do you resent folks like us that do find it fun? Do you consider us "lesser" because we use coding agents? ("the same as saying you’re really into painting but you’re not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.") <- I don't really care if you consider this "true" painting or not, I wanted a painting and now I have a painting. Call me whatever you want!
> It's like the article's point: we don't do assembly anymore and no one considers gcc to be controversial and no one today says "if you think gcc is fun I will never understand you, real programming is assembly, that's the fun part"
The compiler reliably and deterministically produces code that does exactly what you specified in the source code. In most cases, the code it produces is also as fast as or faster than hand-written assembly. The same can't be said for LLMs, for the simple reason that English (and other natural languages) is not a programming language. You can't compile English (and shouldn't want to, as Dijkstra correctly pointed out) because it's ambiguous. All you can do is "commission" another entity and hope it understood you.
> Do you resent folks like us that do find it fun?
For enjoying it on your own time? No. But for hyping up the technology well beyond its actual merits, antagonizing people who point out its shortcomings, and subjecting the rest of us to worse code? Yeah, I hold that against the LLM fans.
That a coding agent or LLM is a different technology than a compiler and that the delta in industry standard workflow looks different isn’t quite my point though: things change. Norms change. That’s the real crux of my argument.
> But for hyping up the technology well beyond its actual merits, antagonizing people who point out its shortcomings, and subjecting the rest of us to worse code? Yeah, I hold that against the LLM fans.
Is that what I’m doing? I understand your frustration. But I hope you understand that this is a straw man: I can straw-man the antagonists and AI-hostile folks too, but the point is that the factions and tribes are complex, and unreasonable opinions abound. My stance is that people can dismiss coding agents at their peril, but it’s not really a problem: taking the gcc analogy, in the early compiler days there was a period where compilers were weak enough that writing assembly by hand was reasonable. Now it would be highly inefficient and underperformant to do that. But all the folks who lamented compilers didn’t crumble away; they eventually adapted. I see that analogy as applicable here. It may be hard to see how far coding agents have come because we’re not time travelers from 2020, or even 2022 or 2023. This used to be an absurd idea and is now very serious and highly adopted. But still quite weak! We’re still missing key reliability, functionality, and capabilities. But if we got this far this fast, and if you realize that coding-agent training is not limited in the same way that vanilla LLM training is, because coding is a verifiable domain, we seem to be careening forward. By the very nature of their current weakness, it is absolutely reasonable not to use them, and absolutely reasonable to point out all of their flaws.
Lots of unreasonable people out there, my argument is simply: be reasonable.
As others have already pointed out, not all new technologies that are proposed are improvements. You say you understand this, but the clear subtext of the compiler analogy is that LLM-driven development is an obvious improvement, and that if we don't adopt it we'll find ourselves in the same position as assembly programmers who refused to learn compiled languages.
> Is that what I’m doing?
Initially I'd have been reluctant to say yes, but this very comment is laced with assertions that we'd better all start adopting LLMs for coding or we're going to get left behind [0]
> taking the gcc analogy, in the early compiler days there was a period where compilers were weak enough that assembly by hand was reasonable. Now it would be just highly inefficient and underperformant to do that
No matter how good LLMs get at translating english into programs, they will still be limited by the fact that their input (natural language) isn't a programming language. This doesn't mean it can't get way better, but it's always going to have some of the same downsides of collaborating with another programmer.
[0] This is another red flag I would hope programmers would have learned to recognize. Good technology doesn't need to try to threaten people into adopting it.
My intention was to say: you won't get left behind, you will just fall slightly behind the curve until things reach a point where you feel you have no choice but to join the dark side. Like gcc/assembly: sure, maybe there were some hardcore assembly holdouts, but at any point they could have, and probably did, jump on the bandwagon. This is also speculation, I agree, but my point is: not using LLMs/coding agents today is very, very reasonable, and the limitations that people often bring up are also very reasonable and believable.
> No matter how good LLMs get at translating english into programs, they will still be limited by the fact that their input (natural language) isn't a programming language.
Right, but engineers routinely convert natural language plus business context into formal programs, which is arguably an enormously important part of creating a software product. What's any different here? As with a programmer, the creation process is two-way. The agent iteratively retrieves additional information, asks questions, checks its approach, and so on.
> [0] This is another red flag I would hope programmers would have learned to recognize. Good technology doesn't need to try to threaten people into adopting it.
I think I was either not clear or you misread my comment: you're not going to get left behind any more than you want to. Jump in when you feel good about where the technology is and use it where you feel it should be used. Again: if you don't see value in your own personal situation with coding agents, that is objectively a reasonable stance to hold today.
No, it’s certainly not, and if you do want to lump coding agents in with blockchain and NFTs, that’s of course your choice, but those things did not spur trillions of dollars of infra buildout, reshape entire geopolitical landscapes, and gain billions of active users. If you want to say coding agents are not truly a net positive right now, that’s, I think, a perfectly reasonable opinion to hold (though I disagree personally). If you want to say coding agents are about as vapid as NFTs, that to me is a bit less defensible.
What part of making a movie is fun? Acting, costumes, sets, camerawork, story, dialogue, script, sound effects, lighting, editing, directing, producing, marketing?
Creating software has a similar number of steps. AI tools now make some of them much (much) easier/optional.
All parts of making a movie are fun. If you hire people who are passionate about each task, you will get a 1000x better result. It may be 10 days late, but better late than slop.
One can have fun with all manner of things. Take wood-working for example. One can have fun with a handsaw. One can also have fun with a table saw. They're both fun, just different kinds
What about a table saw with no guard, a wobbly blade, and directions from management to follow a faint line you'd have a hard time doing with a scroll saw?
That's not quite true. The "lone genius" type went against the grain and created something new while still showing some skill (well, that's debatable in modern art).
I have zero desire to go hunt down and create a wrapper to avoid some kernel bug because what I want to do can’t be implemented because of an edge case of some CPU-package incompatibility.
I have found in my software-writing experience that the majority of what I want to write is boilerplate with small modifications, but most of the problems are insanely hard-to-diagnose edge cases, and I have absolutely no desire (nor is it a good use of time, in my opinion) to deal with structural issues in things that I do not control.
The vast majority of code you do not control, because you aren't the owner of the framework, the library, your language, or whatever, and so the vast majority of software engineering is coming up with solutions to foundational problems in the tools you're using.
The idea that this is the only true type of software engineering is absurd
True software engineering is systems, control and integration engineering.
What I find absolutely annoying is this rejection of the highest, Hofstadter-esque level of software architecture and engineering.
This is basically sneered at in favor of the idea of "I'm gonna go and try to figure out some memory management module because AMD didn't invest in additional SoC work for the problems that I have, because they're optimized for some business goals."
Well, washing clothes has definitely become "more fun" since the invention of washing machines... Cleaning your flat has become "more fun" since vacuum cleaners. Writing has become "more fun" since editors overtook typewriters. Bedroom studios power most of the clubs I know. Navigating a city as a blind pedestrian has definitely become more fun since the introduction of the oh-so-evil-screentime-bad smartphone. I could go on forever. Moving has become more fun since the invention of the wheel... See?
> not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.
What if I have a block of marble and a vision for the statue struggling from inside it and I use an industrial CNC lathe to do my marble carving for me. Have I sculpted something? Am I an artist?
What if I'm an architect? Brunelleschi didn't personally lay all the bricks for his famous dome in Florence --- is it not architecture? Is it not art?
I would call the designing of the building art, yes. But I wouldn’t call it construction.
I would also call designing a system to be fed into an LLM designing. But I wouldn’t call it programming.
If people are more into the design and system architecture side of development, I of course have no problem with that.
What I do find baffling, as per my original comment, is all the people saying basically “programming is way more fun now I don’t have to do it”. Did you even actually like programming to begin with then?
I do think a huge number of people are infatuated with the idea of programming but hate the act itself. I have tried to teach a lot of people to code and it is rare to find the person who is both enthusiastic and able to maintain motivation once they realize every single character matters and yes, that bracket does have to be closed.
Of course not everyone who programs AI style hates programming, but I do think your take explains a large chunk of the zealotry: it has become Us v. Them for both sides and each is staking out its territory. Telling the vibe coder they are not programming hurts their feelings, much like telling a senior developer all their accumulated experience and knowledge is useless, if not today then for sure some day soon!
> I do think a huge number of people are infatuated with the idea of programming but hate the act itself. I have tried to teach a lot of people to code and it is rare to find the person who is both enthusiastic and able to maintain motivation once they realize every single character matters and yes, that bracket does have to be closed.
I think it's legitimate that someone might enjoy the act of creation, broadly construed, but not the brick-by-brick mechanics of programming.
Did the CNC lathe decide where to make the cuts based on patterns from real artists it was trained on to regurgitate a bland copy of real artists work?
On what planet is concentrating an increasingly high amount of the output of this whole industry on a small handful of megacorps “democratising” anything?
Software development was already one of the most democratised professions on earth. With any old dirt-cheap used computer, an internet connection, and enough drive and curiosity you could train yourself into a role that could quickly become a high-paying job. While they certainly helped, you never needed any formal education or expensive qualifications to excel in this field. How is this better?
The open models don't have access to all the proprietary code that the closed ones have trained on.
That's primarily why I finally had to suck it up and sign up for Claude. Claude clearly can cough up proprietary codebase examples that I otherwise have no access to.
Given that very few of the "open models" disclose their training data there's no reason at all to assume that the proprietary models have an advantage in terms of training on proprietary data.
As far as I can tell the reason OpenAI and Anthropic are ahead in code is that they've invested extremely heavily in figuring out the right reinforcement learning training mix needed to get great coding results.
Some of the Chinese open models are already showing signs of catching up.
> deergomoo: On what planet is concentrating an increasingly high amount of the output of this whole industry on a small handful of megacorps “democratising” anything?
> simonw: It's better because now you can automate something tedious in your life with a computer without having to first climb a six month learning curve.
Completely ignores, or enthusiastically accepts and endorses, the consolidation of production, power, and wealth into a stark few (friends), and claims superiority and increased productivity without evidence?
This may be the most simonw comment I have ever seen.
At the tail end of 2023 I was deeply worried about consolidation of power, because OpenAI were the only lab with a GPT-4 class model and none of their competitors had produced anything that matched it in the ~8 months since it had launched.
I'm not worried about that at all any more. There are dozens of organizations who have achieved that milestone now, and OpenAI aren't even definitively in the lead.
A lot of those top-class models are open weight (mainly thanks to the Chinese labs) and available for people to run on their own hardware.
By feel. Not everyone uses all the buttons all the time, but stuff you use a lot is easily operated without taking eyes off the road. It pairs well with the other upside of physical controls, the manufacturer can’t move them out from under you with a software update.
Formatted? I guess not really, because it’s trivially easy to reformat it. But how it’s structured, the data structures and algorithms it uses, the way it models the problem space, the way it handles failures? That all matters, because ultimately the computer still has to run the code.
It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.
Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.
Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.
It’s funny, I never connected my G5 to the network or accepted any of the optional T&Cs, so there’s now numerous places in the UI that say “accept terms to see personalised content”.
> You don't need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far.
Every time I see this I wonder how many amateur/hobbyist programmers it sets up for disappointment. Unless your definition of “pretty far” is “a small number of the part ones”, it’s simply not true.
In the programming world I feel like there's a lot of info "for beginners" and a lot of folks / activities for experts.
But that middle-ground world is strange: a lot of it is a combination of filling in "basics" while also touching more advanced topics at the same time, and the amount of content and activities filling that space seems very low. I get it though, the middle-ground audience is a mixed bag of what they do or do not know and can or cannot solve.
This is also true of a lot of other disciplines. I've been learning filmmaking lately (and editing, colour science, etc). There are functionally infinite beginner-friendly videos online on anything you can imagine, but very little content that slowly teaches the fundamentals or presents intermediate skills. It's all "Here's 5 pieces of gear you need!" "One trick that will make your lighting better". There's almost no intermediate stuff. No 3-hour videos explaining in detail how to set up an interview properly. Stuff like that.
I've found the best route at that point is just... copying people who are really good. For my interest (3d modeling) if you want voice-over and directions, those are all pretty basic, but if you want to see how someone approaches a large, complex object, I will literally watch a timelapse of someone doing it and scrub the video in increments to see each modifier/action they took. It's slow but that's also how I built some intuition and muscle memory. That's just the way...
Makes sense that that's the case: there's usually a limited amount of beginner's knowledge, and then you get to the medium level by arbitrary combinations of that beginner's knowledge, of which there's an exponential number, making it less likely that someone has produced something about that specific combination. Then at the expert level, people can get real deep into some obscure nitty-gritty detail, and other experts will be able to generalise from that by themselves.
It's one of the worst parts of being self taught,
beginner level stuff has a large interest base because everyone can get into it.
Advanced level stuff usually gets recommended directly by experts or will be interesting to beginners too as a way of seeing the high level.
Mid level stuff doesn't have that wide appeal, the freshness in the mind of the experts, or the ease of getting into, so it's not usually worth it for creators if the main metric is reach/interest
Structured (taught) learning is better in this regard, it at least gives you structure to cling on to at the mid level
Yes, and it's hard to point to reference material to newcomers. Hey, yeah that's actually a classic problem, let me show you some book about this... oh there's none. Maybe I should start creating them, but that is of course hard.
But also, the middle ground is often just years of practice.
Realize that in anything, there are people who are much better than even those you'd consider the very best. The people doing official collegiate-level competitive programming would find AoC problems pretty easy.
>The people doing official collegiate level competitive programming would find AoC problems pretty easy.
I used to program competitively and while that's the case for a lot of the early day problems, usually a few on the later days are pretty tough even by those standards. Don't take it from me, you can look at the finishing times over the years. I just looked at some today because I was going through the earlier years for fun and on Day 21/2023, 1 hour 20 minutes got you into the top 100. A lot of competitive programmers have streamed the challenges over the years and you see plenty of them struggle on occasion.
People just love to BS and brag, and it's quite harmful honestly because it makes beginner programmers feel much worse than they should.
The actual number is going to be higher as more people will have finished the puzzles since then, and many people may have finished all of the puzzles but split across more than one account.
Then again, I'm sure there's a reasonable number of people who have only completed certain puzzles because they found someone else's code on the AoC subreddit and ran that against their input, or got a huge hint from there without which they'd never solve it on their own. (To be clear, I don't mind the latter as it's just a trigger for someone to learn something they didn't know before, but just running someone else's code is not helping them if they don't dig into it further and understand how/why it works.)
There's definitely a certain specific set of knowledge areas that really helps solve AoC puzzles. It's a combination of classic Comp Sci theory (A*/SAT solvers, Dijkstra's algorithm, breadth/depth first searches, parsing, regex, string processing, data structures, dynamic programming, memoization, etc) and Mathematics (finite fields and modular arithmetic, Chinese Remainder Theorem, geometry, combinatorics, grids and coordinates, graph theory, etc).
Not many people have all those skills to the required level to find the majority of AoC "easy". There's no obvious common path to accruing this particular knowledge set. A traditional Comp Sci background may not provide all of the Mathematics required. A Mathematics background may leave you short on the Comp Sci theory front.
My own experience is unusual. I've got two separate bachelor's degrees, one in Comp Sci and one in Mathematics, with a 7-year gap between them. Those degrees plus 25+ years of doing software development as a job mean I do find the vast majority of AoC quite easy, but not all of it; there are still some stinkers.
Being able to look at an AoC problem and think "There's some algorithm behind this, what is it?" is hugely helpful.
The "Slam Shuffle" problem (2019 day 22) was a classic example of this that sticks in my mind. The magnitude of the numbers involved in part 2 of that problem made it clear that a naive iteration approach was out of the question, so there had to be a more direct path to the answer.
As I write the code for part 1 of any problem I tend to think "What is the twist for part 2 going to be? How is Eric going to make it orders of magnitude harder?" Sometimes I even guess right, sometimes it's just plain evil.
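Of the knowledge areas listed above, the graph-search ones come up most often. As a minimal sketch (the maze, start, and goal here are invented for illustration), a breadth-first search over a grid looks something like:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a grid of '.' floor and '#' walls.
    Returns the number of steps on a shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

maze = ["....",
        ".##.",
        "...."]
print(shortest_path(maze, (0, 0), (2, 3)))  # -> 5
```

Because BFS explores cells in order of distance, the first time the goal is popped its distance is guaranteed minimal; swapping the queue for a priority queue keyed on cost turns this into Dijkstra's algorithm for weighted moves.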
Sorry to focus on just one aspect of your (excellent) post, but do you have recommendations for reading up on A*/SAT beyond Wikipedia? I'm mostly self-taught (did about a minor's worth of post-bacc comp sci after getting a chemistry degree) and those just haven't come up much, e.g. I don't see A* mentioned at a first glance through CLRS and only in passing in Skiena's algorithms book. Thank you!
Not sure. I covered them during my Comp Sci degree in the mid/late 90s. I'm probably not even implementing them properly but whatever I do implement tends to work.
Just checked my copy of TAOCP (Vol 3 - Sorting and Searching) and it doesn't mention A* or SAT.
Yeah, getting 250 or so stars is going to be straightforward, something most programmers with a couple of years of experience can probably manage. Then another 200 or so require some more specialized know-how (maybe some basic experience with parsers, or making a simple virtual machine, or recognizing a topological-sort situation). Then probably the last 50 require something a bit more unusual. For me, I definitely have some trouble with any of the problems where modular inverses show up.
It's just bluffing, lying. People lie to make others think they're hot shit. It's like the guy in school who gets straight A's and says he never studies. Yeah I'll bet.
They... sort of are though? A year or two ago I just waited until the very last problem, which was min-cut. Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time. There are algorithms that don't even require all the high-falutin graph theory.
I don't mean to say my solution was good, nor was it performant in any way - it was not, I arrived at adjacency (linked) lists - but the problem is tractable to the well-equipped with sufficient headdesking.
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
You say in your comment: "Anybody with a computer science education ... should be able to tackle this one" which is directly opposed to what they advertise: "You don't need a computer science background to participate"
"Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time."
I have a computer science education and I have no idea what you're talking about. The prompt "Proof." ?
Most people who study Comp Sci never use any of what they learned ever again, and most will have forgotten most of what they learned within one or two years. Most software engineers never use any comp sci theory at all, but especially not graph theory or stuff like Dijkstra's algorithm, DFS, BFS etc.
Holy fuck. I should just grow coconuts or something in the remote Philippines.
> Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstras algorithms, DFS, BFS etc.
But we are talking about Advent of Code here, which is a set of fairly contrived, theoretical, in vitro learning problems that you don't really see in the real software engineering world either.
Got to agree. I'm even surprised at just how little progress many of my friends and ex-colleagues over the years make given that they hold down reasonable developer jobs.
My experience has been "little progress" is related to the fact that, while AoC is insanely fun, it always occurs during a time of year when I have the least free time.
Maybe when I was in college (if AoC had existed back then) I could have kept pace, but if part of your life is also running a household, then between wrapping up projects for work, finalizing various commitments I want wrapped up for the year, getting together with family and friends for various celebrations, and finally travel and/or preparing your own house for guests, I'm lucky if I have time to sit down with a cocktail and book the week before Christmas.
Seeing the format changed to 12 days makes me think this might be the first time in years I could seriously consider doing it (to completion).
Yep, the years I've made it the furthest have been around the 11-12 day mark. The inevitably life and kids and work get in the way and that's it for another year. Changing to a 12 day format is unlikely to affect me at all :)
In order to complete AoC you need more than just the ability to write code and solve problems. You need to find abstract problem-solving motivating. A lot of people don't see the point in competing for social capital (internet points) or expending time and energy on problems that won't live on after they've completed them.
I have no evidence to say this, but I'd guess a lot more people give up on AoC because they don't want to put in the time needed than give up because they're not capable of progressing.
Yeah, time is almost certainly the thing that kills most people's progress but that's not the root cause.
I think it comes down to experience, exposure to problems, and the ability to recognise what the problem boils down to.
A colleague who is an all-round better coder than me might spend 4 hours bashing away trying to solve a problem that I might be able to look at and quickly recognise is isomorphic to a specific classic Comp Sci or Maths problem, and know exactly how best to attack it, saving me a huge amount of time.
Spoiler alert: Take the "Slam Shuffle" in 2019 Day 22 (https://adventofcode.com/2019/day/22). I was lucky that I quickly recognised that each of the actions could be represented as '( a*n + b ) mod numcards' (with a and b specific to the action) and therefore any two actions like this can be combined into the same form. The optimal solution follows relatively simply from this.
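The key observation is that composing two affine maps modulo the deck size yields another affine map, so a whole shuffle (and millions of repetitions of it) collapses to a single (a, b) pair. A sketch of that idea, with the deck size and actions invented for illustration:

```python
# A shuffle step is an affine map f(n) = (a*n + b) mod m, stored as (a, b).
# Applying f then g, with g(n) = (c*n + d) mod m, gives
#   g(f(n)) = ((c*a)*n + (c*b + d)) mod m
# which is again affine, so any sequence of steps folds into one pair.

def compose(f, g, m):
    """Affine map equivalent to applying f first, then g (mod m)."""
    a, b = f
    c, d = g
    return (c * a % m, (c * b + d) % m)

def repeat(f, k, m):
    """Compose f with itself k times via binary exponentiation: O(log k)."""
    result = (1, 0)  # identity map: n -> n
    while k:
        if k & 1:
            result = compose(result, f, m)
        f = compose(f, f, m)
        k >>= 1
    return result

m = 10007                      # hypothetical (prime) deck size
cut = (1, -3 % m)              # "cut 3" as an affine map
deal = (7, 0)                  # "deal with increment 7"
step = compose(cut, deal, m)   # one full shuffle pass
a, b = repeat(step, 10**6, m)  # a million shuffles, ~20 compositions
```

Running the shuffle in reverse (part 2 of the puzzle) then reduces to inverting the affine map, which is where the modular inverse comes in: n = (card - b) * pow(a, -1, m) mod m.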
Doing all of the previous years means there's not much new ground although Eric always manages to find something each year.
There have also been some absolutely amazing inventions along the way. The IntCode Breakout game (2019) and the adventure game (can't remember the year) both stick in my mind as amazing constructions.
That's exactly why I don't do more than I do. I do some of the easy ones and it's fun. Then it gets a little harder and I start wondering how much time I want to put into this.
And then something shiny and fun comes along during a problem that I'm having trouble with, and I just never come back.
It's hard for most people to focus on a single thing for a long period of time. Motivation tends to come and go. I started the 2024 solutions in 2025, without the pressure and got to the end this way (not without help though TBH). Secondary motivation can help, like being bored or wanting to learn another programming language.
I've never tried AoC prior but with other complex challenges I've tried without much research, there comes a point where it just makes more sense to start doing something on the backlog at home or a more specific challenge related to what I want to improve on.
I find the problem I have is once I get going on a problem I can't shake it out of my head. I end up lying in bed for hours pleading with my brain to let it go if I've not found the time to finish it during the crumbs of discretionary time in the day!
This type of problem has very little resemblance to the problems I solve professionally - I’m usually one level of abstraction up. If I run into something that requires anything even as complicated as a DAG it’s a good day.
I think this has a lot more to do with time commitment. Once the problems take more than ~1 hour I tend to stop because I have stuff to do, like a job that already involves coding.
Because like 80% of AoC problems require a deep computer science background and deeply specific algorithms almost nobody uses in their day-to-day work.
It's totally true. I was doing Advent of Code before I had any training or work in programming at all, and a lot of it can be done with just thinking through the problem logically and using basic problem solving. If you can reason a word problem into what it's asking, then break it down into steps, you're 90% of the way there.
Comparing previous years, they're exactly what I'd expect, to be honest. Only people serious about completion will...well...complete it. Even if they do not know any code, if you pick something well-documented like Python or whatever, it should not be a tremendous challenge so long as you have the drive to finish the event. Code isn't exactly magic, though it does require some problem-solving and dedication. Since this is a self-paced event that does not offer any sort of immediate reward for completion, most people will drop out due to limited bandwidth needing to be devoted to everything else in their lives. That versus, say, a college course where you paid to be there and the grade counts toward your degree; there's simply more at stake when it comes to completing the course.
But, speaking to the original question as to the number of newbies that go all the way, I'd say one cannot expect to increase their skills in anything if one sticks in their comfort zone. It should be hard, and as a newbie who participated in previous years, I can confirm it often is. But I learned new things every time I did it, even if I did not finish.
I have to say, I've read many out-of-touch comments on HN over the years but this is definitely among the most out there, borderline delusional comments I've ever seen!
The idea that anyone who doesn't know any code would:
1) Complete in Advent of Code at all.
2) Complete a single part of a single problem.
let alone, complete the whole thing without it being a "tremendous challenge"...
is so completely laughable it makes me question whether you live on the same planet as the rest of us here.
Getting a person who has never coded to write a basic sort algorithm (i.e. bubble sort) is already basically impossible. I work with highly talented non coder co-workers who all attended tier-1 universities (e.g. Oxford, Harvard, Stanford) but for finance/business related degrees, I cannot get them to write while/foreach loops in Python, and simply using Claude Code is way too much for them.
If you are even fully completing one Advent of Code problem, you are in the top 0.1% of coders, completing all of them puts you in the top 0.001%.
I can't begin to describe how valuable your input has been through this whole thread about something you're quite possessive and passionate about, which surely places you in a position to aggressively dismiss any other possible way of looking at it! Wow, love learning about new perspectives on HN!
Wishing you best of luck in AoC, Life and Love but I imagine someone like you doesn't need it, being a complete toolbox and all.
P.S.: Tell your coworkers I'm sorry they have to put up with you.
You're the person saying Advent of Code is "so easy" that anyone, even people with no coding ability at all, should find it do-able, which is totally diminishing the difficulty of the problems and asserting your own genius, i.e. that you found it totally trivial.
I am the person saying that actually, stuff like Advent of Code is incredibly difficult and 99% of active programmers aren't able to complete it, let alone people who don't code.
I am not an elitist at all, unlike yourself, I don't find completing "Advent of Code" easy, in fact, it would take me a long time to complete it, more time than I have available in my busy life in the average December. And I doubt I would be able to complete it 100% without looking up help, getting hints, or using LLMs to help.
You clearly didn't read my whole original comment before mouthing off. Go back and do that, you'll find that I pointed out most do not complete it, that it is supposed to be challenging and I never called it "easy" as you imply ("not tremendously difficult" =/= "easy")
Heck, I even talked about having to be serious about completion, and you could not bother to read the whole comment, then proceed to call me delusional? FFS, I am now praying for your co-workers and I'm not even religious.
Did YOU even read your original comment? You asserted that people who have never coded could complete the event!
Did you realize only roughly 500 people of the > 1M who are registered for advent of code even complete it?
You said "it should not be a tremendous challenge", i.e. not that big of a deal even if you don't know how to code. Which is absolutely diminishing the difficulty of the event, I mean, come on man...
This is why I'm asserting you are quite oblivious to the abilities of most people. I am asserting that most people who CAN code cannot complete the event, let alone non-coders. I am a very active coder (for fun mostly these days, but also sometimes for work), but I could not complete Advent of Code. Maybe if I took all of December off work to dedicate serious time, but even then I wonder if it's possible without looking at hints/LLM help etc.
I often try and help my co-workers who are working on AI based side-projects for fun, so I have a strong insight into the abilities of non-coding smart people, and the reality is that yes, they get very turned off as soon as you get anything more complex than for-loops and if-statements. This isn't me being mean to co-workers, this is the reality of things I have experienced. It's not a brains thing, they can understand more complex stuff, but they don't want to, they find it annoying, boring, not worth the time/effort etc. So the idea of them learning dynamic programming, DFS/BFS, more complex data structures etc, is well, just not going to happen.
My point is that what you are effectively saying, "oh, just about anyone can do Advent of Code if they want to", is totally not grounded in any sort of reality.
The amount of injected implication you are imposing on everything I said...this is some seriously unhinged gaslighting in effort to obfuscate the fact that you came out of the gate calling someone delusional over a comment you barely understood. We're wasting each other's time, so I'm out.
mh, maybe it's cheating because it's still a STEM degree but I have a PhD in physics without any real computer science courses (obviously had computational physics courses etc. though) and I managed to 100% solve quite a few years without too much trouble. (though far away from the global leaderboard and with the last few days always taking several hours to solve)
I have an EE background, not CS, and haven't had too much trouble the last few years. I'm not aiming to be on the global leaderboard though. I think that with good problem-solving skills, you should be able to push through the first 10 days most years. Some years were more front-loaded though.
Agreed. I have a CS background and years of experience but I don't get very far with these. At some point it becomes a very large time commitment as well which I don't have
In general, the problems require less background knowledge than other coding puzzles. They're not always accessible without knowing a particular algorithm, but they're more 'can you think through a problem' than 'have you done this module'.
That's not the same as saying they're easy, but it's a different kind of barrier, and (in my opinion) more a test of 'can you think?' than 'did you do a CS degree?'
> you won't get stuck because of a word you don't understand or a concept you've never heard of
I very much disagree here. To make any sort of progress in AoC, in my experience, you need at least:
- awareness of graphs and how to traverse them
- some knowledge of a pathfinding algorithm
- an understanding of memoisation and how it can be applied to make deeply recursive computations feasible
Those types of puzzle come up a lot, and it’s not anything close to what I’d expect someone with “just a little programming knowledge” to have.
Someone with just a little programming knowledge is probably good with branches and loops, some rudimentary OOP, and maybe knows when to use a list vs a map. They’re not gonna know much about other data structures or algorithms.
They could learn them on the go of course, but then that’s why I don’t think basic coding knowledge is enough.
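The memoisation point in particular is the one that tends to separate part 1 from part 2 of a puzzle. A tiny sketch of the idea (the grid-paths problem here is invented for illustration): the naive recursion revisits the same state exponentially often, and a cache collapses that to one call per state.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_paths(row, col):
    """Number of monotone down/left paths from (row, col) to (0, 0).
    Without the cache this recursion is exponential; with it, each
    (row, col) pair is computed exactly once."""
    if row == 0 and col == 0:
        return 1
    total = 0
    if row > 0:
        total += count_paths(row - 1, col)
    if col > 0:
        total += count_paths(row, col - 1)
    return total

print(count_paths(20, 20))  # -> 137846528820, instant with the cache
```

That "write the obvious recursion, then slap a cache on it" pattern is exactly the kind of trick that's trivial once you've seen it and nearly impossible to invent under time pressure if you haven't.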
The difference is you still need to express creativity in your use of GarageBand and iMovie. There is nothing creative about typing "give me a picture of x doing y" into a form field.
Also, "democratizing"? Please. We're just entrenching more power into the small handful of companies who have been able to raise and set fire to unfathomable amounts of capital. Many of these tools may be free or cheap to use today, but there is nothing for the commons here.
> Do we pour billions into educating users not to click "yes" to every prompt they see?
Yes, obviously yes. In the same way we teach people to operate cars safely and expect them to carry and utilise that knowledge. Does it work perfectly? Of course not, but at least we entertain the idea that if you crash your car into a wall because you’re not paying attention it might actually be your fault.
Computers are a critical aspect of work and life. I'm a big proponent of making technology less of a requirement in day-to-day life (you shouldn't need to own a smartphone and download an app to pay for parking or charge your car), but in cases where it is reasonable to expect someone to use a computer, it's also reasonable to expect a baseline competency from the operator. To support that, we clearly need better computer education at all ages.
By all means, design with the user’s interests at front of mind and make doing the right thing easiest, but at some point you have to meet in the middle. We can’t reorient entire industry practices because some people refuse to read the words in front of them.
Now, I'm not going to say we shouldn't try to move the needle. More education around this is unquestionably a good thing.
But this sounds an awful lot like trying to avoid changing the technology by changing human nature. And that's a fool's errand.
There are always going to be a significant percentage of users you're never going to reach when it comes to something like this. That means you can never say "...and now we can just trust people to use their devices wisely!"
Fundamentally, the issue with people clicking things isn't really a problem because it's new technology. It's a problem because they're people. People fall for scams all the time, and that doesn't change just because it's now "on a computer".
> People fall for scams all the time, and that doesn't change just because it's now "on a computer".
But that's exactly the issue. You won't prevent someone from wiring money to Nigeria by restricting what apps they can install on their phone while allowing the official bank app which supports wire transfers.
If someone is willing to press any sequence of buttons a scammer tells them to then the only way to prevent them from doing something at the behest of the scammer is to prevent them from doing it at all.
But that's hardly practical, because you're going to, what? Prevent anyone from transferring money even for legitimate reasons? Prevent people from reading their own email or DMs so they can't give a scammer access to sensitive ones?
The alternatives are educating people not to fall for scams, or completely disenfranchising them so that they're not authorized to make any choices for themselves. What madness would it be to choose the second option for ordinary adults?
Adults would choose a locked-down, secure phone for themselves.
Arguably they already do and the numbers wanting an open phone are relatively trivial and the market ends up the way it has.
I do these days, happily, and I speak as someone who owned a Neo Freerunner and an N900. My phone is far too important as a usable, stable device to want to fuck around treating it as an open platform any more.
> Arguably they already do and the numbers wanting an open phone are relatively trivial and the market ends up the way it has.
The market is consolidated into Apple and Google and neither of them actually offers this. Taking away everyone's choices and then saying "look how few people are choosing the thing that isn't available" is a bit of a farce.
Android was sold as being "open" and at first it mostly was, so the people who wanted that got an Android device and everything else disappeared. Then Google closed Android over time, at first in ways that weren't immediately obvious, and now they're just telling everyone to DIAF. But by then the alternatives were gone.
I mean it seems like your argument is "nobody wants this thing that people keep getting mad that nobody offers". Obviously people want it; otherwise who are all of these people?
There are something like 50 million people in the world whose occupation is software developer. Hundreds of millions, maybe billions, who are some other kind of engineer or scientist or teacher or just curious enough to want to learn things or stubborn enough to not want someone else taking away their choices. Every kid should have a device that lets them experiment rather than one that locks them in a cage -- it's not like they're doing banking or dealing with national secrets to begin with.
I disagree. There’s nowhere near a billion who give a crap; if there were that many, it would be a served and highly profitable market.
As it is there have always been phones that are open to greater or lesser extent and they have always been market failures, even among geeks.
Personal vehicles have turned out to be A Bad Idea, and now the consensus appears to be we should be moving toward more -- perhaps exclusive -- use of public transport, rather than expect people to own a car.
I'm beginning to wonder if the same isn't true of personal "general purpose computing" devices. 99% of people would choose the locked down device, especially if it makes their favorite apps available: Instagram, Netflix, etc. Which it may not if it were open, because then it could not provide guarantees against piracy or tampering by the end user. But still, from an end user perspective, knowing that stuff from bad actors will be prevented or at least severely hampered is a source of peace of mind.
Nintendo figured this out 40 years ago: buy our locked down system, and we can provide a guarantee against the enshittification spiral that tanked the home video game market in 1983, leading to landfills full of unsold cartridges. It sold like hotcakes.