This is a point I see discussed surprisingly little. Given that many (most?) programmers like designing and writing code (excluding boilerplate) and don't particularly enjoy reviewing it, it certainly feels backwards to make the AI write the code and relegate the programmer to reviewing it. (I know, of course, that the whole thing is being sold to stakeholders as "LoC machine goes brrrr" – code review? what's that?)
Creativity is fun. AIs automate that away. I want an AI that can do my laundry, fold it, and put it away. I don't need an AI to write code for me. I don't mind AI code review, it sometimes has a valid suggestion, and it's easy enough to ignore most of the rest of the time.
I was thinking this again just yesterday. Do my laundry correctly and put it away. Organize my storage. Clean the bathroom. Do the dishes. Catalog my pantry, give me recipes, and keep it correctly stocked. Maybe I'm just a simple creature, but these are the obvious problems in my life I'll pay to have go away, so why are we taking away the fun stuff instead?
There are already cheap, domestic robots for cleaning dishes, cleaning the floor, cleaning clothes, making coffee, heating and cooling food, turning screws, drilling holes and so on. All those robots represent a greater than 90 percent (and sometimes a greater than 99 percent) savings in time relative to doing the same tasks manually. You still have to move the objects they operate on around within your house but that's mostly the only part of the task you have to do.
As someone who played the roomba game quite a bit: you transfer the problem of vacuuming into the problem of very frequent robot cleaning. I've saved more time switching to a high-powered central vac than I ever did with the robot, which I was constantly cleaning because I had the audacity to own a fluffy dog.
Also, to the people claiming cleaning isn't "creative" or "fun": Steam has a whole genre of games simulating cleaning stuff, because the act of cleaning is extremely fun and creative to a lot of people, with https://store.steampowered.com/app/246900/Viscera_Cleanup_De... being a great example
Actually I do NOT want my robot to do my laundry for me! And because I'm garbage at painting and comparatively better at laundry, I DO want it to paint for me.
> Also people claiming cleaning isn't "creative" or "fun". Steam has a whole genre of games simulating cleaning stuff because the act of cleaning is extremely fun and creative to a lot of people: https://store.steampowered.com/app/246900/Viscera_Cleanup_De... being a great example
Someone making a game about an activity doesn't mean that the activity is fun or desirable in real life at all.
I mean, yes, there are people who find comfort in cleaning, but they are not the target audience of cleaning simulators at all.
Unfortunately many things aren't dishwasher safe, some things don't fit in the dishwasher, and often certain types of food are not properly washed off in the dishwasher.
> All those robots represent a greater than 90 percent savings in time relative to doing the same tasks manually.
Lol, nope.
Dishwashers solve at best some 50% of the hassle (the easy-to-wash table dishes) while being completely unable to clean oven dishes. Floor cleaners solve a 5-minute task in a days-long house-upkeep cycle. Coffee makers... don't really automate anything; why did you list them here? There's no automation available for heating and cooling food, and the part about drilling and turning screws isn't automation at all either.
The only thing on your list that is close to solved is clothes cleaning, and even there, ironing has proven incredibly resistant to automation. But yeah, that puts it way beyond 90% solved.
I've been developing with LLMs at my side for months (about a year now), and it feels like it's allowing me to be more creative, not less. But I'm not doing any "vibe-coding"; maybe that's why?
The creative part (for me) is coming up with the actual design of the software: how it all fits together, what it should do and how. I get to do that more than ever now.
The creative part for me includes both the implementation and the design, because the implementation also matters. The bots get in the way.
Maybe I would be faster if I paid for Claude Code. It's too expensive to evaluate.
If you like your expensive AI autocomplete, fine. But I have not seen any demonstrable and maintainable productivity gains from it, and I find that understanding my whole implementation is faster, more fun, and produces better software.
Maybe that will change, but people told me three years ago that we would be at the point today where I could not outdo the bot;
with all due respect, I am John Henry and I am still swinging my hammer. The steam pile driving machine is still too unpredictable!
> The creative part for me includes both the implementation and the design
The implementations LLMs end up writing are predictable, because my design locks down what they need to do. I basically know exactly what they'll end up doing, and how, but it types faster than I do; that's why I hand it off while I go on to think about the next design iteration.
I currently send every single prompt to Claude, Codex, Qwen and Gemini (looks something like this: https://i.imgur.com/YewIjGu.png), and while they all succeed most of the time, doing it like this makes it clear that they're following what I imagined they'd do during the design phase, as they all end up with more or less the same solutions.
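Fanning one prompt out to several agents like this can be sketched as a small wrapper. To be clear, this is a hypothetical sketch, not the commenter's actual setup; the CLI names and flags in `AGENTS` are assumptions you'd swap for whatever each tool actually ships with.

```python
import subprocess

# Assumed agent CLI invocations -- adjust to the real commands and flags
# of the tools you have installed.
AGENTS = {
    "claude": ["claude", "-p"],
    "codex":  ["codex", "exec"],
    "qwen":   ["qwen", "-p"],
    "gemini": ["gemini", "-p"],
}

def fan_out(prompt: str) -> dict[str, list[str]]:
    """Build one command line per agent, all carrying the same prompt."""
    return {name: base + [prompt] for name, base in AGENTS.items()}

def run_all(prompt: str) -> dict[str, str]:
    """Run every agent on the prompt and collect stdout for comparison."""
    return {
        name: subprocess.run(cmd, capture_output=True, text=True).stdout
        for name, cmd in fan_out(prompt).items()
    }
```

Comparing the collected outputs side by side is what makes it obvious whether the agents converged on the solution the design already implied.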
> If you like your expensive AI autocomplete
I don't know if you mean that in jest, but what I'm doing isn't "expensive AI autocomplete". I come up with what has to be done and the design for achieving it, then hand off the work. I don't actually write much code at all, just small adjustments when needed.
> and I find understanding my whole implementation faster
Yeah, I guess that's the difference between "vibe-coding" and what I (and others) are doing, as we're not giving up any understanding or control of the architecture and design, but instead focus mostly on those two things while handing off other work.
I've made great use of AI by keeping my boundaries clear and my requirements tight, and by rigorously ensuring I understand _every_ line of code I commit.
I believe software development will transition to a role closer to director/reviewer/editor, where knowledge of programming paradigms is just as important as now, but also where _communication_ skills separate the good devs from the _great_ devs.
The difference between a 1x dev and a 10x dev in the future will be that the latter knows how to clearly and concisely describe a problem or a requirement to laymen, peers, and LLMs alike. That's something I've seen many devs struggle with today (myself included).
> but also where _communication_ skills separate the good devs from the _great_ devs
I think it has been that way forever. If you look at all the great projects, it's rare for the person at the helm to not be a good communicator. And at a corporate job, you spend a good chunk of the year writing stuff to people. Even with the code you're writing, you think about the next person who's going to read it.
Same. I think there are two types of devs. Those that love designing the individual building blocks and those that wanna stack the blocks together to make something new.
At this point AI is best at the first thing and less good at the second. I like stacking blocks together. If I build a beautiful UI, I don't enjoy writing the individual CSS for every button; I'd rather compose the big picture.
Not saying either is better or worse. But I can imagine that the people who love to build the individual blocks like AI less, because it takes away something they enjoy. For me it just takes away a step I had to do to get to composing the big picture.
The thing is, I love doing both. But there's an actual rush of enjoyment when I finally figure out one of the tenets of a system. It's like solving a puzzle for me.
After that, it all becomes routine work, as easy as drinking water. You explain the problem and I can quickly find the solution. Using AI at this point would be like herding cats: I already know what code to write, and having a handful of suggestions thrown at me is distracting. Like feeling a tune in your head while someone plays a melody other than the one you know.
> For me it just takes away a step I had to do to get to the composing of the big picture.
You can't successfully build the big picture on the sort of rotten foundation that AI produces though
I don't care how much you enjoy assembling building blocks over building the low level stuff, if you offload part of the building onto AI you're building garbage
Exactly. I loved doing novel implementations or abstractions… and the AI excels at the part where it modifies them slightly for different contexts… aka the boring stuff.
When I say ideas, I'm talking in the context of programming... I'm not talking about "I got a great idea for a new social network" and the AI just wrote some spaghetti code for it. When I have the AI write low-level code, it's stuff like filling out a function implementation for which I've already defined the high-level types... I can focus on high-level abstractions, while the AI iterates in the most statistically sensible way to fill in the easy blanks.
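A minimal sketch of that split, with entirely hypothetical names (`Tokenizer` and `WhitespaceTokenizer` are illustrations, not anything from the thread): the human pins down the high-level contract, and the model only has to fill in the mechanical body.

```python
from typing import Protocol

class Tokenizer(Protocol):
    # The contract the human designs and locks down up front.
    def tokenize(self, text: str) -> list[str]: ...

class WhitespaceTokenizer:
    # The "easy blank": a body an LLM can fill in predictably,
    # because the signature already dictates the shape of the solution.
    def tokenize(self, text: str) -> list[str]:
        return text.lower().split()
```

Because the protocol fixes the signature, any conforming implementation is interchangeable, which is what makes the generated part low-stakes to review.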
By grinding what though? I don't wanna grind "Entering characters with my fingers", I wanna grind "Does this design work for getting X to work as I want", which is exactly the sort of things LLMs help me move faster on.
And yes, if you're just using it as a slot machine, I understand it doesn't feel useful. But I don't think that's how most people use it, at least that's not how I use it.
To be fair, I think a lot of people are in fact using it like a slot machine, which is where a lot of the "AI doesn't help me code" perspectives are coming from.
depends on what abstraction level you enjoy being creative at.
Some people like creative coding, others like being creative with apps and features without much care to how it's implemented under the hood.
I like both, but IMO there is a much larger crowd for higher level creativity, and in those cases AIs don't automate the creativity away, they enable it!
Yes, because ideas are not worth much, if anything. If you have an idea for a book, or a painting, and have someone else implement it, you have not done the creative work. Literally, you have not created the work, brought it into existence. The creator has done the creating.
I guess that depends on how much oversight you engage in. A lot of famous masters would oversee apprentices and step in for difficult tasks and to finish the work, yet we still attribute the work to those masters. Most of the work in science is done by graduate students, but we still attribute the lion's share of the credit to PIs.
The parent is making a philosophical argument. The exact Hollywood definitions aren't important, since there are far more job roles in film production than in software development. If you insist, though, just replace "creator" with "producer" in his original argument and it's the same: you can produce a movie without doing the acting yourself.
Most software is developer tools and frameworks to manage electrical state in machines.
Such state-management messes use up a lot of resources as they get copied around.
As an EE working on QA for future chips, with the goal of compressing away developer syntax art and preserving the least amount of state management possible while achieving maximum utility: sorry, self-selecting biology of SWEs, but also not sorry.
Above all, this is capitalism, not honorific obligationism. If hardware engineers can claim more of the tech economy for our shareholders, we must.
There are plenty of other creative outlets that are much less resource intensive. Rich first-world programmers are a small subset of the population and can branch out and explore life, rather than believing everyone else has an obligation to conserve the personal story of a generation of the future dead.
To me, it's the natural result of gaining popularity: after the hype train rolled through, enough people started using it and are now giving honest feedback. Real honest feedback can feel like a slap in the face when all you've had is overwhelming positive feedback from those aboard the hype train.
The writing has been on the wall with so-called hallucinations, where LLMs just make stuff up; the hype was way out over its skis. Stories like lawyers being fined for presenting unchecked LLM output as fact will continue to take the shine off, and hopefully some of the raw gung-ho nature will slow down a bit.
I saw an article today from the BBC where travellers are using LLMs to plan their vacations and getting into trouble going places (sometimes dangerously remote ones) to visit landmarks that don't even exist.
I'm mildly bearish on the human capacity to learn from its mistakes and have a feeling in my gut that we've taken a massive step backwards as civilization.
I could almost understand a lawyer working late the night before a brief is due and simply running out of time to review the LLM's output. But how do you not look up travel destinations before heading out? That's just something I can't wrap my head around, however hard I try to be kind and see the other side of it.
There are a lot of good AI code reviewers out there that learn project conventions from prior PRs and derive rules from them (things like cubic.dev or greptile, etc.). I've found they definitely save time and catch things I would have missed. They're especially helpful for running an open source project, where code quality can have high variance and as a maintainer you may feel hesitant to be direct with someone; the machine has no feelings, so it is what it is :)
honestly? this, but zoom out. machines are supposed to do the grunt work so that people can spend their time being creative and doing intangible, satisfying things, but we seem to have built machines to make art, music and literature in order to free ourselves up to stack bricks and shovel manure.