The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do: things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.
To those who have swallowed the AI panacea hook, line and sinker, those who say it's made them more productive, or that they no longer have to do the boring bits and can focus on the interesting parts of coding: I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that right now you are in a comfortable position financially or socially, but your future self just a few short months from now may be dramatically impacted.
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job, a dream dreamt from a young age, is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.
I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and he can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.
Someone smart said that AI should replace tasks, not jobs.
There are infinite analogies for this whole thing, but it mostly distills down to artisans and craftsmen in my mind.
Artisans build one chair to perfection: every joint is meticulously measured and uses traditional handcrafted Japanese joinery, and not a single screw or nail is used unless it's absolutely necessary. It takes weeks to build one, and each is a unique work of art.
It also costs 2000€ for a chair.
Craftsmen optimise their process for output, instead of selling one 2000€ chair a month, they'd rather sell a hundred for 20€. They have templates for cutting every piece, jigs for quickly attaching different components, use screws and nails to speed up the process instead of meticulous handcrafted joinery.
It's all about where you get your joy in "software development". Is it solving problems efficiently or crafting a beautiful elegant expressive piece of code?
Neither way is bad, but pre-LLM both people could do the same tasks. I think that's coming to an end in the near future. The difference between craftsmen and artisans is becoming clearer.
There is a place for people who create that beautiful hyper-optimised code, but in many (most) cases just a craftsman with an agentic LLM tool will solve the customer's problem with acceptable performance and quality in a fraction of the time.
In the long run I think it's pretty unhealthy to make one's career a large part of one's identity. What happens during burnout or retirement or being laid off if a huge portion of one's self depends on career work?
Economically it's been a mistake to let wealth get stratified so unequally; we should have kept, and need to reintroduce, high progressive tax rates on income, and potentially implement wealth taxes, to reduce the necessity of guessing at a high-paying career five years in advance. That simply won't be possible to do accurately with the coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.
Practically, once automation begins producing more value than 25% or so of human workers, we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or adopt UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI: your landlord is not going to want twice the number of burgers and couches you get distributed, whereas they'd happily double your rent in dollars.
Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence] especially in healthcare/wellness) it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this but they'll basically be optional participation for people who want to do that.
If we had done what you say (distributed wealth more evenly between people/corporations), then, more to the point, I don't know if AI would have progressed as it has - companies would have been more selective with their investment money, and previously AI was seen as a long-shot bet at best. Most companies in the "real economy" can't afford to make many of these kinds of bets in general.
The main reason for the transformer architecture, and many other AI advancements really, was that "big tech" has lots of cash it doesn't know what to do with. The US system seems to punish dividends tax-wise as well, so companies are incentivized to become like VCs: buy lots of opportunities hoping one makes it big, even if many end up losing.
Transformers grew out of the value-add side (autotranslation), though, not really the ad business side, iirc. Value-add work still gets done in high-progressive-tax societies if it's valuable to a large fraction of people. Research into luxury goods is slowed by progressive tax rates, but the border between consumer and luxury goods actually rises a bit with redistributed wealth: more people can afford smartphones earlier, almost no one buys superyachts, and so reinvestment into general technology research may actually be higher.
Sure. I just know that in most companies (having seen the numbers on projects in a number of them, across industries), funding projects which give people time to think, ponder, and publish white papers on new techniques is rare and economically not justifiable against other investments.
Put it this way - a project where people have the luxury to scratch their heads for a while and to bet on something that may not actually be possible yet is something most companies can't justify financing. Listening to the story of the transformer's invention, it sounds like one of these projects to me.
They may stand on the shoulders of giants, that is true (at the very least they were trained in these institutions), but putting it together as it was - that was done in a commercial setting with shareholder funds.
In addition, given the disruption LLMs have caused Google in general, I would say, despite Gemini, it may have been better cost/benefit-wise for Google NOT to invent the transformer architecture at all/yet, or at least not to publish a white paper for the world to see. As a use of shareholder funds, the activity above probably isn't a wise one.
Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.
Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.
Another problem is hoarding. Wealth inequality is one thing, but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to. This burdens a society.
> Career being the core of one's identity is so ingrained in society
In AMERICAN society. Over there, "what do you do?" is among the first three questions people ask each other when they meet.
I've known people for 20 years and I don't have the slightest clue what they do for a living; it's never come up. We talk about other things - their profession isn't a part of their personality.
It is, but only for select members of society: off the top of my head, those with benefits programs that let them go after that opportunity, like 100% disabled veterans, or the wealthy and their families.
For a prototype, sure, but something production-ready requires almost the same amount of effort as it used to, if you care about good design and code quality.
It really doesn't. I just ditched my WordPress/WooCommerce webshop for a custom one that I made in 3 days with Claude, in C# Blazor. It is better in every single way than my old webshop, and I have control over every aspect of it. It's totally production-ready.
The code is as good as or even better than what I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of Playwright tests ensuring things don't break over time, and proving that things actually work.
I didn't have to mess with any of the HTML/CSS, which is usually what makes me give up on my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.
3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, invoicing, CI/CD, and all the custom features that I used to dream of.
Sure, it's not a crazy innovative project, but it brings me a ton of value and liberates me from those overengineered, "generic", bulky CMSes. I don't have to pay $50 for a stupid plugin (that wouldn't really fit my needs anyway) anymore.
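To give a flavor of those tests, here is a minimal sketch of what one such Playwright check might look like in C# (the URL, selectors, and flow are illustrative assumptions, not details from the actual shop):

    using System.Threading.Tasks;
    using Microsoft.Playwright;
    using static Microsoft.Playwright.Assertions;

    class CheckoutSmokeTest
    {
        public static async Task Main()
        {
            using var playwright = await Playwright.CreateAsync();
            await using var browser = await playwright.Chromium.LaunchAsync();
            var page = await browser.NewPageAsync();

            // Hypothetical local dev URL and selectors; a real shop's routes would differ.
            await page.GotoAsync("http://localhost:5000/products/sample-item");
            await page.ClickAsync("button#add-to-cart");
            await page.GotoAsync("http://localhost:5000/cart");

            // The regression check: the item actually landed in the cart.
            await Expect(page.Locator(".cart-line-item")).ToHaveCountAsync(1);
        }
    }

A handful of these running headless in CI is what catches the "new feature quietly broke checkout" class of regressions over time.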
I find that restricting it to very small modules that are clearly separated works well. It does sometimes do weird things, but I'm there to correct it with my experience.
I just wish I could have competent enough local LLMs and not rely on a company.
The ones approaching competency cost tens of thousands in hardware to run. Even if competitive local models existed would you spend that to run them? (And then have to upgrade every handful of years.)
You can be as specific as you want with an LLM; you can literally tell it to write “clean code” or use a DI framework or whatever, and it’ll do it. Is it still work? Yes. But once you start using them you’ll realize how much of the code you actually write is safely in the realm of boilerplate, and that the core aspect of software dev is architecture, which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look; I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow-up questions.
Here’s a bunch of examples: moving code around, abstracting common functionality into a function and then updating all call sites, moving files around, pattern-matching off an already existing pattern in your code (see the sketch below). Sometimes it can be fun and zen, or you’ll notice another optimization along the way … but most of the time it’s boring work an agent can do 10x faster than you.
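As a made-up illustration of that "abstract common functionality" chore, in C#: price formatting that used to be duplicated inline at two call sites gets pulled into one helper, and both call sites are updated to use it (all names here are hypothetical):

    using System;
    using System.Globalization;

    class PriceFormatting
    {
        // The extracted helper; previously this logic was pasted inline at each call site.
        static string FormatPrice(decimal amount) =>
            amount.ToString("0.00", CultureInfo.InvariantCulture) + " EUR";

        // Call site 1, updated to use the helper.
        static void PrintInvoiceLine(decimal amount) =>
            Console.WriteLine("Invoice total: " + FormatPrice(amount));

        // Call site 2, updated the same way.
        static void PrintCartBadge(decimal amount) =>
            Console.WriteLine("Cart: " + FormatPrice(amount));

        static void Main()
        {
            PrintInvoiceLine(19.90m);
            PrintCartBadge(240.00m);
        }
    }

Trivial at this scale, but across dozens of call sites it is exactly the mechanical work an agent grinds through faster than a human.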
> the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.
This right here, in your very own comment, is the crux. Unless you're rich or run your own business, your employer (and many other employers) is right now counting down the days till they can think of YOU as the boilerplate and farm YOUR work out to an LLM. At the very least, where they currently employ 10, they are salivating about reducing it to 2.
This means painful change for a great many people. Appeals by analogy to historical changes, like motorised vehicles etc., miss the QUALITATIVE change occurring this time.
Many HN users may point to the Jevons paradox; I would like to point out that it may very well work right up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety - until, of course, THAT day when the farmer decides he isn't.
The Jevons paradox, I suspect, does not apply to software, sadly for SWEs; or at least not in the way they hope it does. The paradox implies that there are software projects on the shelf with a decent return on investment (ROI) that aren't taken up for lack of resources (money, space, production capacity or otherwise). Unlike with physical goods, the only resources lacking for software are generally money and people, which means the only way more software gets built is that lower-value projects now get taken up.
AI may make low-ROI projects more viable now (e.g. internal tooling in a company, or a business website), but in general the high-ROI projects - the ones that can therefore justify high salaries - would have been done anyway.
My overwhelming experience is that the sort of developers unironically using the phrase "vibe coding" are neither interested in nor care about good design and code quality.
If I can keep adding new features without introducing big regressions, that is good design and good code quality. (Of course there will come a time when that is no longer possible and it will need a rewrite. The same goes for software created by top-paid developers from the best universities.)
As long as LLM-written code keeps new bugs to the same level as hand-written code, I think LLMs writing code is much superior, just because of the speed with which it allows us to implement features.
We write software to solve (mostly) business efficiency problems. The businesses which will solve those problems faster than their competitors will win.
In light of OpenAI confessing to shareholders that there’s no there there (being shocked by and then adopting Anthropic’s MCP, being shocked by and then adopting Anthropic’s Skills, opening up a hosted dev platform to milk my awesome LLM business ideas, and now revealing that inline ads à la Google are their best idea so far to, you know, make money…), I was thinking about those LLM project statistics. Something like 5-10% of projects are seeing a nice productivity bump.
A standard distribution says some minority of IT projects are tragi-bad… I’ve worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked…
AirFryers are great household tabletop appliances that help people cook, faster and easier than ever before, extraordinary dishes their ovens normally couldn’t manage. A true revolution. A proper chef can use one to craft amazing food. They’re small and economical, awesome for students.
Chefs just call it “convection cooking” though. It’s been around for a minute. Chefs also know to go hot (when and how), and can use an actual deep fryer if and when they want.
The frozen food bags here have AirFryer instructions now. The Michelin star chefs are still focusing on shit you could buy books about 50 years ago…
Coding is merely a means to an end and not the end itself. Capitalism sees to it that a great many things are this way. Unfortunately only the results matter and not much else. I'm personally very sorry things are this way. What I can change I know not.
Not sure it's the gotcha you want it to be. What you said is true by definition. That is, vibe coding is defined as not caring about code. Not to be confused with LLM-assisted coding.
I care about product quality. If "good design" and "code quality" can't be perceived in the product they don't matter.
I have no idea what the code quality is like in any of the software I use, but I can tell you all about how well they work, how easy to use they are, and how fast they run.
Perhaps for the inexperienced or timid. Code quality is "it compiles" and design is "it performs to spec". Does properly formatted code matter when you no longer have to read it?
Formatted? I guess not really, because it’s trivially easy to reformat it. But how it’s structured, the data structures and algorithms it uses, the way it models the problem space, the way it handles failures? That all matters, because ultimately the computer still has to run the code.
It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.
Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.
Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.
I don't agree. I looked at most of the code the AI wrote in my project, and I have a good idea of how it is architected because I actively planned it. If I have a bug in my orders, I know I have to go to the orders service. Then it's not much harder than reading the code my coworkers write at my daily job.
At this point, in reality, do you read assembly or library code anymore?
Years ago it was Programmer -> Code -> Compile -> Runtime
Today, the Programmer is divided into two entities:
Intention/Prompt Engineer -> AI -> Code -> Compile -> Runtime.
We have entered the 'sudo make me a sandwich' world, where computers do our bidding via voice and intent. Despite knowing how low-level device drivers work, I do not care how a file is stored, in what format, or on what medium. I do want it to function with .open and .write, which will work as expected on a working instruction set.
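A tiny sketch of that attitude in C# (illustrative, not from any particular codebase): code written against the Stream abstraction neither knows nor cares whether the bytes land on disk, in memory, or somewhere else entirely.

    using System.IO;
    using System.Text;

    class StorageAgnostic
    {
        // Works against any backing medium: file, memory buffer, pipe, socket...
        static void WriteGreeting(Stream destination)
        {
            byte[] bytes = Encoding.UTF8.GetBytes("hello, world\n");
            destination.Write(bytes, 0, bytes.Length);
        }

        static void Main()
        {
            // Same code path, two very different media.
            using (Stream file = File.Open("greeting.txt", FileMode.Create))
                WriteGreeting(file);

            using (Stream memory = new MemoryStream())
                WriteGreeting(memory);
        }
    }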
Those who can dive deep into software and hardware problems will retain their jobs or find work doing that which AI cannot. The days of requiring an army of six-figure polyglots have passed. As for the ability to do production- or kernel-level work, that is a matter of time.
I'm not sure I'm having more fun, at least not yet, since for me the availability of LLMs takes away some of the pleasure of needing to use only my intellect to get something working. On the other hand, yes, it is nice to be able to have Copilot work away on a thing for my side project while I'm still focused on my day job. The tradeoff is definitely worth it, though I'm undecided on whether I am legitimately enjoying the entire process more than I used to.
You don't have to use LLMs the whole time. For example, I've gotten a lot done with AI and still had time over the holidays to spend on a long-time side project... organically coding the big fun thing:
Replacing Dockerfiles and Compose with CUE and Dagger
I don't do side projects, but the LLM has completely changed the calculus about whether some piece of programming is worthwhile doing at all. I've been enjoying myself automating all sorts of admin/ops stuff that hitherto got done manually because there was never a clear 1/2 day of time to sit down and write the script. Claude does it while I'm deleting email or making coffee.
For you, maybe. In my experience, the constant need to babysit LLMs to avoid the generation of verbose, unmaintainable slop is exhausting, and I'd rather do everything myself. Even with meticulously detailed instructions, it feels like a slot machine: sometimes you get lucky and the generated code is somewhat usable. Of course, it also depends on the complexity and scope of the project and/or the tasks you are automating.
It is clearly an emotional question. My comment on here saying I enjoyed programming with an LLM has received a bunch of downvotes, even though I don't think the comment was derogatory towards anyone who feels differently.
People seem to have a visceral reaction towards AI, where it angers them enough that even the idea that people might like it upsets them.
In physics, color has been redefined as a surface reflectance property with an experiential artefact as a mental correlate. But this understanding is the result of the assumptions made by Cartesian dualism. That is, Cartesian dualism doesn't prove that color as we commonly understand it doesn't exist in the world, only in the mind. No, it defines it to be the case. Res extensa is defined as colorless; the res cogitans then functions like a rug under which we can sweep the inexplicable phenomenon of color as we commonly understand it. We have a res cogitans of the gaps!
Of course, materialists deny the existence of spooky res cogitans, admitting the existence of only res extensa. This puts them in a rather embarrassing situation, more awkward than the Cartesian dualist's, because now they cannot explain how the color they've defined as an artefact of consciousness can exist in a universe of pure res extensa. It's not supposed to be there! This is an example of the problem of qualia.
So you are faced with either revising your view of matter to allow for it to possess properties like color as we commonly understand them, or insanity. The eliminativists have chosen the latter.
There's no definition of "color" in physics. Physics does quantum electrodynamics. Chemistry then uses that to provide an abstracted mechanism for understanding molecular absorption spectra. Biology then points out that those "pigments" are present in eyes, and that they can drive nerve signals to brains.
Only once you're at the eye level does anyone start talking about "color". And yes, they define it by going back to physics and deciding on some representative spectra for "primary" colors (cf. CIE 1931).
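For the curious, this is roughly what that definition cashes out to; a minimal sketch of the standard CIE 1931 formulation (stated from general knowledge, not from anything upthread). A stimulus with spectral power distribution S(λ) is reduced to three tristimulus values by weighting it against the standard observer's color-matching functions:

    X = \int_{380}^{780} S(\lambda)\,\bar{x}(\lambda)\,d\lambda, \quad
    Y = \int_{380}^{780} S(\lambda)\,\bar{y}(\lambda)\,d\lambda, \quad
    Z = \int_{380}^{780} S(\lambda)\,\bar{z}(\lambda)\,d\lambda

(wavelengths in nm). Two physically different spectra that produce the same (X, Y, Z) count as "the same color" by definition, which is exactly the kind of abstraction being described.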
Point being: everything is an abstraction. Everything builds on everything else. There are no simple ideas at the top of the stack.
This is unnecessarily pedantic. Your explanation demonstrates that.
> There are no simple ideas at the top of the stack.
I don't know what a "simple idea" is here, or what an abstraction is in this context. The latter has a technical meaning in computer science which is related to formalism, but in the context of physical phenomena, I don't know. It smells of reductionism, which is incoherent [0].
> To untutored common sense, the natural world is filled with irreducibly different kinds of objects and qualities: people; dogs and cats; trees and flowers; rocks, dirt, and water; colors, odors, sounds; heat and cold; meanings and purposes.
It's too early to declare that there are irreducible things in the universe. All of those things mentioned are created in the brain and we don't know how the brain works, or consciousness. We can't declare victory on a topic we don't fully understand. It's also a dubious notion to say things are irreducible when it's quite clear all of those things come from a single place (the brain), of which we don't have a clear understanding.
We know that things like the brain and the nervous system operate at a certain macro level in the universe, so all they observe are ensembles of macro states; they don't observe the universe at the micro level. It's then quite natural that all the knowledge and theories they develop are at this macroscopic/ensemble level, imo. The mystery of this is still unsolved.
Also, regarding the physics itself: we know that, due to the laws of physics, the universe tends to cluster physical matter together into bigger objects, like planets, birds, whatever. But those objects can be described as repeating patterns in the physical matter, and this repeating nature causes them to behave as if they have a purpose. The purpose is in the repetition. This is totally in line with reductionism.
> It's too early to declare that there are irreducible things in the universe. [...] We can't declare victory on a topic we don't fully understand.
This isn't a matter of discovering contingent facts that may or may not be the case. This is a matter of what must be true lest you fall into paradox and incoherence and undermine the possibility of science and reason themselves. For instance, doubting rationality in principle is incoherent, because it is presumably reason that you are using to make the argument, albeit poorly. Similar things can be said about arguments about the reliability of the senses. The only reason you can possibly identify when they err is because you can identify when they don't. Otherwise, how could you make the distinction?
These may seem like obviously amateurish errors to make, but they surface in various forms all over the place. Scientists untutored in philosophical analysis say things like this all the time. You'll hear absurd remarks like "The human brain evolved to survive in the universe, not to understand it" with a confidence of understanding that would make Dunning and Kruger chuckle. Who is this guy? Some kind of god exempt from the evolutionary processes that formed the brains of others? There are positions and claims that are simply nonstarters because they undermine the very basis for being able to theorize in the first place. If you take the brain to be the seat of reason, and then render its basic perceptions suspect, then where does that leave science?
We're not talking about the products of scientific processes strictly, but philosophical presuppositions that affect the interpretation of scientific results. If you assume that physical reality is devoid of qualitative properties, and possesses only quantifiable properties, then you will be led to conclusions latent in those premises. It's question begging. Science no more demonstrates that this is what matter is like than the proverbial drunk demonstrates that his lost keys don't exist because they can't be found in the well-lit area around the lamp post. What's more, you have now gotten yourself into quite the pickle: if the physical universe lacks qualities, and the brain is physical, then what the heck are all those qualities doing inside of it! Consciousness has simply been playing the role of an "X-of-the-gaps" to explain away anything that doesn't fit the aforementioned presuppositions.
You will not find an explanation of consciousness as long as you assume a res extensa kind of matter. The most defining feature of consciousness is intentionality, and intentionality is a species of telos, so if you begin with an account of matter that excludes telos, you will never be able to explain consciousness.
But the problem is we don't know how it works. It's not about assuming consciousness is outside of physical reality or something like this; it's simply the fact that we don't have an understanding of it.
For example, if we could see and trace all intentional thoughts/acts before they occurred (in matter), intentionality would cease to be a property; it would be an illusion.
All things that we know of in the universe function as physical matter, and we know the brain is a physical thing with 80 billion neurons and trillions of connections. What's the simplest explanation?
1) This is an incredibly complicated physical thing that we don't understand yet (and quite naturally so, with it having an incredible number of "moving parts")
or 2) there are qualitative elements in the universe that we don't have the scientific tools to measure or analyze, even in principle
I go with #1 because that's what every fiber is telling me (although I admit I don't know, of course). And with #1 also comes reductionism. It is a physical system; we just don't have the mental models to understand it.
I also want to say there could be another aspect that affects consciousness: namely, the appearance of a "present now" that we experience in consciousness. This present moment is not really explained in physics, but it could have something to do with how consciousness works. How, I don't know, but it all relates to how we model physics itself mentally.
To be blunt: it's whatever was in your head when you decided to handwave-away science in your upthread comment in favor of whatever nonsense you wanted to say about "Cartesian dualism".
No, that doesn't work. If you want to discount what science has to say you need to meet it on its own turf and treat with the specifics. Color is a theory, and it's real, and fairly complicated, and Descartes frankly brought nothing to the table.
That doesn't make anything "simple". Analysis operates on existing concepts, which means they're divisible. It's clear words are being thrown around without any real comprehension of them. This is a stubborn refusal to examine coarse and half-baked notions.
> If you want to discount what science has to say you need to meet it on its own turf and treat with the specifics.
Except this isn't a matter of science. These are metaphysical presuppositions that are being assumed and read into the interpretation of scientific results. So, if anything, this is a half-assed, unwitting dabbling in metaphysics and a failure to meet metaphysics on its own turf.
> whatever nonsense you wanted to say about "Cartesian dualism" [...] Descartes frankly brought nothing to the table
That's nice. But I haven't "handwaved-away" science. It is you who have handwaved-away any desire to understand the subject beyond a recitation of an intellectually superficial grasp of what's at stake. To say Descartes has nothing to do with any of this betrays serious ignorance.
It is an abstraction based on how our biological eyes work (this implies "knowledge" of physics).
So it is indirectly based on knowledge of how color works; it's simply not physics as we understand it, but "physics" as the biology of the eye "understands" it.
Red is an abstraction whose connection to how colors work is itself another abstraction, but of a much deeper complexity than 'red', which is about as direct as an abstraction can get nowadays.
There is absolutely no knowledge needed for someone to point to something that is red and say "this is red", and then for you to associate things that roughly resemble that color with "red".
Understanding the underlying concepts is irrelevant.
Except I could think they mean the name of the thing, the size of the thing, or a million other things. Especially if I have no knowledge of the underlying concept of colors.