Sadly capitalism rewards scarcity at a macro level, which in some ways is the opposite of efficiency. It also grants "social status" to the scarce via more resources. As long as you aren't disrupted, and everyone in your industry does the same/colludes, restricting output and working less usually commands more money up to a certain point (prices in these markets are set more like a monopoly's). It's just that scarcity was in the past correlated with difficulty, which made it "somewhat fair" -> AI changes that.
It's why unions, associations, professional bodies, etc. exist, for example. This whole thread is an example -> the value gained from efficiency in SWE jobs doesn't seem to be accruing to the people with SWE skills.
There is also a chance that a lot of this capex is written off and the money becomes "sunk". Bad for the current players, but given that inference, as you mention, is profitable, after the write-offs and the market correction the industry carries on on variable inference revenue.
The catch is you probably only want to be invested after any write-offs/corrections if that is your hypothesis; i.e. the future may be AI, but it isn't a straight line, nor is it guaranteed that the current players will be the future AI company of choice. You can be right about the end state and still lose your shirt in between with markets.
I think you can enjoy both aspects - both the problem solving and the craft. There will be people who agree that, of course, from a rational perspective solving the problem is what matters, but for whom personally the "fun" is gone. Generally, people who identify themselves as "programmers", as the article does, are the people who enjoy problem solving/tinkering/building.
What if you want to be a better problem solver (in the tech domain)? Where should you focus your efforts? That's what is confusing to me. There is a massive war between the LLM optimists and pessimists. Whenever I personally use LLM tools, they are disappointing albeit still useful. The optimists tell me I should be learning how to prompt better, that I should be spending time learning about agentic patterns. The pessimists tell me that I should be focusing on fundamentals.
In my view it's a tool, at least for the moment. Learn it, work out how it works for you and what doesn't work for you. But assuming you are the professional, they should trust your judgement, and you should also earn that trust. That's what you pay skilled people for. If that tool isn't the best way to get the job done, use something else. Of course that professional should be evaluating tools and assuring us/management (whether by evidence or other means) that the most cost-efficient, quality product is being built, like in any other profession.
I use AI, and for some things it's great. But I'm feeling like they want us to use the "blunt instrument" that is AI when sometimes a smaller, more fine-grained tool/just handcrafting code for accuracy is, at least for me, quicker and more appropriate. The autonomy window, as I recently heard it expressed.
I think the reason for the negativity in this forum (and other threads I've seen over the past few months) is that people are engaged with AI and, deep down, are not happy with its direction even if they are forced to adapt. That negativity spreads, I think, to the people winning from this, which is common in human nature. At least that's the impression I'm getting here and elsewhere. The most commented articles on HN these days are AI ones (e.g. an OpenAI model announcement, some blogger writing about Claude Code getting 500+ comments, etc), which shows a very high level of emotional engagement, with the typical offensive and defensive attitudes between people who benefit or lose from this. General old-school software tech articles are also drowned out in comparison; AI is taking all the oxygen out of the room.
My anecdotal observation from talking to people: most tech cycles I've seen have hype/excitement, but this is the first one, at least that I've been in, with a large amount of fear/despair. From loss of jobs, to automating all the "good stuff", to enriching only the privileged, people are worried. As loss-averse animals, fear is usually more effective for engagement, especially if it means losing what came before - people are engaged but, I suspect, negative towards the whole AI thing in general, even if they won't say it on the record. Fear also creates a singular focus; when you are threatened/anxious it's harder to engage with other topics, and it makes you see the AI trend as something you would want to see fail. That paints AI researchers not just negatively, but as people changing their own profession/world for the worse, which doesn't elicit a positive response.
And for the others, even if they don't have this engagement, the fact that this is drowning out other things can be annoying to some tech workers as well. Other tech talks, articles, research, etc. are just silent in comparison.
YMMV; this is just my current anecdotal observations in my limited circle but I suspect others are seeing the same.
The question is not whether you can or can't, but whether it is still worth it long term:
- Is there a moat in doing so (i.e. will people actually pay for your SaaS knowing that they could do it themselves via AI)? And..
- How many large-scale ideas do you need post-AI? Many SaaS products are subscription based and loaded with features you don't need. Most people would prefer a simple product that just does what they need without the ongoing costs.
There will be more software. The question is who accrues the economic value of this additional software - the SWE/tech industry (incumbent), the AI industry (disruptor?) and/or the consumer. For SWEs/tech workers it probably isn't what they envisioned when they started in/studied for this industry.
It seems obvious to me it is the consumer who will benefit most.
I had been thinking of buying an $80 license for a piece of software but ended up knocking off a version in Claude Code over a weekend.
It is not even close to something commercial grade that I could sell as a competitor, but it is good enough for me to not spend $80 on the license. The huge upside is that I can customize the software in any way I like. I don't care that it isn't maintainable either. Making a new version in ChatGPT-5 is going to be my first project.
Just a few hours ago I was thinking about how I would like to customize the fitness/calorie-tracking app I use. There are so many features I would like that would be tightly coupled to my own situation and not suitable for a mass-market product.
This, to me, is obviously what the future of software looks like for everything but mission-critical software.
This of course has a lot of implications for future employment in tech, architecture/design decisions, etc. Why would a non-tech company use a SaaS when it can just AI something up and have 1-2 engineers accountable for the build? It's a lot cheaper and amortisable over many products, saving some companies millions. Not just tech implementors but sales staff would be disrupted, especially when the SaaS is implementing a standard or requires significant customisation anyway. Buy vs build, product vs implementation - it should all change soon; the silver lining in all of this.
I sadly think that if the promise of AI happens, this is the likely economic outcome. The last century or so was an anomaly relative to most of human history; a trend created by the "arms race" of needing educated workers. The prisoner's dilemma was that if you trained your workers in more efficient tech you could out-compete and take all the profits from competitors, which gave those educated workers the means to strike (i.e. leverage). Now it is an "arms race" of educated AI rather than educated workers, which could invalidate a lot of assumptions our current society takes for granted in its structure.
That's what AI does. It puts even more of a premium on power and politics vs, say, learning, intelligence and hard work. Connections, wealth and power. It is almost ironic that our industry is inventing the thing that empowers the people techies often find useless (as per the above comments) while disempowering themselves, often shutting the door behind them.
Yes, an AI will come up with more insight than many management people - many in this thread state that an LLM can do their manager's job. It's a mistake, however, to assume that's what they are paid for.
Agree with most of what you said except for the "big bucks" part. Why would I pay for your product when I can ask the AI to do it? To be honest I think I would rather use that money for anything else if I can spend a little bit of time and get the AI to do it. This is quite deflationary for programming in general and inflationary for domains not disrupted, all else being equal. There's a point where Jevons paradox fails - after all, there's only so much software most normal people want, and at that point tech workers' value relative to other sectors will decline, assuming unequal disruption.
The ability to earn the big bucks, as you put it, is not a function of the value delivered/produced, but of the scarcity and difficulty of acquiring said value. That is capitalism. An extreme example is the clean air we breathe - it is currently free, but extremely valuable to most living things. If we made it scarce (e.g. through pollution), eventually people would start charging for it, potentially at extortionate prices depending on how rare it becomes.
The only exception I see is if the software encodes a domain that isn't as accessible to people and is kept secret/under wraps, has natural protections (e.g. a government system that is mandatory to use), or is complex and still requires co-ordination and understanding. This does happen, but then I would argue the value is in the adjacent domain knowledge - not in the software itself.
In fact, in many spa towns you already have local taxes, e.g. a "climate surcharge", where as a tourist you actually pay for the clean air. Usually it's a local tax added on top of your hotel bill.
It's kinda obvious that's their goal, especially with the current focus on coding by most of the AI labs in most announcements - it may be augmentation now, but that isn't the end game. Everything else these AI labs do, while fun, seems like at most a "meme" to most people in relative terms.
Most Copilot-style setups (not just in this domain) are designed to gather data and train/gather feedback before full automation or downsizing. If they had said that outright, they may not have got the initial usage they needed from developers. Even if it is augmentation, it feels to me like the other IT roles (e.g. BAs, Solution Engineers maybe?) are safer than SWEs going forward. Maybe it's because devs have skin in the game, and because even without AI it isn't an easy job over time, that it's harder for them to see. Respect for SWE as a job in general has fallen, in my anecdotal conversations at least, mainly due to AI - after all, long-term career prospects are a major factor in career value, social status and personal goals for most people.
Their end goal is to democratize/commoditize programming with AI as low-hanging fruit, which by definition reduces its value per unit of output. The fact that there is so much discussion on this IMO shows that many, even if they don't want to admit it, think there is a decent chance they will succeed at this goal.
Stop repeating their bullshit. It was never about democratizing. If it were, they would start teaching everyone how to program, the same way we started teaching everyone how to read and write not that long ago.
Many tech companies and/or training outfits did try, though, didn't they? I know they do boot camps, coding classes in schools and a whole bunch of other initiatives to get people into the industry. Teaching kids and adults coding skills has been attempted; the issue, IMO, is more that not everyone has the interest and/or aptitude to continue with it. The problem is that there are parts of the industry/job that aren't actually easy to teach (note: not all of it); it can be quite stressful and requires constant keeping up - IMO if you don't love it you won't stick with it. Even as software demand grew, despite the high salaries (particularly in the US) and the training, supply didn't keep up with demand until recently.
In any case I'm not saying I think they will achieve it, or achieve it soon - I don't have that foresight. I'm just elaborating on their implied goals; they don't state them directly, but reading their announcements on their models, code tools, etc, that's IMO their implied end game. Anthropic recently announced statistics showing that most of their model usage is for coding. Thinking it is just augmentation doesn't justify the money IMO put into these companies by VCs, funds, etc - they are looking for bigger payoffs than that, remembering that many of these AI companies aren't breaking even yet.
I was replying to the parent comment - augmentation and/or copilots don't seem to be their end game/goal. Whether they are actually successful is another story.