cm277's comments

Same background as you and I fully agree. Again and again you see market/economic takes from technologists. This is not a technology question (yes, LLMs work), it's an economics question: what do LLMs disrupt?

If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing the cost of code (JVM, IDEs, post-Y2K outsourcing) disrupted the ERP/B2B market. Oh wait, they didn't. The only real disruption in ERP in the last, what, 30 years has been Cloud. Which is an economics disruption, not a technological one: cloud added complexity and points of failure, and yet it still disrupted a ton of companies, because it enabled new business models (SaaS for one).

So far, the only disruption I can see coming from LLMs is middleware/integration where it could possibly simplify complexity and reduce overall costs, which if anything will help SaaS (reduction of cost of complements, classic Christensen).


I'll take a crack.

> what do LLMs disrupt? If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud.

"Cost of developing code" is a trivial and incomplete answer.

Coding LLMs disrupt (or will, in the immediate future)

(1) time to develop code (with cost as a second order effect)

(2) expertise to develop code

None of the analogs you provided are a correct match for these.

A closer match would be Excel.

It improved the speed and lowered the expertise required to do what people had previously been doing.

And most importantly, as a consequence of the latter especially, more types of people could leverage computing to do more of their work faster.

The risk to B2B SaaS isn't that a neophyte business analyst is going to recreate your app overnight...

... the risk is that 500+ neophyte business analysts each have a chance of replacing your SaaS app, every day, every year.

Because they only really need to get lucky once, and then the organization shifts support to in-house LLM-augmented development.

The only reason most non-technology businesses didn't do in-house custom development thus far was that ROI on employing a software development team didn't make sense for them. Suddenly that's no longer a blocker.

To the point about cloud, what did it disrupt?

(1) time to deploy code (with cost as a second order effect)

(2) expertise to deploy code

B2B SaaS should be scared, unless they're continuously developing useful features, have a deep moat, and are operating at volumes that allow them to be priced competitively.

Coding agents and custom in-house development are absolutely going to kill the 'X-for-Y' simple SaaS clone business model (anything easily cloneable).


This seems to assume that these non-technical people have the expertise to evaluate LLM/agent generated solutions.

The problem with this tooling is that it cannot deploy code on its own. It needs a human to take the fall when it generates errors that lose people money, break laws, cause harm, etc. Humans are supposed to be reviewing all of the code before it goes out, but your assumption is that people without the skills to read code, let alone deploy and run it, are going to do it with agents without a human in the loop.

All those non-technical users have to do is approve that app, manage to deploy and run it themselves somehow, and wait for the security breach to lose their jobs.


I think you're underestimating (1) how bad most B2B is (from a bug and security vulnerability perspective) & (2) how little B2B companies' engineers understand about how their customers are using their products.

The frequency of mind-bogglingly stupid 1+1=3 errors (where 1+1 is a specific well-known problem in a business domain and 3 is the known answer) cuts against your 'professional SaaS can do it better' argument.

And to be clear: I'm talking about 'outsourced dev to lowest-cost resources' B2B SaaS, not 'have a team of shit-hot developers' SaaS.

The former of which, sadly, comprises the bulk of the industry. Especially after PE acquisition of products.

Furthermore, I'm not convinced that coding LLMs + scanning aren't capable of surpassing the average developer in code security. Especially since it's a brute force problem: 'ensure there's no gap by meticulously checking each of 500 things.'
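To make the brute-force framing concrete, here's a toy sketch of checklist-style scanning (the pattern list is illustrative and tiny; a real checklist would run to hundreds of entries):

```python
import re

# Hypothetical, minimal checklist; a real scanner would cover hundreds of
# rules (hardcoded secrets, injection sinks, unsafe deserialization, etc.).
CHECKLIST = {
    "hardcoded secret": re.compile(r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "eval on input": re.compile(r"\beval\s*\("),
    "unsafe pickle load": re.compile(r"\bpickle\.loads?\s*\("),
}

def scan(source: str) -> list[str]:
    """Return the names of every checklist item the source code trips."""
    return [name for name, pattern in CHECKLIST.items() if pattern.search(source)]

findings = scan('api_key = "sk-123"\nresult = eval(user_input)')
# findings contains "hardcoded secret" and "eval on input"
```

The point isn't that this is sophisticated, it's that it is mechanical: each of the "500 things" can be checked exhaustively, which is exactly the kind of work that doesn't need human judgment per item.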

Auto code scanning for security hasn't been a significant area of investment because the benefits are nebulous. If you already must have human developers writing code, then why not have them also review it?

In contrast, scanning being a requirement for enabling fast-path citizen-developer LLM app creation changes the value proposition (and thus the incentive to build good, quality products).

It's been mentioned in other threads, but Firebase/Supabase-style 'bolt-on security-critical components' is the short-term solution I'd expect to evolve. There's no reason from-scratch auth / object storage / RBAC needs to be built most of the time.


I’m just imagining the sweat on the poor IT managers’ brow.

They already lock down everything enterprise wide and hate low-code apps and services.

But in this day and age, who knows. The cynical take is that it doesn’t matter and nobody cares. Have your remaining handful of employees generate the software they need from the magic box. If there’s a security breach and they expose customer data again… who cares?


That sweat is no worse than what comes from dealing with nightmare fly-by-night vendors for whatever business application a department wants.

Sometimes, the devil you know is preferable -- at least then you control the source.

Folks fail to realize the status quo is often the status quo because it's optimal for a historical set of conditions.

Previously... what would your average business user be able to do productively with an IDE? Weighed against the security risks? And so that's where the equilibrium was set.

If suddenly that business user can add substantial amounts of value to the org, I'd be very surprised if that point doesn't shift.

It matters AND...


Yeah. I used to manage a team that built a kind of low-code SaaS solution to several big enterprise clients. I sat in on several calls with our sales people and the customer’s IT department.

They liked buying SAP or M$ because it was fully integrated and turnkey. Every SaaS vendor they added had to be SOC2, authenticate with SAML, and each integration had to be audited… it was a lot of work for them.

And we were highly trained, certified developers. I had to sign documents and verify our stack with regulatory consultants.

I just don’t see that fear going away with agents and LLM prompts from frontline workers who have no training in IT security, management, etc. There’s a reason why AI tech needs humans in the loop: to take the blame when they approve what it outputs.


>> 2) expertise to develop code

This is wrong. Paradoxically, you need expertise to develop code with an LLM.


For LOB CRUD apps? We blew past that capability point months ago.


After years with a mini, I jumped to an Air just so I could finally get a proper 'netbook' experience. Don't like Chromebooks, Windows is too complex; there is room for a simplified laptop that is easy to use and update but lets you use proper apps, without going all the way to a full laptop with pro tools.

I've started to see this as a generational challenge. I am Gen X, I used to run FreeBSD and Linux, I don't mind the complexity and upkeep of a Windows laptop with all the trimmings (I do mind the complexity of the unixes, sorry). But what about Gen Z, who are used to simple, powerful technology with simplified apps and UIs? Why would they/should they put up with legacy UX and ways of working?

My guess is that's where Microsoft is going with the new Office apps, which are just web apps with thicker clients. Simplify, simplify until we can all work with iPads, Windows/ARM or whatever. Makes sense to be honest, although I'll probably keep a ThinkPad around the way old mechanics keep a set of tools in the garage even though they will probably never use them again.


The iPad can work wonders if your workflow suits it. But it's the antithesis of power use. It's very tied to a cloud approach, but when you control neither the cloud backend nor the app, it's hard to customize your workflow. Which is kinda the first step to mastery.


> Gen Z who are used to simple, powerful technology with simplified apps and UIs? why would they/should they put up with legacy UX and ways of working?

I disagree with the premise. Modern UIs are rife with more special cases, hidden gestures and non-transferable knowledge than the old “one mouse button is enough” or even early Windows' ugly but consistent model. Gen Z got harder UIs, under a superficial simplicity that is really just a constrained interaction space.

The problem for zoomers is that now, when they use a deep interaction model, the new complexity of UI becomes a frustration multiplier rather than a fixed cost.


That, and the visual language is so ambiguous and slapdash. Discovery is so much harder these days. And with ever-changing widget layouts, it's so hard to have a spatial memory of where to interact! Word in Windows 3.1 was far easier.


Yes, this. Microsoft has other businesses that can make a lot of money (regular Azure) and tons of cash flow. The fact that they are pulling back from the market leader (OpenAI) whom they mostly owned should be all the negative signal people need: AGI is not close and there is no real moat even for OpenAI.


Well, there are clauses in their relationship with OpenAI that sever the relationship when AGI is reached. So it's actually not in Microsoft's interest for OpenAI to get there.


I haven't heard of this. Can you provide a reference? I'd love to see how they even define AGI crisply enough for a contract.


> I'd love to see how they even define AGI crisply enough for a contract.

Seems to be about this:

> As per the current terms, when OpenAI creates AGI - defined as a "highly autonomous system that outperforms humans at most economically valuable work" - Microsoft's access to such a technology would be void.

https://www.reuters.com/technology/openai-seeks-unlock-inves...


This reminds me of the old XKCD about inventing new standards... fine, you get an EU Inc corporate model. What's the labor law applied to employees? What is the tax regime, and which countries will take in taxes? What about, oh, I don't know, liability, insurance, debt and bankruptcy, etc., etc.?

A company is a legal person within a jurisdiction -- of which all of the laws apply to every person. You can't have an EU Inc without a federal EU. Heck, even the US doesn't have a US Inc. This is naive at best.


UI is fashion-driven like clothing or furniture or car design. That's not new, it's just hard to admit for us techies that such a thing exists in our world. And just like with fashion, some changes are not for 'better' but for 'cooler' or 'more interesting'. The question is how far on the 'worse' scale you're willing to go to get up on the 'cool' scale. Otherwise, we'd all still be running Windows Server 2000...


Windows Server 2000 was great though! Or maybe that’s what you’re saying. I’m fine with all kinds of UX flash as long as it can be disabled; especially animation.


Or perhaps, it's for "designers keeping themselves employed"


Agreed. Text is used for a lot of things. A fantastic text parser/generator that doesn't need regex and can extract /meaning/ would have been a sci-fi fever dream even a decade ago. So, LLMs will definitely have their use and will probably disrupt several industries.
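A tiny illustration of the regex limitation (the invoice strings are made up for the example):

```python
import re

# A regex only matches the surface forms it was explicitly written for.
date_re = re.compile(r"\d{4}-\d{2}-\d{2}")

texts = [
    "Invoice dated 2024-03-15",
    "Invoice from the 15th of March, 2024",  # same meaning, different surface form
]
matches = [bool(date_re.search(t)) for t in texts]
# matches == [True, False]: the second phrasing is invisible to the pattern,
# while a model that extracts *meaning* would recover the same date from both.
```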

But this hype-storm just reminds me of the fever-dream blogs about the brave new world of the Internet back when hypertext became widely used in '93 or so (direct democracy, infinite commerce, etc, etc). Yes, of course, the brave new world came along, but it needed 3G and multi-touch screens as well and that was 15 years later and a whole different set of companies made money and ruled the world than those that bet on hypertext.


I haven't coded in years, so I'll take your word for the potential of AI in SWE. But software development has guardrails against bad code: unit testing, CI/CD, etc. Also, productivity/output can be measured more or less well. Partly for that reason, it's also used to efficiency shifts (say from C++ to Java, or Perl to anything...), and those are not usually massive, all-or-nothing changes.
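For what it's worth, the guardrail point can be made concrete: generated code has to pass deterministic checks before it ships. A minimal sketch (the function and its rules are a made-up example):

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; the kind of small function an LLM might regenerate."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Guardrail: deterministic checks any regenerated version must still pass.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(100.0, 150)
except ValueError:
    pass  # out-of-range discount correctly rejected
else:
    raise AssertionError("out-of-range discount should be rejected")
```

There's no equivalent mechanical pass/fail check for whether a support reply or a generated document was correct, which is exactly the asymmetry in question.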

Where's the equivalent in customer support? or document creation? or any of these other mythical AI use cases? genuinely asking.

The article makes a good case that the SaaS bubble is deflating and needs a new hype cycle to keep investment up. AI makes sense for that, so at least that's one good use case :-)


So, serious question: if OpenAI is "a few thousand days from AGI" and about to dominate the GenAI space, why can they not hold on to execs? Why is there no amount of options/money they can use to retain them?


I mostly agree with


Agreed; I don't remember the source, but I much prefer the Marines → Navy → Police continuum. Some circumstances require a highly capable team with high communication, aligned goals and motives, who can take decisions individually or at a low enough level. Some circumstances require bureaucracy, process, and external and internal controls.

The dumb "Founder mode" discourse hides away two things: a) scale forces you to climb that ladder towards bureaucracy and controls anyway, b) it's scope-specific. You don't want to go "Founder mode" on phone support. Or accounts payable, or probably HR. There are specific objectives, projects and also circumstances that need a more hands-on approach. And honestly a "Marines" analogy where the team is tight and authorized to make decisions, is better than some micro-managing, coke-fueled "Founder mode".


Most at-scale firms struggle to create the authority internally for specific teams to act, as you call it, as "Marines" within the bounds of their responsibility.

Something I'd term as an "authority budget", that is, not an approved annual dollars budget of what they can spend, but a defined amount/area of authority that they can flex without needing to escalate.

The most stifling thing for any high-performance employee is to have no sense of control: the ground constantly shifting under them, or the feeling that their company is actively trying to protect itself from them, making their job harder.

Yet this is the average case for many larger orgs.


There are good reasons for this from an organizational perspective as it reduces risk from a “lone wolf” making a catastrophic decision. It’s a good idea to have checks and balances when billions of dollars of people’s investments are at risk. Yes, the company may miss out on a few big victories from star performers, but it avoids catastrophic risks from overly allocated authority to a single individual.


Indeed, often a mix of both types of processes is needed within a company.

For those who are not familiar with it, check out Jeff Bezos’ 1997 Letter to Shareholders on irreversible (Type 1) vs. reversible (Type 2) decision making.


Can you link to the letter you are referring to? The top few results of my search had no mention of reversible decision making.


Try searching for 1-way vs 2-way decisions


