
> Your laptop is unquestionably better because it was constructed with the help of automated CNC machines and PCB assembly as opposed to workers manually populating PCBs.

This is a fundamentally flawed analogy, because the problems are inverted.

CNC and automated PCB assembly work well because designing a process that accurately produces the parts is hard, but validating that the work is correct is easy. Due to the mechanics of CNC, we can't manufacture something more precise than we can measure.

LLMs are inverted; it's incredibly easy to get them to output something, and hard to validate that the output is correct.

The analogy falls apart if you apply that same constraint to CNC and PCB machines. If they each had a 10% chance of creating a faulty product in a way that can only be detected by the purchaser of the final product, we would probably go back to hand-assembling them.
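To make the asymmetry concrete, here is a purely illustrative sketch (the function names, the "solve" entry point, and the tolerance numbers are all made up): checking a machined dimension is a single comparison against a tolerance band, while checking generated code means running every test you have, and a green run only rules out the failures you thought to write down.

  # Illustrative only; function names, the "solve" entry point, and the
  # numbers are hypothetical.

  def part_in_spec(measured_mm: float, nominal_mm: float, tol_mm: float) -> bool:
      # CNC-style validation: one measurement against a tolerance band.
      return abs(measured_mm - nominal_mm) <= tol_mm

  def generated_code_passes(code: str, test_cases) -> bool:
      # LLM-style validation: execute the output against every test we have.
      # A pass only means "no test we wrote caught a problem", not "correct".
      namespace = {}
      exec(code, namespace)
      solve = namespace["solve"]   # hypothetical entry point the code must define
      return all(solve(x) == expected for (x, expected) in test_cases)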

> Some companies can try to use automation to stay in place with lower headcount, but they’ll be left behind by competition that uses automation to move forward.

I suspect there will be a spectrum, as there historically has been. Some companies will use AI heavily and get crazy velocity, but have poor stability as usage uncovers bugs in a poorly understood codebase because AI wrote most of it. Others will use AI less heavily and ship fewer features, but have fewer severe bugs and be more able to fix them because of deep familiarity with the codebase.

I suspect stability wins for many use cases, but there are definitely spaces where being down for a full day every month isn't the end of the world.



I would not call validation of PCBs conceptually easy. We had to develop a lot of automation in validation to enable today's production volumes. Optical inspection has been standard for many years, X-ray is now getting commonplace as well, and flying probes are commonly used. Functional automated testing is standard for any non-hobby product. But you are right that automation overall was, and is, bottlenecked by the ability to do QA - which not only prevents defects from being shipped, but is also critical to systematic improvement of the production process (both tuning and new iterations). And I believe you are right to call out LLM-based systems as weak in this area; it is a limiting factor. I believe automated QA will be more and more critical to positive LLM impact.
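As a rough sketch of what "automated QA as the bottleneck" could look like on the LLM side (everything below is hypothetical, not any specific product or pipeline): generation is cheap, so you gate every candidate behind whatever automated checks you have and only escalate the survivors to a human.

  # Hypothetical gating loop: cheap generation, automated QA as the gate.
  # generate(), lint(), run_tests() and review_queue are placeholders.

  def produce_accepted_change(prompt, generate, lint, run_tests, review_queue, max_attempts=5):
      for _ in range(max_attempts):
          candidate = generate(prompt)     # cheap step
          if not lint(candidate):          # static checks first
              continue
          if not run_tests(candidate):     # then functional checks
              continue
          review_queue.append(candidate)   # humans only see survivors
          return candidate
      return None                          # QA, not generation, was the limit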


Validation that a PCB was manufactured correctly is... easy? Disagree. But how about VLSI? It's hugely automated. Moore's Law is exponential but team sizes aren't, and that productivity gap is made up for with huge amounts of automation. And nothing is easy about manufacturing validation of an ASIC.

I do think one primary difference between physical objects and software is that for physical objects we bother to have precise specifications that one can validate against, and I think that's what you're trying to get at. If all software had that, then software could have an "easy" validation story too, I suppose.

I have mixed feelings about precise specifications in software. On the one hand, the hardware engineer in me thinks everything should have an exact specification. On the other hand, that throws away the "soft" advantage, which is important for some types of software. So there is a spectrum.
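For what a "precise specification you can validate against" might look like in software (purely illustrative; the function and the properties below are made up), property-style checks are probably the closest analogue to a dimensioned drawing: a checkable statement of intent rather than a handful of example cases.

  # Illustrative property-style spec; the function under test is hypothetical.

  import random

  def normalize(xs):
      # hypothetical function under test
      s = sum(xs)
      return [x / s for x in xs] if s else list(xs)

  def check_normalize(trials=1000):
      for _ in range(trials):
          xs = [random.uniform(0.1, 10.0) for _ in range(random.randint(1, 20))]
          out = normalize(xs)
          assert len(out) == len(xs)             # shape is preserved
          assert abs(sum(out) - 1.0) < 1e-9      # outputs sum to one
          assert all(o >= 0 for o in out)        # non-negativity is preserved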


FWIW I don't think there's anything factually wrong with what you said, but I think it misses the parent's point. They would have to be incredibly naïve to say that hardware is easy. I think they were using "easy" as a relative word, not an absolute one. That's natural in these conversations, but it also easily leads to misunderstanding.

  > I do think one primary difference between physical objects and software is we bother to have precise specifications that one can validate against
Having been on the hardware side and now on software (specifically ML), this is one of the biggest differences I've noticed. It's a lot harder to validate programs. But the part that concerns me more is the blasé or even defensive attitude. In physical engineering it often felt like "this is the best we can do for now," with people talking about ideas and trying to improve; it seemed to concern management too. In software it feels a lot more like "it gives the right output" and "it passes the test cases" (but test cases aren't always robust and don't carry the same guarantees as in physical design), and we call it done. The whole notion of Test Driven Development even seems absurd to me: tests are a critical part of the process, but they shouldn't be what drives it. It just seems people are more concerned with speed than velocity. A lack of depth, and I even frequently see denial of depth. In physical engineering it seems like we're always trying to go deeper; in software, we're always trying to go wider.
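As a toy illustration of the "it passes the test cases" point (both the function and the tests below are invented for the example): the suite is green, the function is still wrong.

  # Toy example: the tests pass, the bug ships anyway.

  def median(xs):
      xs = sorted(xs)
      return xs[len(xs) // 2]      # bug: wrong for even-length inputs

  def test_median():
      assert median([3, 1, 2]) == 2
      assert median([5]) == 5
      # No even-length case, so the suite is green and the bug ships.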

This isn't to say that's the case everywhere, but it is frequent enough. There are plenty of bad physical engineering teams and plenty of great software teams. But there are definitely differences in approaches and, importantly, differences in thresholds. The culture differs too. I've never had a physical engineer ask me "what's the value?", meaning monetary value. I've had managers do that, but not fellow engineers. The divide between the engineering teams and the business teams was clearer, which I think is a good thing. Engineers sacrifice profit for product; business sacrifices product for profit. The adversarial nature keeps things in balance.


Be careful of Lemon Markets[0]. The problem with them is that they create a stable low quality state. They tend to happen when product quality is not distinguishable at time of purchase.
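A toy version of the mechanism from Akerlof's paper (all numbers below are invented purely to show the unraveling): if buyers can't tell good units from lemons, they will only pay for average quality, which pushes the good sellers out and drags the average down further.

  # Toy lemon-market unraveling; every number here is made up.

  sellers = [1000] * 50 + [400] * 50   # value of each unit to its seller
  buyer_premium = 1.1                  # buyers value any unit 10% above its seller

  while True:
      avg = sum(sellers) / len(sellers)
      price = avg * buyer_premium                    # buyers can't distinguish, so they pay for the average
      remaining = [v for v in sellers if v <= price] # sellers worth more than the price exit
      if len(remaining) == len(sellers):
          break
      sellers = remaining

  # With these numbers the 1000-value units never trade: the price starts at 770,
  # the good sellers leave, and the market settles at lemons only.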

I think we already see a fair amount of this in tech. Even as very tech-literate people, it can be hard for us to tell. But companies are definitely pushing to move fast and are willing to trade quality for that. If you're trying to find the minimum quality a consumer is still willing to pay for, you're likely in a lemon market.

I mean, look at Microsoft lately. They can't even get Windows 11 right. There are clear quality-control issues that are hurting the brand, enough that we techies are joking that Microsoft will bring about the year of Linux, not because Linux has gotten better (also true) but because Microsoft keeps shooting itself in the foot. Or look at Apple with the new AirPods, which sound like shit, and the same with Apple Intelligence and Liquid Glass. A big problem (which helps lemon markets come into existence and stay stable) is that competition is weak and the barrier to entry is very high. The market is centralized not only because of the momentum and size of the existing players (still a major factor) but because it takes a lot of capital to even attempt to displace them. That's probably more money and more time than the vast majority of investors are willing to risk, and the only ones with enough individual wealth are already tied to the existing space.

I think you also have it exactly right about LLMs and AI. A good tool makes failures clear and easy to identify; you design failure modes, even in code! But these machines are optimized for human preference. Methods that optimize for truth, accuracy, and human-sounding language at the same time also optimize for deception: you can't penalize the network for wrong outputs if you don't recognize that they are wrong.
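A minimal sketch of why, assuming a preference-style training signal (the reward function here is hypothetical and deliberately oversimplified, not any real pipeline): if the grader is a human judgment of plausibility rather than ground truth, a convincing wrong answer earns the same reward as a correct one.

  # Deliberately simplified stand-in for a human-preference reward.

  def preference_reward(answer: str, rater_believes_correct: bool) -> float:
      # The model is rewarded for what the rater believes, not for truth.
      return 1.0 if rater_believes_correct else 0.0

  r_wrong_but_convincing = preference_reward("plausible nonsense", rater_believes_correct=True)
  r_correct = preference_reward("actually right", rater_believes_correct=True)

  assert r_wrong_but_convincing == r_correct   # the training signal can't tell them apart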

A final note: you say velocity, but I think that's inaccurate. Velocity has direction; it's more accurate to say speed.

[0] https://en.wikipedia.org/wiki/The_Market_for_Lemons



