
> However the solutions are absolutely useless for anyone else but the implementer.

Disposable code is where AI shines.

AI generating the boilerplate code for an obtuse build system? Yes, please. AI generating an animation? Godspeed. (Look at how much work 3Blue1Brown had to put into that--if AI can help with that kind of thing, it has my blessing.) AI enabling someone who doesn't program to generate a prototype that they can then point at an actual programmer? Excellent.

This is fine because you don't need to understand the result: you have a concrete pass/fail gate and don't care what's underneath. This is real value. The problem is that it isn't gigabuck value.

The stuff that would be gigabuck value is, unfortunately, where AI falls down: fix this bug in a product, add this feature to an existing codebase, and so on.

AI is also a problem because disposable code is what you would assign to junior programmers in order for them to learn.



> AI is also a problem because disposable code is what you would assign to junior programmers in order for them to learn.

It's also giving PHBs the ability to hand ill-conceived ideas to a magic robot, receive "code" they can't understand, and throw it into production, all the while firing what real developers they had on staff.


I expect many of those companies to fail on a 3-month-to-2-year timeline, so in many ways I welcome PHBs embracing their full stupidity. Same for the people who funded them.

I do feel semi-sorry for anyone who paid for those companies' services, though. Maybe something good will arise from that too, in the end; for example, it'd be nice if US society taught its members more critical reading skills.

The interesting game for the non-PHBs among us is figuring out if and how we can use LLMs in less risky ways, and what's possible there. For example, I'd love to see work put into LLMs helping with formal correctness of software; there's a hard backstop there where either the proof checks or it doesn't. Code changes needed to enable less-painful proofs would hopefully be largely refactorings, where reviews should be easier and it might even be possible to fuzz test that the old and new implementations return matching output for the same input. Or, similarly, an LLM-powered test coverage improver that only writes new tests (old school/branch-based/mutation-based; there's plenty of room there).
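To make the fuzz-testing idea concrete, here's a minimal sketch in Python using the hypothesis property-testing library; old_normalize and new_normalize are hypothetical stand-ins for the pre- and post-refactor code, not anything from the thread:

    from hypothesis import given, strategies as st

    def old_normalize(s: str) -> str:
        # Original implementation (hypothetical stand-in).
        return " ".join(s.split())

    def new_normalize(s: str) -> str:
        # LLM-refactored implementation under review (hypothetical stand-in);
        # str.split() already drops leading/trailing whitespace, so the
        # explicit strip() is a behavior-preserving restructuring.
        return " ".join(s.strip().split())

    @given(st.text())
    def test_refactor_matches_original(s: str) -> None:
        # The hard backstop: any input where the two outputs differ is a
        # counterexample that hypothesis will shrink and report.
        assert old_normalize(s) == new_normalize(s)

Run it under pytest; if the refactor changed behavior anywhere, the test fails with a concrete minimal input, which is exactly the kind of mechanical check you'd want before trusting an LLM's rewrite.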




