I prefer building and using software that is robust, heavily tested, and thoroughly reviewed by highly experienced software engineers who understand the code, can detect bugs, and can explain what each line of code they write does.
Today we are in a phase where embracing mediocre LLM-generated code over heavily tested and scrutinized code is encouraged in this industry, because of the hype around 'vibe coding'.
If you can't even begin to explain the code an LLM generated, point out its bugs, or justify the architectural decisions you've off-loaded to it, you're going to have a big problem defending that work in a code review or in a professional pair-programming scenario.
> I prefer building and using software that is robust, heavily tested, and thoroughly reviewed by highly experienced software engineers who understand the code, can detect bugs, and can explain what each line of code they write does.
that's amazing. by that logic you probably use like one or two pieces of software max. no Windows, macOS, or GNOME for you.
It's a funny comic, but can you actually give an example of what it's talking about? "Properly reviewed" can be construed as "has been working for a long time for a lot of people", which definitely can't be said about any AI process or any AI-generated code. At the very least, one human actually sat down and wrote the tools the comic is poking fun at. But with AI, we are currently producing code that was neither peer reviewed nor written (a process which includes revision) -- it was instead "generated". So it's still a step backwards.