Hacker News

The next stage of this issue is: how do you explain something you didn't write?

The LLM-optimist view at the moment, which accepts the need to review LLM output, assumes that this review capability will exist. I cannot review LLM output in areas outside my expertise, and I cannot develop the expertise I need if I use an LLM in-the-large.

I first encountered this issue about a year ago when using an LLM to prototype a programming language compiler (a field I knew quite well anyway) -- but realised that very large decisions about the language were being forced by the LLM's implementation.

Then, over the last three weeks, I've had to refresh my expertise in some areas of statistics, and realised that much of my note-taking with LLMs has completely undermined this process. The effective actions have been, in retrospect, traditional methods: reading books, watching lectures, taking notes. The LLM is only a small time saver, "in the small", once I'm an expert. It's absolutely disabling as a route back to expertise.



IMO we are likely in a golden era of coding LLM productivity, one in which the people using them are also experts. Once there are no coding experts left, will we still see better productivity?


Yea, and how are people going to learn when the answer is just a chat away? I know it would have been hard for me to learn programming if I knew I could just ask for the solution every time (and no, Stack Overflow does not count, because most people don't ask a question for every single issue they encounter like they do with AI).


this was the same criticism levelled against SO at the time too. people who want to learn and put in the effort will learn even faster with AI (asking, exploring the answers, etc.) and those who use it as a crutch will be left behind, as always. we're just in the confusing transitional stage.



