
> These kinds of comments are really missing the point.

I disagree. In my experience, asking coding tools to produce something similar to all of the tutorials and example code out there works amazingly well.

Asking them to produce novel output that doesn’t match the training set produces very different results.

When I recently tried multiple coding agents on a somewhat unusual task, they all struggled, continuously pulling the solution back toward standard examples. It became an endless loop: the models would grind through a solution, spit out something that matched the common examples, and after I reminded them of the task's unique requirements, they would start over and eventually arrive back at the same spot.

That's the reality of working with LLMs today, and it's an important consideration.


