
> And I thought: mate, you were already so far up the abstraction chain you didn’t even realize you were teetering on top of a wobbly Jenga tower.

But AI is different. If I program in a high-level language like Python, sure, I don't know what's going on under the hood. But you get a 'feel' for it because the same code usually reproduces the same results. Does an LLM reproduce the exact same results when I ask the same thing? That I don't know.
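To make the determinism point concrete, here is a minimal sketch in plain Python (standard library only, no real LLM involved; the word list and weights are made up for illustration). The first function behaves like ordinary code: same input, same output. The second is a stand-in for an LLM decoding step with temperature > 0: the "next word" is drawn from a probability distribution, so repeated calls can disagree unless you pin the random seed.

    import random

    def deterministic(x):
        # Ordinary code: the same input always gives the same output.
        return x * 2 + 1

    def sample_next_word(rng):
        # Stand-in for one LLM decoding step at temperature > 0:
        # the "next word" is drawn from a probability distribution,
        # so repeated calls need not agree.
        words = ["tower", "stack", "abstraction", "Jenga"]
        weights = [0.4, 0.3, 0.2, 0.1]
        return rng.choices(words, weights=weights, k=1)[0]

    print([deterministic(3) for _ in range(3)])        # always [7, 7, 7]

    rng = random.Random()                              # unseeded: varies run to run
    print([sample_next_word(rng) for _ in range(3)])

    rng = random.Random(42)                            # fixed seed: reproducible again
    print([sample_next_word(rng) for _ in range(3)])

Real LLM serving adds further sources of nondeterminism on top of sampling (batching, floating-point reduction order), so even at temperature 0 the output isn't always bit-for-bit identical.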




I was thinking the wobbly Jenga tower thing was unfair. The stack TypeScript runs on is fairly stable and something you can build on. LLM output is much more random.


