
Here's what I tell people at work: It's OK to use AI, but you must say it's AI. If you post something and say "Here's what GPT-5 says" - great. Love the efficiency. If someone asks you to do something and you respond with clearly AI-generated crap masquerading as your own, you will be getting a piece of my mind.


> "Here's what GPT-5 says"

This drives me nuts. It's often wrong, but then I have to do the research to prove it before the conversation can get back on track.


Had a coworker paste into Slack an error log from a repo I maintain, along with an LLM summary of it: three dot points that were already stated quite clearly in the log itself, if he'd bothered to read it.


I use AI mostly for writing docs and always make sure the documents have an "AI generated content" notice as the first thing readers see.

In the codebase itself I add in-line comments pointing to precisely where AI was used.

AI has also proven very useful for providing extensive in-line comments, as my employer is pushing hard for our Ops guys to learn IaC despite the vast majority having little to no development experience.

Contextual comments explaining _what, why & how_ loops/conditionals/etc. work have (so far, anyway) proven quite successful.
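
To make that concrete, here's a rough sketch of what those annotations could look like. The provenance marker format and the `chunk_hosts` helper are my own illustration, not any standard convention:

```python
# [AI-GENERATED: block below drafted with an LLM, reviewed by a human]
def chunk_hosts(hosts, batch_size):
    """Split a host list into rollout batches."""
    # WHAT: walk the list in steps of batch_size, slicing out one batch at a time.
    # WHY:  deploying to every host at once risks a full outage; batching
    #       limits the blast radius of a bad change.
    # HOW:  range(0, len(hosts), batch_size) yields each batch's start index,
    #       and the slice hosts[i:i + batch_size] safely stops at the list end.
    batches = []
    for i in range(0, len(hosts), batch_size):
        batches.append(hosts[i:i + batch_size])
    return batches

print(chunk_hosts(["web1", "web2", "web3", "web4", "web5"], 2))
# [['web1', 'web2'], ['web3', 'web4'], ['web5']]
```

The point is that someone with no development background can follow the intent of the loop without having to decode the slicing semantics first.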



