It's also easy to forget about it, or get complacent, or not realize that some of the things you didn't replace should have been.
But then, Grammarly managed to make a successful business out of a keylogger. And if you're working at a mid-to-large company and developing on Windows, chances are your corporate-mandated AV is sending every single binary you build to the AV vendor's servers. And so on and so on. So I suppose this ship has already sailed, and you may as well plug your company's Exchange server and SharePoint into ChatGPT and see if it can generate your next PowerPoint presentation for you.
I always go out of my way to protect the information and IP I'm contractually obliged to protect, but it's getting increasingly hard these days. It's also very frustrating when you notice and report a new process that silently sends company IP to third parties to do $deity knows what with it, only to discover that, instead of being concerned, the people you report your findings to don't care and don't understand why you're making a fuss about it.
If you replace terms with something else, what sort of value can you find in the responses you get back? On the one hand, if you substitute aggressively, the LLM no longer knows the code you're actually asking about; on the other hand, if you restrict yourself to well-understood near synonyms, you're not really obfuscating anything.
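For what it's worth, the mechanical part of this is trivial; it's choosing the stand-ins that's hard. A minimal sketch of reversible term substitution might look like the following (the identifiers and the alias table here are entirely made up for illustration):

```python
import re

# Hypothetical mapping from sensitive internal names to bland stand-ins.
# None of these names come from a real codebase.
ALIASES = {
    "AcmeBillingEngine": "InvoiceProcessor",
    "acme_ledger_sync": "sync_records",
}

def pseudonymize(text: str, aliases: dict) -> str:
    """Replace each sensitive identifier with its stand-in (whole words only)."""
    for secret, alias in aliases.items():
        text = re.sub(rf"\b{re.escape(secret)}\b", alias, text)
    return text

def restore(text: str, aliases: dict) -> str:
    """Map the stand-ins in an LLM response back to the original names."""
    for secret, alias in aliases.items():
        text = re.sub(rf"\b{re.escape(alias)}\b", secret, text)
    return text

prompt = "Why does AcmeBillingEngine deadlock when acme_ledger_sync runs?"
masked = pseudonymize(prompt, ALIASES)
# masked == "Why does InvoiceProcessor deadlock when sync_records runs?"
assert restore(masked, ALIASES) == prompt
```

Which illustrates the dilemma: `InvoiceProcessor` is a useful alias precisely because it means roughly the same thing, so anyone reading the prompt still learns what your system does; a truly opaque alias like `ComponentA` leaks less but also gets you a worse answer.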