I feel more sympathy for that Google engineer who fell in love with their LLM. I am sure more people will become attached, at least once the rate limits are relaxed.
The real trap is that LLMs can simulate empathy surprisingly well. If you have many problems to rant about but nobody willing to listen, the LLM is always there: it will never get bored or tell you to shut up, and it will always respond in encouraging and "positive" ways. Given how many people today have no one like that in their lives, it's no wonder they form an emotional bond.
This doesn't make any sense. Empathy itself is qualia; language is merely a medium for communicating it, and far from the only one (facial expressions, for example, are generally better at it).
As for LLMs "following the structural patterns" of empathetic language - sure, that's exactly what simulating empathy is.
I don't see what practical difference any of this makes. We can play word games all day long, but that won't convince Blake Lemoine or countless Replika users. To them, it's not "a character in a story", and that's the important point here.
A character in a story does not think or do anything beyond what the writer of that story writes for them. The character cannot write itself!
That is the distinction I am making here.
Anyone using an LLM is effectively playing "word games", except that instead of words the game uses tokens, and instead of game rules it follows pre-modeled token patterns. Those patterns depend wholly on the content of the training corpus. The user only gets to interact by writing prompts: each prompt is tokenized, the model extends the token sequence according to those patterns, and the result is decoded and printed back to the user.
No new behavior is ever created from within the model itself.
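To make that loop concrete, here is a minimal sketch of the prompt-to-tokens-to-model-to-text cycle, assuming the Hugging Face transformers library with GPT-2 as a stand-in model; the model choice and sampling parameters are just illustrative, not specific to any product mentioned above.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Stand-in model; any causal LM follows the same cycle.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "I had a terrible day and nobody will listen to me."

    # 1. The user's prompt is tokenized into integer IDs.
    inputs = tokenizer(prompt, return_tensors="pt")

    # 2. The model extends the token sequence by sampling from the
    #    patterns it learned from its training corpus.
    output_ids = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )

    # 3. The new tokens are decoded back into text and shown to the user.
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Every step is a transformation of the user's input under the model's fixed weights; nothing in the loop adds behavior that wasn't already encoded there.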