There are ways to gauge the confidence of the LLM (token probabilities over the response, or generating multiple outputs and checking how consistent they are), but yeah, that's outside the LLM itself. You could feed that info back to the LLM as a status message, I suppose.
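Something like this, as a rough sketch (assuming the OpenAI Python client; the model name is just a placeholder):

    import math
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def avg_token_logprob(prompt, model="gpt-4o-mini"):
        """Mean log-probability of the generated tokens (higher = more confident)."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            logprobs=True,
        )
        lps = [t.logprob for t in resp.choices[0].logprobs.content]
        return sum(lps) / len(lps)

    def self_consistency(prompt, n=5, model="gpt-4o-mini"):
        """Fraction of n sampled answers that agree with the most common one."""
        answers = []
        for _ in range(n):
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                temperature=1.0,
            )
            answers.append(resp.choices[0].message.content.strip())
        return Counter(answers).most_common(1)[0][1] / n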
The idea of hooking LLMs back up to themselves, i.e. giving them their own token probability information somehow, or even giving them control over the sampling settings they use to prompt themselves, is AWESOME, and I can't believe no one has seriously done this yet.
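To be concrete about what I mean, here's a sketch (same OpenAI client as above; the status-message wording and the TEMP=<x> convention are just things I made up for illustration):

    import math
    import re
    from openai import OpenAI

    client = OpenAI()

    def self_tuning_turn(history, temperature=0.7, model="gpt-4o-mini"):
        """One turn where the model sees its own confidence and picks the next temperature."""
        resp = client.chat.completions.create(
            model=model,
            messages=history,
            temperature=temperature,
            logprobs=True,
        )
        text = resp.choices[0].message.content
        lps = [t.logprob for t in resp.choices[0].logprobs.content]
        avg_conf = math.exp(sum(lps) / len(lps))  # mean per-token probability
        history.append({"role": "assistant", "content": text})
        # Feed the confidence back as a status message and let the model
        # choose its own sampling temperature for the next turn.
        history.append({"role": "user", "content": (
            f"[status] mean token probability of your last reply: {avg_conf:.2f}. "
            "Reply with TEMP=<0.0-2.0> to set your temperature for the next turn.")})
        follow = client.chat.completions.create(model=model, messages=history)
        m = re.search(r"TEMP=([\d.]+)", follow.choices[0].message.content)
        next_temp = float(m.group(1)) if m else temperature
        return history, next_temp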
I've done it in some Jupyter notebooks and the results are really neat, especially since, with a tiny bit of extra code, LLMs can be made to generate a "timer" in their context that they wait on before prompting themselves to respond, creating a proper conversational agent system (i.e. not the walkie-talkie systems of today).
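The timer bit looks roughly like this in my notebooks (simplified sketch; the WAIT=<seconds> convention is just something I prompt for, not anything built in):

    import re
    import time
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = (
        "You are a conversational agent. End every message with WAIT=<seconds>, "
        "the time you want to wait before speaking again if no one replies."
    )

    def agent_loop(opening_message, turns=5, model="gpt-4o-mini"):
        history = [{"role": "system", "content": SYSTEM},
                   {"role": "user", "content": opening_message}]
        for _ in range(turns):
            resp = client.chat.completions.create(model=model, messages=history)
            text = resp.choices[0].message.content
            print(text)
            history.append({"role": "assistant", "content": text})
            # The model sets its own "timer"; sleep for that long, then prompt
            # it again with a note that the timer expired so it can decide
            # whether (and what) to say next.
            m = re.search(r"WAIT=(\d+)", text)
            time.sleep(int(m.group(1)) if m else 5)
            history.append({"role": "user",
                            "content": "[timer expired, no reply from the user yet]"})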