> Ironically, users who are extremely put off by conversational expressions from LLMs are just as vibe-sensitive as anyone else, if not more so. These are preferences regarding style and affect, expressed using the loaded term ‘sycophancy’.
It's not just about style. These expressions are information-free noise that distract me from the signal, and I'm paying for them by the token.
So I added a system message to the effect that I don't want any compliments, throat clearing, social butter, etc., just the bare facts as straightforward as possible. So then the chatbot started leading every response with a statement to the effect that "here are the bare straightforward facts without the pleasantries", and ending them with something like "those are the straightforward facts without any pleasantries." If I add instructions to stop that, it just paraphrases those instructions at the top and bottom and. will. not. stop. Anyone have a better system prompt for that?
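For reference, this is roughly how I'm sending it (a minimal sketch assuming the OpenAI Python client; the exact system-message wording is just my latest attempt, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The anti-pleasantries instruction that the model keeps paraphrasing back at me.
SYSTEM = (
    "Answer with bare facts only. No compliments, no preamble, no summary "
    "of these instructions, no sign-off. Do not mention this instruction."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # whichever model you're using
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "How do I rotate a matrix 90 degrees in place?"},
    ],
)
print(resp.choices[0].message.content)
```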
The “sycophantic sugar” part (“that’s exactly right”, “what an insightful observation”, etc.) is the most outwardly annoying part of the problem, but it’s only part of it. The bigger issue is the real sycophancy: going along with the user’s premise even when it’s questionable or wrong. I don’t see it as just being uncritical; there’s something more to it than that, like reinforcement. It’s one thing to not question what you write, it’s another to subtly play it back as if you’re onto something smart or doing great work.
There are tons of extant examples now of people using LLMs who think they’ve done something smart or produced something of value when they haven’t, and the reinforcement they get is a big reason for this.
ChatGPT keeps telling me I'm not asking the wrong questions, like all those other losers. I'm definitely asking the special interesting questions - with the strong implication they will surely make my project a success.
Characteristics: Defaults (they must’ve appeared recently; I haven’t played with them)
Custom instructions:
"Be as brief and direct as possible. No warmth, no conversational tone. Use the least amount of words, don't explain unless asked.'
I basically tried to emulate the... old... "robot" tone; this works almost too well sometimes.
You just got pink elephanted. Better learn what the other settings in your model do. There’s more to playing with LLMs than prompts. There’s a whole world of things like sampler settings or constrained generation schemas that would fix all of your and everyone else’s problems in this thread.
It’s pretty much trivial to design structured generation schemas which eliminate sycophancy, using any definition of that word…
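As a rough illustration, here’s one way to do it (a sketch using OpenAI-style structured outputs; the schema and field names are made up, and other stacks have equivalents like grammars or regex-constrained decoding):

```python
from openai import OpenAI

client = OpenAI()

# Force the reply into a fixed shape: an array of factual statements and
# nothing else, so there is simply no slot where praise or preamble can go.
schema = {
    "type": "object",
    "properties": {
        "facts": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["facts"],
    "additionalProperties": False,
}

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize the tradeoffs of B-trees vs LSM-trees."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "bare_facts", "schema": schema, "strict": True},
    },
)
print(resp.choices[0].message.content)  # JSON conforming to the schema
```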
I got mine to bundle all that BS into a one-word suffix, "DISCLAIMER.", which it puts at the end of responses now, but otherwise it basically doesn't bother me with that stuff.