I've seen this a lot with LLMs that use conversational history as part of the input to infer the next response. Once it says no, it's more likely to say no again. Sometimes I find it better to start over when I get the finger rather than trying to fight the chat history.