> The whole point of a cited source is that you read the source to verify the claim. Amazing how many people in this thread seem to not let this little detail get in the way of their AI hate.
I like that, in your concrete example of how good ChatGPT is at citations, you read all of them and chose not to mention that one was made up.
So either you saw it and consciously chose not to disclose that information, or you asked a bot a question, got a response that seemed right, trusted that the sources were correct, and posted it. But there's no chance of the latter, because you just stated that that's not how you use language models.
On an unrelated note: what are your thoughts on people using plausible-sounding, LLM-generated garbage text backed by fake citations to lend credibility to their existing opinions? Do you see that as an existential threat to the concept of truth or authoritativeness on the internet?