> ChatGPT and the like generate convincing fake news
It's not like fake news was a non-issue before ChatGPT existed. Breitbart and other fake news sites existed for years before this was even imaginable. Fake graphics have been around for ages too, via image manipulation, even before computers. Take, for example, the Surgeon's Photo of the Loch Ness Monster.
This argument has been rehashed time and again and it's getting a bit boring: the fact that you can do something at all is qualitatively different from when you can do that same thing at scale.
Agreed. As I'm sure you know, it's easier for people to reason when the arena presents only discrete possibilities, especially binary ones, e.g. yes/no or black/white.
The arena of the continuous, which encompasses most of the natural world, is far more difficult. It doesn’t allow for the sort of arguments where one can assert things with ego-boosting absolute confidence.
Matters of scale naturally fall into the continuous sort, but scale can also introduce new discrete possibilities. Perhaps we need to present the new specific, discrete effects and outcomes that AI-at-scale will introduce. Maybe, just maybe, that'll change the minds of people who are truly open to having them changed.
My intuition, though, is that this argument is repeated here (on HN in particular) not for lack of knowledge or thought, but because the people repeating it WANT to see the chaos that'll result. They want all the positives and negatives, no matter the balance, simply because it's exciting and adds to their "mundane" lives. That mundanity is of course subjective: the product of their default worldview rather than anything nearing an objective description of the world as it is.
> simply because it’s exciting and adds to their lives.
It's certainly exciting but I wouldn't bet on it adding to our lives just yet. Maybe. But many avenues leading to net negative outcomes are still part of the tree.
AI doesn't solve the reputation problem. Just because you can make 1000x more fake news articles doesn't mean anyone will ever see them. Spam is still spam; AI doesn't suddenly make your site rank on Google or get you followers on Twitter or get you upvotes on Reddit.
I'm skeptical that content generation was the thing holding back this wave of fake news and scams from ruining the Internet. These doomer posts are always handwavy on the specifics.
And we've seen little evidence of it operating at scale in the two years since these tools were unleashed.
Maybe automated niche targeting, but that still depends on networks with reputation systems and spam detection... so it mostly comes down to ostensibly solvable tech problems like caller ID spoofing or mass spoofed-email campaigns.
You can easily apply AI to the generation of fake accounts used to amplify the message; this is a tried-and-true playbook that can now be enacted at lower cost and with much greater speed.
That you have seen little evidence of it doesn't mean it isn't happening; it could also mean (1) that you're looking in the wrong places, or (2) that it is so good you don't detect it, among other explanations.
I'm on HN daily like you are; I'm sure if this were a developing crisis in spam detection and social media, we'd be hearing about it constantly. People on HN love a good AI doomer story; it won't be hiding.
Otherwise it's mostly just predictions of wide-scale disruption, of which I remain skeptical beyond hyper-targeted attacks.
Run a couple of hundred comments sequentially sampled from the HN comment stream through an AI detector and see what pops up. The fraction is rising steadily.
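Something like this rough sketch would do it. The Hacker News Firebase API endpoints are real; `detect_ai()` is just a placeholder for whatever detector you trust:

```python
# Rough sketch: sequentially sample recent HN comments and run them
# through an AI-text detector. The HN Firebase API is real; the
# detector itself is a placeholder you'd have to supply.
import html
import requests

HN = "https://hacker-news.firebaseio.com/v0"

def detect_ai(text: str) -> float:
    """Placeholder: return a 0..1 'probably AI-generated' score."""
    raise NotImplementedError("plug in your detector of choice here")

def sample_comments(n: int = 200):
    """Walk item IDs downward from the newest item, keeping comments."""
    item_id = requests.get(f"{HN}/maxitem.json", timeout=10).json()
    comments = []
    while len(comments) < n and item_id > 0:
        item = requests.get(f"{HN}/item/{item_id}.json", timeout=10).json()
        if item and not item.get("deleted") and \
                item.get("type") == "comment" and "text" in item:
            comments.append(html.unescape(item["text"]))
        item_id -= 1
    return comments

if __name__ == "__main__":
    comments = sample_comments(200)
    flagged = sum(detect_ai(c) > 0.5 for c in comments)
    print(f"{flagged}/{len(comments)} flagged as likely AI-generated")
```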
Say I create a 'fake news' message, spread it among a series of bots, and have a strong network of social media accounts and sites. OK, that's step one.
Now, I hack your account and steal your reputation. It appears you approve of this message, and some subset of your followers follow the bots and go to the bot sites.
You can try to pull out and say you were hacked, and some percentage of people will unfollow said bots, but at the same time a subset of those followers are now following those bots and giving them thumbs-ups, thereby giving them access to your friends' friends.
User accounts are easily hackable, as we saw with Linus Tech Tips recently. Reputation is just the newest currency worth stealing.
Right, that's a fair point, but I think when it comes to fake news, the issue is really about quality rather than quantity. Really lethal fake news requires you to be in touch with your target audience at a deep level to make it stick.
True in general, but in the particular case of misinformation, there was so much of it already before LLMs that I don't think scale makes a qualitative difference.
Maybe even the opposite: perhaps the deluge of AI-generated content will make the average person trust what they find at random sources less... which is healthy.
Note that in general I'm not too optimistic about AI risks (the very news that motivated this thread is scary), but I don't see mass misinformation in particular as such a big deal.
You could make fake photos with Photoshop. And now you can make them with AI.
A normal person couldn't use Photoshop to detect that an image was a fake. But a normal person will be able to use AI to detect it.
All I see are negative Nancies about this. No one seems to realize that, as good as the tools get at faking, the tools to detect will be exactly as good.
I thought about this a bit, and am wondering about the results of an LLM-fueled arms race when it comes to figuring out whether an AI created some piece of text.
I'm worried we may reach a point where AI gets so good at faking people that real people's output will be treated as fake, since there are only so many combinations that can plausibly originate from humans. You will start depending on AI to tell you what is true and what is fake for every single piece of information. This leads to the question of how to tell which AI is right: you can't really verify an opinion, only facts.
And being able to manipulate opinions is a very strong perk.
I was curious about this recently, so I built a very rudimentary neural net trained on GPT-generated text messages and human-generated text messages. I was able to get surprisingly good detection accuracy with just under 1k lines from each sample set. I'm not sure it's as apocalyptic as people think.
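For the curious, the shape of it was roughly this. A minimal sketch, not the exact model: it uses scikit-learn's small MLP over TF-IDF features as a stand-in, and the file names are hypothetical:

```python
# Minimal sketch: a small neural net over TF-IDF features, trained on
# ~1k lines each of human-written and GPT-generated text.
# The file names below are hypothetical; any two one-example-per-line
# text files would do.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

human = load_lines("human_messages.txt")  # hypothetical sample file
gpt = load_lines("gpt_messages.txt")      # hypothetical sample file

texts = human + gpt
labels = [0] * len(human) + [1] * len(gpt)  # 0 = human, 1 = GPT

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=42),
)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```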
We may well hit the problem Google Translate hit, where the training set started to contain more and more data created by GT itself. A similar thing may happen with your NN: at some point there may be so much AI-generated content (from different AIs) that it becomes difficult to compose a trustworthy training set.
I suppose a solution to this would be something like low-background steel: the "pre-war iron" salvaged from ships sunk before atmospheric nuclear testing, prized because it's free of fallout contamination. We would have to rely on archived sources, like past Wikipedia edits, that predate ChatGPT.
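As a sketch of what that could look like, the real MediaWiki API already lets you pull the last revision of an article from before a cutoff date; the cutoff and article below are just illustrative:

```python
# Sketch of the "pre-war iron" idea: fetch the last revision of a
# Wikipedia article from before ChatGPT's public release, using the
# real MediaWiki API. Cutoff date and article title are illustrative.
import requests

API = "https://en.wikipedia.org/w/api.php"
CUTOFF = "2022-11-30T00:00:00Z"  # ChatGPT's public launch date

def pre_chatgpt_revision(title: str) -> str:
    """Return article wikitext as of the last edit before CUTOFF."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": 1,
        "rvdir": "older",   # walk backwards in time...
        "rvstart": CUTOFF,  # ...starting at the cutoff
        "rvprop": "timestamp|content",
        "rvslots": "main",
        "format": "json",
        "formatversion": 2,
    }
    data = requests.get(API, params=params, timeout=10).json()
    rev = data["query"]["pages"][0]["revisions"][0]
    return rev["slots"]["main"]["content"]

if __name__ == "__main__":
    print(pre_chatgpt_revision("Loch Ness Monster")[:300])
```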
Personally, I was won over by Scott Alexander's argument that news sites very rarely lie. They very often mislead and obfuscate, but that's fakery of another kind.
The bad-faith commenters, YouTubers, journalists, and other types with an axe to grind already happily cite garbage sources without verifying (or perhaps caring) when making their motivated arguments, and there's more than enough out there to back up whatever BS they're trying to spew at any given minute. I don't see how the quantity available changes that. And of course AI can be deployed in the counter direction. I think (hope) you'd need a qualitative change to tip the balance of power.