I'm really negative about the impact AI will have on society. We have already been drowning in fake news and polarizing information.
Now, the likes of DALL-E and deepfake tools can generate convincing fake graphics. ChatGPT and its ilk generate convincing fake news. Voice AI can generate a convincing voice from small samples.
If you were afraid of your elderly relatives being scammed by people pretending to be policemen or grandchildren, now more tech-conscious people will get scammed by the voice and look of their relatives. Are we really approaching the reality where we need 2FA to trust the other person is really who they are?
The "fun" thing about the tech in the current state is that it's wrong quite a bit, which puts a cap on how much of a productivity boost it can give any organization that cares about correctness or its reputation; meanwhile, it has nigh-unlimited productivity-boosting promise for organizations that don't care about correctness or reputation.
Which kinds of organizations don't care about correctness or reputation? Scammers, spammers, and certain types of propaganda-spewing organization, especially those that puppeteer "grassroots" campaigns. We may give a single-digit multiplier in productivity to some roles in legitimate business, while giving a two or three digit multiplier to similar roles in harmful, parasitic organizations.
I'm much more concerned about dictators than I am about scammers.
Some countries have already successfully rewritten history from less than 30 years ago with surveillance and media control.
What will happen when they can also control all digital reality? They can invent an uprising of millions of people that never actually happened, and there will be photo, audio, and video "evidence".
AGI is scary, but it's not immediate. I think companies like OpenAI are recklessly pushing us toward immediate societal collapse.
I'm also very concerned about where AI will take us, but will bad actors just quit if OpenAI decides to halt? Or will they continue to develop it, but in secret, so we have no clue what its capabilities are or how to control it? I fear the only way out is through.
No, because if you forbid the dissemination of nuclear research and materials, then it's very difficult even for highly-resourced organizations (e.g. the government of Iran) to produce nuclear devices. But if you were to forbid the dissemination of all future "AI research" tomorrow, there are probably a million people worldwide who could each recreate the "dangerous capabilities" based on what they already have in their heads and on their current laptops, as long as someone gives them a few million dollars' worth of commodity (and thus not restrictable) computing hardware.
Nuclear technology is tricky to get working even if you have decent scientists and all the published research about basic principles. For current ML tech, a big point is that the core techniques are really simple, the simple textbook methods work at large scale, you don't necessarily need any special sauce which could be restricted. Yes, there is a lot of engineering work that goes into a full system to scale it efficiently, but all of that can be replicated by any decent bunch of engineers, unlike nuclear weapons.
The cat is out of the bag; you can't put the genie back in the bottle, all the "sensitive knowledge" you'd need to restrict AI is already known by at least a few CS undergrads in every single university in the world.
Those can be regulated because they require a level of physical access to materials. Centrifuges and viral samples and whatnot.
A given AI, once created, is "just" data. It can be sent around the world and run on commodity hardware. Nothing special needed beyond the expertise that created it in the first place.
Think of how relatively easy it is to clamp down on bioweapons, and compare that with our ability to stop malware.
Training new SOTA (state-of-the-art) models does require a large amount of compute resources. So we could at least pause here and not make more powerful models publicly available until we understand the implications of wtf actually just happened over the last 6 months.
Certainly the models and weights that have been released publicly are out in the wild and near impossible to retract. That damage is done.
The same is largely true of bioweapons. The difficult bit is R&D, but a small group with access to a decent college lab could recreate some dangerous shit if the people using it knew what to do.
"More" is dramatically understating it. The new platforms (GPT4, Midjourney, etc.) are able to automate the propaganda output of literally millions of people.
If you wanted someone to Photoshop the pope into multiple new outfits and fool a lot of people, you used to need at least a few hours of an expert's time. Now you can do it almost instantly.
It's many orders of magnitude faster to create highly realistic lies than it used to be.
> but there are also more ways to get information so its harder to suppress true info.
There are the same ways to get information, and they're going to be: A) unverifiable, and B) drowned out by propaganda.
Let's say someone wants to take down Joe Biden, and they generate text, audio, video, and photos of him molesting a child. Millions of people would want to believe it and then believe it. How would we prove it's not real? How could we, other than finding the person who did it or exposing a digital trail?
I questioned the Dalai Lama story as a possible CCP deepfake, considering what they already did with doctored photos of Australian soldiers a year or so back.
Or when a dictator can just put a camera on every corner of every street, then do fully automated data mining on how and when people are moving around.
> are recklessly pushing us toward immediate societal collapse.
I would characterize this as just adding more and more landmines to an already crowded space of potential collapse-inducing events through which we have to try to navigate society. Oh, and adding glitzy blinkenlights to attract people towards those landmines. "Let the marketplace decide if landmines are fun to step on."
The risks keep stacking up. It's not looking super great for avoiding all the landmines. The path is getting narrow, and it feels like one thing or another will get us.
Photos, audio, and video are relatively new media, historically. We always had the potential problem of historical records being rewritten. People will learn to be cautious. I wouldn’t worry too much, unless and until a worldwide government becomes a dictatorship.
Funny how history rhymes. The last surge of mass media, radio, contributed to the rise of Fascism and National Socialism. Curious to see what "AI" will bring us, and whether or not we as a species adapt quickly enough to prevent the worst.
Could you please stop posting unsubstantive comments and/or flamebait and also please stop using HN for ideological battle (regardless of which ideology you're battling for or against)?
You've unfortunately been doing these things repeatedly, and they're not what the site is for and destroy what it is for.
Could they already do that using actors and sets? I doubt even the poorest dictator would lack the resources to do that currently. It's the scammers, on the other hand, who would benefit from the cheap ability to generate fake audio/video.
Well, no, it's not really possible to just do that using actors and sets; otherwise it would have been done during the Cold War. Up until recent advances in digital technology it wasn't really possible to have an actor convincingly fake someone else - e.g. a foreign politician - so that it's not easily detectable; this is a recent new ability (and even now you'd still want to use a good actor with a similar face shape and the skill to match the body movement style in order to make a convincing deepfake).
The "fun" thing about the tech in the current state is that it's wrong quite a bit,
Indeed. And, as every other AI project has shown us, the rough parts are easy; the last few percent are incredibly difficult.
Look at self-driving cars. It will easily be 2050 before we can even hope to have good driving AI. Current versions are a joke and are being trialed in places like California.
Not in snow, on dirt roads, while it's snowing, on rural roads, or on unmapped roads, of which there are plenty.
2050 at least.
ChatGPT will be vastly improved in maybe 30 years. The last little bit will take decades.
SF isn't an easy environment for self-driving cars: steep hills, congested streets, sometimes thick fog. But Waymo is busy causing traffic jams, because when the cars get confused they just stop and block traffic.
Right now driverless cars only work in places like that with relatively decent weather and which are meticulously mapped with frequent updates, because the tech still does not allow for safe driving using onboard sensors alone.
A self-driving car that can safely drive in all the same places and conditions that humans can is not yet possible.
> Which kinds of organizations don't care about correctness or reputation?
This describes any "creative" job, where the hard part of the job is not rejecting the bad ideas but coming up with the good ones. Visual art is one famous example. Programming is another: if your code doesn't pass the test suite, you can tell, and then you can fix it. Some software needs higher assurance than test suites can provide, and then you need to get your code to pass Rust's borrow checker, or a more rigorous proof checker like ACL2. Telling a valid proof from an invalid one is a solved problem; constructing a program you can write a valid proof for is very much not.
Programmers and visual artists are also famously cavalier about correctness and reputation, being willing to try anything, including notorious bohemian lifestyles.
> Are we really approaching the reality where we need 2FA to trust the other person is really who they are?
Yes we are. We are approaching a world where people will not want to invest their time, energy, and emotion engaging with other supposed humans remotely unless they have verified their personhood / identity.
I'm actually surprised we have not yet seen an entire industry built around human authentication. Seems like Apple is the only company taking this seriously.
The standard of human interaction will either be meeting IRL or signing communications biometrically.
>I'm actually surprised we have not yet seen an entire industry built around human authentication. Seems like Apple is the only company taking this seriously.
Completely agree. But to that point: with how quickly AI voice cloning is improving, and how quickly image creation tools are improving, how long will it be until someone can sample my voice well enough to command Siri to do something and Siri won't be able to tell the difference? How long will it be until an AI-generated photo of me is so realistic that it can unlock my phone via Face ID?
I have a feeling that's going to happen in a shorter timeframe than we all think it will. I mean, my voice is somewhat similar to my father's, and if I'm around his phone, I can say "Hey Siri" and his phone will think it's his voice, not mine.
We're going to see an arms race between AI and biometrics / authentication technology. My hope is that the latter outpaces the former, but it doesn't seem to be panning out that way.
In the future, we might be only able to unlock our phone or command Siri when hooked up to a continuous biometrics sensor.
You're probably not too far off from what will eventually become reality.
Perhaps that's Apple's angle with the AR/VR headset it supposedly has in the works - if you've got your headset on, you can unlock your devices, and if you don't, you have to manually unlock devices via a password or PIN or whatever.
The bulk of this remains a solvable security issue almost entirely centred around telecom. Spoofing caller IDs is way too easy. SIM swapping is also another issue that can be addressed with better security practices by phone companies.
Most young people these days aren't even using phone numbers/SMS except when forced to, so the threat will remain with the older generation and either die off with them or persist until the phone companies solve this problem... or some digital voice system replaces it.
Otherwise it's just email phishing. Anything involving money transfers is already locked down and mostly a matter of fooling people... Which again is mostly older people with a poor grasp of technology and common threats (modern street smarts).
Read the story. Notice that the call came from an unknown number, and the scammer was playing the role of a kidnapper calling from the kidnapper's phone, and the "authentication" was the daughter's faked voice. There's nothing the telecom system could have done about that. This particular attack circumvented any possibility of "telecom security" being useful. And if "telecom security" gets better, there's more incentive for scammers to keep finding ways to make it irrelevant.
If you want a technical solution, you have to demand that a person always contact you from their own device as "2FA", or at least that they have some kind of 2FA device on them... except there are a billion ways that somebody might lose access to such a device when they were in genuine trouble, and scammers are totally capable of making it look like one of those scenarios has happened.
You're really down to the point where people have to do "mind to mind" authentication with shared secret knowledge... while under extreme stress... in a case that's uncommon enough that most people will not have practiced it.
This is just what GP called "modern street smarts". People won't keep getting fooled like this for long. Just like people had to learn to stop trusting everything they heard on TV, and learn to stop trusting every pop-up on every website. We will develop new habits, such as what you call "mind to mind authentication" or verifying through a separate trusted channel.
So, here's the thing: I have a 15 year old daughter. If she were actually snatched by a kidnapper and threatened with rape/murder/whatever, I am not absolutely sure that she would remember and execute a "code word" protocol. Especially not a protocol that had the extra measures to help keep it from being subverted in various ways, but maybe not even a very simple protocol.
Not sure enough to feel really comfortable betting her life on it, anyway. Not if we hadn't drilled it on a daily basis for weeks and a weekly basis for months.
It's easy to blank on things when you're adrenalized, say if you've been kidnapped. And it's also easy to blank on things when you're adrenalized because you are hearing the person's voice saying they've been kidnapped.
... and if I asked a scammer pretending to have kidnapped her to let me call her on her phone, I would expect to get the obvious reply: "I threw her phone away. I'm not dumb enough to let you track me/her/us through it". Which is totally credible because that's what a kidnapper should do.
When you get the call, the strong prior probability is that the whole thing is a scam, but that's not so easy to hold onto in a situation like that. And even if you do hold onto it, you will be scared.
Oh, and on edit: Yes, I expect I would keep it together enough to call her on her phone to check, since if she hasn't been kidnapped there's nothing stopping her from answering it. I don't know if I'd expect that of others. But it's also true that if I call her, and nothing is actually wrong, I still expect about a 50-50 response rate because she doesn't hear the thing, has it on mute, or is in school and forced to keep it in her locker, or has let the battery run down, or whatever.
If millions of people start getting fake kidnapping calls every day, then I'm sure we will stop falling for it quite soon. There's a limit to how many days in a row you will keep sending money to every random AI-empowered guy calling to convince you he has taken your daughter, only for her to return home from school like normal just a few minutes later.
I don't think it will be long before video and audio is no more convincing than text is now. We will stop falling for the AI scams, just like we (most of us) stopped falling for the Nigerian princes, scam ads/virus popups on the web, and those fake emails from family members claiming they need us to wire money so they can pay for a flight home or whatever. Basically, my thesis is that people will start to get wise to any scam that is sufficiently common and harmful.
Real kidnappers might have to learn to work to convince the families that the kidnapping is real and not just another scam.
And what happens when someone surfaces a video of you spouting off racial slurs or running down a street naked, generated literally with a simple text prompt? Do you think your employer, colleague, or even spouse will take the reputational risk of sticking around even if the video is completely AI-generated?
Every piece of communication will have to be signed to verify authenticity.
Camera makers are already looking into embedding authentication technology at the hardware level.
All a digital signature can prove is possession of a secret; it can't prove that some process was followed in generating an image.
It's impractical to have authenticity enforced by millions of consumer devices. If any random manufacturer in a third-world country can generate millions of valid signing keys to be embedded in millions of cheap phones or cameras, then there are many employees there who can also leak a bunch of valid keys if the price is right, and those keys can sign anything in a way that's indistinguishable from all those cheap cameras. And if you revoke the signatures of any "untrustworthy" manufacturers (really, any manufacturer can be and will be compromised, especially if some governments want to manufacture propaganda), people won't stop using the phones/cameras just because their signatures aren't considered kosher. You'll still have millions of people uploading genuine, valid images from those cameras, so either people will have to trust invalid signatures or refuse lots of benign content, and I'll bet most will choose the former.
And of course a camera doesn't know if it's taking a picture of reality or another picture - it would take a relatively simple optical setup to allow any camera to record (and sign) an image off of another screen, so if some authenticity-verification system was actually working and popular, any reasonable criminals would do it and have signed recordings of their deepfakes from a real major brand camera.
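To make the "possession of a secret" point concrete, here's a minimal sketch (my own illustration in Python with the cryptography package, not anything a real camera vendor ships): a leaked or extracted camera key will happily sign purely synthetic bytes, and verification still passes, because verification only proves which key signed the data, not how the data was produced.

    # Sketch: a signature binds bytes to a key, not to a capture process.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    camera_key = Ed25519PrivateKey.generate()           # stand-in for a key embedded in a camera
    deepfake_image = b"entirely synthetic image bytes"  # never touched a sensor

    signature = camera_key.sign(deepfake_image)         # nothing stops the key from signing this

    try:
        camera_key.public_key().verify(signature, deepfake_image)
        print("valid signature: proves only that this key signed these bytes")
    except InvalidSignature:
        print("invalid signature")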
Where exactly does this video "surface"? A random person on the internet?
If it's someone you know, with a reputation that makes them believable, then that's basically a criminal act and depends heavily on that person's reputation. There should be enough deterrence even without that, beyond simple harassment (which is what you're saying, where you lose your job or relationships).
Otherwise I'm not sure why your spouse or boss would find some video coming from barelycartwheel48292@gmail.com more believable than the person themselves.
Anyway, I was talking about overt scams not the wider cultural/social implications. This is a large new burden that will be imposed on society. We will learn to adapt and society will manage.
This is starting to happen when you engage with the state (small "s" in the US). Things like pictures of your driver's license and 180-degree videos of your face, further coupled with a quick FaceTime video with a real person reviewing the data.
It feels horrible and invasive, like the state has outsourced it to a third party you don't know and have no control over or direct engagement with.
Every single time historically that we've had this level of authentication, there is a 'strong man' type who uses this information against the population. Whether it's simple strong societal shame in smaller communities or a massive program to track and/or persecute a group of people.
> The standard of human interaction will either be meeting IRL or signing communications biometrically.
From a convenience standpoint, it seems like we've gotten so technologically advanced that we are starting to move backwards. Is there a name for this type of phenomenon?
The physical world is what we are fully integrated into as embodied beings along dimensions we do not yet and may never understand. It is the only space that has an inherently scarce quality, which seems to be something we need as humans. IMO technology should enable us to deepen our connection in the "real world," not isolate us in piss poor representations built on the belief that we are merely mind-bearing mechanical vessels to be manipulated.
AI is going to drive us back to fully-embodied living, and that's good.
>> The standard of human interaction will either be meeting IRL or signing communications biometrically.
Isn't Sam Altman involved in some crypto company trying to collect biometric data, using some orb thing, in exchange for worthless crypto? No idea if he still is, but I am more and more convinced that some folks took every single cyberpunk story not as satire or a warning, but as a playbook and something to strive for.
> We are approaching a world where people will not want to invest their time, energy, and emotion engaging with other supposed humans remotely unless they have verified their personhood / identity.
We were past it years ago in some domains.
I used to do a lot of language exchanges, and there is/was a need to screen for people who are solely using machine translation. It's pointless correcting someone using machine-generated output.
How is AI different than any other technology in this respect?
As far back as the discovery of fire, new technology has enabled more positive outcomes than negative.
Some tech more so than others, I guess, but what makes you lean so negatively regarding AI? It’s already improved my life considerably. Just the basic ChatGPT web app has extended my capacity in multiple respects.
Not sure what you are comparing here, but we (people in early 30s) also grew up on the internet and mostly ended up ok. Some people who grew up on tv did not end up ok.
What makes me so negative is that it provides a tremendous productivity boost regardless of the intentions. ChatGPT saves you hours daily generating content, but it makes it just as easy to generate convincing content on any subject, at scale.
It may be in your nature to question everything you see, hear or read. I'm pretty sure that's not the case for the vast majority of the population. And as magical as ChatGPT seems to all the tech and non-tech people, it is really difficult to predict what it will look like 30 years from now. It may now take 100 trolls to spread fake news about Ukraine on Twitter.
Five years from now, it might take one prompt, and you will be able to cover every social media platform on the planet in convincing information from many parts of the opinion spectrum. What you might see could be 10 people violently arguing on the internet, whilst it's all bots whose only purpose is to incite emotions and polarize people.
Doesn't this remind you of Ender's Game? The two siblings manipulating social discourse by creating and then whipping up opposing sides?
We know that this already happens, but now it's within the grasp of many people and thus much harder to pin down where it's coming from and what that party's motive is. How can you 'follow the money' when:
- it requires almost no money to spam every social media channel
- the spam then gets picked up by the news because they turn to social media for fast/new/relevant information
> It may be in your nature to question everything you see, hear or read. I'm pretty sure that's not the case for the vast majority of population.
But we have been here before. When radio first came out, it had an almost magical sway on people. People attribute the newness of radio as a significant factor in Hitler's rise to power. Eventually we adapted to it, though, and it lost its magic sway.
This seems like a difference in perspective (which is reasonable and probably won’t be changed with discussion).
I interpret your opening statement here positively. I tend to think most people work towards the good of mankind, so a boost in productivity for everyone is a net positive.
Using fire seems like an odd comparison to me. Fire exists in nature. Even if you can't create it, there is nothing to stop you from understanding and likely avoiding it. You can even extinguish it without understanding the creation of fire in most circumstances.
Systems like ChatGPT and other large models do not exist in nature that we know of.
Because it is digital. Other technology was slow to spread because it couldn't simply be cloned and integrated immediately with most pre-existing infrastructure.
AI can be copied and can integrate with a lot of existing infrastructure right out of the box. Combine this with the magnitude of the capabilities and our societal "immune system" doesn't stand a chance of being ready for whatever is about to emerge. That lack of time to learn and adapt is a massive difference.
The tech is the same tech used in nuclear reactors and, depending on how far you go, certain therapies. Nuclear weapons are just one instance of the tech.
Simply put: no. This is just misinformation. Thermonuclear weapons derive their energy from fusion, not fission. The prompt criticality of the igniting fission weapon has nothing to do with the criticality seen in a nuclear reactor used for generating energy.
The Manhattan Project literally built the world's first nuclear fission reactor. The group that built that reactor later became Argonne National Laboratory which then built the first nuclear reactor that generated electricity. Nuclear weapons and nuclear power come from the same roots.
> Modern nuclear weapons work by combining chemical explosives, nuclear fission, and nuclear fusion. The explosives compress nuclear material, causing fission; the fission releases massive amounts of energy in the form of X-rays, which create the high temperature and pressure needed to ignite fusion.
I disagree that it has nothing to do with it. All of this technology was being developed at the same time, and the most important part of the technology for both purposes is criticality.
Yes, technology advances. We're no longer pedaling around on penny farthings or driving around on Benz Patent-Motorwagens, but the technology is the same basis in many ways, just evolved.
Is fusion weapon research informative of fusion reactor research, and vice versa?
Not even remotely useful. If it was, there would be no more research on fusion reactors. They'd just process the heaps of government data available from both full weapon tests & other research projects.
Good point. I think it's fair to criticize instances of technology and innovation. So it would be fair to criticize all innovation that is specific only to weapons.
Since their inception they have revolutionized modern biology, and by extension medicine [1]. Also they gave us the seismic monitoring network we currently use to understand the earth.
Of course it wasn't the weapons that did this, but rather the offshoot technologies. But like every technology the first use was blowing something up, the later uses were arguably more peaceful.
I would agree that these days there's a bit more divergence between weapons research and useful nuclear research. But there are still some things that might belong in both camps (e.g. inertial confinement fusion).
Not that I agree with OP, but you could make a solid argument that they're the reason we haven't had war between major powers. You could probably argue the whole Russia-Ukraine issue is because Ukraine gave up its nuclear weapons in exchange for a promise of peace (and other countries can take notes on the result of that).
The past century has really been an anomaly when it came to trusting news, with photos and videos seeming to be reliable proof.
Before that, societies had to be structured very differently to account for the lack of proof.
You'd hear people from out of town talking about what they saw in a neighboring city. You'd need to judge how trustworthy the person was. People would expect a chain of narration, to understand how _that_ person came to learn a bit of information (or if they claimed to witness it first hand).
As AI generated content becomes more popular, I predict we as a society are going to go back to relying more and more on the reputation of the speakers in question.
Who might you consider reputable? People you've met personally, people your community respects, and of course, influencers you follow whose persuasive words match your pre-existing world views.
This change will likely advantage established institutions, no? It will be much harder for a whistleblower, random, etc to prove something that the powers-that-be would prefer be kept quiet.
I was having a conversation with some friends the other day about what schemes might mitigate some of these risks, something like an anti-safe word, i.e. a "danger word" that someone can use to remotely validate that a loved one is authentically in danger.
This is fairly low-tech and likely susceptible to various kinds of social engineering, but I'm curious what a more robust approach might look like that doesn't involve us all regurgitating 6-digit codes like robots all the time.
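For reference, the "6-digit codes" ritual is basically TOTP; here's a rough sketch of what that would look like between two people who have pre-shared a secret (using the pyotp library purely as an illustration, not a recommendation):

    # TOTP sketch: both parties share a secret once (ideally in person);
    # a current 6-digit code then proves the caller holds that secret right now.
    import pyotp

    shared_secret = pyotp.random_base32()    # exchanged once, out of band
    totp = pyotp.TOTP(shared_secret)

    code_read_out_on_call = totp.now()       # what the person "in danger" would recite
    print("caller verified:", totp.verify(code_read_out_on_call))

Which is, of course, exactly the robot ritual we'd like to avoid, and it assumes the person under duress can produce a code at all.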
Let's say you call or text me, requesting gift cards or my password or something else weird.
I tell you I'll email you immediately to verify. I email you to confirm you want N $X gift cards. You do so.
Or if the inbound contact is by email, you verify by text or phone call. Or Signal/Discord/FB chat apps, etc. Heck even LinkedIn Inmail could be your verification channel.
You could still be compromised, but that'd require attackers to have significantly more access and readiness. And if you reach the other person and they're unaware, now you both have some support as you figure it out.
> Chatgpt and the likes generate convincing fake news
It's not like fake news was a non-issue before ChatGPT existed. Breitbart and other fake news sites existed for years before this was even imaginable. Fake graphics have been around for ages too, with image manipulation, even before computers. Take for example the Surgeon's Photo of the Loch Ness Monster.
This argument has been rehashed time and again and it's getting a bit boring: the fact that you can do something at all is qualitatively different from when you can do that same thing at scale.
Agreed. As I’m sure you know, it’s easier for people to reason when the arena presents only discrete possibilities, especially binary ones, eg yes/no or black/white.
The arena of the continuous, which encompasses most of the natural world, is far more difficult. It doesn’t allow for the sort of arguments where one can assert things with ego-boosting absolute confidence.
Matters of scale naturally fall into the continuous sort, but scale can also introduce new discrete possibilities. Perhaps we need to present the new specific, discrete effects and outcomes that AI-at-scale will introduce. Maybe, just maybe, that’ll change minds of people who are truly open to being changed.
My intuition though is that this argument is repeated here (on HN in particular) not for lack of knowledge or thought, but rather because they WANT to see the chaos that’ll result. They want all the positives and negatives, no matter the balance, simply because it’s exciting and adds to their “mundane” lives. Where the mundanity is of course subjective, simply the result of their default worldview, rather than anything nearing an objective description of the world as it is.
> simply because it’s exciting and adds to their lives.
It's certainly exciting but I wouldn't bet on it adding to our lives just yet. Maybe. But many avenues leading to net negative outcomes are still part of the tree.
AI doesn't solve the reputation problem. Just because you can make 1000x more fake news articles doesn't mean anyone will ever see it. Spam is still spam, AI doesn't suddenly make your site rank on Google or get you followers on Twitter or get you upvotes on Reddit.
I'm skeptical content generation was the thing holding back this wave of fake news and scams from ruining the Internet. These doomer posts are always handwavy on the specifics.
And we've seen little evidence of it operating at scale in the past 2yrs since these tools have been unleashed.
Maybe automated niche targeting but that still depends on networks with reputation systems + spam detection... So mostly depends on ostensibly solvable tech problems like caller ID spoofing, or someone sending mass email spoofed campaigns.
You can easily apply AI to the generation of fake accounts used to amplify the message, this is a tried and true playbook that can now be enacted at lower cost and with much greater speed.
That you have seen little evidence of it doesn't mean that it isn't happening, it could also mean that (1) you're looking in the wrong places, (2) that it is so good that you don't detect it and probably other explanations besides those.
I'm on HN daily like you are, I'm sure if this was a developing crisis in spam detection and social media we'd be hearing about it constantly. People on HN love a good AI doomer story, it won't be hiding.
Otherwise it's mostly just predictions of wide-scale disruption, about which, beyond hyper-targeted attacks, I remain skeptical.
Run a couple of hundred comments on HN sequentially sampled from the comments stream through the AI detector and see what pops up. The fraction is rising steadily.
I create a 'fake news' message among a series of bots and have a strong network of social media accounts and sites. OK, that's step one.
Now, I hack your account and steal your reputation. It appears you approve this message, and some subset of your followers follow the bots and go to the bot sites.
You can try to pull out and say you were hacked, and some percentage of people will unfollow said bots, but at the same time, a subset of those followers are now following those bots and giving them thumbs up, thereby giving them access to your friends' friends.
User accounts are easily hackable as we saw with Linus Tech Tips recently. Reputation is just the newest currency worth stealing.
Right, that's a fair point, but I think when it comes to fake news, the issue really is about the quality, rather than quantity. Really lethal fake news requires you to be in touch at a deep level with your target audience to make it stick.
True in general, but in the particular case of misinformation, there was so much of it already before LLMs that I don't think scale makes a qualitative difference.
Maybe even the opposite, perhaps the deluge of AI-generated content will make the average person trust less what they find at random sources... which is healthy.
Note that in general I'm not too optimistic about AI risks (the very news that motivated this thread is scary) but I don't see the worry about mass misinformation in particular to be such a big deal.
You could make fake photos with photoshop. And now you can make them with AI.
A normal person couldn’t use photoshop to detect the image was a fake. But a normal person will be able to use AI to detect it.
I see all negative nancies about it. I don’t see anyone realizing that as good as the tools are at faking, the tools to detect will be exactly as good.
I thought about this a bit, and am wondering about the results of LLM-fueled arms-race when it comes to figuring out whether an AI has created some piece of text.
I'm worried we may reach a point where AI gets so good at faking people that real people's output will be treated as fake, simply because there are only so many combinations that can originate from humans. You will start depending on AI to tell you what is true and what is fake for every single piece of information. This leads to the question of how to tell which AI is right - you can't really verify an opinion, only facts.
And being able to manipulate opinions is a very strong perk.
I was curious about this recently, so I built a very rudimentary neural net trained on GPT generated text messages, and human generated text messages. I was able to get a surprisingly good detection accuracy with just under 1k lines from each sample set. I'm not sure it's as apocalyptic as people think.
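Roughly the shape of it, if anyone wants to try the same experiment (not my exact code; a sketch using scikit-learn's small MLP over character n-grams, with hypothetical sample files of one message per line):

    # Sketch of a GPT-vs-human text classifier: TF-IDF character n-grams
    # feeding a small neural network.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    human = open("human_messages.txt").read().splitlines()   # hypothetical sample files
    gpt = open("gpt_messages.txt").read().splitlines()

    texts = human + gpt
    labels = [0] * len(human) + [1] * len(gpt)                # 0 = human, 1 = GPT

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=42)

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), max_features=20000),
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=42))
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))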
We may well hit the problem Google Translate hit, where the training set started to contain more and more data created by GT itself. A similar thing may happen with your NN. At some point there may be so much AI-generated content (by different AIs) that it becomes difficult to compose a trustworthy training set.
I suppose a solution to this would be something like pre-war iron. We would have to rely on archived sources, like Wikipedia past edits, that come from before ChatGPT existed.
Personally, I was won over by Scott Alexander's argument that news sites very rarely lie. They very often mislead and obfuscate, but that's fakery of another kind.
The bad-faith commenters, YouTubers, journalists and other types who have an axe to grind already happily cite garbage sources without verifying (or perhaps caring) when making their motivated arguments, and there's more than enough out there to back up whatever BS they're trying to spew at any minute. I don't see how the quantity available changes that. And of course AI can be deployed in the counter direction. I think (hope) you need a qualitative change in order to tip the balance of power.
Don't be negative about revolutionary new technology that can make the world a better place.
As it stands, people believe whatever dumb thing they read online. Thanks to AI, they will have to learn that if something is out of the ordinary, they should confirm with multiple sources. I call this a net win.
And for those literally medically incapable of this level of reasoning, we will soon have "AI firewalls" (Gibson ICE) that can tell people "Hey, this looks like a scam!" and also help them reason about complex topics.
> NOW they will have to learn that if something is out of the ordinary
I once shared this approach, but learned that the majority of people don't really want to leave their echo chambers and gladly accept anything that amplifies it. And the bad part about it is that the western world is based on democracy, and democracy is controlled by the majority. You may not care about people that are easy to manipulate, but in reality you are not the target as your voice won't matter at scale.
>> Are we really approaching the reality where we need 2FA to trust the other person is really who they are?
We are already there. In the kidnapping context, we have been there for a great many years. If someone says they have kidnapped my child, I will text/call that child immediately to verify. The 'second factor' is the real-world daughter. A kidnapper must both create a facsimile of the daughter and render the real daughter incommunicado. We don't need new 2FA because we already have it.
We need a PKI run by the governments. National ID cards that are smart cards. Cryptographically sign any and all digital communication. Self-signed certificates could still be used with TOFU.
However, I'm not sure if total loss of pseudonymity is less of a horror scenario.
I've been wondering what would be the event that causes digital signatures to become the norm. Though, I didn't expect it to be machine learning. I'm scared by how much is just done by phone with people who don't know me.
You can still use pseudonyms with PKIs. The CA will know who you really are, but it doesn’t need to be public by default. And it can be regulated such that a judicial order is necessary to disclose your identity to a third party (including to the government).
I think we're past due for some kind of 2FA for the phone. Sometimes my bank or credit card company will call and try to sell me shit, and I tell them I have no proof of who they are and hang up. Too bad for them.
I'm also more on the negative side because I'm really not convinced by the whole "AI will just automate/remove the boring aspects of life". Every single prototype capability of current AI points in the direction of a worst-case scenario of overall misery.
> Every single prototype capability of current AI points in the direction of a worst-case scenario of overall misery.
Software is often aimed at solving problems and making processes more cost-effective. Founders are often most interested in solving their own problems, the problems of their friends, or the most valuable problems of their customers. For B2B companies, the customers are other businesses. The greatest cost for businesses is employee costs. Solving the employee cost problem by eliminating employees is a great target for business owners.
Absolutely. I feel like I'm just watching the car crash in slow motion. Hopefully I'm wrong, but I really don't see this AI revolution working out well for humanity (in the short/medium term). I'm sure we will eventually get through it, but I think life for most is going to get real hard for the next 20-50 years.
... That is an interesting point. I wonder how long it will be before some enterprising developer hooks up an LLM to a HN account with the instructions to "blend in while promoting X". Maybe that AI already exists... It wouldn't take more than a day.
> If you were afraid of your elderly relatives being scammed by people pretending to be policemen or grandchildren, now more tech-conscious people will get scammed by the voice and look of their relatives. Are we really approaching the reality where we need 2FA to trust the other person is really who they are?
I thought I was immune to certain scams where the scammer's accent was a dead giveaway that they were scammers. Now, with the capabilities of AI, I feel somewhat vulnerable again, though I'm far from being elderly.
>>Are we really approaching the reality where we need 2FA to trust the other person is really who they are?
YES
In fact, not approaching, but already passed the threshold.
If you have any assets, it's time to be sure you have a set of actually obscure, un-guessable prompts & responses to verify identity in your family. And it needs to be better than "what's our first dog's name?" or "where do we vacation?" type stuff found on FB; if they're going to the trouble of getting images & voice samples to clone, they'll find that stuff.
Now, it is not that everyone is famous for 15 minutes, it is that everyone needs to be up-to-date on security to avoid being randomly shot or scammed. Nice society we've built.
1. A not exactly rocket scientist POTUS takes office in 2024. Let's suppose they were a populist, triumphalist, religious, agro, law-and-order DINO to make it interesting and fictional.
2. Rogue AI launches a social engineering communication offensive against CENTCOM impersonating generals, cols, and ltcs against enlisted and lower ranks to carry out a first strike against China with a digital hallucination that Taiwan is being "attacked". It uses details gleaned by wiretapping the upper echelon to circumvent normal N-person keying rules and authentication protocols.
It's funny, technology was able to bring us together across geographic distances. Someone on the other side of the world was a phone call away. You can turn on your TV or phone and instantly tap into seeing, hearing, and reading people and their thoughts from around the globe.
Almost as quickly, AI may unwind all of that. It may become that there's more noise than signal across all of these technical media. The only thing you can trust, and the only thing that matters, is the people in front of you.
Maybe all countries need to finally give importance to these issues: take scam crimes seriously, stop trading with countries that do not cooperate, and put the scammers in jail.
Tech evolved; any idiot can get a copy of Photoshop and some video software. We need to solve the problem; otherwise it's like preventing the creation of email because we are too incompetent to address the spam problem.
Your concern is understandable and relatable, but I approach the issue from a stance that is, while not necessarily optimistic, definitely less pessimistic.
I don't think we truly know just yet how negative an impact AI will have on society. Every time there has been a technological leap, people have panicked over what the gizmo of the now will do to society. Again, not an invalid concern, but society has yet to have been blown apart by anything.
Also, everything potentially being just AI may inadvertently get the public to do the right thing. People should never have been as trusting of authority figures or institutions in the first place. If everyone assumes that everything is likely to be complete hogwash, which was already true in many cases, then they may not just swallow everything as fact. Maybe fewer people will blindly consume news, which is a good thing; 99% of news is not actionable or good for an average person's well-being. And if enough ransom demands turn out to be AI-generated scams such that the real ones are overall far less successful, it's possible that fewer people are kidnapped for ransom in the first place.
I'm not saying I know any of this, but rather the opposite.
In a world where people need to be constantly paranoid, I think many will develop true paranoia, or just give up trying to understand what is going on and defer to whoever has the power already. Certainly it is not possible for individuals to seek truth on all things on their own, even if it were their full-time responsibility to attempt to do so.
Of course the dominant 2FA methods will be those that are easily forged, so the general public will still get swindled (I'm looking at you banks that are insisting on using SMS to "verify" my identity). It's as if nobody cares about the world programmers are creating.
I'm sure those will all be serious problems (or already are problems) but I think they pale in comparison to the potential upsides. This could be the biggest jump in productivity since the industrial revolution.
Managed properly, it could lead to what is essentially a utopia.
I think it will be addressed by "community". For the last decade (maybe more), every online social space has been pressured to serve global scale. That increases surface area and brings in more strangers.
Smaller, more trusted rooms, will combat the threat of misinformation.
The universe is built on waves/cycles. I think we are going to see pressures to make smaller rooms because of AI.
I'm actually excited for smaller, more intimate spaces, with trusted people who now have 10x ability from the powers of AI. That seems like a LOT of fun.
> DeStefano found the voice simulation particularly unsettling given that “Brie does NOT have any public social media accounts that has her voice and barely has any,” per a post on the mom’s Facebook account.
> “She has a few public interviews for sports/school that have a large sampling of her voice,” described Brie’s mom.
and
> Then, DeStefano remembered that her 15-year-old daughter Brie was on a ski trip, so she answered the call to make sure nothing was amiss.
No public social media accounts combined with the timing of the fraud happening during a ski trip implies that somebody close to the family was in on it. It's possible that they used the public sports interview and got randomly lucky with the trip timing, but I'd be looking much closer to home for a suspect.
>No public social media accounts combined with the timing of the fraud happening during a ski trip implies that somebody close to the family was in on it.
I recommend against this type of baseless speculation.
We would all do well to remember that time Reddit caught the Boston bomber.
When people say they 'don't have social media' it usually is a good idea to check what their definition of 'social media' is. If they have a web presence that they regularly update it doesn't really matter what the label is, it can likely be used to infer a whole bunch of stuff.
I know quite a few people who claim not to have 'social media' and yet they are on LinkedIn etc. Maybe I should have used 'web presence', but I think that what I intended to convey is clear: if you have a regularly updated online identity then you are giving an impostor a lot of useful information.
If there is absolutely nothing about this person online, including that they don't use any of the video chat services, no tiktok videos etc then yes, the circle of suspects would narrow accordingly. But most teens are quite active online, even if their parents aren't always aware of it.
> If they have a web presence that they regularly update it doesn't really matter what the label is, it can likely be used to infer a whole bunch of stuff.
My 90s-era internet paranoia has never steered me wrong. I don't put out my real name or picture anywhere. When I mention personal details as part of a story or argument, I randomize the details, brother becoming uncle, gay becoming straight, etc. We are not far from powerful AIs being able to link every online account you've ever had together by just your unique writing style, but they can't manufacture information that simply doesn't exist. There isn't enough emphasis these days on keeping your real and electronic identities separate.
> When people say they 'don't have social media' it usually is a good idea to check what their definition of 'social media' is. If they have a web presence that they regularly update it doesn't really matter what the label is, it can likely be used to infer a whole bunch of stuff.
They didn't deny having social media. She implied that she did have social media accounts. They said the social media accounts were non-public, aka friends-only.
Hence my point: If their social media presence was friends-only, then it would imply that it's someone in the friends circle who had access to enough information.
I'm not using a smartphone. I have no illusions about my contact list being public because the other side is using smartphones. So you can infer my contact list from other sources. Similar mechanisms apply to other tech.
Anyway, we'll see what comes out. I'm aware of a blackmail case where the perp was very close to the family so I'm sympathetic to your argument, and investigators typically only expand the net when the closest contacts can be ruled out.
> The part I quoted claims that she doesn't have any public social media, so that wouldn't work.
It claims her voice is absent from her public social media, not that she does not have any social media accounts: "does NOT have any public social media accounts that has her voice".
Brie herself doesn't even need to be the sole broadcaster. The mother or other family members could be contributors.
I still think it is way more likely that someone posted about the trip on social media. There is basically no benefit to targeting someone you know with this sort of scam, and the risk is higher.
The more easily dismissed instances where the timing isn’t as confidential aren’t getting big articles written about them. This isn’t the first time this scam has been attempted.
Nothing in this article gives any reason to believe that the voice on the other end of the phone was AI-generated.
Occam’s razor suggests it was more likely a human pretending to be her daughter.
I’m guessing after she realized that the voice wasn’t her daughter, the mother convinced herself it must have been a deepfake to explain herself having been so easily convinced.
This is likely, as currently only ElevenLabs has the publicly available technology to do this, and it's SaaS-only (and I presume it requires law-enforcement-traceable usage after that first weekend and 4chan's fun with it).
I've played with ElevenLabs' solution; it's really good, but I don't think it could believably emulate the voice of someone under panic duress. The voices it produces have a pretty believable, but flat, affect. Maybe it has options I haven't played with, or maybe you could train it to add panic to the voice.
Idk, something about this story feels fishy to me. I don't doubt that we're headed toward this kind of future; but we're talking about very high level spear-phishing to accomplish this today (knowing the individual isn't home, having samples of their voice, highly sophisticated AI knowledge to make the voice and add elements of panic duress, to actually succeed you'd also need to compromise lines of communication to the individual, e.g. do this while they're on a trip to the amazonian rainforest or something). This feels more like an AI hit-piece than an actual done-with-AI story.
Proof of identity is going to be a huge opportunity.
Images, text and voice can now be spoofed with minimal cost and effort. With the progress of deepfakes and text to video, how much longer until you can spoof video calls?
Meeting in person is not practical in many scenarios.
Anyone know of any promising ideas or companies in this space of digital trust?
Our family has had this sort of 2 factor authentication for years. The rule is that when/if something goes sideways, we have a secret pass-phrase that must be used for the other party to be taken seriously. It's something that we won't say, as a general rule, but would make sense as part of a regular conversation if you didn't know what you're listening to.
We did this when an elderly relative got scammed by the 'your son has been arrested' type of scam. It became obvious to us that we needed something to verify that the person contacting us was legitimately the person we thought it was.
It's not hard to do and it doesn't require technology. Just like disaster planning and home inventories; it's just that most people don't think about this kind of thing until it's too late.
I think this needs to be provided by the government: an organisation that we already have to trust and already has a monopoly. Countries that get this sorted out will see their economies grow. Countries like the UK where people continue to believe that sending scans of utility bills is a good basis for a modern economy and somehow prevents money laundering will continue to go down the toilet.
There are plenty of studies and case studies that show this to be true; it's a necessity that government-issued identity should also exist digitally. It's rather insane it doesn't yet in the States.
The _easy_ thing is probably also the thing that's hard to productionize: just agree upon a secondary channel beforehand, and if I ever text/call/send you a slightly grainy video clip and you're concerned whether it's really me asking for 50 Amazon gift cards, you contact me via Channel 2.
I don't think you can easily create a product to serve this, since _prior_ is the key there, and each pair of people should have their own preferred channels, since the scheme falls apart if the faker contacts you on Channel 2 first. If anything, centralizing on that part just creates a massive vulnerability.
It's not any harder than any website login. The only thing is that this isn't available as a simple service to normal people. Maybe this is a feature Facebook should offer to have more value: your kids can sign in via biometrics on their phone, and then it shows their parents on their Facebook account that the child used biometrics X seconds ago to provide proof of identity.
Have you ever gone through a password reset process with a bank or 401k? They can't just say "oh, you lost access to your email? Sorry, your money is gone".
They rely on phone calls, documents, security questions... all things which are very susceptible to programmatic social engineering.
I don't see how it matters; this is for the use case where there is already an established and verified account with e.g. Facebook. Many betting or crypto sites do passport and video verification; Facebook can do the same. Then all the child has to do is log in via the Facebook app as usual, on iPhones via biometrics, but it doesn't have to be biometrics. It can be any kind of 2FA or triple verification: authenticator code + email + SMS + app login, simply cutting down on the likelihood that someone is a fake person. Then the parent can see which verifications were done by the child on the Facebook dashboard, to see if the kid is alive and able to do verifications, instead of whatever else the attacker claims.
I think you meant to say that the bank verification process can be socially engineered... Well, first of all, to actually do that you also need a lot of fake documents and expertise, and the bank has to utterly fail at their job. Which hopefully prompts governments to require in-person meetings at the offline location for future bank accounts.
As others have mentioned, for all families, friendships, and relationships, it's a good idea to establish a word or phrase that can verify someone is real and not a faked/AI voice. As the resources necessary to carry out a scam like this race towards near-trivial, this will happen more frequently.
I like the term "realword", like a password to determine if someone is real. And of course, this word must be said in person, not over chat/text/email/etc. For most, over phone or videochat should be fine as well.
Maybe we need some kind of public service campaign to add visibility to this threat and mitigation options like the realword? Maybe also encourage a spaced repetition habit to establish it?
My question is whether this will eventually backfire on kidnappers, because if this escalates enough it might create the expectation that it's not real, so a genuine kidnapper can't credibly distinguish themselves from a virtual kidnapper, because what previously was credible proof of life (the voice of the victim) is no longer so.
> My question is if this will eventually backfire on kidnappers,
It's kind of weird to have both groups combined like this. We're talking about different people. The same person isn't a "fake kidnapper" and a "legitimate kidnapper". (Boy, that sentence felt strange to write).
Yes, scammers make it harder for people who would engage in the genuine version of the commerce.
Yes, by "kidnapper" I meant someone who had actually kidnapped someone. Virtual kidnappers are going to be able to increasingly make a quick buck, but presumably at the cost of actual kidnappers by reducing the credibility of their threats, which in turn might ultimately reduce the ability of virtual kidnappers to make money themselves once everyone believes all the threats are fake. Wonder what the equilibrium is and whether it's better or worse for society than the status quo.
Do random people even get kidnapped for ransom anymore? It seems like technology and risk/reward killed that one pretty quickly. At least in the west and places with stable governments/law enforcement.
It is still quite common in Mexico and Latin America. It can even be somewhat civilized, where if both sides play by the rules it turns out fine. Common in the sense that most people are never kidnapped, but everyone knows someone who has been.
That is almost entirely runaways, and of the small fraction that isn't, they are usually abducted by other relatives. About 99% of them return home safely (https://www.ojp.gov/pdffiles1/ojjdp/196465.pdf#page=6). Taken (2008)-style kidnappings basically do not happen.
I find it a little odd that an article which is, at least incidentally, about the dangers of sharing too much information about yourself publicly on the internet in this dawning age of AI-assisted scams, is itself so heavily plastered with unrelated random facebook and instagram photos of the victims.
I would personally always demand proof of life, regardless of whether it's fake or real, and I would ask my family member to answer a question only they would know the answer to. We would of course have code words ahead of time.
Maybe this is a sad anecdote, but the code phrase my mom and I had for duress was "I love you"
I am not sure it even matters either way. It is certainly possible to do things like this right now, and "kidnapped" or "arrested" family member scams are well established as existing. The significance of this report is less that it happened, and more that it is an illustration of what is not only possible, but very cheap and easy to do.
Man, do we need a way to prove you are who you say you are that doesn't involve a giant corporation. I feel like there was a window where we could've organically coalesced around something open source, like Keybase, or a government oauth run by 18F, or MIT's PGP keyring or something, but the moment has passed and the only identity service you're going to realistically get a majority of people to adopt is for-profit social media bullshit, which is incentivized to stay fragmented and not really address problems like this.
This is very concerning, but I think my bigger, more immediate worry - and one I suspect will happen at a much larger scale in the US, if not the world - is the use of these tools in bullying, particularly among school-aged kids. The detrimental effects on kids (and thus, eventual adults) due to the misuse of these tools could be cataclysmic.
Apple/Google should allow people to answer phone calls from strangers with a voice changer. It's way too easy to clone someone's voice and cause harm. Alternatively, they could notify you if they detect a faked voice, but maybe these two ideas are contradictory.
Another solution, of course, is to not pick up the phone from unknown numbers unless you're expecting a food delivery or something.
Thanks, now I have a mental image of the author as a late middle aged man in a drab brown office, working for a low-rent paper, sitting behind a desk with an old computer that hasn't changed since '98, leaning back and doing crosswords in between writing articles lamenting today's youths and technology. With maybe a overcoat and beaten up fedora hanging up behind him. He smiles slightly while working on the puzzle, hornswoggle, that's the one he's going to sneak into the next article.
I've never heard it anywhere as a USian. It seems the type of obscure and redundant word that people bring out once per decade. Without looking it up, I could guess horn refers to phone and swoggled is another word for swindled, so to hornswoggle is to phone-scam.
Not with the current generation of people. I can't speak to any other generation, but as a native speaker I did understand it. It seemed archaic to me.
I wouldn’t call it archaic, but more folksy. I think most people would understand it, but would rarely encounter it outside a humorous context. It’s pretty similar to bamboozle, except that’s gained kind of a “cute” connotation in recent years (due to pet memes, etc.) that hornswoggle doesn’t have.
Reminds me of Greg Egan's "A Kidnapping" from "Axiomatic". (I'd like to describe it, but it would spoil it. Just go read Axiomatic, it's got some of the most mind-blowing stories I've read.)