It would be a lot simpler to only sell standard devices to adults. Kids should be using devices with curated access to specific tools and platforms meant for children.
> Built-in checks prevent processing of inappropriate content, ensuring legal and ethical use.
I see it claims to not process content with nudity, but all of the examples on the website demo impersonation of famous people, including at least one politician (JD Vance). I'm struggling to understand what the authors consider 'ethical' deepfaking? What is the intended 'ethical' use case here? Of all the things you can build with AI, why this?
That they do, but perhaps the relevant context is that while porn is globally unregulatable, the entities that have proven their ability to regulate it (or at least exercise some control over it) have been payment processors like Visa and Mastercard.
FT had a fantastic podcast on the porn industry and the guy behind Mindgeek. Like many stories about multinational entities, you constantly hear the usual refrains: no one can regulate this, the entities keep changing their name and face, there is no accountability, etc. But when Visa and Mastercard threaten to pull their payments, the companies have to listen.
Visa and Mastercard are the de facto regulators of porn today, and mostly do so to prevent nonconsensual and extreme fetish material from being displayed on mainstream platforms.
From what I gathered from the podcast, they're not super keen on being the regulator - but it's a dirty job and somebody has to do it.
They don't care about the content, they care about the correlation with customers who have an exceptional rate of chargebacks or other payment avoidance on legitimate purchases.
Cryptocurrency and the like may offer a way out of that problem by allowing direct purchases, but only for companies willing to deal with the support burden of making everything nonrefundable.
I don't think this is the case - because then we'd see them pull card services for porn websites altogether. This clearly didn't happen, nor was it the intention. They never did anything that would reduce revenue.
Instead, it was more a case of regulation to avoid looking like their services were financing illegal or illicit content.
And regressive policies also come out into the open from people like the ones who created "Operation Choke Point", who are perhaps not usually held up as being super 'socially conservative'.
Your choice of entertainment, information and tools is under attack from all sides when they can get away with it.
Cryptocurrency doesn't offer credit card chargebacks, but why can't you refund customers by sending the same amount of crypto back to where it came from? I've gotten merchants to give me refunds to a different credit card before.
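To be concrete about the mechanics, here's a minimal sketch of a "send it back where it came from" refund, assuming web3.py against an Ethereum node; the endpoint, the key handling, and the helper name are placeholders for illustration, not a real merchant setup:

    # Hypothetical refund helper, assuming web3.py and an Ethereum node.
    # Caveat: the "from" address may be an exchange hot wallet the customer
    # can't actually receive funds at, which is the usual catch with this.
    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

    def refund(inbound_tx_hash: str, merchant_key: str) -> str:
        inbound = w3.eth.get_transaction(inbound_tx_hash)
        merchant = w3.eth.account.from_key(merchant_key)
        tx = {
            "to": inbound["from"],       # back to the originating address
            "value": inbound["value"],   # the same amount that came in
            "nonce": w3.eth.get_transaction_count(merchant.address),
            "gas": 21000,                # merchant eats the gas fee
            "gasPrice": w3.eth.gas_price,
            "chainId": w3.eth.chain_id,
        }
        signed = w3.eth.account.sign_transaction(tx, merchant_key)
        # .rawTransaction on web3.py v6; renamed to .raw_transaction in v7
        return w3.eth.send_raw_transaction(signed.rawTransaction).hex()

Nothing in the protocol forces a merchant to do this, of course, which is the support-burden point above.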
I can't find anything to support your claim about weapons. It seems pretty much all the online arms dealers I can find, selling anything from grenades to machine guns and even rocket launchers, take credit cards, and I'm fairly certain physical stores accept them too.
> grenades, machine guns, and even rocket launchers
Umm, yeah - what country are you buying live grenades or working rocket launchers online with a Mastercard in? Cuz it's not the US or Canada. And if it's not a live grenade or working rocket launcher, it's no different than any hunk of metal.
Uh, yeah... the US and Canada are a tiny fraction of the world. Also, what really counts as buying with Visa or Mastercard? If I use Visa to buy crypto and then get explosives (which can be done transparently), there is nothing they can or will do about it. Buying things online has nothing to do with countries or borders, nor is it always clear, to payment providers or even customers, what kind of scheme enables a payment.
I am not sure what the current state of the issue is, but there was an initial effort to restrict gun sales in various devious and deceptive ways, since it is illegal to do so overtly because it is legal trade and economic activity.
I would not be surprised, though, if the clear unconstitutionality of such efforts was brought to the attention of the payment processors, and they were reminded that they would severely regret hastening attention toward something that still needs to happen: a public electronic payment processing capacity.
In general, I think payment processors are not required to associate with anybody. The government (in the US at least) is limited in its ability to prevent you from buying guns or making porn (a form of speech), but it can't make people do transactions with you; the right to have somebody process payments for you is not constitutionally protected.
But I’d be at least curious (as a non-lawyer) if there could be issues around discriminating against pregnant women in the US, since abortion is a service that is only used by them.
The payment processors interpret the networks' rules, you do understand that, right? If they're banning something, it's because the networks either are outright banning it too or have put enough restrictions and constraints in place that the liability for the transaction doesn't make sense.
The payment processors are doing what the networks tell them to do.
It’s not like the processors are actively looking for ways to turn down money; they want as many transactions going through them so they can earn their share of it.
There were several efforts to restrict the people's ability to marshal resistance to tyranny, and Visa/MC were very much involved with that, even though they were not the only ones.
> Of all the things you can build with AI, why this?
That can be asked of 90% of what's come out of the latest AI bubble so far.
Like a lot of technology, AI has so much potential for good. And we use it for things like games that simulate killing one another, or making fake news web sites, or pushing people to riot over lies, or making 12-year-olds addicted to apps, or eliminating the jobs of people who need those jobs the most, or, yes, pornography.
I'm hoping that at some point the novelty and hype will die down, so that the headline-grabbing "send a follow-up email" and "summarize call" features get out of the way and the more impressive things, like detecting medical conditions months or years earlier than human doctors, become much more visible. The things for making people lazy are a total waste to me.
    # process image to videos
    if modules.globals.nsfw == False:
        from modules.predicter import predict_video
        if predict_video(modules.globals.target_path):
            destroy()
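For what it's worth, a gate like predict_video typically just samples frames and scores them with an off-the-shelf NSFW classifier. A minimal sketch of that idea, assuming the opennsfw2 package; the threshold and sampling interval here are illustrative guesses, not the project's actual values:

    # Sketch of a predict_video-style check, assuming opennsfw2;
    # the threshold and frame interval are illustrative, not the real values.
    import opennsfw2

    NSFW_THRESHOLD = 0.85  # assumed cutoff probability

    def predict_video(target_path: str) -> bool:
        # Score every 100th frame with the classifier.
        _, probabilities = opennsfw2.predict_video_frames(
            video_path=target_path, frame_interval=100
        )
        # Flag the video if any sampled frame trips the threshold.
        return any(p > NSFW_THRESHOLD for p in probabilities)

Which is to say, the "built-in check" amounts to a frame classifier sitting behind a single flag.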
OTOH, now that we know the technology is possible, would you prefer that only some actors had the ability to do this? Or that we kept the lingering doubt that anything you see could be a deepfake, while there was always plausible deniability because it would supposedly be too hard to actually carry out?
If the technology is made widely available, that just reveals that Pandora's box was already open.
I think this is an oversimplification that undermines your goals.
If you're unwilling to recognize the benefits of something, it becomes easier to dismiss your argument. Instead, the truth is a balance of trade-offs and benefits. Certainly there is a clear and harmful downside to this tech. But there are benefits: it saves a lot of money for the entertainment industry when you need to edit or do retakes. The most famous example might be Superman[0].
The issue is that when the downsides get easy to dismiss, it becomes easy to get lost in the upsides. It'll get worse because few people consider themselves unethical. We're all engineers and we all have fallen for this trap in some way or another. But we also need to remember that the road to hell isn't paved with malicious intent...
> I think the downside is 10 orders of magnitude larger than this benefit.
I actually agree that the downsides outweigh the upsides.
The intent of my comment is not to defend this work, it is actually more about how to better construct arguments against it. That is why I do not begin with "you're [tdeck] wrong" but specify that the argument undermines the goals.
The point is who your speech is targeted at. If your audience is people who already agree that the downsides outweigh the benefits, the argument is fine. But that also isn't very fruitful, is it? If your argument is instead intended to persuade people who do not already agree with you, then I think this framing will only amplify their disagreement.
If we recognize that most people aren't intentionally malicious, then if we are to persuade them to be in agreement we must also understand what persuaded them to be in disagreement. It is easy to brush this off as "money" or "stupidity" but doing so won't help you construct an effective argument.
I also need to stress my point that this construction is harmful to yourself! If we are quick to simplify and to see how obvious something is through hindsight, it will leave us ill-equipped to prevent such mistakes beforehand, because what's obvious post hoc is not obvious a priori. So don't dig your own grave. Especially because the grave is dug slowly; it's far more effective to recognize it while the grave is shallow and you can still climb out.
In this case, the road to hell seems to be paved with intent to... make it easier to goof around and make silly prank videos, I guess? A lot of deepfake projects seem to be aimed in that direction and while there's nothing wrong with that in itself, it's hardly a compelling use case that outweighs the obvious harms that everyone has been talking about for years now. That's why I say that if someone cared about those harms they wouldn't be making this. Of course there are always things we tell ourselves: "if I didn't make this someone else would", "by making this easier (faking videos of real people) I'm training the public to be more skeptical", etc... etc... At what point is it obvious that these are excuses and the person really doesn't give a damn?
So the truth here is that they're doing this because they aren't yet good enough to sell to Hollywood. Not that Hollywood isn't using deep learning[0], but there it's typically a combination of classical tools and deep learning tools. These companies all seem to have an aversion to traditional tools and appear to want to be deep learning all the way down. That is a weird tactic, and the fact that people are funding such companies is baffling. I can't even imagine a future where you don't want traditional tools, even if ML could do 99% of the work. Hell, even 100%. Language is pretty lossy, and experts are still going to want to make fine-grained edits.
I'm a researcher who's made one of the best face generators. I'd like to address your questions and discuss a larger more critical point.
I too have ethical concerns. There are upsides, though. It is a powerful tool for image and video editing (for swapping, you still need a generator on the backbone)[0]. It is a powerful tool for compression and upsampling (your generative model __is__ a compression of (a subset of) human faces, so you don't need to transmit the same data across the wire). It is easy to focus on the upsides and see the benefits. It is easy not to spend as much time and creative thinking on malicious usages (you're not intending to use or develop something for malicious acts, right?!). But there are two ways to determine malicious usages of a technology: 1) you emulate the thinking of a malicious actor, contemplating how they would use your tool, and 2) time.
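To make the compression point concrete: if sender and receiver share the same pretrained generative model, only the latent code has to cross the wire, not the pixels. A toy sketch, where the encoder/decoder are hypothetical stand-ins and the sizes are made up for illustration:

    # Toy illustration of "the generative model is the codec": both ends
    # hold the same model, so only the latent crosses the wire.
    # The encoder/decoder are stand-ins, not a real face model.
    import torch
    import torch.nn as nn

    LATENT_DIM = 512                 # e.g. a StyleGAN-sized latent
    FRAME_SHAPE = (3, 256, 256)      # raw RGB face crop

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 256 * 256, LATENT_DIM))
    decoder = nn.Sequential(nn.Linear(LATENT_DIM, 3 * 256 * 256),
                            nn.Unflatten(1, FRAME_SHAPE))

    frame = torch.rand(1, *FRAME_SHAPE)

    latent = encoder(frame)           # sender: compress, transmit only this
    reconstruction = decoder(latent)  # receiver: regenerate the face locally

    print(f"raw: {frame.numel() * 4} bytes, sent: {latent.numel() * 4} bytes")

Same idea for upsampling: the model's prior over faces fills in the detail the wire never carried.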
But I also do think application matters, and I think this gets hairy once you get nuanced. Are all deepfakes done without the consent of the person being impersonated unethical? At face (pun intended) value, the answer looks like an unambiguous yes. But what about parody like Sassy Justice[1]? The intent there is not to deceive, and the deepfakes add to the absurdity of the characters, and thus to the message. Satire and parody themselves don't work unless mimicry exists[2]. Certainly these comedic avenues are critical tools in democracy, for challenging authority and challenging mass logic failures[3] (which often happen specifically due to oversimplification and not thinking about the details or abuse).
I want to make these points because things are far easier to dismiss post hoc than a priori. We're all argumentative nerds, and despite the fact that we constantly make this mistake, we can all recognize that cornering someone doesn't typically yield surrender, but rather makes them fight back harder (which is why you never win an argument on the internet, despite having all the facts and being correct). And since we're mostly builders (of something) here, we all need to take much more care. *The simpler you rationalize something to be post hoc, the more difficult it will be to identify a priori.*
Even at the time, I had reservations when building what I made. But one thing I've found exceptionally difficult in ML research is that it is hard to convince the community that data is data. The structure of data may differ, and that may mean we need more nuance in certain areas than others (which is exciting, as that's more research!), but at the end of it, data is data. Yet we get trapped in our common evaluation datasets[4], and more and more, our research needs to be indistinguishable from a product (or at least an MVP). If we could make progress by moving away from Lena, I think we can make progress by moving away from faces AND by being more nuanced.
I don't regret building what I built, but I do wish there was equal weighting to the part of my voice that speaks about nuance and care (it is specifically that voice that led to my successful outcomes too). The world is messy and chaotic. We (almost) all want to clean it up and make it better. But because of how far we've advanced, we need to recognize that doing good (or more good than harm) is becoming harder and harder. Because as you advance in any topic, the details matter more and more. We are biased towards simplicity and biased towards thinking we are doing only good[5], and we need to fight this part of ourselves. I think it is important to remember that a lie can be infinitely simple (most conspiracies are indistinguishable from "wizards did it"), but accuracy of a truth is bounded by complexity (and real truth, if such a thing exists, has extreme or infinite complexity).
With that said, one of my greatest fears of AI, and what I think presents the largest danger, is that we outsource our thinking to these machines (especially doing so before they can actually think[6]). That is outsourcing one of the key ingredients into what defines us as humans. In the same way here, I think it is easy to get lost in the upsides and benefits. To build with the greatest intentions! But above all, we cannot outsource our humanity.
Ethics is a challenging subject, and it often doesn't help that we only get formal education in it through gen ed classes. But if you're in STEM, it is essential that you are also a philosopher, studying the meta of your topic. You don't need to publish there, but do think about it, even just over beers with your friends. Remember, it's not about being right -- such a thing doesn't exist -- it is about being less wrong[7].
[4] I do think face data can be helpful when evaluating models as our brains are quite adept at recognizing faces and even small imperfections. But this should make it all that much clearer that evaluation is __very__ hard.
[5] I think it is better to frame tech (and science) like a coin. It has value. The good or evil question is based on how the coin is spent. Even more so how the same type of coins are predominantly spent. Both matter and the topic is coupled, but we also need to distinguish the variables.
[6] Please don't nerdsplain to me how GPTs "reason". I've read the papers you're about to reply with. I recognize that others disagree, but I am a researcher in this field and my view isn't even an uncommon one. I'm happy to discuss, but telling me I'm wrong will go nowhere.
In a way it is. I'm practicing writing for HN, but when I write similar things I often get no feedback, positive or negative. I worry verbosity is my issue, but I don't know how to state so much briefly. Thanks for the reply; honestly, my goal is to start conversations.
Being a deep thinker, which I assume you are, leads to those results, in my view.
It sounds like you don't want to just "do the punchline" either; you want to lay it out, and moreover you want to share the experience of unrolling your thoughts, so it really makes sense that you're not just sharing the punchline.
I mean, that's my interpretation, and you know, I haven't really met you, so take what I'm saying with a mountain-sized grain of salt.
I think I'm just unsatisfied with "because" lol. Realistically I think I'm just asking "why" a few more times.
> you don't want to just "do the punchline"
I think the punchline is not only non-obvious but actually counterintuitive, or at least easy to disagree with (if it were obvious, we wouldn't have these issues, right?). So I think just stating it would likely be ineffective. It then becomes about how to show the logic. And if someone disagrees with the steps or assumptions, I'm more than happy to update.
> I haven't really met you
Actually that's why I find this particularly helpful. Different kind of bias. I do not find these conversations difficult in person, but it is harder in comments. Maybe just blog form is best for online.
And thanks for the comments. They do give me a bit to think about.
Paranoia and conspiratorial thinking do not lend themselves to trusting others, so it could go either way. E.g., paranoid people do not feel safe getting close to people.
Or for a less out-there example, Ken Thompson's classic "Reflections on Trusting Trust". At some point unless you are literally producing all of the hardware and software yourself you have to trust someone. The challenge is figuring out where that line of acceptable risk lies for you. It's going to be very different for an indie game dev vs a FinTech company vs the US DoD.
I imagine anyone with a BA in CS has written an OS from scratch.
How many systems-on-chip are in a modern computer? Not just the main system and CPU, but every little chip and controller, the boot system; every board seems to have a little OS.
In regards to security, it's all about analysing risk and trade-offs.
For me, using containers from known vendors is a risk I'm willing to take.