Condemning this is an easy virtue signal to boost their reputation.
If some new startup wanted to use AI to identify women getting abortions and offered good money, I reckon that a lot of the signatories here would be sell-outs and offer their expertise.
I don't recall a letter like this in the wake of AggregateIQ using ML to interfere in elections, or Clearview conducting intrusive facial recognition.
Perhaps they disagree, but they don't disagree enough to inconvenience themselves or stop publicly associating with their credentials. In fact they proudly sign public documents with their company name.
Maybe a normal person might prefer their salary or position over disagreements like this; that's a fair position. But these people put themselves on higher moral ground by "condemning unethical use of AI". Clearly they should hold themselves to that standard if they judge others by it.
Facebook purposefully manipulated the emotional states of thousands of users.
This guy made a bot to shitpost on /pol/ without getting noticed. As far as I see, the worst thing he did was subject the people browsing /pol/ to more of the same bullshit they spew there daily. Good play on his part though, making the direct victims of his AI fuckery the most unsympathetic people you can find online.
> I don’t think the “AI community”—people with access to lots of GPUs—should also get to be the thought police.
I understand how it could be perceived that way. However, I don’t think it is the intent.
If nuclear researchers had developed a laissez-faire attitude and commonly dumped the large amounts of radioactive material they were testing into aquifers that people drink from, eventually a perception of the whole nuclear field as toxic and lethal would develop. Since researchers don't want that perception to develop, they keep each other in check and set safety standards.
This is researchers keeping each other in check to avoid another AI winter.
Look at it this way: Yannic seemingly had a low enough opinion of 4chan that he felt it was OK to dump one message a minute there for a week. Someone out there has a low opinion of HN; if there were no consensus among researchers that doing this is bad, they could similarly unleash bots on HN that pass any CAPTCHA, sound like commenters, and yet have an agenda to prop up various companies or scams.
> This is researchers keeping each other in check to avoid another AI winter.
This would be fair if it wasn't signed by Facebook, who have a history of breaking these safety standards and then trying to whitewash the issue. Now they are throwing an individual under the bus for something they have done in the past at a larger scale, for pay!
Both have the potential to cause vast damage. Nuclear tech gone wrong will poison/kill people outright, whereas AI tech can instead cause a Shiri's Scissor[0] scenario that will collapse a society.
Personally I think posting to 4chan wasn't that big of a deal. Maybe not a great idea, but 4chan users seem to have caught on (probably because of the high volume).
Subsequently uploading the model for everyone to use was a mistake worth condemning. The model was released as a public download on HuggingFace [1], and despite being taken down it is now, obviously, available from torrents. I think it's not hard to imagine how the ability to automate harassment might be abused by other people on other platforms.
Imagine instead if people took this sheepish terrified approach to everything in life.
edit: and I'm not saying you're wrong. But literally everything can be abused, from guns to telephones to cars. kids turn school supplies into weapons in school.
additionally, the big corps abuse the technology they claim moral authority over
Ok, so you think it's a good idea and a worthwhile application of AI to create a bot that posts inflammatory answers in a toxic community, making it even more toxic?
To be clear; they took down Tay (the racist bot that unethical AI researchers from Microsoft created), not Twitter.
So the parallel here would be shutting down the bot itself instead of the website it is posting on. And as far as I know the bot has been taken offline for some time already.
Seems like you are putting words in OP's mouth. I didn't see them say it was a "good idea". There are a lot of things in life that are bad ideas, but we don't stop people from doing them. Why exactly is this issue so important or in need of attention? Seems like ignoring it is probably best for everyone.
To cite from the article (open letter? petition? whatever):
> a community full of racist, sexist, xenophobic, and hateful speech that has been linked to white-supremacist violence such as the Buffalo shooting last month
So the question is not what the toxicity does to the surface web, but what people influenced by it will do to other people.
Just because 4chan has a higher ratio within specific boards doesn't mean it's any worse than other places, given that Facebook/Twitter often reward cult-like behaviour and boost calls to violence.
The difference here is that with 4chan it's easier to see unpleasant behaviour given that the two containment boards are easily accessible and observable without much effort. Though take note that not all of the site is represented by those two boards and they are `containment boards`.
Furthermore, (I don't have any data on this, but) I'm willing to bet that Facebook's body count is much higher than 4chans, although by indirect means.
How many people have committed suicide because of mental health issues directly attributable to Facebook? We don't have the data, but it's certainly a non-zero number.
You don't think the toxicity of 4chan has ever leaked outside of 4chan? It's not just the web it leaks out into, it leaks out into real life and people die.
I know that. I know about all those "lone wolves" who have caused hundreds of deaths.
My point is, why not shut these "hate factories" down in the first place? If Pirate Bay admins can be jailed, why not these people? Why are we "defending" free speech when we know it's harmful?
As much as 4chan is a cesspool, I fail to see exactly what these guys are expecting out of the whole "AI" thing.
Given the recent trends, I suspect they are the gatekeepers, masters of their own Babylon gardens and protectors of the plebes. Just look at the wording and the high-pedigree names attached to this.
Are we supposed to trust these people and eat whatever cake they throw us from their balconies?
Somehow this makes me more sympathetic to 4chan's knuckleheads.
> However, Kilcher’s decision to deploy this bot does not meet any test of reasonableness. His actions deserve censure. He undermines the responsible practice of AI science.
What "test of reasonableness" are we talking about?
Without any concrete test to look at, doesn't this boil down to "we don't like him and he should be cancelled"?
It also says "we, the AI community, currently lack community norms around their responsible development and deployment". So they don't have a test, but if they had one then GPT-4chan would fail it.
I prefer Dr. Oakden-Rayner's "this would never pass a human research ethics board" as an argument. But in the time where anyone can be an AI researcher on a hobby budget, without review by an ethics board, maybe a "test of reasonableness" or guideline of some sort would be useful to have.
Then let's play fair and apply that same test to products made by amazon, microsoft, google, facebook, twitter, tiktok etc.
These companies all have products that involve what are essentially forms of "AI" (I include social media platforms like Twitter as "AI", not for their algorithms but because they have linked human brains into some sort of super brain; take it or leave it), and they were unleashed on an unwitting society with untold (and oftentimes disastrous) consequences.
> But in the time where anyone can be an AI researcher on a hobby budget, without review by an ethics board, maybe a "test of reasonableness" or guideline of some sort would be useful to have.
Common sense would be enough: don't involve non-consenting others in your research. Facebook discovered this with the backlash they got in 2014 (!) for doing A/B testing [1]; it's not like the question of AI/algorithms being used in malicious ways is new... though I'd love to see a blanket ban on A/B testing in general. Users are not guinea pigs, and A/B testing can amount to gaslighting.
Additionally, the whole world has been debating the influence of racist and other discriminatory language on the Internet since at least 2016 with the election of the 45th.
By 2022, it should be very clear where the borders of civilization are, and I believe that YouTuber should have known that.
Side note: we definitely need a societal discussion about YouTube and TikTok. The amount of extremely vile or dangerous stunts people pull for likes/clicks/monetization on these platforms, at the expense of others, is immense.
Not OK to use nonconsenting human subjects for research, OK, I'm on board for that.
But, after you do all your oh-so-ethical research, why is it then OK to intentionally build products that do damage to unconsenting humans? It's not OK to emotionally manipulate people to write a paper... but somehow it's just fine to emotionally manipulate them to sell useless crap...
> maybe a "test of reasonableness" or guideline of some sort would be useful to have.
We don't have a test because no such test could be reasonable. Who gets to decide what is reasonable for AI and what's not? Is it the tech giants that use it to spy on us? Or the government that uses it to spy on us?
The only test being proposed here is an appeal to the authority of tech giants and governments that have a history of doing worse.
The problem with those kinds of arguments (the "I know it when I see it") is that anyone can stretch them any way they want - since there are no objective criteria, nobody can prove them wrong.
Huh, I watched GPT-4chan's creator's video, and the whole time I was thinking "this is really in the spirit of 4chan, I'm sure they loved it". I imagine the actual 4chan users who would be upset about it are few and far between.
That seems like the only real "test of reasonableness": would the person be upset to find out they were talking to an AI?
Beyond everything else, the entire setup made it clear it wasn't a real person. This led the site on a hunt to figure it out.
I would imagine few of the signatories have even visited the site let alone spent time there to get a sense of the community.
Language model researchers, like social media developers before them, have built this tech under the presumption that it will be applied only with positive intent.
GPT-4chan is about the least worst thing somebody could have done with it.
Any good AI/ML language-model implementation will produce results indistinguishable from those of a human...
And that is (even in the best of current civilisation) a wide range of capacity and behaviours, the folly of which is usually limited only by societal pressure.
What exactly do they hope to get out of this? Anyone could have told you this would eventually happen and worse to come. The AI box is open, there is no closing it again. Welcome to the future of the internet.
Yeah, I've been interested in the long-term effects of this kind of thing for a while and AI posts have been part of the landscape for a long time already. Occasionally you see posts where the image tells you to respond in a particular way to prove you're not an AI (the theory being the AI is taking the text post as an input but not parsing the attached image for text). Sure enough, a portion of replies respond only to the text. Not a concrete test, but interesting nonetheless.
All of this feels similar to the late nineties and early 2000s where viruses and spyware proliferated against ill-prepared home computers and their users. Tech & culture eventually got a lid on things (mostly), but not without great loss of innocence and a reduction generally in 'openness'.
Online discussion is probably going through a comparable phase, and will suffer a comparable loss of innocence/utility as a result.
But a more human-like AI chatbot being thrown into the mix is not some game-changing event, just a logical progression of what's been true for a long time already. Perhaps if nothing else it will increase awareness that this kind of thing is basically everywhere online already, and erode the increasingly inaccurate belief that what we read from 'other people' online is genuine. A loss of innocence, sure, but it's fairly obvious that innocence is being hugely exploited already and has been for some time now.
It's perhaps going to be a net neutral thing - While more AI posting and responding will result in some real people wasting time conversing with them and reading their unreal opinions, they're also going to suck in other AI and non-genuine (shill etc) posters and waste their time/posts as well. Where this gets dicey is that posting and responding are not the only two sides of the interaction - You also have real people simply reading and not responding. This is where the main outcomes are generated - What they read and how that influences them onwards is really the battleground being fought over here, and where the wider dangers present.
Perhaps everything ultimately descends into a SubRedditSimulator type state and real people gradually withdraw from the whole thing as it becomes useless to them.
Me too, but then I miss a great deal about the earlier internet environment as well, just before all the spam and viruses meant you were no longer going to use that webmail host with the cute domain name, join that chat room, run that executable that a friend sent you. It's not that we lost trust in things, it's that we originally didn't factor trust into our dealings with them because it wasn't a big issue. The same thing seems to be now creeping into online discussion, so I have to at least guess it'll go the same boring way in general.
If someone comes up with a magical way to easily attest that a post has been made by a genuine human, while also not burdening that human with the asymmetric backlash potential of the internet mob, perhaps it plays out differently.
I think going in the direction of low trust and strict identity verification is a mistake (although, given the trajectory taken by social media thus far, it is likely the path we continue on). IMO a much better bet is to shut down or discourage participation in large, "flat open space" social media a la Twitter or Facebook, in favour of smaller, closed pseudonymous communities (like you might find on Discord).
Moderating a community that's grown beyond a certain size requires draconian and unexplainable automated systems, because the alternative is finding enough paid moderators to hammer it into something resembling a coherent discourse. Moderating a community of a few dozen, or even a few hundred people, can work with a much smaller number of moderators, and it's likely that those moderators could even do it for free. You could even have such communities be tightly knit without the need for identity verification, because any sockpuppet accounts would first have to prove themselves worthy of inclusion, or risk being banned.
I certainly agree it's not an ideal option, if one exists at all. What you propose is a different path that could be taken, though I'm not sure the outcome would be better. Just different.
I have left many Discord servers because I could see a repeated pattern playing out in each: immature kids posting edgy content sliding rapidly into sincerely held extreme beliefs, polarizing members into either leaving or buying further in. Those smaller, non-public communities are breeding grounds for extremity, and the problems there, while quite different from the problems faced in wider open internet discussion forums, are also quite severe and IMO heading somewhere very dark in the medium term.
I recently came across a Tom Scott tweet that at first I reacted dismissively towards, then on reflection realized it was kinda the same thing I just mentioned: https://twtext.com/article/1316099118792572929#
I guess the problem there just shifts from where a bad actor can potentially lie - In other posters is one thing, but in the overarching administration of the forum is another place. In wider public areas those moderators/administrators can be flawed people, who attract a lot of flame (legitimately or otherwise), or an uninvolved automated system, which then pushes the issue back down to the posters again. In private smaller communities the lack of sunlight combined with the potential for bad actor (or uninvolved) moderation only intensifies that problem.
I think that should have been obvious as far back as GPT-2. Once people know it's possible to build an AI like that, then they know it's just a question of persistence, resources and time to replicate the result.
Gives me very low confidence that these people understand society at all. And they’re supposed to be the gatekeepers? According to… themselves I’m guessing?
Ah yes a pompous letter where we list all of our impressive formal accomplishments will surely get these rubes to take their hands off of our toys
The only reason they are the gatekeepers, is because they are the ones with all the GPUs. OpenAI seems to be counting on that as their moat vs all the plebs who want to use GPT-3 for "unapproved" purposes, at least.
I think they will try to ban us from the technology, or force us to be registered to use it on their systems under supervision. My guess is that the future will involve fingerprinting these models in some way to determine where they came from and who developed them.
Good luck. That would require a ban on unlicensed general-purpose computation, which, incidentally, has more and more pieces dropping into place thanks to the wonders of DRM.
IMO, for far more legitimate reasons than many attempted cancellations I've seen in the past. The guy was the driver behind many initiatives, but on a personal level I'd no longer want to meet him.
There has been some easy to find coverage of all the recent controversies. This video [0] is a decent starting point, but it's really easy to find relevant information by plugging in a few keywords into any search engine.
Overall, he's not a good fit for a leadership position, given what others have been seeing. I don't care enough to call for him to be cancelled, but I would support his removal from any public speaking position at the FSF. If they can find someone equally hardline on software freedom but more acceptable in interpersonal interactions, he should go immediately.
> What made you feel this way, exactly?
There wasn't any single thing that threw me off (except maybe personal hygiene habits), but the sum of everything that's going on crossed my "this is OK" threshold.
True, these people circumvent the law rather than play by it. But clearly they do their best to wall us off and push us towards the registered, paid model that I mentioned.
Funnily enough, you can have pretty decent discussions on /pol/ about the economy and such. Sure, there will be lots of shit flung at everyone, but as long as you don't take the bait seriously, it's going to be fine. Better than other forums, even, as you can honestly debate without fearing that you are stepping on anyone's toes.
I didn't say you wouldn't ever step on someone's toes. Just that you don't have to care about it. Let the other person rant and rave, and realize you aren't beholden to them.
> I'm not even sure if /pol/ is not already AI-generated nonsense.
I think this was indeed one of the intents, since the author fed GPT-4chan outputs back into /pol/ as an experiment. But publishing the model is another matter.
4chan is light years beyond the point where people just posted "Hitler did nothing wrong" to piss off the righteous.
People acting like hateful idiots for fun eventually brought actual hateful idiots. This brought people who like to spread their ideas to actual hateful idiots.
There's no difference between idiots in churches and idiots on 4chan - both are idiots, both eat up whatever bullcrap is served to them, and both are harmful to society. And neither means that churches or 4chan are bad in themselves.
We must be careful before we start excusing idiots for their idiotic behavior based on "bad influence". Society is a complex mechanism, and censorship inevitably brings unforeseen consequences. The human mind is also complex, and anti-social behavior can sometimes be traced back to early-childhood trauma. Does that mean we should ban free parenting and raise infants in government-approved facilities?
Erroneous argument - shouting "fire" in a crowded theater has a direct, predictable consequence of people acting on it. Posting stuff on the internet does not.
I mostly think of 4chan as an outlet, a place where you can show your worst with limited consequences because no one takes it seriously. Like a violent video game, or playing chaotic evil in D&D.
There are a few psychopaths who take 4chan seriously, just like there are psychopaths who think it is ok to play a D&D character in real life. But should we ban D&D just because of an extreme minority? People back in the day thought so, not so much anymore, because they now understand that it is just a game.
I base my experience on /b/; I don't know much about /pol/. /b/ certainly was hateful, and it was fun. I got bored of it, and in the meantime I don't think it affected me much, and neither did it affect the many people I know who did the same.
There were some really bad things happening on 4chan, like doxing or raids, but a bot won't do that, at least no more than more politically correct bots like GPT-3 would (yep, GPT-3 can dox you!). In fact, it is more likely to disrupt such efforts by drowning the conversation in a flurry of vulgarity.
Are people really still taking the line that 4chan is nothing but edgy teenagers, memes and lulz, and that the straight world just doesn't get the irony? Christ, the copium of 4chan fans is even more potent than Trump supporters.
It's worrisome that some people live in such a comfortable, safe environment that they get away with never having to learn to shrug off such nonsense.
Stop taking 4chan seriously. If you let yourself be enraged by their words, that's exactly what they want, and it means suffering for you. Never, ever show you are vulnerable to this. Do not feed the trolls.
> Stop taking 4chan seriously. If you let yourself be enraged by their words, that's exactly what they want
4chan is not some amorphous entity with no influence on the real world. Just like "You may not be interested in war, but war is interested in you" can be true, so is the case with 4chan.
For a more direct example of what could happen, check out the origins of the phrase "We did it, Reddit!"
That, or you somehow unwittingly trigger the attention of the hive-mind. It doesn't take all that much. And if it happens to you, then they can successfully doxx you based on something as innocuous as a reflection on a photo [0].
What ever happened to personal responsibility? I think it died with "don't post your personal information on the internet", considering the way which young people use social media these days.
Edit: the article you linked has nothing to do with the website being discussed here, or any website really. It's about a Japanese man stalking some sort of celebrity by himself.
> Do you have examples like qanon conspiracy or capitol insurrection that were organized by blue collars at work?
The JFK assassination conspiracy was popular long before the internet existed and before QAnon created a resurgence in that thinking. Most of the conspiracy stuff QAnon spreads is lifted from Mein Kampf.
The world existed before the internet and people are dumb as ever.
If you applied the same standard of guilt by association that the post is invoking to other fora, there would be little left.
A is "linked to" B. A "perpetuates" B. A is "known to influence" B.
These are all weasel words meant to hide the fact that there is no actual consistent objection. Usually there are plenty of other, similar examples to be named, with the salient difference only being whether OP feels the site is on their side and whether they can have influence over it or not.
The objection to 4chan has always been that it is unabashedly tolerant of the right wing, which is why some consider it a high crime to associate with it. When doxing and incitement are in service of left-wing goals instead, such objections evaporate like snow, and even major news outlets and papers happily indulge.
Even the charge of "copium" is shameless projection, and the common response of accusing critics of "whataboutism" is typically meant to deflect that there is no salient difference between two similar situations, and no underlying principle guiding the value judgement being made, other than partisanship masquerading as empathy.
4chan's influence is an order of magnitude less than YouTube, TikTok, Reddit, etc. In Reddit's case in particular, it is easy to observe that major subreddits are ideologically captured and that the site is unrecognizable to those who remember its early days.
By and large, this is allowed and encouraged. Under such hypocrisy, these progressive objections cannot be taken seriously.
Most of the people signing that are building systems that they know will be used to manipulate people into, for example, buying stuff they don't need. Systems that they know will be used to make half-assed decisions about people's lives (even if they're planning to sit around sanctimoniously whining about it).
... but trolling 4Chan, that's a bridge too far. Everybody knows that 4Chan users are there to escape from trolling.
The claims that this model is a "hate speech generator" are unsubstantiated. It encodes knowledge about a wide range of topics, and most of its output is neutral. GPT-3, with an appropriate pre-prompt, becomes orders of magnitude more dangerous than this.
Political correctness is evil. Taking political incorrectness seriously is a sickness that should be treated with all means, including overwhelming ad absurdum. IMHO.
What is political correctness? And what's the opposite of it? I don't have enough information to understand what you're trying to say (plus your grammatical mistakes make it difficult to parse meaning too).
According to Wikipedia, it "is a term used to describe language, policies, or measures that are intended to avoid offense or disadvantage to members of particular groups in society".
However, the problem is that offense is subjective. Anyone can be offended by anything, and by extension, anyone else can claim that someone else is offended by something. Such subjective policies are often used for tyrannical purposes.
There is scarcely anything valuable in 4chan's patterns; the interesting ones come from elsewhere. Since leftism and political correctness are based on ignoring those patterns, thinking them malleable by gigantic bureaucracies and laws, garnished with a lot of wishful thinking and a disregard for historical evidence, hilarity then ensues.
I sometimes think QAnon started as an exercise in creating the most absurd conspiracy theory possible - but then lots of people started believing in it anyway. That's why I'm skeptical of the "overwhelming ad absurdum" approach...
I wonder if GPT-4chan could be used for moderation.
/pol/ is a containment board and taking those topics to other boards on 4chan is generally not well received. Could you use GPT-4chan to rate how "/pol/-like" a reply is and automatically flag it for moderation if it passes a threshold?
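A minimal sketch of how that could look: compare a reply's per-token loss under the /pol/-tuned model against a generic baseline, and flag it when the gap passes a cutoff. The local model path and the threshold are assumptions on my part (the HuggingFace upload was taken down, so you'd need a saved copy):

    # Sketch: flag "/pol/-like" replies by comparing mean per-token loss
    # under a /pol/-tuned causal LM vs. a generic baseline.
    # Paths and threshold are hypothetical.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def mean_nll(model, tok, text):
        """Mean per-token negative log-likelihood of text under model."""
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids, labels=ids).loss.item()

    # Hypothetical local copy of the fine-tuned model; gpt2 as baseline.
    pol_model = AutoModelForCausalLM.from_pretrained("./gpt-4chan-local")
    pol_tok = AutoTokenizer.from_pretrained("./gpt-4chan-local")
    base_model = AutoModelForCausalLM.from_pretrained("gpt2")
    base_tok = AutoTokenizer.from_pretrained("gpt2")

    def pol_likeness(reply):
        # Higher = the /pol/-tuned model finds the reply far more
        # "natural" than the generic model does.
        return (mean_nll(base_model, base_tok, reply)
                - mean_nll(pol_model, pol_tok, reply))

    THRESHOLD = 0.5  # arbitrary; would need tuning on labelled replies

    def flag_for_moderation(reply):
        return pol_likeness(reply) > THRESHOLD

In practice a small classifier fine-tuned directly on /pol/ vs. non-/pol/ posts would be the cleaner version of the same idea, but the point stands: the model encodes the board's style either way.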
I understand the idea and I generally agree with some stereotypes.
I was joking because I am amazed by those hypocritical people who have their heads so far down a certain place that they don't realize they violate all their righteous principles when it doesn't fit their narrative.
I bet there are multiple GPT-type AI accounts posting to Hacker News right now, with creators who hope the eventual "reveal" will be "impressive." Curious how they are reacting to this.
It is important to note that AI research is mostly ethically neutral. It only becomes an ethical issue once someone actually decides to deploy it in real life. At that point, the ethical burden is on the person who made the decision. Engineers who worked on a self-driving car that killed someone did their best; the responsibility is on the person who decided "it's good enough, let's roll it out on the streets". Well, it wasn't good enough, but you decided to do it anyway, and now the blame is put on the engineers.
Either the engineers goddamned well knew that would happen, in which case they share the blame, or they are naive idiots, in which case they shouldn't be working on life-critical projects... and therefore share the blame. The same applies to any other technology.
That AI is much less harmful than the posters who created the original posts.
There is no anonymity if you connect to 4chan using a Silicon Valley designed processor.
The "facts" that wannabe shooters are fed there are highly tailored to what they are predisposed to believe already, because the ones posting have complete surveillance of everyone (including of you who reads this - you can thank Eric Schmidt) and know exactly what to post to create a shooter.
Silicon Valley has blood on their hands. 4chan is just one of the places used for these operations. Taking it down doesn't matter, because as long as Silicon Valley continues to spy on everybody and give the data to terrorists, innocent people will continue to be murdered.
What a truly useless statement, detached from reality. Furthermore, this may be a good indication of intent to keep the most sophisticated models away from the public, as is really already the case, behind "ethics boards" and the like that only our friendly psychopathic corporations such as "Open" "AI" can afford to have, while still objectively being unethical in everything they do, from misusing user data in the most malicious ways possible to lobbying regulators; the list goes on forever.
The knowledge to develop these models, rudimentary as they may be, is already out there, and so is the ability to scrape online content to train on. They are most certainly already being used for malicious purposes beyond spamming some website for fun. What are they going to do, condemn criminals with a strongly worded blog post too?
This letter doesn't include any links to the model or the video in question. They are not making it easy for people to judge for themselves.
This is just virtue signaling and scoring points for the powers that be.
It is very trivially easy to download and fine-tune a large language model to virtually any dataset and generate novel content.
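For a sense of how low the bar is, here is a rough sketch using off-the-shelf HuggingFace tooling. To be clear, the dataset file, model choice, and hyperparameters below are placeholders of mine, not anyone's actual setup:

    # Sketch: fine-tune a small causal LM on an arbitrary pile of scraped
    # text, then sample novel content in its style. The file name and
    # settings here are hypothetical.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tok = AutoTokenizer.from_pretrained("gpt2")
    tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Any scraped corpus works; assume one post per line in this file.
    raw = load_dataset("text", data_files={"train": "scraped_posts.txt"})
    train = raw["train"].map(
        lambda b: tok(b["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=train,
        # mlm=False => plain causal language modeling objective
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()

    # Generate something in the style of the training data.
    inputs = tok("The thing nobody tells you is", return_tensors="pt")
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    out = model.generate(**inputs, max_new_tokens=60,
                         do_sample=True, top_p=0.9)
    print(tok.decode(out[0], skip_special_tokens=True))

That is essentially the whole pipeline; for small models, a free Colab GPU and an afternoon covers it, which is exactly why a strongly worded letter can't put this back in the box.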
The fact that he does it openly and publicly makes him an easy target. It's always the small guys who are easy to cancel.
I do not understand the actual harm of this. Someone will use this bot to spam other forums or social networks? Those have their own checks and balances. People who post bad comments or troll around will not even know about this. They will not even bother; they have enough time sitting in their mums' basements. Why use a language model? They simply won't know how.
It is just a demonstration. If the people with the same set of values were in the cybersec industry, they would condemn white-hat penetration testing as something unethical.
This just shows that this can be done.
Enough governments and political parties around the world take help from generative models and bot-farms to sway public opinion. If the party is in power, companies turn a blind eye.
Who knows how many of them use open technologies for bad things. I saw something called DeepNude AI that, for a fee, lets you generate naked bodies of women, consistent enough with their faces and bodies to look very real.
Those people should be arrested and condemned. Lives can be ruined with this tech. But these people don't know them and don't care.
The Turkish government has made AI-powered drones that kill without any human in the loop. These half-pants-wearing (literally, as you will see at any AI conference) academics can't do anything about them.
Cancel culture runs high in AI academia. Timnit Gebru got fired for legitimate reasons, and these "inclusion" advocates made it a race issue.
They just want to cancel this guy because he has a track record of not conforming to the cancel-lists they make. Like when he hosted Pedro Domingos once, and people literally commented that they won't watch his videos again and condemned him.
This is one of the follies of being a "hot field": it becomes everyone's playground for implementing their agendas. Now that wokism is the dominant Hegelian ideology, everyone must conform to it.
If you want to debate in good faith, I am here. Reply to this comment.
Small selection of people who condemn this "unethical use of AI".
- Amazon Alexa (internet connected microphone in your house to sell you things)
- Google (collect your data and sell targeted ads to the highest bidder)
- Facebook (run experiments to make people more engaged/outraged on the website)
- HrFlow (automate biased hiring with the power of AI)
- Microsoft (collect data from users, launder a ton of GPL and MIT code without adhering to their licenses, with the power of AI)