
It also says "we, the AI community, currently lack community norms around their responsible development and deployment". So they don't have a test, but if they had one then GPT-4chan would fail it.

I prefer Dr. Oakden-Rayner's "this would never pass a human research ethics board" as an argument. But at a time when anyone can be an AI researcher on a hobby budget, without review by an ethics board, maybe a "test of reasonableness" or guideline of some sort would be useful to have.



Then let's play fair and apply that same test to products made by Amazon, Microsoft, Google, Facebook, Twitter, TikTok, etc.

These companies all have products that involve what are essentially forms of "AI" (I include social media platforms like Twitter as "AI" not for their algorithms but because they have linked human brains into some sort of super-brain; take it or leave it), and they were unleashed on an unwitting society with untold, and oftentimes disastrous, consequences.


> But at a time when anyone can be an AI researcher on a hobby budget, without review by an ethics board, maybe a "test of reasonableness" or guideline of some sort would be useful to have.

Common sense would be enough: don't involve non-consenting others in your research. Facebook discovered this with the backlash it got in 2014 (!) for its emotion-manipulation A/B test on users [1]; it's not like the question of AI/algorithms being used in malicious ways is something new. That said, I'd love to see a blanket ban on A/B testing in general: users are not guinea pigs, and A/B testing can amount to gaslighting.
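
For what it's worth, part of what makes users unwitting subjects is that assignment to an experiment arm is typically deterministic and completely invisible to them. A minimal sketch of the usual bucketing scheme (hypothetical function and experiment names, not any particular platform's API):

    import hashlib

    def assign_variant(user_id: str, experiment: str,
                       arms=("control", "treatment")) -> str:
        # Hash the user id together with the experiment name so the
        # assignment is stable per experiment but uncorrelated across
        # experiments. The user is never asked.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return arms[int(digest, 16) % len(arms)]

    print(assign_variant("user-12345", "feed-ranking-v2"))  # e.g. "treatment"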

Additionally, the whole world has been debating the influence of racist and other discriminatory language on the Internet since at least 2016 and the election of the 45th US president.

By 2022, it should be very clear where the borders of civilization are, and I believe that YouTuber should have known that.

Side note: we definitely need a societal discussion about YouTube and TikTok. The amount of extremely vile or dangerous stunts people pull for likes/clicks/monetization on these platforms at the expense of others is immense.

[1] https://www.theguardian.com/technology/2014/jul/02/facebook-...


Not OK to use non-consenting human subjects for research: OK, I'm on board with that.

But after you've done all your oh-so-ethical research, why is it then OK to intentionally build products that damage non-consenting humans? It's not OK to emotionally manipulate people to write a paper... but somehow it's just fine to emotionally manipulate them to sell useless crap...


I 100% hold that A/B testing is a form of gaslighting. It should be illegal.


> maybe a "test of reasonableness" or guideline of some sort would be useful to have.

We don't have a test because no such test could be reasonable. Who gets to decide what is reasonable for AI and what's not? Is it the tech giants that use it to spy on us? Or the government that uses it to spy on us?

The only test being proposed here is an appeal to the authority of tech giants and governments that have a history of doing worse.


The problem with those kinds of arguments (the "I know it when I see it" kind) is that anyone can stretch them any way they want: since there are no objective criteria, nobody can prove them wrong.



