> I don’t think the “AI community”—people with access to lots of GPUs—should also get to be the thought police.
I understand how it could be perceived that way. However, I don’t think it is the intent.
If nuclear researchers had developed a laissez-faire attitude and routinely dumped the radioactive material they were testing into aquifers that people drink from, eventually the whole nuclear field would come to be perceived as toxic and lethal. Since researchers don’t want that perception to develop, they keep each other in check and set safety standards.
This is researchers keeping each other in check to avoid another AI winter.
Look at it this way: Yannic seemingly had a low enough opinion of 4chan that he felt it was OK to dump one bot-generated message a minute onto it for a week. Someone out there has a similarly low opinion of HN; if there were no consensus among researchers that doing this is bad, they could likewise unleash bots on HN that pass any CAPTCHA, sound like regular commenters, and yet have an agenda to prop up various companies or scams.
> This is researchers keeping each other in check to avoid another AI winter.
This would be fair if it weren't signed by Facebook, which has a history of breaking these safety standards and then trying to whitewash the issue. Now they are throwing an individual under the bus for something they have done themselves in the past, at a larger scale, and for pay!
Both have the potential to cause vast damage. Nuclear tech gone wrong will poison or kill people outright, whereas AI tech gone wrong can instead cause a Shiri's Scissor[0] scenario that collapses a society.