
Yeah, I've been interested in the long-term effects of this kind of thing for a while and AI posts have been part of the landscape for a long time already. Occasionally you see posts where the image tells you to respond in a particular way to prove you're not an AI (the theory being the AI is taking the text post as an input but not parsing the attached image for text). Sure enough, a portion of replies respond only to the text. Not a concrete test, but interesting nonetheless.

All of this feels similar to the late nineties and early 2000s, when viruses and spyware preyed on ill-prepared home computers and their users. Tech & culture eventually got a lid on things (mostly), but not without a great loss of innocence and a general reduction in 'openness'.

Online discussion is probably going through a comparable phase, and will suffer a comparable loss of innocence/utility as a result.

But a more human-like AI chatbot being thrown into the mix is not some game-changing event, just a logical progression of what's been true for a long time already. Perhaps if nothing else it will increase awareness that this kind of thing is basically everywhere online already, and erode the increasingly inaccurate belief that what we read from 'other people' online is genuine. A loss of innocence, sure, but it's fairly obvious that innocence is being hugely exploited already and has been for some time now.

It's perhaps going to be a net-neutral thing - while more AI posting and responding will result in some real people wasting time conversing with them and reading their unreal opinions, they're also going to suck in other AI and non-genuine (shill etc.) posters and waste their time/posts as well. Where this gets dicey is that posting and responding are not the only two sides of the interaction - you also have real people simply reading and not responding. This is where the main outcomes are generated - what they read and how it influences them onwards is really the battleground being fought over here, and where the wider dangers lie.

Perhaps everything ultimately descends into a SubRedditSimulator type state and real people gradually withdraw from the whole thing as it becomes useless to them.



I really hope you are wrong about the future of online discussion, because that's not the world I want to see come to pass.


Me too, but then I miss a great deal about the earlier internet environment as well, just before all the spam and viruses meant you were no longer going to use that webmail host with the cute domain name, join that chat room, run that executable that a friend sent you. It's not that we lost trust in things, it's that we originally didn't factor trust into our dealings with them because it wasn't a big issue. The same thing seems to be now creeping into online discussion, so I have to at least guess it'll go the same boring way in general.

If someone comes up with a magical way to easily attest that a post has been made by a genuine human, while also not burdening that human with the asymmetric backlash potential of the internet mob, perhaps it plays out differently.


I think going in the direction of low trust and strict identity verification is a mistake (although, given the trajectory taken by social media thus far, it's likely the path we'll continue on). IMO a much better bet is to shut down or discourage participation in large, "flat open space" social media a la Twitter or Facebook, in favour of smaller, closed pseudonymous communities (like you might find on Discord).

Moderating a community that's grown beyond a certain size requires draconian and unexplainable automated systems, because the alternative is finding enough paid moderators to hammer it into something resembling a coherent discourse. Moderating a community of a few dozen, or even a few hundred people, can work with a much smaller number of moderators, and it's likely that those moderators could even do it for free. You could even have such communities be tightly knit without the need for identity verification, because any sockpuppet accounts would first have to prove themselves worthy of inclusion, or risk being banned.


I certainly agree it's not an ideal option, if one exists at all. What you propose is a different path that could be taken, though I'm not sure the outcome would be better. Just different.

I have left many Discord servers because I could see a repeated pattern playing out in each - immature kids posting edgy content sliding rapidly into sincerely held extreme beliefs, polarizing members into either leaving or buying further into it. Those smaller non-public communities are breeding grounds for extremity, and the problems there, while quite different to the problems faced in wider open internet discussion forums, are also quite severe and IMO heading somewhere very dark in the medium term.

I recently came across a Tom Scott tweet that at first I reacted dismissively towards, then on reflection realized it was kinda the same thing I just mentioned: https://twtext.com/article/1316099118792572929#

I guess the problem there just shifts where a bad actor can potentially sit - among the other posters is one thing, but within the overarching administration of the forum is another. In wider public areas those moderators/administrators are either flawed people, who attract a lot of flame (legitimately or otherwise), or an uninvolved automated system, which pushes the issue back down to the posters again. In smaller private communities the lack of sunlight, combined with the potential for bad-actor (or uninvolved) moderation, only intensifies that problem.



