As a permaculture fan, I say well-kept gardens die by being too homogeneous and rigidly ordered rather than alive, organic, and vital. Layer plants in space and time and select them appropriately, and you can leave them unattended for months on end.
Well-layered gardens (like food forests but also my own flower gardens which never get weeded) also better support their members as each plant fills functions which help the other plants.
We need well-layered on-line communities, ones where each layer supports all the others in an interdependent web. In such a community, trolls simply don't show up. There is no room for them. As the permaculturists say, all of life's problems can be solved in a garden.
Lest you doubt my bit about troll-free communities, six years in, I have yet to have to take action against trolls in the LedgerSMB community. I have also never seen trolls on the PostgreSQL email lists.
I'm confused - tightly focused groups with high entry requirements may be able to get away with less moderation, but how would you create something more general-purpose, like HN?
I think the question isn't general-purpose but "what is the purpose?"
If the purpose is collective economic advancement through developing awesome software, all you have to do is support new members of the community in getting going.
If the purpose is having fun, spirited debates, then this becomes harder. However the same rules apply. These are:
1) Stratify the community so not everyone has an equal voice in everything.
2) Each stratum has an obligation to help every other stratum stay civil and to grow the community and its norms together.
A food forest is productive when all these layers are providing support to all the other layers.
Similarly, an open source community will be productive when the core committee is serving the community by providing infrastructure and governance, the committers are writing code that benefits everyone else, and so on. The goal is shared economic support and interdependence between the layers, and open source communities thrive when this works well.
So what does this mean for intellectual entertainment sites like Slashdot or Reddit, or to a lesser extent HN?
I guess the first thing you have to do is have a group of people (pg and others here) who decide what the goals of the site are (they are the canopy), but the next step is to ensure that there are different tiers with overlapping responsibilities to the community as a whole. HN, for example, seems to have officially a three-tier system but unofficially at least a four-tier system, as far as I have seen. Maybe there are five or more tiers, if we count YC hopefuls, those who founded firms funded by YC, etc.
I actually think this sort of stratification can be a good thing, even though it cuts against modern notions of perfect democracy or egalitarianism. The fact that there are more strata here is also why I find this a more supportive community than a place like Slashdot, with fewer strata.
On FidoNet the problems were largely solved, and they remained solved for a long time. It worked like this:
1. Each forum has a list of rules. Not reading rules -> penalty. Breaking the rules -> penalty.
2. Each forum has a FAQ. Asking a question from the FAQ -> penalty, unless you can show you read the FAQ and there is still something to be discussed.
3. Each forum has a regularly updated off-topic list. Bringing up a topic from this list -> penalty. The moderator keeps the list up to date, adding and removing items as needed to maintain calm.
4. Arguing with the moderator or co-moderators in public -> penalty. All arguments should go into private messages.
5. Telling others what they can and cannot say (usurping the moderator's power) -> penalty. All complaints should go into private mail.
6. Moderator is usually elected (if public forum) or self-appointed (if he owns the forum). The former can be impeached. The latter can be ignored by creating an alternative forum.
7. The moderator appoints co-moderators. If co-moderators misbehave, the moderator alone can overrule them.
8. Penalties accumulate, yielding bannination.
The system worked well because bannination was real - most people on FidoNet had real identities. There's no reason one couldn't do the same with Facebook logins now.
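Rule 8 is the mechanical heart of the scheme: infractions accumulate per identity until a threshold tips into a ban. Here is a minimal sketch of that bookkeeping; the three-strike threshold is my own illustrative assumption, not an actual FidoNet policy.

```python
# A minimal sketch of the penalty bookkeeping described above; the
# three-strike threshold is an illustrative assumption.
from collections import defaultdict

BAN_THRESHOLD = 3  # hypothetical number of penalties before "bannination"

penalties = defaultdict(int)  # real-world identity -> accumulated penalties

def penalize(user: str, reason: str) -> bool:
    """Record a penalty and report whether the user is now banned."""
    penalties[user] += 1
    print(f"{user} penalized ({reason}): {penalties[user]} total")
    return penalties[user] >= BAN_THRESHOLD

# e.g. rule 2: asking a question already answered in the FAQ
if penalize("J. Random User", "question answered in FAQ"):
    print("J. Random User is banned")
```

Because the count is keyed to a real identity, escaping a ban means losing your standing everywhere, which is what gave the penalties teeth.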
FidoNet was a network governed by a group of petty, power-hungry and insecure moderators posing as lords of the manor. It was really, really sad. It worked well in the sense that a prison colony could work well. It's a good thing that the whole thing has pretty much died and been replaced by the much bigger and more diverse Web.
In my corner of FidoNet the moderators were harsh but just. I find that people generally appreciate tyranny when it leads to excellent results where it matters, in this case quality discussion on various topics.
Downvoting is just as much a problem with fools. A good downvote is when a post/comment is abusive or pointless. A bad downvote is when the voter disagrees with the opinion expressed. Mention Microsoft on Slashdot and you'll get downvoted, period. Mention Visual Basic on any tech forum and you'll get downvotes regardless of the point you're trying to make.
Eventually you learn to stop trying to present alternative points of view and just go with the flow or leave because the "fools" are now in charge of the downvoting.
"I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness.
"It only becomes abuse when people resort to karma bombing: downvoting a lot of comments by one user without reading them in order to subtract maximum karma. Fortunately we now have several levels of software to protect against that."
My own analysis of the issue is that if I receive a downvote, I'm curious about why I received it. I understand that not everyone shares all of my opinions, but perhaps if one of my comments is downvoted, I can learn something about how to express my opinions so that they are allowed to stand, without people feeling they have to go out of their way to vote them down. Any time I have a comment with negative karma (which is a rare occurrence), I look back on what I said, and think about what I can learn from the other comments in the same thread and how I might express myself better the next time. I try to learn from the people who disagree with me as well as from the people who agree with me.
Thank you for quoting the comment more fully. I've seen it paraphrased too succinctly as "pg says downvotes can mean disagree."
Although pg hasn't implemented a change to this effect, I believe he would agree that -- "people feeling they have to go out of their way to vote them down" -- implies a downvote should be more work than an upvote.
I think a downvote should require a response to the parent comment that:
1. contains at least 15 English words, excluding quoted phrases
2. quotes at least min(25% of entire comment, 5 consecutive words) of the parent comment
3. meets current downvote requirements (karma and downvote bombing protections spring to mind immediately)
For the sake of argument, such a "required downvote comment" could contribute almost nothing -- but it would still reduce downvotes. An example of a really bad downvote comment on this comment might be:
> a change to this effect
This is HN, not reddit. Um, did you read the article? I down voted you.
The 15-English-word requirement itself admittedly calls for some explanation. Does anyone think this would help?
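To make the proposal concrete, here is a rough sketch of such a gate. The word counting and quote matching are naive approximations (I treat ">"-prefixed lines as the "quoted phrases" that rule 1 excludes), and the karma threshold is a placeholder, not HN's actual requirement.

```python
# A rough sketch of the proposed "required downvote comment" gate.

def quotes_enough(reply: str, parent: str) -> bool:
    """Rule 2: reply must quote min(25% of the parent, 5 consecutive words)."""
    parent_words = parent.split()
    need = min(max(len(parent_words) // 4, 1), 5)
    return any(" ".join(parent_words[i:i + need]) in reply
               for i in range(len(parent_words) - need + 1))

def valid_downvote_comment(reply: str, parent: str, karma: int) -> bool:
    # Rule 1: at least 15 words of one's own, excluding ">"-quoted lines.
    own = " ".join(line for line in reply.splitlines()
                   if not line.lstrip().startswith(">"))
    if len(own.split()) < 15:
        return False
    # Rule 2: must quote part of the parent comment.
    if not quotes_enough(reply, parent):
        return False
    # Rule 3: existing requirements (karma threshold is a placeholder).
    return karma >= 500
```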
I think this is an actively bad idea. If I had to reply to a comment, explaining why the comment is bad, then the most likely scenario is that the writer replies, arguing with me, and I reply, and now the thread is full of a bunch of bullshit that nobody wants to read. Multiply this by ten per post.
If the downvote reason was only visible to the writer, then I could go for that. But it's ridiculous to make a rule that drives people to have huge off-topic flamewars in the middle of a comments section.
Make the downvote reason only visible to the writer, and do not allow the writer to reply to the downvote reason - the system already heavily favors upvotes over downvotes, so that writers can just shrug off the downvotes.
The main thrust of my suggestion is to make downvoting even harder.
Don't feed the trolls! If a comment is bad enough for me to want to downvote it, I certainly don't want to engage the author in conversation. In fact, by downvoting I hope to push the comment further down to let the better comments float up, and to make it less likely it will start a stupid, pointless argument thread; replying would usually do the opposite.
Also, I think that, in general, quoting in comments tends to lead to lower-quality comments. It's a lot easier to do a point-by-point nitpick than to write a proper response to the ideas contained in the parent post.
I think that's going in the right direction, but arbitrary rules on replies will always fail for some situation.
For instance if someone wrongly states "There are five lights." my reply would probably only contain the correction "There are FOUR lights!"; anything beyond that would be artificial inflation.
Quoting the parent article is a bad idea. It promotes arguing trivial points of the article ad infinitum. See USENET circa 1995 for a discussion style that should be avoided. (Now called "Fisking.")
You're conflating three separate concepts. There's nothing inherently wrong with quoting the part you're reacting to, and it can be done without resorting to flaming or Fisking.
Essentially, posts are ranked on two dimensions: agreement, the degree to which you agree with the post's arguments and conclusions -- and quality, the degree to which the post conforms to certain expectations of a well-mannered, good-faith discussion.
Any ranking or moderation involves a degree of subjective judgement: you have a post and you have a dimension along which to rank it (according to certain criteria); where do you put it? This is true even if the criteria are completely objective. E.g. if you asked people to moderate posts on spelling or grammar, which are fairly objective, the moderation would still introduce a subjective element (some people won't recognize a typo, some will think a couple of grammar mistakes in a long post shouldn't lead to a downvote, others will disagree).
Post quality has fairly objective criteria, but they are never really spelled out. Nevertheless, it's usually fairly easy to recognize trolls and flamebaits and overall bad quality posts, independent of your own position or even the existence of a position in a certain context.
Agreement introduces another level of indirection, in a manner of speaking: not only does the act of assessing the criteria involve a subjective effort, the criteria themselves are completely subjective, i.e. they depend entirely on the moderator's own opinion in the context.
Why elaborate on this? Well, in the first place I think it's intellectually interesting how the two dimensions differ in a fairly fundamental way. But there are practical consequences to this, as well.
Simply adding up up- and downvotes means we can't distinguish between a post that has received no moderation and one that has received a large amount of moderation that cancelled itself out. This seems all right for "quality" moderation: when good-faith moderators can't decide where to put a comment on this scale, it's probably not clear where applying the objective-but-not-spelled-out criteria leads, and it's probably a post that's neither particularly trolly nor insightful -- the system works! But when good-faith moderators hugely conflict on an "agreement" moderation, the moderators themselves are in conflict, and the post articulates this in a way that makes them agree or disagree, e.g. by identifying the central contested issue. And despite being a +300 & -300 post, it sits there at +/- 0: I don't think the system works very well here.
I'm not sure why we're trying to moderate both dimensions on a single scale on HN. I guess it's an elegant system, because it compresses so much information and complex judgements down to just a single up- or a downvote per person. On a large scale, this should yield good results, right? Well, I don't buy it. In effect, every post is a mini-poll mixed with a quality moderation -- and this is what we use to, essentially, delete comments. Ideally, we'd have two moderation options: a post quality moderation that's summed up, used to hide trolls and flamebaits and not otherwise shown, and an agreement moderation that's not summed up but displayed for everyone to see. "Agreement" votes wouldn't affect karma, as people should neither be awarded for having a popular opinion nor be punished for voicing an unpopular one.
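As a thought experiment, the two-dimensional scheme might look something like the sketch below. The class and field names and the hide threshold are my own illustrative assumptions, not a description of any real forum's internals.

```python
# A minimal sketch of the two-dimension moderation scheme proposed above.
from dataclasses import dataclass

@dataclass
class Post:
    quality: int = 0    # summed; used only to hide trolls and flamebait
    agree: int = 0      # displayed as separate tallies...
    disagree: int = 0   # ...never summed, never counted toward karma

    def vote_quality(self, up: bool) -> None:
        self.quality += 1 if up else -1

    def vote_agreement(self, agrees: bool) -> None:
        if agrees:
            self.agree += 1
        else:
            self.disagree += 1

    @property
    def hidden(self) -> bool:
        return self.quality <= -4  # hypothetical hide threshold

# A contested but high-quality post stays visible, with the controversy
# shown as "300 agree / 300 disagree" rather than a misleading net zero.
p = Post(quality=5, agree=300, disagree=300)
print(p.hidden, f"{p.agree} agree / {p.disagree} disagree")
```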
> Ideally, we'd have two moderation options: a post quality moderation that's summed up, used to hide trolls and flamebaits and not otherwise shown, and an agreement moderation that's not summed up but displayed for everyone to see.
I upvoted you because I'm in agreement that this is how it should be in an ideal world. I just don't think it is helpful in practice.
The problem here is that people tend to either pick "Agree/High Quality" or "Disagree/Low Quality". These two axes are not orthogonal within the voter's mind.
Given a high correlation, the extra axis only increases complexity without much benefit.
Suppose you had two axes on which to vote (quality/agreement) but you only got one vote? If people were forced to pick which axis they speak to, maybe it could allow for more discussion on otherwise controversial topics. I'm imagining a scenario where registering your agreement/disagreement is easy, à la reddit, but saying something about the quality of a post is harder, perhaps requiring some threshold of standing in the community, à la Hacker News.
For me, a comment like this one: http://news.ycombinator.com/item?id=4034170 exemplifies some of the deterioration. The commenter asks for substantiation of a claim that refers back to the original article that was submitted. If he'd read the article, he'd know that.
Now, it's easy to argue that this is a one-time example, and alone that's true. But I'm noticing these kinds of comments more and more—ones in which people comment without knowing anything, or without reading the original article, or even without closely reading the person who they're replying to.
No one has found a good solution to this problem. As an open community grows, it will inevitably experience an Eternal September and content quality will decrease. It happened with Usenet, Slashdot, and Reddit, and Hacker News is following in their footsteps. The upper echelon will migrate; others will follow.
Perhaps closed communities with a curated member list are the answer[1][2]. Closed communities are not without their own drawbacks, but the trade-off may be acceptable to some.
Open communities allow for anyone's input, but you also get drama like arguments over JavaScript semicolons or GitHub commit messages. Closed communities, however, are susceptible to groupthink. How do you judge a minority opinion's validity?
HN is probably decaying more slowly than the others, for the same reason that PostgreSQL seems to be perpetually getting better....
I think the decay you note is specific to communities whose members don't come in with any sort of common self-interest at stake. In essence they are what I might call "intellectual entertainment" communities. People read and argue on Slashdot primarily because it's fun. People argue on the PostgreSQL email lists because they all want PostgreSQL to be as good as it can be. In the former, a troll can stir up a lot of argument; in the latter, the most that can be hoped for is a collegial discussion on the merits of a specific approach.
HN leans towards the former; I am betting most commenters are mainly interested in fun conversations. However, it is also an integral part of a VC ecosystem and so presents an aspect of common economic interest as well.
I think you're hitting on an important point by discussing different types of conversations. My theory (based on many years of Usenet) is that there are three basic types of online participants: "cocktail party", "scientific conference", and "debate team". In "cocktail party", the participants are having an entertaining conversation and sharing anecdotes. In "scientific conference", the participants are trying to increase knowledge and solve problems. In "debate team", the participants are trying to prove their point is right.
HN was originally largely in the "scientific conference" mode, with very smart people discussing areas in which they were experts. Now HN has much more "cocktail party" flavor, with smart people chatting about random things they often know little about. And certain subjects (e.g. economics, Apple, sexism, piracy) bring out the "debate team" commenters.
Any of the three types can carry on happily by itself. However, much of the problem comes when the types of conversation mix. The "cocktail party" conversations will annoy the "scientific conference" readers, since half of what they say is wrong. Conversely, the "scientific conference" commenters come across as pedantic when they interrupt a fun conversation with facts or "citation needed". A conversation between "debate team" and either of the other groups obviously goes nowhere.
I think comment karma on HN encourages "cocktail party" conversation, since you're as likely to get upvoted for trivial chitchat as for a carefully reasoned expert statement. (See http://news.ycombinator.com/bestcomments) Also, as I just found out, "expired link" on HN encourages quick comments rather than slowly written ones.
(And yes, I realize the irony that this is a "cocktail party" style comment with my random opinion. I've actually considered ways of making this categorization quantitative, but it would take way too much time.)
> A good downvote is when a post/comment is abusive or pointless. A bad downvote is when the voter disagrees with the opinion expressed.
A downvote is also justified when a comment is misleading, contains a fallacious argument, or misrepresents the contents of other comments or the article. Sometimes even when it's just incorrect -- unless the particular misconception is widespread or difficult to spot, in which case it's worth leaving up for correction.
Basically, anything that could be removed without loss of value should be downvoted.
My only disagreement with this comes in when some kind of censorship is enacted once you pass a certain number of downvotes. And no, the fact that you can disable that censorship doesn't ameliorate the issue - the majority will still be unable to see the comment. Reddit is particularly bad at this: once you hit -2, your comment might as well never have been posted. Any idiot with a bot and a grudge can make anything you do invisible.
FWIW I like the way HN has chosen to handle this. You can't even downvote until you go past a certain karma threshold (eliminates sock puppetry and calling for backup), and even if you are down voted into oblivion, your words are still visible (if a little harder to see).
Now if they'd just lighten up on the hell bans. I've seen a number of valid comments here which were dead on arrival.
With luck many silent but wise people will downvote the one or two noisy fools.
There's a strong temptation to interact, even with someone you suspect is a fool. Resist that temptation, and carefully use a downvote.
There's a great article called "Why We Banned Legos" which covers some of the weird problems that online communities get into. (A small group of children gains control of the Legos and decides who can have which pieces, etc.)
I don't agree. First, the people complaining about downvotes often really did write a comment worth downvoting (I didn't look at your history). Second, HN is still very valuable if the top comments are all good - it's much less necessary that all good comments be at the top. And despite what people say, the key source of groupthink on HN is not the voting system but rather the homogeneity of the user base.
I think upvotes and downvotes should be harder to do.
An example of what would make a vote "harder" would be to require reason-words. When you click the down-arrow, a box might pop up where you must select (or better still, type out in full) at least two words justifying the vote, such as "abusive" or "pointless". This would also help prevent the fat-fingered downvotes I've accidentally made on my iPad, where the arrows are too close together and votes are irreversible.
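As a toy illustration, the gate could be as simple as the sketch below. The accepted word list is my own assumption, and a real site would use a popup rather than input(), but the principle is the same: no typed justification, no vote.

```python
# A toy version of the reason-words gate described above.
ACCEPTED_REASONS = {"abusive", "pointless", "incorrect", "off-topic"}  # assumed list

def confirm_downvote() -> bool:
    """Require two recognized reason words before the vote registers."""
    typed = input("Type at least two words justifying this downvote: ")
    reasons = [w for w in typed.lower().split() if w in ACCEPTED_REASONS]
    return len(reasons) >= 2  # a fat-fingered tap never reaches this point
```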
I don't think making it mandatory is necessarily a good idea. But I would love for people to be able to give me a short reason for a downvote, so that I can hopefully see what I'm doing wrong. Anonymously, of course.
One issue I could see happening would be people abusing the feature to send hateful messages. Possible solutions might be to allow users to delete such messages, or to let them block text feedback on down- and upvotes entirely.
Another problem might be a sort of hugfest where users try to get upvotes by upvoting someone else, creating an upvote ring (e.g. including their own name in the feedback and asking for upvotes). One countermeasure would be to block users from putting their unabridged, unedited name into the feedback box. (Of course, this could be circumvented the way almost all other online censor systems are circumvented: obfuscation.)
Yes, an open text channel could be abused and it is only unlikely to be abused if it's public; e.g. everything reads "makecheck said X" and not just "X".
Maybe the right thing is for downvotes to always list voter IDs (but probably not long upvote lists). If I downvote something for "the right reasons" I don't really care if anyone knows it because I expect most people will agree with me. The opposite would be true for someone doing a lot of petty downvoting; he or she would probably be embarrassed and discouraged once everyone can see that the same person has downvoted a bunch of comments for no reason.
One way to find a "reason" for a personal downvote, then, would be to look at everything else your downvoters have downvoted. One of two patterns would probably form: either these people are in the habit of downvoting lots of stuff for no reason, or they seem to be downvoting comments that have common themes.
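In code, that lookup might amount to something like the following, assuming voter IDs on downvotes are visible as proposed above; the data layout is hypothetical.

```python
# A sketch of profiling one's downvoters, under the assumption that
# downvotes carry visible voter IDs. The (voter, comment_id) layout
# is hypothetical.
from collections import Counter

def profile_downvoters(my_comment: str, downvotes: list[tuple[str, str]]) -> Counter:
    """Count how many downvotes each of my downvoters has cast site-wide.

    Large counts suggest habitual, indiscriminate downvoting; small
    counts suggest the votes target something specific in my comment.
    """
    my_downvoters = {voter for voter, comment in downvotes if comment == my_comment}
    return Counter(voter for voter, _ in downvotes if voter in my_downvoters)
```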
Along similar lines, I wondered whether it would be wise to make upvotes have a steadily decreasing effect (say, the first upvote is worth 5 points, then 2.5, then 1.25) and downvotes a steadily increasing effect (0.375, 0.75, 1.25...).
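A quick sketch of those weights: the upvote series halves cleanly, while the downvote series quoted above (0.375, 0.75, 1.25...) isn't quite geometric, so a simple doubling progression is assumed here.

```python
# A sketch of decaying upvote weights and growing downvote weights.

def upvote_weight(n: int) -> float:
    """Weight of a comment's n-th upvote (n = 0, 1, 2, ...): 5, 2.5, 1.25, ..."""
    return 5.0 / 2 ** n

def downvote_weight(n: int) -> float:
    """Weight of a comment's n-th downvote: 0.375, 0.75, 1.5, ... (assumed doubling)."""
    return 0.375 * 2 ** n

print(sum(upvote_weight(i) for i in range(50)))  # approaches 10.0
```

One notable consequence of the decay: the upvote series is bounded (even unlimited upvotes add less than 10 points), while pile-on downvotes keep growing in weight, so mobs in the two directions would behave very differently.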
In my opinion, HN has the most advanced online community control system I know of.
For example, I cannot downvote. That's great. I haven't contributed enough to have that privilege yet; I'm still adapting to community standards. I don't want my mistakes to weaken the community standards, the same standards that made me want to join. If the core members don't think I'm contributing, I agree that they should have the power to limit my influence. Some control on one website is not censorship. It WAS MY CHOICE to join this community; I didn't create it. If I'm not happy at any moment, I'm free to go. I would just be banned from one community, not from the internet - that would be censorship. There are still thousands of other communities to belong to, or I could even start a new one.
This is different from censorship in a country. There you have a right to belong because you were born there; it WASN'T YOUR CHOICE to join. You are already a core member. You have the same rights as everybody else to establish the standards. You need a place to exist.
edit: I think we can use this "logic" to analyze the immigration problem.
I would give this 50 up votes if I possibly could, mostly because it struck very close to home.
I've been in a leadership role in a couple of smallish online communities that, on further reflection, died because of exactly what this article is talking about: fear of doing anything out of not wanting to be labeled another mod abusing their powers, or what have you. I've also been a member of communities that died or were damaged due to outright mod abuse - so figure the converse is better, right? Death through inaction and allowance is better than death through action and disallowance, right? Intent doesn't even begin to enter into it... right?
Not so sure anymore.
So, like most things in life, it's a balancing act. As a commenter on the article said, nine times out of ten your guess that a comment is crap will be correct. Don't start second-guessing yourself. You probably have a very finely tuned crap sensor.
If you have a couple people in a leadership role that you share power with, you can make sure everyone acts together and that sticky situations are discussed beforehand. That helps manage both the impression of abuse of power and issues in application.
>It's just one fool, and if we can't tolerate just one fool, well, we must not be very tolerant.
Repeat after me:
100 times I’ve sworn this oath:
100 years I’d rather languish in a dungeon,
100 mountains I’d rather grind to dust,
If only I don’t have to make a fool to see the truth.
– Bakhvalan Machmud
That kind of attitude leads to not being able to take any sort of challenge from anyone you label a fool or inferior, be it in smarts, knowledge, experience or other things.
The quote is a translation as far as I know. (The only reference to it I can find is in a book translated from Russian about Russia.) So subtle nuances are likely missing.
To me at least, "to make a fool to see the truth" implies that you will try and try until you succeed. This approach is ineffective because a large portion of people are never satisfied. (See: pretty much any Internet drama or argument.) And if you adopt the attitude of winning every argument, you will almost inevitably flood communication channels with your discourse. (See: any HN thread where the whole page is a few outlying comments and a majority in a threaded argument about one topic, or a forum thread that goes on for pages of people arguing about something.) Such discourse also costs you time and prevents you from doing more useful work. (Before writing this comment I could have sworn that Newton wrote a letter to this effect explaining why he stopped publishing his works, but I couldn't source it - I spent around 20 minutes trying - so I decided that if it exists, including this comment will likely make it appear from the depths of the Internet; if it doesn't, it doesn't.) There comes a point where the best way to resolve an argument is to cut it short or not have it at all.
EDIT: A large part of this depends on how one defines the word "fool": if everyone who disagrees with you is a fool in your mind by default, then I doubt you're going to listen to anyone who expresses disagreement, regardless of whether or not you engage them in conversation.
People, as a general rule, don't let people they consider fools change their minds - except by proxy, when they learn from the fools' actions.
The problem is you might identify as "fool" someone who simply doesn't agree with you or the established groupthink. Re-read the first few paragraphs and replace the word "fool" with "person who doesn't agree with you" -- now it says something very different.
Now replace fool with dragon. Says something different again. Whoa!
Good mods can tell the difference between people who have other views and idiots. If the moderators in your community of choice can't, you should probably choose another community. But your equivocation between silencing other viewpoints and eliminating unhealthy voices is not useful.
I upvoted you before you added the second paragraph because I thought your joke was funny. "But your equivocation between silencing other viewpoints and eliminating unhealthy voices is not useful." Why not? I was making the point that it's often not easy to tell the difference.
I mean, the body of the blog post you're responding to is claiming a) that it is, and b) that even when it isn't, you can just leave a poorly modded community. In the face of that, I don't find "replace-the-word" rhetoric very compelling.
It would be compelling if it were shown that, even in good communities with solid mods, a primary use of moderating power was to silence dissent.
For example, in this community, anti-startup articles get posted occasionally. I'd expect to see far more dead comments from people who post agreement with such articles. But I browse with showdead on, and I see no such thing.
Basically, the rhetoric needs to line up with the evidence at hand. If it doesn't, it's unsound.
It was more obvious on re-reading the parent post and noting the phrase "In the face of that." My reading of your comment without absorbing that phrase's implications prompted the question.
I can't believe you upvoted him at all; he wasn't joking, he was mocking you while failing to comprehend the connection you were making. Given the subject matter here, I find that very ironic indeed.
More accurately, in this context, "person who disregards the purpose of the community". Communities are created to serve certain purposes (quality discussion, for example) and the risk to a community is that users eventually disregard that purpose and the community is no longer useful for the original purpose. To avoid this a community should be very explicit about its purpose from the beginning (and the means that will be employed to defend the purpose), should regularly remind users of the purpose, and should aggressively question behavior that undermines the purpose.
There's a fine line between disregarding the purpose of the community and attempting to change it, and it's often difficult to distinguish between the two.
For example, when 4chan became Anonymous, a lot of people objected to the politicization of the community. They felt that it was a betrayal of the original purpose of the community. But communities evolve. Purposes change. We don't live in the Founding Fathers' America anymore. It's not easy to ban people who "disregard the purpose of the community" without also trampling on nascent attempts to evolve that very purpose, and I suspect that this is where accusations of censorship most often arise. Strong moderation, on its own, can't solve this problem.
Good moderators should be wise enough to make subtle distinctions like this and humble enough to confer with long-time users on tricky issues.
The fact is that at every point in the development of our culture, most people have been horribly misinformed about many important things, and when someone came on the scene and questioned one of these things, he was thought a fool by the majority or even imprisoned (Galileo) or burned at the stake. There is no reason to think this groupthink isn't integral to our present culture, including by scientists and professionals; quite the contrary.
...into this garden comes a fool...Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...
It reads to me like the fool is somehow intrinsically, immutably subhuman compared to the "knowledgeable and interested folk". It's as if we weren't all once not knowledgeable, as if the non-knowledgeable fool can never become knowledgeable. I'd say that compared to a disinterested fool, a non-knowledgeable fool who's interested and is attracted to this garden of "high quality speech" is easier to fix.
The conclusion seems to go against everything Less Wrong is about. It reads to me like, "biases? Nah, I bet nine times out of ten you're not biased at all. Downvote away!"
In this context, "fool" is an action not an immutable nature - it's the action of talking about topics that are either well below the intellectual level of the ongoing discussion, or a distraction from it. It doesn't matter why a person does this, it matters that they do it at all. Up to a certain rate of inflow of well meaning fools, the group can absorb and socialize them. If there is a fire-hose of them, you get an Eternal September and the forum dies.
Yes, it is elitist, but judging someone as being not up to the standards of a community is not the same as calling them subhuman.
As for the conclusion, the argument he's making is that he has seen evidence of a bias against making these harsh judgements, and it has led to the destruction of valuable internet communities.
The large number of people willing to take Brian Hamacheck's word (without response or evidence) on the "who's here" versus "who's near me" dispute seems relevant to me. The number of people offering to donate is even more disturbing.
What's even more disturbing is reading the comments on the arrest sections of local newspapers. 95% of the people commenting seem to make up elaborate fantasy scenarios of what happened in their heads and then decide people's guilt or innocence based on that.
Welcome to the human condition. Evidently, the human imagination predicts well in the context of small family groups in the wilderness. Have a society with enough complexity to merit laws, and the imagination isn't so finely tuned.
What makes you think it works on small family groups? All we can conclude from ev-psych is that making stuff up didn't cost the imaginers much reproductive success personally - we have no idea what it did to the actual victims having stories made up about them.
I suspect that in small family groups in the wilderness, much more of our imaginings concern the environment and non-human agency, and fewer concern the actions of the group.
Posts in reddit are partitioned into "subreddits". Each submission is made to one subreddit, though multiple submissions can be made to spread items to different communities. Each subreddit is essentially like Hacker News, a list of ranked links with comments all flavoured by some common theme. The front page of reddit aggregates posts from your preferred subreddits (or a default list of popular subs for users who aren't logged in).
/r/askscience is a subreddit in which users can ask real scientists questions about science. It's a less-structured StackOverflow, essentially. Submissions are somewhat moderated, comments are more heavily moderated. Top-level comments are not allowed to be jokes, and offending comments are removed. On-topic jokes in other places are allowed, but too much joking around in a thread will often lead to all of the comments in the thread being removed. Deleted comments are replaced by "Comment removed" tombstones, and seeing whole trees of these tombstones is a decent reminder to stay on-topic.
The CSS of the subreddit also encourages the community to maintain their standards. When upvoting, downvoting, commenting etc, users are reminded what those actions represent in the community. There is also an informal hierarchy of posters, with experts' comments being identified with short descriptions of their specialisations. The posts of these users are given more weight by the community, but the posters are held to somewhat higher standards.
/r/askscience is a heavily moderated subreddit with many rules for posting and commenting beyond those of other subreddits. Top-level replies that don't cite verifiable sources, or that contain layman's speculation, are deleted. Comments that are off-topic or don't comply with the rules are also promptly moderated.
repsilat and ceph_ are right, but I'll add that it's noteworthy that askscience is good. Askscience is a default subreddit, meaning that everyone who joins reddit sees it and is drawn into the conversation. Large subreddits with lots of newbies tend to be... problematic.
This is a two-edged sword. Also keep in mind, "Who watches the Watchmen?" The power to ban can also be used to turn a group into a personal fief. As in many things, the optimum path is a middle one. Checks and balances are needed.
Sometimes the ban hammer is wielded by those with troll ethics.
That's one way to look at it... Another is just the phenomenon of fads. No matter how well-kept the gardens of Slashdot/Digg/Reddit/HN, they all have a time horizon. Just enjoy it while it lasts.
We all have the desire to analyze the things in our life (groups of friends, clubs, companies, communities) that have died. I do it all the time. I wonder what happened to community "x" that I was a part of and really enjoyed and miss. You look around at your neighborhood or city and see permanence and figure your community must have had something wrong with it.
Of course, the barriers to entry and exit for an online community are very low. People change, the time they have available changes, their interests change, whatever. They exit the community as easily as they entered it. Soon the community dies. Your town or neighborhood isn't so easily left or entered so the stakes are much higher.
Sometimes communities just get old and people leave. Pretty soon they're gone.
They're not that bad. Will Newsome is being dealt with, dymtry is slowly reforming, and those are the only 2 persistent ones I know of right now. What's more problematic is lack of really good fresh content; this can be traced to the absence of folks like Eliezer.
It perhaps also fulfills that story-role, yes. Nonetheless, it shouldn't be contrary to your expectations, if you apply a little wisdom.
I find in my own case that the things which bug me about others are things which deeply bug me about myself. Whenever I find my heart beating a little harder and my brow a little furrowed, I try to relax, and take a deep breath, and see what it is within me which resonates so strongly against the error that I have seen. I am normally quite patient with error, and this usually causes my impatience.
Personally, I feel this "what is wrong with me" redirection is a much deeper lesson than the fact that harsh moderation can help keep a community together. First off, it is a discipline which can help keep us together, as opposed to the communities we moderate. And second, because the original lesson comes embedded in a much deeper truth, which Eliezer doesn't seem to acknowledge. I would like to spend a moment on this:
The deeper truth is that love is transformative: to love is to change both yourself and your beloved. The original post just notes that if a moderator isn't moderating, they aren't loving their community. True enough. It is important, however, to understand that transformation is not sufficient for love: we all know people who push their authority too far.
Well-kept gardens don't die due to pacifism. That's stupid -- if they did, then they weren't "well-kept". But they die because they don't get the love that they need. It doesn't matter how much you pull the weeds if you forget to water the roots and fertilise the soil.
To any of you who wish to know whether you love your job, I will simply add the poignant questions: when did you last transform it? And when did it last transform you?
Can you link to a comment or two on LW that exhibits the trolling you speak of? I honestly have not noticed although I would not be terribly surprised.
The tension between maintaining the original quality and growing the size of the community by several orders of magnitude is, in my opinion, an unsolvable problem.
No amount of arrow-clicking, ignore list or banning can solve this.
The easiest way to control the quality of the community is to control its growth. Pay-wall, invites, etc. will all be vastly more useful than after-the-fact approaches.
When the barbarians are inside the walls, you've already lost.
Interestingly, this story strikes me as the complete opposite of 4chan.org/sci/, where, amid an army of trolls, intellectuals have been discussing interesting ideas for many years.
Maybe it's due to 4chan's huge community size that fools are not as much of a problem there as they seem in smaller communities. Maybe it's new technology: there is now a whole slew of tools to identify (and punish) fools, such as karma and comment voting, which perhaps weren't as widely available in the past. Either way, I have a hunch that fools are not the problem.
4chan has mods who routinely remove child porn and other questionable, unrelated material. http://www.4chan.org/faq#whomod
> It was once a well-kept garden of intelligent discussion .... But into this garden comes a fool [quoted from the original article]
Assuming this isn't a community of those who come out of the womb enlightened, aren't most of us living in some state of foolishness?
I suppose it might be possible to sustain a community by continuously attracting new enlightened individuals, but wouldn't it be more practical to do what most societies do and teach (at least some of) the fools until they aren't fools any longer?