
> That protection requires that they are not editorializing content.

Please cite the law that says that, because it's certainly not S230.

S230 was specifically made to empower sites to moderate, editorialize, and remove content as they see fit for their platform.



IANAL, but I think Section 230 does not extend blanket immunity for any moderation "as they see fit," because then they would be a publisher and should be held to publisher-type liability.

> Section 230 protects a blog host from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” [1]

The phrase "otherwise objectionable" has come under limited judicial review. It is not an unlimited catch-all; it relates back to the meaning and purpose of the preceding language, which mainly deals with mature themes.

The Ninth Circuit reviewed a case against Malwarebytes, which was blocking access to a competitor's software and hoping for Section 230 protection, but the court found that "otherwise objectionable" did not extend to anti-competitive blocking. [2]

I think it's also interesting that in this particular case it is not even really a question of restricting access to or availability of material. Twitter is editorializing -- essentially adding an Editor's Note to Trump's tweet.

That Twitter has the right to do this is unquestionable. The question is whether in doing so they have crossed a bridge into becoming a publisher rather than a platform protected by Section 230 immunity.

If you read the history and case law Section 230 was meant to address, the issue was the contrast between CompuServe dodging liability for statements posted by users because it was not moderating topics, and Prodigy being found liable for user-posted statements because it was moderating topics.

> Section 230 was enacted in early 1996, in the CDA’s Section 509, titled “Online Family Empowerment.” In part, this provision responded to a 1995 decision issued by a New York state trial court: Stratton Oakmont, Inc. v. Prodigy Services Co. The plaintiffs in that case were an investment banking firm. The firm alleged that Prodigy, an early online service provider, had published a libelous statement that unlawfully accused the firm of committing fraud. Prodigy itself did not write the allegedly defamatory message, but it hosted the message boards where a user posted the statement. The New York court concluded that the company was nonetheless a “publisher” of the alleged libel and therefore subject to liability. The court emphasized that Prodigy exercised “editorial control” over the content posted on its site, actively controlling the content of its message boards through both an “automatic software screening program” and through “Board Leaders” who removed messages that violated Prodigy’s guidelines. [3]

The CDA's intention was to allow "good faith" / "Good Samaritan" moderation without triggering publisher liability. It was not designed to eliminate the entire concept of publisher liability on the internet.

There is a lesser form of liability that falls upon distributors for content they "know or should have known" violated the law. It is a harder standard to meet than publisher liability because it requires establishing the distributor's knowledge of the offending material. Very interestingly, the courts found that CDA 230 actually precludes even distributor liability when the service knows or should have known of the illegal content, because distributor liability is a subset of publisher liability, and if a service has no publisher liability then it can't have distributor liability either. (I'm sure I'm butchering this explanation somewhat.)

This has become a problem as of late with issues like revenge porn and online harassment campaigns, where service providers refused to take down material even after being notified it was illegal, and were getting protection under Section 230 for keeping the content up!

[1] - https://www.eff.org/issues/bloggers/legal/liability/230

[2] - https://www.wileyconnect.com/home/2020/1/22/ninth-circuit-re...

[3] - https://fas.org/sgp/crs/misc/LSB10306.pdf


From the same article:

> Do I lose Section 230 immunity if I edit the content? Courts have held that Section 230 prevents you from being held liable even if you exercise the usual prerogative of publishers to edit the material you publish. You may also delete entire posts. However, you may still be held responsible for information you provide in commentary or through editing. For example, if you edit the statement, "Fred is not a criminal" to remove the word "not," a court might find that you have sufficiently contributed to the content to take it as your own. Likewise, if you link to an article, but provide a defamatory comment with the link, you may not qualify for the immunity.

You may be held liable for the commentary you provide, but you are not liable for the content you are commenting on, even if you choose to sometimes provide commentary.


And the next paragraph from that EFF article:

> The courts have not clarified the line between acceptable editing and the point at which you become the "information content provider." To the extent that your edits or comment change the meaning of the information, and the new meaning is defamatory, you may lose the protection of Section 230.

I'm not qualified to say what the limit is; I'm merely hoping to provide some background on how 230 came about and to show that it does have some form of limits.

What's particularly unclear to me: if you have a site under active moderation, which could even include editorializing some small percentage of the posts that are made, how does that affect your potential liability for the posts you don't editorialize?


Right, again: if your edits cause the content to become defamatory, you may be liable for that specific content. Modifying any particular piece of content does not cause you to lose sec 230 protections in general.

So unless Trump's original tweet or Twitter's fact check is defamatory or otherwise illegal, Twitter has nothing to worry about.


> of material that the provider or user considers to be [...] otherwise objectionable,

That seems like a pretty blank check for moderation to me, at least. "Otherwise objectionable" can easily be defined in a ToS, and you're off to the races.


It does in plain English, but in legal construction a trailing "or otherwise..." usually limits the scope to items similar to the ones at the start of the list.

https://www.law.cornell.edu/wex/ejusdem_generis

> For example, if a law refers to automobiles, trucks, tractors, motorcycles, and other motor-powered vehicles, a court might use ejusdem generis to hold that such vehicles would not include airplanes, because the list included only land-based transportation.

(I'm not a lawyer.)



