> You can't bootstrap individual criminal use into "the company should have known someone might use this for crimes, therefore they must filter everything."

Lucky for me, I am not arguing that. The company already has knowledge of each and every prompt and response; I know this because I have read the EULAs of every tool I use. But that's beside the point.

Prior restraint is only unconstitutional if it restrains protected speech. Thus far, you have not answered whether AI output is speech at all; you have simply assumed that prior restraint is illegal in and of itself. We know that is not true because of the exceptions you already mentioned, but let me throw in another example: the many broadcast stations regulated by the FCC, which are currently barred from "news distortion" according to criteria defined by (checks notes) the government.

Having technical access to prompts doesn't equal knowledge for criminal liability. Under 18 USC § 842, you need actual knowledge that specific information is being provided to someone who intends to use it for a crime. The fact that OpenAI's servers process millions of queries doesn't mean they have criminal knowledge of each one. That's not how mens rea works.

Prior restraint is presumptively unconstitutional. The burden is on the government to justify it under strict scrutiny. You don't have to prove something is protected speech first. The government has to prove it's unprotected and that prior restraint is narrowly tailored and the least restrictive means. SB 53 fails that test.

The FCC comparison doesn't help you. In Red Lion Broadcasting Co. v. FCC, the Supreme Court allowed broadcast regulation only because of spectrum scarcity, the physical limitation that there aren't enough radio frequencies for everyone. AI doesn't use a scarce public resource. There's no equivalent justification for content regulation. The FCC hasn't even enforced the fairness doctrine since 1987.

The real issue is that you're trying to carve out AI as a special category with weaker First Amendment protection. That's exactly what I'm arguing against. The government doesn't get to create new exceptions to prior restraint doctrine just because the technology is new. If AI produces unprotected speech, prosecute it after the fact under existing law. You don't build mandatory filtering infrastructure and hand the government the power to define what's "dangerous."
