
Having technical access to prompts doesn't equal knowledge for criminal liability. Under 18 U.S.C. § 842, you need actual knowledge that specific information is being provided to someone who intends to use it for a crime. The fact that OpenAI's servers process millions of queries doesn't mean the company has criminal knowledge of each one. That's not how mens rea works.

Prior restraint is presumptively unconstitutional, and the burden is on the government to justify it under strict scrutiny. The speaker doesn't have to prove the speech is protected first; the government has to prove it's unprotected and that the prior restraint is narrowly tailored and the least restrictive means. SB 53 fails that test.

The FCC comparison doesn't help you. In Red Lion Broadcasting Co. v. FCC, the Supreme Court allowed broadcast regulation only because of spectrum scarcity, the physical limitation that there aren't enough radio frequencies for everyone. AI doesn't use a scarce public resource. There's no equivalent justification for content regulation. The FCC hasn't even enforced the fairness doctrine since 1987.

The real issue is that you're trying to carve out AI as a special category with weaker First Amendment protection. That's exactly what I'm arguing against. The government doesn't get to create new exceptions to prior restraint doctrine just because the technology is new. If an AI system produces unprotected speech, prosecute that conduct after the fact under existing law. You don't build mandatory filtering infrastructure and hand the government the power to define what's "dangerous."


