> Add in the narrow exceptions like child porn and true threats, and that's it.
You're contradicting yourself. On the one hand you're saying that governments shouldn't have the power to define "safety", but on the other you're in favor of protections against "true threats".
How do you define "true threats"? Whatever definition you may have, surely something like it can be codified into law. The questions then are how loose or strict the law should be, and how well it is defined in technical terms. Considering governments and legislators are shockingly tech-illiterate, the best the technical community can do is offer assistance.
> The government doesn't get to create new categories of "dangerous speech" just because the technology is new.
This technology isn't just new. It is unlike any technology we've had before, with complex implications for the economy, communication, the labor market, and many other areas of human society. We haven't even begun to understand the ways in which it can be used or abused to harm people, let alone its long-term effects.
The idea that governments should stay out of this, and allow corporations to push their products out into the world without any oversight, is dreadful. We know what happens when corporations are given free rein; it never ends well for humanity.
I'm not one to trust governments either, but at the very least, they are (in theory) meant to serve their citizens and enforce certain safety standards that companies must comply with. We accept this in every other industry, yet you want them to stay out of tech and AI? To hell with that.
Frankly, I'm not sure if this CA regulation is a good thing or not. Any AI law will surely need to be refined over time, as we learn more about the potential uses and harms of this technology. But we definitely need more regulation in the tech industry, not less, and the sooner, the better.
There's no contradiction. "True threats" is already a narrow exception defined by decades of Supreme Court precedent. It means statements where the speaker intends to communicate a serious expression of intent to commit unlawful violence against a person or group. That's it. It's not a blank check for the government to decide what counts as dangerous.
Brandenburg gives us the standard: speech can only be restricted if it's directed to inciting imminent lawless action and is likely to produce that action. True threats, child porn, fraud: these are all narrow, well-defined categories of unprotected speech. They don't support creating broad new regulatory authority to filter outputs based on "dangerous capabilities."
You're asking how I define true threats. I don't. The Supreme Court does. That's the point. We have a constitutional framework for unprotected speech. It's extremely limited. The government can't just expand it because they think AI is scary.
"This technology is different" is what every regulator says about every new technology. Print was different. Radio was different. The internet was different. The First Amendment applies regardless. If AI enables someone to commit a crime, prosecute the crime. You don't get to regulate the information itself.
And yes, I want the government to stay out of mandating content restrictions. Not because I trust corporations, but because I trust the government even less with the power to define what information is too dangerous to share. You say governments are meant to serve citizens. Tell that to every government that's used "safety" as justification for censorship.
The issue isn't whether we need any AI regulation. It's whether we want to establish that the government can force companies to implement filtering systems based on the state's assessment of what capabilities are dangerous. That's the precedent SB 53 creates. Once that infrastructure exists, it will be used for whatever the government decides needs "safety mitigations" next.
I'm not sure why you're only focusing on speech. "True threats" doesn't come close to covering all the possible use cases and ways that "AI" tools can be harmful to society. We can't apply legal precedent to a technology without precedent.
> "This technology is different" is what every regulator says about every new technology. Print was different. Radio was different. The internet was different.
"AI" really is different, though. Not even the internet, or computers, for that matter, had the potential to transform literally every facet of our lives. Now, I personally don't buy into the "AGI" nonsense that these companies are selling, but it is undeniable that even the current generation of these tools can shake up the pillars of our society, and raise some difficult questions about humanity.
In many ways, we're not ready for it, yet the companies keep producing it, and we're now deep in a global arms race we haven't experienced in decades.
> I want the government to stay out of mandating content restrictions. Not because I trust corporations, but because I trust the government even less with the power to define what information is too dangerous to share.
See, this is where we disagree.
I don't trust either of them. I'm well aware of the slippery slope that is giving governments more power.
But there are two paths here: either we allow companies to continue advancing this technology with little to no oversight, or we allow our governments to enact regulation that at least has the potential to protect us from companies.
Governments at the very least have the responsibility to protect and serve their citizens. Whether this is done in practice, and how well, is obviously highly debatable, and we can be cynical about it all day. On the other hand, companies are profit-seeking organizations that only serve their shareholders, and have no obligation to protect the public. In fact, it is pretty much guaranteed that without regulation, companies will choose profits over safety every time. We have seen this throughout history.
So to me it's clear that I should trust my government over companies. I do this every day when I go to the grocery store without worrying about food poisoning, or walk over a bridge without worrying that it will collapse. Shit does happen, and governments can be corrupted, but there are general safety regulations we take for granted every day. Why should tech companies be exempt?
Modern technology is a complex beast that governments are not prepared to regulate. We haven't yet established clear links between a given technology and the harm it can cause. Even where such a link exists, as with smoking and cancer, we've seen how far companies will go to deny it and protect their revenues at the expense of the public. "AI" further complicates this in ways we've never seen before. So there's a long and shaky road ahead where we'll have to figure out the true impact of this technology and the best ways to mitigate its harms, without sacrificing our freedoms. It's going to involve government overreach, public pushback, and corporate lobbying, but I hope that at some point in the near future we're able to find a balance that we're relatively and collectively happy with, for the sake of our future.