What kind of government would use their statutory authority to shut down an airport when there is a risk to the planes?
Why do you think the FAA doesn't have this authority? Or, why do you think the FAA shouldn't have this authority?
In other words: This may have been needed but poorly executed; this may have been incompetent planning and response. But I wouldn't call the FAA shutting down an airport "police state".
>> What kind of government would use their statutory authority to shut down an airport when there is a risk to the planes?
It could be either an incompetent government or an authoritarian government that is trying to militarize certain institutions of civilian life.
>> Why do you think the FAA doesn't have this authority? Or, why do you think the FAA shouldn't have this authority?
The FAA does indeed have the authority. The question is simply: why did the FAA choose to exercise its authority in this case? If there was a real danger to the public, then the FAA should be honest with the people and tell them what the danger is. That is what citizens should expect from a democratic government.
>> This may have been needed but poorly executed; this may have been incompetent planning and response. But I wouldn't call the FAA shutting down an airport "police state".
I ask whether this is an example of police-state behavior because, in this case, the government apparently took drastic measures without explaining to the people why it was doing so.
Google ought to rethink its policy of disclosing government subpoenas to users. Every time this happens, the media uses it to attack Google. They'd be better off leaving users in the dark about these legally required data disclosures. Even if most users don't go crying to the media when it happens, it's still not worth it.
Ultimately, though, it's better for the public and for users to be informed when this occurs. If Google wanted to, they could salvage it by explaining their legal duties and how those apply in these situations. I don't think Google is worried, though. They have multiple captive markets and have seen continued growth, so it's obviously not affecting the bottom line.
It's a good contrast to Apple, where any bit of bad news that makes headlines becomes priority one to fix, which just creates a privileged class of users and makes the brand look fragile.
That would solve exactly zero of the complaints surfaced in this lawsuit. Companies still have an incentive to maximize app usage regardless of whether the advertising is personalized.
In fairness, AI-generated CSAM is nowhere near as evil as real CSAM. Possession of CSAM became such a serious crime because its creation used to necessitate the abuse of a child.
It's pretty obvious the French are deliberately conflating the two to justify attacking a political dissident.
Definitely agree on which is worse! To be clear, I'm not saying I agree with the French raid. I'm just saying that calling severe crimes (child sexual abuse for the above poster, not AI-generated content) "lesser problems" compared to politics is a concerning measure of how people are thinking.
It may not be worse "objectively" and in direct harm.
However, it has one big problem that is rarely discussed: the normalization of behaviour, interests, and attitudes. It just becomes a thing that Grok can do for paid accounts, and people think, "ok, no harm, no problem"... Long-term, there will be harm. This has been demonstrated over decades of investigation of CSAM.
Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.
>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user.
There is no way this is true, especially if the system is PaaS only. Additionally, the system should have a way to tell if someone is attempting to bypass its safety measures and act accordingly.
Grok brought that thought all the way to "... so let's not even try to prevent it."
The point is to show just how aware X were of the issue, and that they chose to repeatedly do nothing against Grok being used to create CSAM and probably other problematic and illegal imagery.
I don't really doubt they'll find plenty of evidence during discovery; it doesn't have to be physical. The raid stops office activity immediately and marks the point in time after which they can be accused of destroying evidence if they erase relevant information to hide internal comms.
Grok does try to prevent it. They even publicly publish their safety prompt. It clearly shows they have disallowed the system from assisting with queries that create child sexual abuse material.
The fact that users have found ways to hack around this is not evidence of X committing a crime.
If AI GF Generator 9001 is producing unwilling deepfake pornography of real people, especially if of children, feel free to raid their offices as well.
>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.
If every AI system can do this, and every AI system is incapable of preventing it, then I guess every AI system should be banned until they can figure it out.
Every banking app on the planet "is capable" of letting a complete stranger go into your account and transfer all your money to their account. Did we force banks to put restrictions in place to prevent that from happening, or did we throw our arms up and say: oh well the French Government just wants to pick on banks?
You prefer those be shut down rather than the one run by a pedo who happens to be the richest person in the world and personally meddles in elections across the globe with money?
Not sure why the title was editorialized, but this is literally just one person's opinion. The title makes it sound like the legal community universally agrees, which is not true.
It’s also bad legal commentary. The TSA seems to have broad legal authority. The more vague a law is, the more authority the executive branch has, not less (assuming it’s constitutional, and our constitution is also deliberately limited).
There are two avenues for recourse: lobbying your congressman or suing the TSA. I’m guessing the ACLU/EFF and other groups haven’t yet sued because the TSA’s legal authority is broad.
As discussed in the original article, John Gilmore (co-founder of EFF) did sue. "His complaint was dismissed on the basis of TSA policies that said travelers were still allowed to fly without ID as long as they submitted to a more intrusive 'pat-down' and search. The court didn’t rule on the question of whether a law or policy requiring ID at airports would be legal, since the TSA conceded there was no such law."