This distinction gets lost in these discussions all of the time. A company that makes an effort to comply with laws is in a completely different category than a company that makes the fact that they’ll look the other way one of their core selling points.
Years ago there was a case where someone built a business out of making hidden compartments in cars. He did an amazing job of making James Bond-style hidden compartments that blended perfectly into the interior. He was later arrested because drug dealers used his hidden-compartment business to help their drug trade.
There was an uproar about the fact that he wasn’t doing the drug crimes himself. He was only making hidden compartments which could be used for anything. How was he supposed to know that the hidden compartments were being used for illegal activities rather than keeping people’s valuables safe during a break-in?
Yet when the details of the case came out, IIRC, it was clear that he was leaning into the illegal trades and marketing his services to those people. He lost his plausible deniability after even a cursory look at how he was operating.
I don’t know what, if any, parts of that case apply to Pavel Durov. I do like to share it as an example of how intent matters and how one can become complicit in other crimes by operating in a manner where one of your selling points is that you’ll help anyone out even when their intent is to break the law. It’s also why smart corporate criminals will shut down and walk away when it becomes too obvious that they’re losing plausible deniability in a criminal enterprise.
What do you mean "look the other way?" Does the phone company "look the other way" when they don't listen in to your calls? Does the post office "look the other way" when they don't read your mail?
That guy who built the hidden compartments should absolutely not have gone to jail. The government needs to be put in check. This has gotten ridiculous.
If the police tell them illegal activity is happening and give them a warrant to wiretap and they are capable of doing so but refuse then yeah they’re looking the other way. That’s not even getting into things like PRISM.
If you know your services are going to be used to commit a crime, then yes, that makes you an accessory and basically all jurisdictions (I know basically nothing about French criminal law) can prosecute you for that. Crime is, y'know, illegal.
I'm appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.
Why aren't all gun manufacturers in jail then? They must know a percentage of their products are going to be used to commit crimes. A much larger percentage than those using Telegram to commit one.
> I'm appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.
The usual metaphor is child pornography, but let's pick something less outrageous: espionage. If a spy uses your messaging platform to share their secrets without being detected & prevented, that's using the service to commit a crime. Now, if you're making a profit from said service, that doesn't necessarily make you a criminal, but if you start saying "if spies used this platform, they'd never be stopped or even detected", that could get you into some serious trouble. If you send a sales team to the KGB to encourage them to use the platform, even more so.
Gun manufacturers have repeatedly been charged with crimes (some are currently in court). I'd argue that messaging platforms have, historically, been less likely to be charged with crimes.
The Second Amendment gives weapon makers some extra protection in the US, but they do have to be very careful about what they do and do not do in order to avoid going to jail.
> They must know a percentage of their products are going to be used to commit crimes. A much larger percentage than those using Telegram to commit one.
Do you have the stats on that? I don't, but I'm curious. While I don't doubt the vast majority of people using Telegram aren't committing a crime, I know that the vast majority of people using guns also aren't committing a crime.
> I'm appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.
That's because you're assuming facts not in evidence and painting the broadest possible argument. Obviously we don't know the details yet, but it's not unlikely that this situation was a bit more specific.
Consider:
F: "We want you to give us the chat logs of this terrorist"
T: "OK!"
F: "Now we need you to give us the logs from this CSAM ring"
T: "No! That's a violation of their free speech rights!"
You can't put your own moral compass in place of the law, basically. That final statement is very reasonably interpreted as obstruction or conspiracy, where a blanket refusal would not be.
You are right; the arrest might be legal and even morally justifiable.
However, I still argue that wanting to provide secret communication (which Telegram actually doesn't do) is not abetting crime or helping it more than any other product.
In fact, in my humble opinion, it's the opposite: Private communications are a countermeasure against the natural tendency of governments to become tyrannical, and thus maintaining one is an act of heroism.
> Private communications are a countermeasure against the natural tendency of governments to become tyrannical, and thus maintaining one is an act of heroism.
That's an easy enough statement in the abstract, but again it doesn't speak to the case of "Durov knowingly hid child porn consumers from law enforcement", which seems likely to be the actual crime. If you want to be the hero in your story, you need to not insert yourself into the plot.
The answer to this charade is that to "prove" that you're not doing anything wrong you need to secretly provide all data from anyone that the government doesn't like. Otherwise you go to jail.
If this were really true for banks, there would be a large number of bankers in jail. This number being close to zero, I guess the courts are very lax at charging bankers with crimes.
Banks are a terrible example for this thread's argument. Banking is essentially the end result of what happens when businesses kowtow to the invasive demands of the government and implement ever-more invasive content policing, becoming de facto arms of the bureaucratic state.
A bank will drop you if they even think you might be doing something (demonstrably on paper) illegal. When opening an account, some of the very first questions a bank asks you are "where did you get this money" and "what do you do for work" - proactively making you responsible for committing to some type of story. All of the illegality you're trying to reference is happening under a backdrop of reams of paperwork that make it look like above board activity to compliance departments. Without that paperwork when shit does hit the fan, people working at the bank do tend to go to jail. But with that paperwork it's "nobody's fault" unless they manage to find a few bank employees to pin it on.
Needless to say, this type of prior restraint regime being applied to free-form communication would be an abject catastrophe.
Banks do a massive amount of tracking and flagging. Even putting a joke “for drugs” in a Venmo field can cause issues. Plus reporting large transactions. There was a massive post on HN yesterday about how often banks close startup accounts due to false positives.
> the real criminals continue doing their business everyday
Any source for that? Media loves to blame banks for everything, but when you go into the details it always seems pretty marginal (e.g. the HSBC Mexico stuff).
It cannot be marginal, because drug trafficking, just as an example, moves billions of dollars every year. They certainly have schemes, and someone in the banking system must be complicit in these schemes. Every time the officials uncover one of these schemes, banks are miraculously not charged with anything, and they don't even give back the profits of the illegal operation.
If you provide a service that is used for illegal behavior AND you know it’s being used that way AND you explicitly market your services to users behaving illegally AND the majority of your product is used for illegal deeds THEN you’re gonna have a bad time.
If one out of ten thousand people use your product for illegal deeds you’re fine. If it’s 9 out of 10 you probably aren’t.
> If one out of ten thousand people use your product for illegal deeds you’re fine.
This logic clearly makes the imprisonment of someone like the owner of Telegram difficult to justify, since 99.999% of messages on Telegram are completely legal.
If 10,000 people out of 10 million are doing illegal things and you know about it or you are going out of your way to turn a blind eye then you’re gonna have a bad time.
Keep in mind that as soon as you store user accounts you keep user data, which is perhaps a trivial form of eavesdropping, but clearly something law enforcement takes an interest in.
Try to deposit 10k to your bank account and then, when they call you and ask the obvious question, answer that you sold some meth or robbed someone. They will totally be fine with this answer, as they are just a platform for providing money services and well, you can always just pay for everything in cash.
And even then you don’t have to tell them it’s illegal. Just what you earned. Frankly they don’t care where it came from as long as you report and pay.
No, you have to specify where it came from. You don't have to say what crime you committed, but you'd list the income under "income from illegal activities".
Suppose you knit mittens and sell them for cash out of your garage. The IRS expects you to report and pay taxes on the income. How do they check that the sum you specified is correct?
Not sure how it works in the US. In Germany you are supposed to have a cash register or issue an invoice for each purchase, and sometimes (though really rarely, given the lack of personnel) they can randomly check whether your reported numbers make sense together.
It's not clear how that sort of thing would even help, it seems like just a trap for the unwary. If you're an honest person selling your mittens and paying your taxes without knowing you're supposed to have a cash register, you could get unlucky and get in trouble for innocuous behavior. If you're a drug dealer then you get a cash register and ring up all your drug sales as mitten sales. Or, if someone wanted to report less income, they would have a cash register and then use it to ring up less than all of the sales. Whether or not you have the cash register can't distinguish these and is correspondingly pointless.
If you are directly aiding and abetting without any plausible attempt to minimize bad actors from using your services then absolutely.
For example, CP absolutely exists on platforms like FB or IG, but Meta will absolutely try to moderate it away to the best of their ability and cooperate with law enforcement when it is brought to their attention.
And like I have mentioned a couple times before, Telegram was only allowed to exist because the UAE allowed them to, and both the UAE and Russia gained ownership stakes in Telegram by 2021. Also, messaging apps can only legally operate in the UAE if they provide decryption keys to the UAE govt because all instant messaging apps are treated as VoIP under their Telco regulation laws.
> For example, CP absolutely exists on platforms like FB or IG, but Meta will absolutely try to moderate it away to the best of their ability
Is this true? After decades now of a cat-and-mouse game, it could be argued that they are simply incapable. As such, "the best of their ability" would mean using methods that don't suit their commercial interests - e.g. verifying all users manually, requiring government ID, reviewing all posts and comments before they're posted, or shutting down completely.
I understand these methods are suicidal in capitalism, but they're much closer to the "best of their ability". Why do we accept some of the largest companies in the world shrugging their shoulders and saying "well we're trying in ways that don't impact our bottom line"?
If you are a criminal lawyer who is providing defense, that is acceptable, because everyone is entitled to a fair trial and defense.
If you are a criminal lawyer who is directly abetting in criminal behavior (eg. a Saul Goodman type) you absolutely will lose your Bar License and open yourself up to criminal penalties.
If you are a criminal lawyer who is in a situation where your client wants you to abet their criminal behavior, then you are expected to drop the client and potentially notify law enforcement.
> If you are a criminal lawyer who is directly abetting in criminal behavior
Not a lawyer myself but I believe this is not a correct representation of the issue.
A lawyer abetting in criminal behaviour is committing a crime, but the crime is not offering his services to criminals, which is completely legal.
When offering their services to criminals, law firms or individual lawyers in most cases are not required to report crimes they have been made aware of under attorney-client privilege, and are not required to take steps to minimize bad actors' use of their services.
In short: unless they are committing crimes themselves, criminal lawyers are not required to steer clear of criminals; actually, usually the opposite is true.
Are you talking about Brian Steel? He was held in contempt because he refused to name his source that informed him of some misconduct by the judge (ex parte communication with a witness). That's hardly relevant here, the client wasn't involved at all as far as anyone knows.
> any plausible attempt to minimize bad actors from using your service
I mentioned criminal lawyers because their job is literally to offer their services to criminals, or to people accused of being criminals, and they have no obligation whatsoever to minimize bad actors' use of their services; in fact, bad actors are usually their regular clientele, and they are free to attract as many criminals as they like in any legal way they like.
Helping a criminal commit a crime is an entirely different thing, and in any case it must be proved in a court; it's not something that can be assumed on the basis of allegations (their clients are criminals, so they must be criminals too).
That's why in that famous TV drama Jessy Pinkam says "You dont want a criminal lawyer, you want a Criminal. Lawyer.".
The premise of this story is that Telegram offers a service which is very similar to safe deposit boxes: the bank is not supposed to know what you keep in there, hence they are not held responsible if the boxes are used for illegal activities.
In other words, most of the time people do not know, and are not required to know, whether they are dealing with criminals; but even if they did, there are no legal reasons to avoid offering them your services other than to avoid problems and/or on moral grounds (which are perfectly understandable motives, but are still not a requirement to operate a business).
Take bars, diners, restaurants, gas stations, or hospitals: are they supposed to deny their services?
And how exactly should they take action to minimize bad actors' use of their service?
If someone goes to a restaurant and talks about committing a crime, is the owner abetting the crime?
I guess probably not, unless it is proven beyond any reasonable doubt that he actually is.
It doesn't matter if it's true or false; it only matters what the justice system can prove.
> The premise of this story is that Telegram offers a service which is very similar to safe deposit boxes: the bank is not supposed to know what you keep in there, hence they are not held responsible if the boxes are used for illegal activities.
This is the issue. Web platforms DO NOT have that kind of legal protection - be it Telegram, Instagram, or Hacker News.
Safe Harbor from liability in return for Content Moderation is expected from all internet platforms as part of Section 230 (USA), Directive 2000/31/EC (EU), the Online Safety Act 2023 (UK), etc.
As part of that content moderation, it is EXPECTED that you crack down on CP, Illicit Drug Transactions, Threats of Violence, and other felonies.
Also, that is NOT how bank deposit boxes work. All banks are expected to KYC if they wish to transact in every major currency (Dollar, Euro, Pound, Yen, Yuan, Rupee, etc) and if they cannot, they are expected to close that account or be cut off from transacting in that country's currency.
> That's why in that famous TV drama Jessy Pinkam says "You dont want a criminal lawyer, you want a Criminal. Lawyer.".
First, it's Pinkman BIATCH not Pinkam.
And secondly, Jimmy McGill (aka Saul Goodman) was previously suspended by the NM Bar Association barely 5 years before Breaking Bad, and was then disbarred AND held criminally liable when SHTF towards the finale.
At least in the case of Section 230, distributors that do not moderate do not need it, because they do indeed have that kind of legal protection - see Cubby v. CompuServe for an example. Section 230 was created because a provider that did moderate tried to use this precedent in court and its applicability was rejected (Stratton Oakmont v. Prodigy), and Congress decided that this state of affairs incentivized the wrong kind of behavior.
This is precisely why Republicans want to repeal it - if they succeed, it would effectively force Facebook etc to allow any content.
> This is the issue. Web platforms DO NOT have that kind of legal protection - be it Telegram, Instagram, or Hacker News.
e2e encryption cannot be broken though
> Safe Harbor from liability in return for Content Moderation is expected from all internet platforms as part of Section 230 (USA), Directive 2000/31/EC (EU), the Online Safety Act 2023 (UK), etc.
I have no sympathy for Durov and I don't care if they throw away the keys, but what about Mullvad then?
I guess that a service whose main feature is secrecy and anonymity should at least provide anonymity and secrecy.
> CP, Illicit Drug Transactions, Threats of Violence, and other felonies
You understand better than me that the request is absurd. All of this is theory; in practice nobody can actually do it for real. The vast majority of illicit clear-text content consists of honeypots created by agents of various agencies to threaten the platforms and force them to cooperate. Nothing's new here, but let's not pretend that this is to prevent crimes.
Also: the allegations against Telegram are that they do not cooperate, but we don't actually know whether they really crack down on CP or other illegal activities. If they don't, the reasonable thing to do would be to shut down the platform; what does arresting the CEO accomplish? (Rhetorical question: they - I don't want to throw names, but I think the usual suspects are involved - want access to and control of the content; closing the platform would only deny them access and would create uproar among the population - remember when Russia blocked Telegram?)
Also 2: AFAIK Telegram requires a phone number to create an account; it's the responsibility of the provider to KYC when selling a phone number, not Telegram's.
Also 3: safe deposit boxes are not necessarily linked to bank accounts. I pay for a safe deposit box in Switzerland but have no Swiss bank account.
So my guess is the EU wants to somehow control the narrative in Telegram channels, where the vast majority of the news regarding the war in Ukraine spreads from the front to the rest of the continent.
> First, it's Pinkman BIATCH not Pinkam.
Sorry. I'm dyslexic and English is not my mother tongue, but the 4th language I've learned, when I was already a teenager.
> was previously suspended by the NM Bar Association
That was the point. TV dramas need good characters, and a criminal lawyer who's also a criminal is more interesting than a criminal lawyer who's just a plain boring lawyer that indulges in no criminal activity whatsoever.
> operating in a manner where one of your selling points is that you’ll help anyone out even when their intent is to break the law
Is that what happened here?
In my view, Durov is like an owner renting out his apartment and not caring what people do inside it, which is not illegal; someone could go as far as to say that it is morally reprehensible, but it's not illegal in any way.
It would be different if Durov knew but did not report it.
Which, again, doesn't seem to be what happened here, and it must be proven in a court anyway; I believe everyone in our Western legal systems still has the right to the presumption of innocence.
Telegram not spying on its users is the same thing as Mullvad not spying on its users and not saving logs. I consider it a feature, not a bug, and certainly not complicity in any crime whatsoever.
As far as I can see, CP is probably the fastest way to get a channel and the related account wiped on Telegram, in a very short time. As a Telegram group manager, I often see automated purges of CP-related ads/content, or auto-lockout so managers can clean up the channel/group. Saying Telegram isn't managing CP problems is just absurd. I really feel like they just made up the reason to serve some other purpose.
Read the founder's exit letter. WhatsApp is definitely not e2e encrypted for all features.
You leak basic metadata (who talked to whom at what time).
You leak 100% of messages with a "business account", which is another way of saying "e2e you->meta, and then meta relays the message e2e to the N recipients handling that business account".
Then there are all the links and images, which are sent e2e you->meta; meta stores the image/link once, sends you back a hash, and you send that hash e2e to your contact.
There are so many leaks it's not even fun to poke fun at them.
And I pity anyone who is foolish enough to think meta products are e2e anything.
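To make the image/link flow concrete, here's a rough sketch of content-addressed storage and the metadata it leaks, even when the pointer itself travels e2e (hypothetical names; this illustrates the flow described above, not WhatsApp's actual protocol):

```python
import hashlib

server_store = {}    # hash -> blob, kept by the provider
server_seen_by = {}  # hash -> set of uploader ids (the metadata leak)

def upload_media(user_id: str, blob: bytes) -> str:
    """Client uploads a blob; the server dedupes it by hash."""
    digest = hashlib.sha256(blob).hexdigest()
    server_store.setdefault(digest, blob)                  # stored a single time
    server_seen_by.setdefault(digest, set()).add(user_id)
    return digest  # the client then e2e-sends this pointer to the contact

# Two users sharing the same file produce the same digest, so without
# reading any message contents the server still learns that they share it.
h1 = upload_media("alice", b"cat.jpg bytes")
h2 = upload_media("bob", b"cat.jpg bytes")
assert h1 == h2
print(server_seen_by[h1])  # {'alice', 'bob'}
```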
> with "business account", which are another way to say "e2e you->meta and then meta relays
Actually it's a nominated endpoint, and from there it's up to the business. It works out better for Meta, because they aren't liable for the content if something goes wrong (i.e. a secret is leaked, or PII gets out). Great for GDPR, because as they aren't acting as a processor of PII they are less likely to be taken to court.
WhatsApp has about the same level of practical "privacy" (encryption is a loaded word here) as iMessage. The difference is, there are many more easy ways to report nasty content in WhatsApp, which reported ~1 million cases of CSAM a year vs Apple's 267. (Not 200k, just 267. That's the whole of Apple. https://www.missingkids.org/content/dam/missingkids/pdfs/202...)
Getting the content of normal messages is pretty hard; getting the content of a link, much easier.
iMessage is not on the same playing field as WhatsApp and Signal. Apple has full control over key distribution, and virtually no one verifies that Apple isn't acting as a MitM. WhatsApp and other e2e encrypted messengers force you to handle securely linking multiple devices to your account and give you the option to verify that Meta isn't providing bogus public keys to break the e2e encryption.
For iMessage, Apple can just add a fake iDevice to your account, and iMessage will happily encrypt everything to that new key as well, with zero practical visibility to the user. If it was a targeted attack and not blanket surveillance, there's no way the target is going to notice. You can open up the Keychain app and check for yourself, but unless you regularly do this and compare the keys between all your Apple products you can't be sure. I don't even know how to do that on an iPhone.
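Here's a toy model of the multi-device fan-out that makes this attack possible (hypothetical and heavily simplified; the strings are stand-ins, not real crypto):

```python
# The sender encrypts the message once per device key returned by the
# key directory, and has to trust that lookup blindly.
directory = {"victim": ["victim-iphone-key", "victim-macbook-key"]}

def send(recipient: str, plaintext: str) -> dict:
    """Fan-out: one ciphertext per registered device key."""
    keys = directory[recipient]  # the sender cannot audit this list
    return {k: f"enc({k}, {plaintext!r})" for k in keys}  # stand-in for real crypto

msg = send("victim", "hello")  # 2 ciphertexts, as expected

# If the key server quietly appends a key, the sender's client
# dutifully encrypts to it too, and the UI looks identical.
directory["victim"].append("surveillance-key")
msg = send("victim", "hello again")
print("surveillance-key" in msg)  # True
```

Key-transparency logs and manual safety-number verification exist precisely to make this kind of silent key injection detectable.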
Never thought about using CSAM image hash alerts as a measure of platform data leaks (and of popularity, as I doubt bots will be sharing them). That's very smart.
And they show that FB eclipses everyone by an insane margin; it's scary!
About your point on business accounts: the documents I reviewed included dialog-tree bots managed by Meta. Not sure if not having that changes things... but in that case it was spelled out that Meta is the recipient.
It's more a UX/org thing. In iMessage, how do you report a problematic message? You can't easily do it.
In WhatsApp, the report button is on the same menu that you use to reply/hide/pin/react.
Once you do that, it sends the offending message to Meta, unencrypted. To me, that seems like a reasonable choice. Even with "proper" e2ee, it still allows rooting out nasty/illegal shit, and those reports come from real people rather than automated CSAM hashing on encrypted messages. (Although I suspect there is some tracking before and after.)
It's the same with Instagram/Facebook. The report button is right there. I don't agree with FB on many things, but this one I think they've made the right choice.
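A sketch of why user reporting coexists with e2e encryption: the reporter's device already holds the plaintext as a legitimate recipient, so it can disclose that single message without the provider ever holding a decryption key (hypothetical shape, not Meta's actual API):

```python
import hashlib

class ModerationQueue:
    """Stand-in for the provider's report endpoint."""
    def __init__(self):
        self.reports = []

    def submit(self, report: dict):
        self.reports.append(report)

def report_message(plaintext: bytes, sender_id: str, queue: ModerationQueue):
    # The reporting client decrypted this message as a normal recipient,
    # so forwarding it discloses only this one message, nothing else.
    queue.submit({
        "sender": sender_id,
        "text": plaintext,
        "hash": hashlib.sha256(plaintext).hexdigest(),  # dedup / known-bad matching
    })

queue = ModerationQueue()
report_message(b"offending content", "user-123", queue)
print(len(queue.reports))  # 1
```

(Real deployments add a cryptographic proof - "message franking" - that the reported text is what the sender actually sent, so reports can't be forged.)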
Telegram is for the most part not end-to-end encrypted: one-to-one chats can be, but aren't by default, and groups/channels are never E2EE. That means Telegram is privy to a large amount of the criminal activity happening on their platform but allegedly chooses to turn a blind eye to it, unlike Signal or WhatsApp, who can't see what their users are doing, by design.
Not to say that deliberately making yourself blind to what's happening on your platform will always be a bulletproof way to avoid liability, but it's a much more defensible position than being able to see the illegal activity on your platform and not doing anything about it. Especially in the case of truly serious crimes like CSAM, terrorism, etc.
End-to-end encrypted means that the server doesn't have access to the keys. When the server does have access, it can read messages to filter them or give law enforcement access.
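A minimal sketch of the distinction, using symmetric encryption as a stand-in (hypothetical; real messengers derive keys via key agreement rather than generating them like this):

```python
from cryptography.fernet import Fernet

# Cloud-chat model: the provider generates and keeps the key, so it can
# decrypt on demand - to scan content, or to answer a law enforcement order.
provider_key = Fernet.generate_key()
blob = Fernet(provider_key).encrypt(b"hello")
print(Fernet(provider_key).decrypt(blob))  # b'hello' - the provider reads it

# E2e model: the key exists only on the endpoints. The provider relays
# and stores the same kind of opaque blob but holds nothing that opens it.
device_key = Fernet.generate_key()  # never leaves the two devices
blob_on_server = Fernet(device_key).encrypt(b"hello")
# The provider has blob_on_server but not device_key, so it cannot comply
# with a content request even if it wants to; only metadata remains.
```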
If law enforcement asked them nicely for access I bet they wouldn't refuse. Why take responsibility for something if you can just offload it to law enforcement?
The issue is that law enforcement doesn't want that kind of access, because they have no manpower to go after the criminals. This would increase their caseload a hundredfold within a month. So they prefer to punish the entity that created this honeypot, so that it goes away, and along with it the crime will go back underground, where police can pretend it doesn't happen.
Telegram is basically punished for existing and not doing law enforcement's job for them.
Maybe they didn't ask nicely. Or they asked for something else. There's literally zero drawback for a service provider in giving law enforcement secret access to the raw data they hold. You'd be criminally dumb if you didn't do it. Literally criminally.
I bet that if they really asked, they pretty much asked Telegram to build them a one-click tool that would print court-ready documents about criminals on their platform, so that law enforcement can just click a button and yell "we got one!" to the judge.
> There's literally zero drawback for a service provider in giving law enforcement secret access to the raw data they hold.
That's not true. For one thing, it is expensive. For another, there's a chance people will find out and you'll lose all your criminal customers... they might even seek retribution.
> I bet that if they really asked, they pretty much asked Telegram to build them a one-click tool that would print court-ready documents about criminals on their platform, so that law enforcement can just click a button and yell "we got one!" to the judge.
You seem to believe, without having looked at the publicly available facts of the matter, that the problem is law enforcement didn't say "pretty please". The fact of the matter is that they've refused proper law enforcement requests repeatedly; if anyone has been rude about it, it's been Durov.
The chats are encrypted, but the backup saved in the cloud isn't. So if someone gets access to your Google Drive, they can read your WhatsApp chats. You can opt in to encrypt the backup, but it doesn't work well.
Meta seems to shy away from saying they don't look at the content in some fashion. E.g. they might scan it with some filters; they just don't send plaintext around.
Yes, WA messages are supposed to be e2e encrypted. Unless end-to-end encryption is prohibited by law in your jurisdiction, I don't see how that question is relevant in this context.
The receiving end shared your message with the administrators? E2e doesn't mean you aren't allowed to do what you want with the messages you receive; they are yours.
Nope, it didn't even arrive on their end; it prevented me from sending the message and said I wasn't allowed to send that. So they are pre-screening your messages before you send them.
Isn't Meta only end-to-end encrypted in the loosest sense, in that it is encrypted on each hop, but not end-to-end encrypted like Signal, i.e. Meta can snoop all day?
If a service provider can see plain text for a messaging app between the END users, that is NOT end-to-end encryption, by any valid definition. Service providers do not get to be one of the ends in E2EE, no matter what 2019 Zoom was claiming in their marketing. That's just lying.
What has E2EE got to do with it? If you catch someone who sent CP you can open their phone and read their messages. Then you can tell Meta which ones to delete and they can do it from the metadata alone.
I'm more disturbed by the fact that on HN we have zero devs confirming or denying this thing about FB's internals wrt encryption. We know there are many devs who work there who are also HN users, but I've yet to see one of them chime in on this discussion.
I find it pretty ridiculous to assume that any dev would comment on the inner workings of their employers software in any way beyond what is publicly available anyway. I certainly wouldn't.
Why not? If I think my employer is doing something unethical, I certainly would. That would be the moral thing to do.
This tells me most of the people implementing this are either too scared of the consequences, or they think what they're implementing is ethical and/or the right thing to do. Again, both are scary thoughts we should be highly concerned about in a healthy society that talks about these things.
One other potential explanation: FB and these large behemoths have compartmentalized the implementations of these features so much that no one can speak authoritatively about the encryption.
You are talking about a company whose primary business idea it is to lock up as much of the world's information as possible behind their login.
The secondary business idea is to tie their users' logins to their real-world identities, to the point of repeatedly locking out users who live under threat and refuse to disclose their real names.
For Reddit it is somewhat documented how some power-mods used to flood subreddits with child porn to get them taken down. It was seemingly done with the administration's blessing. Not sure if it's still going on, but some of these people are certainly around, in the same positions.
That's disgusting, but certainly effective for taking something down very quickly.
I was very disappointed to hear that UFO-related subreddits take down and block UFO sightings. What's the whole point of the sub if they censor the relevant content?
This is unrelated to the main thread, but since you brought up UFOs and censorship: isn't it a disgrace what Wikipedia has done to its trove of "list of UFO sightings" pages?
Those listings were great and well documented up until about 2019 or so. They've been scrubbed heavily.
Yes it is. I don't recall when or whether I last checked the list of UFO sightings on Wikipedia, but I'm very aware of the problem.
On the English wiki it's a group, "Guerilla Skepticism", which dominates the field of esoteric content and much more.
In Germany we have the same situation, and very likely every language has the same issue.
The bigger picture is that the whole content of Wikipedia gets fed into the AIs, which then answer you with practically the same strongly moderated, censored, misleading content from Wikipedia.
The very disappointing thing is that nobody can do anything about the mods on Wikipedia; they dominate the place.
I've actually given up trying to post on Reddit for this reason. Whenever I've tried to join in on a discussion in some subreddit that's relevant (e.g. r/chess), my post has been autoremoved by a bot because my karma is too low or my account is "too new". Well, how can I get any karma if all my posts are deleted?
Even those who farm accounts know the simple answer to your question. You have to spend a little time being civil in other subreddits before you reveal the real you. Just takes a few weeks.
The comments I made were quite serious and civil. Not sure what you mean. They were autodeleted by a bot. I wasn't trolling or anything.
I'm not particularly interested in spending a lot of time posting on reddit. But very occasionally I'll come across a thread I can contribute meaningfully to and want to comment. Even if allowed I'd probably just make a couple comments a year or something. But I guess the site isn't set up for that, so fuck it.
Sounds like you glossed over the phrase “in other subreddits”, which is the secret sauce. The point of my phrasing was not to suggest that you aim to be uncivil, but to highlight that the above works even for those who do aim to. So, surely, it should work for you, too.
I can see how it's frustrating, but the communities you're trying to post in are essentially offloading their moderation burden onto the big popular subreddits with low requirements -- if you can prove you're capable of posting there without getting downvoted into oblivion, you're probably going to be less hassle for the smaller moderator teams.
That's silly. I gotta go shitpost in subreddits I have no interest in as some sort of bizarre rite of passage? I'd rather just not use the site at that point.
Actually, HN has a much better system. Comments from new accounts, like your throwaway, are dead by default, but any user can opt in to seeing dead posts, and any user with a small amount of karma can vouch those posts, reviving them. Like I just did to your post.
It's simpler: the US wants to control the narrative everywhere and in everything, just like in the 90s and 00s. Things like Telegram and TikTok, and to some extent RT, stand in the way of that.
But why don’t they arrest them for allowing it to happen? Phone calls should be actively moderated to block customers who speak about terrorist activity.
Because the telcos _cooperate_ with law enforcement.
It's not whether the platform is being used for illegal activity (all platforms are to some extent, as your facile comment shows). It's whether the operator of a platform actively avoids cooperating with LE to stop that activity once found.
I know. That’s obviously true, but I hate that it happens and it makes no sense to me why more people aren’t upset by it. What I’m trying to get at is that complying with rules that are stupid, ineffective, and unfair is not a good thing and anyone who thinks these goals are reasonable should apply them to equivalent services to realize they’re bad. Cooperation with law enforcement is morally neutral and not important.
The real goal is hurting anyone that’s not aligned with people in power regardless of who is getting helped or harmed. Everyone knows this but so many people in this thread are lying about it.
> anyone who thinks these goals are reasonable should apply them to equivalent services to realize they’re bad
AFAIK these goals _are_ applied to equivalent services. It's just that twitter, FB, Instagram, WhatsApp, and all the others _do_ put in the marginal amount of effort required to remove/prohibit illicit activity on their platform.
Free speech is one thing, refusing to take down CSAM or drug dealing operating in the open is always going to land you in hot water.
I don’t agree that internet platforms deserve to be in their own special category which is uniquely required to police bad content. The only reason it happens is because it’s not politically or technically feasible to do it when the message comes through another medium.
I think it’s wrong on social media for the exact same reason it’s wrong to arrest power companies if a guy staples printed CSAM to a utility pole. Same thing for monitoring private phone calls. We know that AI can detect people talking about terrorism on the phone and cameras can monitor paper ads and newsletters in public spaces, but nobody would advocate for making this a legal requirement because it’s insane. The fact that nobody cares is proof that the public does value privacy and free speech. Why are so many of them tricked into thinking the internet is an exception?
I want people to commit to their beliefs and either admit they want surveillance wherever it’s technically feasible or give up and recognize that internet surveillance is also wrong. No more of this “surveillance is good but legacy platforms are exempt” waffling. Very frustrating and only serves the interests of people who already have power
From what I've read the arrest wasn't related to lack of proactive moderation, but the lack of, or refusal to do, reactive moderation i.e. law enforcement say "there's CSAM being distributed on your platform here" and the owner shrugs
> for the exact same reason it’s wrong to arrest power companies if a guy staples printed CSAM to a utility pole
That seems like a bad analogy. A closer one would be that I rent the pole space to people who I am told by law enforcement are committing serious crime in the open, using the pole I am renting to them. Additionally, I am uniquely capable of a) removing the printouts b) passing on whatever information I have about those involved (maybe zero, but at least I say that). The issue is refusing both. I don't feel they are egregious requests.
(this is not a tacit approval of digital surveillance)