Hacker News | new | past | comments | ask | show | jobs | submit | login
U.K. unveils plan to penalize Facebook and Google for harmful online content (washingtonpost.com)
69 points by hn_throwaway_99 on April 8, 2019 | hide | past | favorite | 67 comments


I suppose it's easier to "ban the internet" rather than fix the societal issues that have led people to gravitate toward the sort of communities that share harmful content - that requires serious effort.

Having said that, Facebook et al. are US-based corporations, they'll take as much notice of this as me whinging about privacy violations.

We've also got that porn-pass coming in soon, apparently. Both of these government internet plans can be completely circumvented with the use of a VPN.


They may be US-based corporations but they will have wholly owned subsidiaries here in the UK - looking at Companies House (where you can see their audited accounts), in 2017 Facebook UK had a turnover of a billion pounds, probably a good bit higher by now:

https://beta.companieshouse.gov.uk/company/06331310


When a House Select Committee can get Zuckerberg or Pichai in front of them to demand answers to their Cambridge Analytica questions, then I'll concede that the law can take on these corporations.

Last I noticed, best they've had is some third-party contractor for Facebook, and they had to get the serjeant-at-arms to pin him down at his hotel.

If it hits the fan and they go to take profits, nothing stopping them pulling an Amazon and taking the cash out to a subsidiary abroad.


> When a House Select Committee can get Zuckerberg or Pichai in front of them to demand answers to their Cambridge Analytica questions, then I'll concede that the law can take on these corporations

The UK has broad extradition rights with the United States. If Parliament wanted to get nasty, they could.


In a world where the UK might be struggling to find international friends I don't think anyone would be daft enough to try and extradite a leading citizen of the US.

The chances of the US agreeing to any such extradition are even smaller.


> a leading citizen of the US

If the UK charged him and got British court approval, American courts would defer. Given Zuckerberg’s present standing, it’s unlikely Congress would attempt to intervene.


Right now, Brexit is consuming over 100% of the sustainable effort of the UK government. If the UK government decided Facebook was no longer welcome, it would not be able to operate in London. On the other hand, if the UK government was functioning correctly, it also wouldn’t need to be that heavy-handed.


Well above 100% of the sustainable effort. It's being run like a failed software project, with important decisions running on until 10pm. All kinds of normal leave has been cancelled; MPs are being wheeled out of hospital or missing the funerals of their parents for this.


I'm sure you can think of lots of ways a government can coerce a company if they really want to.


A more optimistic view is that if nations hit the centralized bodies hard enough, it may inadvertently end up driving the creation and adoption of truly decentralized internet options. Consider how crackdowns on things such as Kazaa ultimately helped spur the creation and rapid adoption of things such as torrents. The UK porn ban in particular has some major potential. All it's effectively doing is creating a huge vacuum in the market for a decentralized PornHub.

The irony of the world is that progress often requires egregious action to finally drive counter-action. We should've long since been moving towards a decentralized internet, but 'meh, why? what we have works.' is typical. Until it doesn't work. And so for that, cheers to the UK!


Besides, I'm pretty sure banning is only gonna make those communities feel like they are being persecuted for being right and needling the man.

They then will proceed to find another place that is less easy to control to carry on with their message, and will have a renewed feeling of legitimacy and urgency.

What's even worse is that it may very well be used to censor even more things in the future, as people with a new toy tend to make use of it.

In the end, removing space to debate hot topics like those will make the angry people feel like they are losing even more respect (a big motivator in their initial belief), while preventing anybody else from building a sane approach to it (you need to be exposed to hate to be a functional member of an imperfect society).


>> "Besides, I'm pretty sure banning is only gonna make those communities feel like they are being persecuted for being right and needling the man.

>> They then will proceed to find another place that is less easy to control to carry on with their message, and will have a renewed feeling of legitimacy and urgency."

You left out the most important part: by forcing them to 'another place' you cut off their entire supply of fresh, disenfranchised, lonely young men to indoctrinate and recruit into their hateful belief system. Yes, they will claim they're persecuted and play that up, but by denying them the platform they'll also lose relevance, reach and impact.

This is just a small step in the right direction. Despite warnings from many, we've let the internet become a hate indoctrination tool under the guise of free speech. Only now that multiple shooters' motivations can be directly connected to 'internet indoctrination' is action being taken. This is long overdue and truthfully just a good first step; closing this Pandora's box will require a concerted global effort, and even that may still prove to be unsuccessful.


The problem with this logic is that the exact same arguments are used by totalitarian systems to shut down the opposition. And like with this comment, they are sure they are the good guys and are doing what's right.

What's more, we had plenty of hatred before the internet. They managed to recruit enough people way before we had the internet or even television.

The fact is, this "solution" is just hiding a symptom, which not only gives a false sense of security, but makes treating the disease even harder.

And of course, it's a slippery slope.


> We've also got that porn-pass coming in soon, apparently

The implementation of this was delayed because they couldn't make it work, which is something of a hallmark of this government. I suspect something similar will happen to anti-Facebook legislation.


Are these regulations dangerous? Yes, they are. They can easily be bent to censor, to cover up nearly anything.

But are these regulations useful? Yes, they are.

When I was a teenager online - early 2000s - extreme violence was hidden behind obscure websites, not even indexed by search engines. I had friends who had "shocking videos" - think videos of actual killings - and I really tried to stay as far as I could.

In the past few years, articles and publications have regularly claimed that these topics and videos are now basically mainstream. After deep diving into Instagram, Tumblr, or Twitter to see if these claims are real, I'm going to say they are, and this is very far from healthy.

We need some regulations; I'm only afraid they will overshoot, and the chances of that are too high.


> When I was a teenager online - early 2000s - extreme violence was hidden behind obscure websites, not even indexed by search engines. I had friends who had "shocking videos" - think videos of actual killings - and I really tried to stay as far as I could.

And having access to them caused them any kind of incurable psychological damage or something? Because most of the time I doubt the usefulness of any of those "we need to protect the children at all costs" regulations.


It's extraordinarily difficult to argue both ways: that free speech is crucial to liberal democracy and that hate speech / propaganda / disinformation / advertising have no results or consequences.

And yet that seems to be much of the anti-regulation debate position, including, as I read it, yours.

My view is that speech is consequential, and that we should, with extreme care, be prepared for some regulation.


Not arguing one way or the other for these regulations, but I think dismissing the negative effects of this type of online content is a huge mistake, as it does cause real, sizable problems. ISIS probably wouldn't have been able to exist without their propaganda videos. Multiple mass shootings had direct connections to the perpetrators' radicalization in online forums. Fake, foreign-state-sponsored news had an impact in democratic elections from the 2016 US presidential election to Brexit.

It's one thing to argue these regulations will have negative (unintended or otherwise) consequences, but it's quite another to just pretend the problem doesn't exist.



On ISIS and other “bad things” you’re probably right to some extent, but what about all the good things that wouldn’t exist either? Awareness of police brutality, corporate corruption, country-wide revolutions, marginalized people’s rights, an individual blossoming in an online community. The list goes on; there is a strong argument that this type of regulation will cause “takedown by default” for all content and push the good and the bad to the fringe.


I absolutely agree, but as you point out you need to honestly look at both sides of the coin, not pretend that one side of the coin has no negatives at all.


> In the past few years, articles and publications have regularly claimed that these topics and videos are now basically mainstream. After deep diving into Instagram, Tumblr, or Twitter to see if these claims are real, I'm going to say they are, and this is very far from healthy.

If articles and publications on these topics are basically mainstream, I imagine you wouldn't have to do a deep dive to find them.

IMO the fear of videos of extreme violence is just a rehash of the 1990s culture wars over violent video games and movies. There is no rise in violence or extreme acts around the globe despite availability. Certainly not enough to justify a ham-fisted centralized censorship attempt.


While violence or extreme acts may not be on the rise, what about depression and anxiety? The victim may be the person exposed to the content, not who they in turn harm.


If you're making a claim that availability of extreme content on the internet is leading to depression and anxiety, please provide a source.


I'm not making the claim; I'm raising the question of whether more research is needed to determine if exposure to extreme material increases the level of depression and anxiety a subject may experience.

I am simply responding to your statement, which seemed to conclude that violence and extreme acts are the only measure of whether something has a negative or positive impact on society. I do not believe it is the only measure, so I posed the question.


That's not the point he's making. There are other consequences than 1:1 repetition.


> We need some regulations;

But the regulations are already in place. In the UK especially there are very strict and broad-range hate-speech laws. And most of the content referred to as problematic already breaches the ToS for the platforms mentioned. And evidence suggests those companies have already been pouring resources into trust and safety teams to detect and stop such content.

Getting rid of "problematic" user content on social networks for the masses is a very hard game of whack-a-mole as people quickly adapt their way of sharing content when it's being blocked. Ultimately you'd have to destroy the value of the social network altogether to ensure you block all of it.

This is just going to give legal power for government PR campaigns whenever a particular "problematic" opinion gains too much traction, since it's so vaguely defined that any website with user content could be penalized at any time at an official's discretion regardless of context and whether it's true hate speech or not. All the actual problematic crap (real hate speech, actual abuse/mutilation videos, etc.) will never gain that kind of spotlight and will continue to find new ways to circulate faster than it's stopped.


A place to start is with advertising content regulations. Not just what is shown, but how often. Prevent ad-stalking from following people everywhere they go.

Also, the categorization of people through these platforms is a mass psychological experiment and maybe it should be treated as such.

Finding what pushes people to engage with a topic is often the same as finding their fears. Feeding them with a barrage of imagery and headlines that provoke and prod these fears is a recipe for disaster. AI has proven exceptionally good at that.

Profiting from all of this is morally bankrupt.

The saddest part for me is that these platforms are useful but easily replaceable. They are optional tools in our lives but are treated as societal infrastructure. I deleted social media over 2 years ago and I don’t see what all the fuss is.


Why ban them though? We could just tag them instead, prevent people who don't want to see them from seeing them.

AFAIK there are subreddits like (I'm making this up but it's probably something similar) r/PhotosOfDeadPeople and r/gore - but come on, you don't visit such a place accidentally.

This move is pure government censorship and/or security theater (the politicians want to be seen as doing something).


I agree, and here's what I think (optimistically) could come from regulation: surely making these views and opinions less discoverable will help them become less mainstream?


In the early 2000s, extreme violence was indexed on search engines. You just had to actively search for it.

The fact that you had to "deep dive" shows that you have to actively search for it so there goes your argument for more regulation.

Also, there is a difference between regulating and banning. Why are you conflating regulation with censorship? Those are two different things.

And why is showing real violence "far from healthy"? One of the reasons why I didn't join the military was seeing what actual violence looks like. I think it is healthy to show what is really happening.

Maybe if we showed everyone the truth rather than PR packaged fake propaganda, we'd be less inclined to support wars ( territorial, drug, etc ).

If your logic is applied, then everything from porn to rap music to movies and literature needs to be banned.

Why do you and a few members of the British government get to decide what people can or cannot see based solely on your subjective opinions?


[edited for tone]

The proposition that something requiring a "deep dive" is now "mainstream" seems internally inconsistent. Also the proposition that offensive content being available on the internet following such a deep dive is "very far from healthy" needs a citation.


"deep diving" - my term for stepping out of the "AI" preferred content social media would normally show to me. Check trending topics, follow them deep down into popular groups, hashtags, etc.

> > and this is very far from healthy.

> Citation needed.

I still believe in that bright future pre-cyberpunk science fiction was promising us. Citation enough?


> I still believe in that bright future pre-cyberpunk science fiction was promising us. Citation enough?

Not to impose your will against free speech, no.


Free speech doesn't mean you can literally say anything...


It does.

Until you start redefining "free" with State power, that is.

Either way, you're never conceptually "free" from consequence, official or otherwise.


Depends, personal freedom or more of an ethical freedom?

At some point, unlimited personal freedom begins to look like taking away the freedoms of those you're targeting.


Weird how Zuckerberg is actually pushing for this kind of action: https://www.cnbc.com/2019/03/30/mark-zuckerberg-calls-for-ti...

Makes me smell a rat. Perhaps a world in which Facebook can delete whatever posts they choose at the drop of a hat citing vague 'issues' is one they actively want?


Can we all please stop electing politicians who understand nothing about the Internet?

This runs in the same vein as the "terrorism filters" that the EU plans to introduce after the "upload filters". They want to revert public discourse to the "old" model with newspapers as moderating gatekeepers and to kill independent (and free) venues.

This is not about terrorism, porn or "national security", this is about re-entering the Middle Ages. And the problem is that thanks to the ultimately arrogant "disruption" culture the big tech companies are in a defensive position now and not an offensive one.


Well at least we're leading in something in the UK still: craziness.


There is always 'censorship' in operation. Either overtly when things are not allowed or covertly where some things are simply not aired, people not given visas, opportunities, blacklisted and generally isolated. The problem is when people take an absolute and utopian position pretending these are actually not in effect when they always have.

Perfect examples are hate speech and incitement to violence, and even things like nudity and pornography. Most countries have laws against these. For instance, most Islamic extremism videos and content are already banned and taken down rapidly; preachers and other known purveyors of hate are not given visas and are not allowed to air their views. That's clear cut and no one has a problem with it.

Since most ideologies and religions have an extreme element they will propagate extreme views and incite hate, and the question then becomes if you are comfortable having social media full of people openly spreading hate against religions, race, sex or any other subgroup.

And it's here we must explicitly beware of double standards, and of some people supporting some kinds of extremism opportunistically and muddying the waters under the pretext of free speech because it doesn't threaten them personally.


Are we entering an age when we will in effect end up with geo-fenced content filtering firewalls built on the back of that lovely catch-all, "political correctness"?

Look at the Great Firewall of China: how was it perceived initially, and how is it perceived today? For some the perspective is still the same; others see it as a more upfront approach compared to how we are in effect seeing governments implement comparable systems by forcing filtering and content blocking/restrictions upon private companies.

Whilst we are all for free speech and openness, we see governments all around the world curtailing those areas on the back of laws and regulations that are born out of a small minority and edge cases.

After all, sites get blocked on the net because 0.01% of their content falls foul of some law or regulation, and that is classed as fine. Yet transpose that same mentality onto people and cries of discrimination and racism jump right out at you.

Are we seeing the majority of people being sleep-walked into being firewalled on what content they can see and what they can say, akin to the Great Firewall of China? Sadly, I think we are.


The act is illegal. Should we penalize police for not being able to stop this action?

How is the information about the act illegal?


Expecting police to prevent all crime is fantastical. Police generally serve to enable punishment, deterrence and incapacitation of criminals.

Furthermore, there is a distinction in motivation between propaganda and journalism, but the information itself is unaware of this. If a terrorist beheads someone, then sends a video of it to a journalist who publishes it, is the journalist simply reporting the news, or are they aiding terrorism? Certainly, the video may influence the audience, and cause fear, as the terrorist desires. How we can, or even whether we should solve issues like this is a polarising topic.


so what about websites that only deal in "harmful" content?


nothing compares to the traffic social media receives.


The UK does it and the media spins it as "penalizing social media for harmful online content". China does it and the same media spins it as "oppressing, tyrannizing and controlling the people".

Just like when Germany passed censorship laws, the media praised it; then Russia copied the German censorship laws and the media spun it as "tyrant oppressing and controlling the Russian people".

So is Romeo and Juliet going to be banned because it "encourages" suicide? What about suicide/euthanasia documentaries which support doctor-assisted end-of-life programs?

What about books (the Bible, the Koran, Mein Kampf, etc.) that encourage violence? Is Project Gutenberg going to be shut down? What about all the media companies who supported wars in Iraq, Libya and the Middle East? Are they going to be shut down for encouraging violence?

We have government agencies that spread fake news and disinformation. Is that going to be banned? What about PR firms whose business is spreading fake news and disinformation?

And what about children accessing inappropriate material? A Christian or Muslim parent thinks that LGBT material is inappropriate. Does that mean we have to scrub the internet of LGBT material? An atheist might see religious material as inappropriate for children. Do we ban religious material from the internet?

Whenever politicians use "children" as an excuse to justify more laws and regulations, it's rarely about protecting children. These people don't care about children. They care about power and control.

Once again, the UK is at the forefront of censorship.


[flagged]


Indeed.


[flagged]


If false news is incapable of causing harm, then surely no amount of false news telling people “it is a problem” could possibly be a problem that causes anyone any harm?


"If [strawman] then [you are wrong because your strawman is stupid]" is not a persuasive argument.


I have seen a lot of people mock the idea that “fake news” is a problem. I might be accidentally nutpicking, but it’s no straw man.


And I would be among those mocking the idea, but that's because I think:

1. The supposed harms of particular lies are frequently overblown.

2. The outright lies that have the most potential to do damage to society are those that get peddled by the mainstream media and believed by huge swathes of society, but the conventional "fake news" narrative of progressives focuses instead on obscure dedicated fake news sites run by Russians.

3. The proposed cures for this supposed evil of "fake news" all involve giving large institutions the role of determining what is true and censoring all dissent. That is a recipe for a society whose beliefs are far more divorced from truth than the one we currently live in, not less.

None of that is equivalent to the claim that literally no harm is ever done by lies in the news. I don't think anyone believes that, and I think you're badly misreading those of us who are contemptuous of the "fake news" narrative if that's the interpretation of our views you've walked away with.


Can we classify Antivaxx content as one of these, please


This is the end result of headlines such as "Facebook elected Trump", "Facebook caused genocide", "Facebook is at fault in New Zealand", "Facebook caused Brexit".


When you don't like what you see in the mirror, it's easiest to just blame the mirror.


But if repeatedly looking in the mirror actually causes you to become more vain, or develop body dysmorphic disorder, then the mirror itself might not be entirely healthy. Cause and effect in societal matters are complex; there are usually no simple fixes, but that doesn't mean we should never try. Whether this particular attempt will be successful, I doubt though.


If the mirror is harmful, you can stop using it. You don't need to cripple mirrors for everyone else.


If people knew when to stop, we wouldn't have addiction problems. If people knew what was bad for them and could stop, well, that would be a very different world.


Who is to say that society has a body dysmorphic disorder? Are governments such trusted doctors? Should a government control a society's self-image with an airbrush? If one doesn't believe that individuals are responsible enough to govern their own communications, then perhaps so. But I believe that the best society-doctor is society itself, and given past and current efforts, I am skeptical of the sincerity of governments when they do pull out their airbrushes. Mass-murder-selfies are obviously evil, but what about mentioning factory air pollution? I know of one government which will readily airbrush over such online blemishes, and if these two seem ridiculously far apart, remember that wedges start thin. The latter example raises what I believe to be the main point of government airbrushing - to better its own mirror-image, rather than for the health of society.

Speaking of censorship, why has my comment been grayed out?


The current debate about tech industry / social media regulation is really frustrating to me, both on HN and elsewhere. First, there are a lot of people calling for regulation without thinking through the consequences. I see a lot of right-wingers and even libertarians thinking regulation will force tech companies to allow more racist etc content. That is not going to happen. Regulations will make social media companies be even stricter on "extremist content", whether of the Islamist or white racist variety.

Second, people here refuse to engage with the arguments of those proposing regulations. There are real problems that we will not be able to brush off. On the internet, "think of the children" has long been a sarcastic phrase. In reality, faced with a choice between allowing, for example, pro-ana content that encourages teenagers to starve themselves to death, and annoying 25-year-olds who want a free internet, most people will choose regulation.


Whatever pro-ana content is, there have been books about starving yourself, building bombs, committing genocides and whatnot since printing was invented.

It is not the platform's or the internet's responsibility if a minuscule percentage of teenagers starve themselves to death because they watch a video. If a teenager isn't equipped to avoid starving himself to death because someone on the internet tells him to do so, it is the parents' fault, and the parents' responsibility to keep that teenager off the internet and in a psychiatrist's or psychologist's office.


> it is the parents' fault, and the parents' responsibility

The idea that parents should have such sole responsibility over their children is one extreme of a spectrum of opinion. UK government policy tends to be on the other end of that spectrum, with the opinion that children are the state's responsibility, with forced adoption being used much more readily than the rest of Europe, for example.

It's also hard to reconcile the idea of children's mental health being the sole responsibility of parents with the causes of childhood mental health problems very commonly being abusive and neglectful parenting.


I agree, that's how the UK does it. It's a communist view of the State though, and it's awful.

Children should be educated and protected by their parents and family, not the State. First, because every kid is different, and it's unthinkable to create a catch-all solution that is good for everybody. Second, what is the State good at? Anything that is done at the government level is about 100 times worse than private businesses, and 1000 times worse than what the vast majority of parents can do. You don't want the State deciding what you are allowed to look at on the internet; if anything they would do more damage than anything else.

I also don't get why people have to freak out about everything nowadays, as if kids are completely helpless against the outside world. When I was a kid there was all kinds of extremely dangerous stuff going on, and as kids we got warned by our parents, and everyone is still alive.


> and as kids we got warned by our parents and everyone is still alive.

Not every child has this though. Parents vary greatly in competence and ethics, and virtually every society and government recognises children as having rights separable from their parents' and draws a line somewhere as to what parental conduct is acceptable.

In addition to not being granted every responsibility, parents often don't want every responsibility. Most people in western societies expect the state to provide education, and some level of public security.

That said, I agree that the UK takes this to an authoritarian extreme.


Yes, and given that the UK draws the line so far from where it should be, I think most parents should feel like the State is way beyond its responsibility and is attempting to control its citizens.


Most of the world (with the US as one of the most notable exceptions) has signed on to the rights of children and they have a right to a safe and sound upbringing. When the parents fail to do that either due to drugs (including alcohol), religion or general neglect/disinterest the child doesn't lose that right.


I don't see how that's related to what we're talking about--except as an attempt to take an even more extreme set of cases to justify the need for regulating what everybody can and cannot read online.

In the extremely rare cases where parents aren't able to take care of children, new guardians might be assigned by the government, but then they would take on the same responsibilities as the original parents, including not letting a kid starve himself to death because of an internet video.

If you mean that the State is justified to intervene in some cases sure--but it depends how extreme these cases are. Certainly not what people post on the internet (on top of already existing laws that regulate dangerous behavior).



