Public institutions tend to observe the same policies, even when there's no direct commercial interest.
For example, if you join a Math department and start publishing papers that cast the department's research interests in a negative light, that's often a good way to get fired.
I mean, seriously, go ask just about any professor of any topic at any university if their field is undervalued or overvalued. They'll almost universally tell you that their field is undervalued, underappreciated, and more important than people think. They deserve more funding, and you should definitely consider majoring in their field.
My understanding is that Google is pressuring its employees (who are research scientists) to refrain from publishing papers that Google management believes cast a negative light on a Google product. It's not theoretical: researchers are quoted in the article, and papers have been altered.
I find it difficult to come up with an analogy from a public institution. What is the product the math department would be pressuring its members to protect?
Sure, each department in a university believes its work is important. That doesn't seem even remotely similar to this issue with Google.
I'm having trouble getting a clear perspective on what's going on. So many of the descriptions are vague and based on apparently informal descriptions, leaving much to the reader's imagination.
> It's not theoretical, researchers are quoted in the article and papers have been altered.
In grad school, PhD students' advisors typically insist on various revisions before publishing a paper, as publications reflect on the advisor and their institution. So there's nothing even slightly weird about Google having the same interest in revising the papers pushed by its researchers.
Unless there IS something weird about what Google's doing? But if so, what is it?
When a graduate-school advisor provides feedback on a paper, the goal is to improve the quality of the paper. The peer-review process has the same goal in mind: produce a better paper.
According to this Reuters article, Google's new process happens after peer review and Google's other processes have completed.
"The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as disclosing of trade secrets, eight current and former employees said."
Instead of publishing the paper, Google will now review it with an eye to the negative impact it may have on existing Google products (or lobbying efforts, etc.). Google isn't doing this to improve the quality of the paper; they are doing it to protect their business interests.
"For some projects, Google officials have intervened in later stages. A senior Google manager reviewing a study on content recommendation technology shortly before publication this summer told authors to “take great care to strike a positive tone,” according to internal correspondence read to Reuters."
I think this is very different from the advising process in graduate school.
Preventing people from disclosing trade secrets seems fair to me. Preventing valid research simply because it may negatively impact business strikes me as less reasonable.
"Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of potential technology harms."
> I think this is very different from the advising process in graduate school.
All of that sounds normal to me. Including filtering out trade-secrets, which is completely normal when working with trade-secrets in grad school too. Additionally, it's completely normal to filter out intellectual property you might plan to patent; confidential information; proprietary industrial information; information protected by law; dangerous findings (e.g., hackers often omit details of an exploit until the relevant vendor has had time to fix); and a few other categories.
Maintaining a positive, constructive tone is also completely normal. For example, failed experiments are typically described as progressive steps toward an ultimate success; unforeseen problems are discoveries; and major issues are seen as research challenges to be overcome. Or, ya know, stuff like that.
I mean, is that all this story's about? Because if that's it, then it seems like nothing substantial. But if that's the case, why is this in the news?
> I find it difficult to come up with an analogy from a public institution. What is the product the math department would be pressuring its members to protect?
Couldn't the exact thing that happened at Google happen at a university? For example, one researcher could publish a paper criticizing the methods that other researchers in the department have developed because of their carbon impact.
I think it's fair to say that there are internal pressures not to do that: such a professor would have a hard time thriving in the department if they attacked the methods of their colleagues.
Sorry, I'm having trouble following this comment. In a public institution (as well as at Google), research papers are subject to peer review before publication. Now Google is adding _another review process_ after peer review; I don't believe this is something that would happen at a public university.
Do you have an example of a public university censoring research papers that passed peer review because that university believed the paper cast "an inaccurate negative light"... on what, exactly?
The tone of the parent comment was "Google should tolerate research that's critical of Google", which I'm sympathetic to.
My rejoinder was that if it's inaccurately critical of Google, like hyping carbon impact without mentioning a decades-long carbon mitigation program, I get a lot more sympathetic to Google's position. Why should they pay someone to spread falsehoods about them?
The "mitigation" in question is buying carbon offsets (I mean, there are improvements in DC efficiency also, but those only do so much, and language models ballooning 100x isn't going to be fixed with 10% or 50% efficiency improvements). For the moment, "carbon neutrality" is only achieved through the purchase of energy offsets.
That doesn't mitigate. It offsets. Don't get me wrong, it's still better than nothing, but it's not a mitigation.
What is the product the math department would be pressuring its members to protect?
I’m not sure about math, but in physics it would be string theory which has been a dead end and has mostly served as a welfare program for boomer scientists
The general proposition may be true, but how can a math publication place the department's interests in a negative light? Like "Doesn't matter how hard we try, there are always undecidable facts in our own system"? :p
I still have faith in the research done by hard science research teams.
> how can a math publication place the department's interests in a negative light?
With Math, the sensitive topic is utility. Math departments would tend to be upset by someone pushing papers that, say, call attention to the lack of social benefit relative to other fields of research that taxpayers could be funding instead.
I guess a researcher could also spin a narrative about mathematicians' contributions to cryptography, examining the negative consequences enabled by stuff like Bitcoin.
---
> I still have faith in the research done by hard science research teams.
That gets kinda complicated.
I mean, hard-science teams often rely on their expertise, as it's their unique source of value: if the area they excel in is shown to be sub-optimal, admitting it would mean losing their current career path (and often taking a significant tumble down the ladder after transitioning).
So hard-science teams are often largely academically honest in what they do publish, though they're often biased toward casting what they do in a positive light.
For a common example, have you ever met an older computer tech who insists on using a legacy, obsoleted technology because it's what they know best? A lot of academics basically do the same thing.
Mathematicians are pretty open about the fact that their field is unimportant and their research funding is coasting off of winning WWII with Enigma codebreaking.
The only reason they deserve money is that they teach mandatory calculus to uninterested non-mathematicians.
>They'll almost universally tell you that their field is undervalued, underappreciated
This is most likely due to an availability bias. They have concrete examples in their own sphere but not nearly as much detail about competing domains. This leads to the mistaken assumption that their own field is more important than it would appear if viewed in the overarching context.
This sounds like the typical false equivalence about "all media being lies" that comes up when somebody raises the danger of outright, constant propaganda like OANN.
Just because bias is impossible to avoid, does not mean that all bias is the same.
The goal isn't the outright elimination of bias (which is completely impossible); it's ensuring that incentives are aligned so bias doesn't become a self-reinforcing feedback loop.
Before publishing, Assange worked with established reporters and US officials to ensure WikiLeaks' documents would not contain information that could be used to harm individuals.
The unredacted leak was put out by an unrelated third party (Cryptome).
So even if the unredacted documents did cause harm Assange would have zero responsibility for those outcomes.
Whereas we don’t need a citation for the “lots of decent, innocent people” killed by trigger-happy US troops in Iraq and Afghanistan, thanks to Assange that’s a historical fact.
Strangely, Assange's revelations haven't changed anything. Had they been the bombshells promised, there would be people upset, changes made. Look how different Snowden revelations were and how the public responded. Night and day, and for a reason. Freedom fighters and rabble rousers are completely different types of people with different motivations and results.
There's zero evidence of torture and murder inside Chinese political prison facilities. Offenders simply disappear. Probably to all inclusive resorts since there's no evidence otherwise.
Actually, you may be unaware due to the lack of or low-key reporting in the MSM, but Assange didn't actually leak that information; David Leigh leaked it in his book about Assange by publishing the password (which Assange had made him swear to absolute secrecy about as a condition of giving it to him) to the encrypted unredacted document store. Then John Young published the decrypted unredacted documents on Cryptome, the day after which Wikileaks also published them in order to warn anyone involved that they had probably been compromised by the earlier leak. This has all come out during his extradition hearing, and is uncontested by those directly involved. It also came out during the hearing that the USA could not provide evidence of a single person being harmed as a result of the leak, despite spending millions on an investigation.
For users wanting to prevent their ISP from sniffing around, Tor works as intended. Against advertisers it also works decently, as a self-cleaning browser that constantly changes its IP address.
For developers and sysadmins who want to get an outside look at their own services or investigate third-party websites (like fraudulent lookalikes), it works pretty effectively, with some caveats.
It also works mostly fine against national and ISP firewalls that are intended to censor citizens and lead people away from places the state has declared unsuited for its population.
Against police forces it seems mostly to work as a free tool that gets used by criminals as something better than nothing, but with some larger caveats, and the police have cases from time to time where they have identified criminals (through either good investigation or parallel construction, depending on who you ask). The Tor browser has also not been immune to malware.
Against national-level intelligence agencies, "citizen scores", and whistleblowers employed within such agencies, the protection granted by Tor may be very far from 100%. Nobody recommends depending on Tor against that threat model.
>> It is not recommended by anyone to depend on Tor against that threat model.
That depends as much on the use case as the threat. Traffic-analysis attacks require traffic. Short-burst communications via Tor (chat/email/bot control commands, etc.) are not traced as easily as large file downloads or random web browsing. Attacks on the client (malware) are also very hardware-dependent. A target using the same Tor client on the same hardware regularly is a softer target than someone connecting randomly via a variety of devices.
The NSA (or FSB/FBI/CIA et al.) are not SHIELD. They operate in the real world, with real-world physics and math. If they did have reliable and simple backdoors into Tor, we would have heard about them by now.
How do you figure we would have heard about it? I mean, the only reason we know they can break RSA 50% of the time was because of Snowden, and that was like 10 years ago or so.
I mean, these people are really good at keeping things secret. I remember reading books written in the late 80's that still said the first use of computers was calculating artillery tables, not codebreaking.
> I mean, the only reason we know they can break RSA 50% of the time was because of Snowden, and that was like 10 years ago or so.
Edward Snowden's revelations were about seven years ago, and did not include anything about the NSA breaking RSA encryption or signatures 50% of the time or any other amount. Who knows where you got that from, but not Edward Snowden.
> I remember reading books written in the late 80's that still said the first use of computers was calculating artillery tables, not codebreaking.
That would be because it was true. The purpose of the Difference Engine and of early mechanical calculating machines that were actually built at the time was construction of tables.
Colossus (which was used for breaking Lorenz) is an early electronic computer, but certainly not the first such computer and it isn't a stored program computer (to change what Colossus does it's necessary to physically disassemble it) so it's not actually part of the lineage of stored program computers we use today.
The Ultra Secret was published in 1974 - after that point the fact that Colossus existed and everything else about war work at Bletchley was not a secret. So Ultra was kept secret for just over thirty years.
> Against national-level intelligence agency, "citizen scores", and whistleblowers employed within such agencies, the protection granted by tor may be very far from 100%. It is not recommended by anyone to depend on tor against that threat model.
Are there any alternatives then, that do work against this threat model? It seems like a lot of the real need for such a tool is for journalists and activists who do need protection against national-level threat actors.
I think you misunderstand. For such adversaries, Tor is good enough for what it does, but not sufficient. You probably want something like TAILS as part of a whole package of serious real-world OpSec.
>It also works mostly fine against national and ISP firewalls that are intended to censor citizens and lead people away from places the state has declared unsuited for its population.
Can't most countries just block all Tor traffic? Russia does this as far as I know. If you're the kind of state that would have a national firewall, why would you let your citizens use Tor at all?
Sort of. There are transports that make Tor traffic look identical to generic HTTPS traffic etc. So you can filter based on endpoints, but that's hard to do for unlisted bridges and the like. In terms of exits, most countries prefer not to block them.
It seems that a lot of such blocking is done with relatively low effort by those tasked with implementing it. Examples include the UK porn and piracy filters, but also a bunch of eastern states with "whoops, you entered a bad place" firewalls.
I would speculate that the purpose of those is not to be perfect blocks but rather to be a method of molding and redirecting citizens toward what the state wants.
I think it remains the best in class for private browsing. They have to make difficult trade-offs that achieve acceptable levels of performance while not leaking metadata like a sieve. They do also have a good track record of handling security vulnerabilities.
For the average user, the greatest threat is actually everything outside the Tor browser. For example, downloading certain files using Tor, then opening them in another application that leaks your address to other parties (e.g. certain video players). The chance of this happening might be a lot higher on a Windows system. Another big mistake is funneling unsanitized traffic through a Tor SOCKS proxy, because many applications leak their addresses.
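One concrete form of that leak is DNS: an application pointed at Tor's SOCKS port can still resolve hostnames locally, announcing every site you visit to your ISP. As a minimal sketch (assuming the `requests` library with its PySocks extra, and Tor's conventional local SOCKS port 9050), the proxy URL scheme decides where resolution happens:

```python
def tor_proxies(leak_dns=False):
    """Build a requests-style proxy config for Tor's default SOCKS port.

    socks5://  -> hostname resolved locally, leaking DNS lookups to your ISP
    socks5h:// -> hostname resolved through the proxy (Tor), no local DNS leak
    """
    scheme = "socks5" if leak_dns else "socks5h"
    addr = f"{scheme}://127.0.0.1:9050"
    return {"http": addr, "https": addr}

# Usage (not run here):
#   import requests
#   requests.get("https://example.com", proxies=tor_proxies())
```

The one-character difference (`socks5` vs `socks5h`) is easy to miss, which is part of why hand-rolled "torified" tools leak where the Tor Browser does not.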
It's also worth mentioning that Tor still allows plain HTTP between the exit node and the destination website, so an ordinary user may not realize that they might be sending plaintext data.
For people who may be targeted by governments, those scenarios are vastly more complicated and depend on how much of a prize you are. Tor's strength lies in numbers and in the uncooperative relations between certain countries. There will certainly be more traffic-analysis-based attacks.
There are some ways to mitigate some of the threats that you mention. Using Qubes or Whonix could prevent network access to other programs. The unencrypted requests can be blocked by turning on the EASE option in the HTTPS-Everywhere preferences. Tor doesn't have any way to protect against global adversaries performing timing analysis or attacks though.
It is though. Add HTTPSEverywhere to the toolbar using customize, and you will get the option to enable "Encrypt All Sites Eligible". Working as of Tor Browser 10.0 (ESR 78.3)
Version: 2020.8.13
Rulesets version for EFF (Full): 2020.9.14
Rulesets version for SecureDropTorOnion: 2020.7.30
It depends on how you use Tor. For browsing, you will essentially remain anonymous forever unless you do something that can connect you between sessions, like logging into some user account. This excludes side-channel attacks and an adversary which controls a large number of nodes or is able to listen to a lot of the global network traffic.
It's different for people who operate hidden services. They are always online, and it is easy to tie one session to another, because each session will always be tied to the service they are running. This means that, over time, an adversary will be able to identify the service even with control over a small subset of nodes. You can read more about the different ways this can be done here: https://www.hackerfactor.com/blog/index.php?/archives/896-To...
A huge caveat regarding the comment that said general browsing is OK:
Browsing with JavaScript disabled (globally, not just for some sites via NoScript etc.) is considered generally safe if browsing hidden services (ignoring traffic-correlation attacks, an adversary knocking nodes offline to increase the chances that your Tor circuit will use a guard and a relay node that they own, and other tricks).
Browsing the clear web, however, is a rather different matter. Because exit nodes are a mixture of honeypots, servers run by kind-hearted volunteers, servers run by three-letter agencies, and corporately sponsored servers, exit traffic to the web should be considered at a 'roll the dice' level of probability.
Consider the example of person XYZ, who is under active investigation or for whom there is a need for parallel construction. At (timestamp), person XYZ activated a new Tor connection. This sort of info can be obtained from logs from your ISP, from any data centre, or from any point along the connection between your building and the guard node. OK, so what, right? Agreed. However, when correlated with person XYZ also having logged in to something (or Googled 'bad stuff keyword', visited a site while using a DNS server that logs queries, logged in to social media, sent an email, connected to IRC, etc.) at (timestamp), the 'so what' rapidly risks becoming rather more than a face-palm-level problem.
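The correlation step described above is mechanically simple. As a toy illustration (entirely hypothetical timestamps and log sources, not a real investigative tool), an investigator only needs to match "Tor circuit opened" events from ISP logs against activity events from a service's logs that fall within a small time window:

```python
def correlate(tor_events, activity_events, window=30):
    """Return (tor_ts, activity_ts) pairs whose timestamps (in seconds)
    are no more than `window` seconds apart."""
    hits = []
    for t in tor_events:
        for a in activity_events:
            if abs(a - t) <= window:
                hits.append((t, a))
    return hits

isp_log = [1000, 5000, 9000]       # seconds: Tor connections seen by the ISP
service_log = [1012, 7000, 9025]   # seconds: logins seen by the service
print(correlate(isp_log, service_log))  # -> [(1000, 1012), (9000, 9025)]
```

A single match proves little, but repeated matches across many sessions quickly shrink the anonymity set, which is why mixing Tor sessions with identifiable activity is so dangerous.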
Let’s take a look at a real-life example of someone who emailed a fake bomb threat at a US university: https://nakedsecurity.sophos.com/2013/12/20/use-of-tor-point... Spoiler alert: the fact that it made the news sort of tells you already that it didn’t end well for him.
Bear in mind that as soon as you turn off JavaScript, you begin to stand out from the crowd (the Tor FAQ has a whole section on browser fingerprinting).
It seems to me that most de-anonymizing attacks used human operating errors, physical attacks like snatching a laptop with an open tor browser window from a user, or side-channel attacks based on malware like Finfisher.
Running it from Tails seems pretty secure...but, in the end, who does that consistently?
Also, it just doesn't seem very likely that the US DoD would fund a network which defeats their own surveillance efforts.
Being "anonymous" online is more a question of being anonymous from whose perspective. Fooling a sysadmin is easy. Fooling an ISP is hard. Fooling an NSA contractor is probably near impossible. I think you can achieve reasonable plausible deniability with enough inconvenience, though: get rid of your smartphone, compartmentalize your activity, never enable JS, use public wifi, spoof your MAC, make a tinfoil hat, etc.
But a country’s policy is decided by the citizens, if you live in a democracy.
Facebook’s policy is decided by its CEO and a few others. Citizens/users have no say.
There’s a big danger in shifting most of the public discourse to private platforms. We already have many examples; YouTube banning anti-Erdogan keywords is just the first that pops to mind.
I never said it’s OK for the government to control speech undemocratically like happens in China, maybe I didn’t explain myself well.
My point is: I would prefer the government decides democratically what can and cannot be told. And I am fine with a very liberal standard, in which almost everything can be told.
I am pretty much a free speech absolutist. Everything can be said in my opinion.
But, if limits are imposed by a democratic government they will at least follow some democratic standard. Limits imposed by private companies will only follow the money!