It's not safe to assume the NSA doesn't log DDG searches. Look at the PRISM logo - it's a beam splitter. Read the slide, look at the "Upstream" portion.
They're logging all your URLs and headers. How much are you willing to bet they can't decrypt https? I don't understand why all the hubbub _is focused solely_ on direct server access (the bottom half of the slide) when "Upstream" access is just as big a concern.
EDIT: rephrased my concern about direct vs upstream
"How much are you willing to bet they can't decrypt https?"
I'd bet quite a bit, though not "my life", that they do not have a generalized "read everything" ability for all forms of SSL. They may have what cryptographers would call "a crack", but that's a low bar, and doesn't prove they have a practical attack.
However, DDG is currently using 128-bit RC4, which is very weak. [1] I wouldn't care to bet anything that the NSA doesn't have an RC4 cipher crack that is practical to run on wide swathes of traffic.
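(For the curious, here's a minimal Python sketch of how to check what a given server actually negotiates - the hostname is just an example, and modern Python/OpenSSL builds refuse RC4 by default, so treat it as a way to check today's suite rather than a reproduction of the handshake I'm describing.)

    import socket
    import ssl

    # Open a TLS connection and print the cipher suite the server negotiates.
    host = "duckduckgo.com"  # example hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # cipher() returns (name, protocol_version, secret_bits),
            # e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)
            print(tls.cipher())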
RC4 is very popular, which I believe is because some people claimed it was a defense against the BEAST attack. I researched this for work, and I couldn't find anyone I trusted saying that it was a good mitigation. The people I trusted merely observed that RC4 was not vulnerable, but never said you should switch to it; only secondary sources ever suggested that. My conclusion was that there was a reason the primary sources never suggested it: in response to a theoretical break of the rest of SSL, the correct move was not to switch to a cipher with more practical known attacks than what BEAST demonstrated. But now it's even sillier; BEAST has been entirely or almost entirely mitigated in browsers (there's no server-side defense against BEAST, but there is a client-side one, and browsers now have it). As far as I can tell, RC4 should be abandoned and we should resume using stronger ciphers for SSL. Anyone still concerned about BEAST should update their browser.
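(Concretely, "resume using stronger ciphers" mostly means changing the server's cipher list. A minimal sketch of the idea with Python's ssl module follows - the same OpenSSL cipher string works in nginx/Apache config, and the cert/key paths are placeholders.)

    import ssl

    # Build a server-side TLS context that offers only forward-secret
    # AES-GCM suites and explicitly excludes RC4 (OpenSSL cipher syntax).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.set_ciphers("ECDHE+AESGCM:!RC4:!aNULL:!eNULL")
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder paths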
Not without someone noticing. Some sites have pinned certs in Chrome, which would stop this, and even without that you would expect some knowledgeable techie at Facebook or Github or something to be using their home laptop and say, "Wait a sec, this isn't my company's public cert!"
Not having seen any blog posts screaming, "OMG, my site is being hijacked wholesale," I can only assume that the NSA isn't doing this (or has managed to squelch by legal order every single person privy to the real cert at MITM'ed sites, which is absurd and would raise the question: why not obtain the private key from these people in a similar way?).
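(The "wait a sec" check is easy to script, too. Here's a rough Python sketch of the idea: compare the certificate the network actually serves against a fingerprint you recorded earlier. The pinned value is a placeholder, and real pinning - Chrome's included - pins public keys rather than whole certs, but the principle is the same.)

    import hashlib
    import socket
    import ssl

    # Fingerprint recorded on a connection you trust (placeholder value).
    PINNED_SHA256 = "replace-with-the-sha256-fingerprint-you-recorded"

    host = "example.com"  # example hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)  # leaf cert, DER bytes

    fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint != PINNED_SHA256:
        print("Wait a sec, this isn't the cert I pinned:", fingerprint)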
Do they need to MITM? If they have a copy of the private key, can't they just use it to decrypt the data .. even old data for which they've only just acquired the key?
Having the root CA's private key doesn't give them access to the end entity's private keys. When you ask a CA for a cert, you only provide them with your public key (in the form of a CSR) for them to sign. The CSR does not contain the private key.
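(If you want to convince yourself of that, here's a minimal sketch using the third-party Python `cryptography` package - my choice of library, not anything from the thread: the key pair is generated locally, and the CSR that goes to the CA contains only the subject name and the public key, signed with the private key to prove possession.)

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # The key pair is generated locally; the private half never leaves this machine.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The CSR carries the subject name and the PUBLIC key only,
    # signed with the private key to prove you hold it.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())  # this is all the CA sees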
True, but they would have to do this for every single web server they would want to collect information from. Not impossible, but it'd be a lot of work.
They have to set up impersonating SSL certs for every connection they want to MITM. While there'd clearly be value in them inserting or subverting network hops between "the great unwashed" and gmail/facebook/aim servers, there's very little chance the NSA have access to hops along the path between my (Australian) adsl connection and my vps (located in Australia).
For internal US traffic (or traffic routed through the US) - while Verizon's lack of interest in protecting customer data is probably shared by the major backbone providers - I _strongly_ doubt even the NSA has enough gear hanging off the backbones to actively MITM any significant proportion of the firehose that'd represent. Even the AT&T "secret room" probably doesn't house enough gear to create fake (signed) certs and MITM every SSL connection for millions or more simultaneous users browsing every https site under the sun.
Having said that, I'd bet good money they _do_ target specific SSL traffic - has anyone checked the SSL connections to TOR entry and exit points recently? That'd be one spectacularly obvious path to try "speculative MITM attacks".
Heh, I was wondering if that might get noticed. :)
On the one hand, don't take my word for it; I also have not found anyone I trust who has verified my explanation directly. On the other hand, I did do my best to read the primary sources very carefully, both for what they say and what they don't say, and I was confident enough to implement more conventionally strong ciphers on the services I'm responsible for, so my money is where my metaphorical mouth is.
If they can decrypt some SSL (maybe low-bit), it is likely a very intensive process that requires vast hardware resources. So even if they can do it, it is unlikely to be done for all traffic, though it could be applied to some of it.
Doesn't necessarily mean they don't have the ability to decrypt; it just means they have a process in place, so in case shit ever hits the fan (like it just did) they can come back and say, "What's the big deal? We have a process in place."
That's not universally true - there's a remarkable amount of TLS/SSL-encrypted email in transit, either via the STARTTLS ESMTP command or SSL over port 465 (and 993/995 for IMAP and POP3).
I don't think there's a way to guarantee your mail always travels over TLS/SSL secured connections, but I suspect more of it does than you think.
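(For the client hop, the standard library makes the STARTTLS upgrade a one-liner - a rough sketch below, with the mail host and credentials as placeholders. It only secures your connection to the submission server, though; whatever happens on the relay hops after that is out of your hands, which is the parent's point.)

    import smtplib
    import ssl

    ctx = ssl.create_default_context()
    with smtplib.SMTP("mail.example.com", 587) as smtp:  # placeholder host
        smtp.starttls(context=ctx)  # upgrade the cleartext session to TLS
        smtp.login("user@example.com", "app-password")   # placeholder credentials
        smtp.sendmail("user@example.com", ["friend@example.net"],
                      "Subject: hello\r\n\r\nThis hop, at least, was encrypted.")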
There's a straightforward way to make sure your email is always encrypted in transit: encrypt it before you send. No promises about making sure your email can always be read by the recipient, though...
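(A rough sketch of "encrypt before you send", assuming the third-party python-gnupg wrapper, a local gpg install, and the recipient's public key already in your keyring - none of which the parent specified; it's just one way to do it.)

    import gnupg  # third-party python-gnupg wrapper around a local gpg binary

    gpg = gnupg.GPG()  # uses the default keyring
    encrypted = gpg.encrypt("meet at noon", "friend@example.net")  # placeholder recipient
    assert encrypted.ok, encrypted.status
    ciphertext = str(encrypted)  # ASCII-armored PGP message; paste or attach it in any mail client
    print(ciphertext)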
And here's the problem: email needs to be readable by the recipient, so until a significant portion of email recipients can handle encrypted mail, the NSA doesn't need to attack my encrypted email storage - enough of my correspondence ends up in cleartext in gmail/hotmail/yahoo et al.
This is a hard one to solve. GPGMail seems to get broken with every Mac Mail.app release. Vast numbers of people rely on webmail - which'd need server-side or in-browser GPG decryption. My Mom's not going to use command-line gpg tools. How the hell do we bootstrap our way up to ubiquitous encrypted email?
Hm, should be easy enough to have some browser plugin that lets you select a text/data field and recipient list field and encrypt it with the appropriate key; and to do something similar for recognition and decryption of fields.
I think there are complications though - you need to be very sure that rogue javascript can't dig around in the plugin and extract your private key. I'm not sure how securely sandboxed plugins can be.
What's the normal procedure for making a call whose output depends on a file that must be kept secret? Is there a typical OS API pattern that's seen in the various programs like ssh, scp, and so on?
I think one of the problems is that software like GPG and OpenSSL goes to a lot of trouble to make sure private keys don't hang around in memory for any longer than absolutely required - minimising the risk of having the OS preempt the executing code and write the key out to swap (or of having malicious code slurp it up out of RAM). The bare-metal hoop-jumping required to get that right might not be possible in the context of a browser plugin.
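(For a rough illustration of that hoop-jumping - a Linux-only sketch via ctypes, nothing like what GPG/OpenSSL actually do: pin the buffer holding the key material so the kernel won't swap it out, and zero it before releasing. It can fail if RLIMIT_MEMLOCK is too low, and it does nothing against malicious code reading RAM.)

    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    key_buf = ctypes.create_string_buffer(b"hypothetical key material")
    if libc.mlock(key_buf, ctypes.sizeof(key_buf)) != 0:  # keep these pages out of swap
        raise OSError(ctypes.get_errno(), "mlock failed")
    try:
        pass  # ... use the key material ...
    finally:
        ctypes.memset(key_buf, 0, ctypes.sizeof(key_buf))  # wipe before unlocking
        libc.munlock(key_buf, ctypes.sizeof(key_buf))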
More concretely, they don't have to rely on decrypting https. An approved NSA or FISA order to DDG will give the government an en clair "wiretap" on your DDG searches for up to a year. They may not be able to get the searches you did before the wiretap began, but that's all.
There's no question that they're monitoring upstream traffic. In fact they may still be doing the old ECHELON trick in which the US eavesdrops on non-Americans, the rest of the world spies on Americans (among others) - and then everyone swaps the data received.
But in the light of the PRISM documents it's even more likely than it was before that the NSA doesn't have the ability to decrypt HTTPS, or at the minimum that the US considers it too important to risk giving it away by using it on routine Top Secret signals intelligence. (And/or maybe too resource-intensive to use for that.) The strongest evidence for this is that we haven't heard anything about such a capacity yet from Snowden, Greenwald et al., who all have the full PRISM deck (along with other documents) in their possession and would surely tell us about it if they knew of it. So either 1) the PRISM slides do mention the ability to decrypt SSL or SSH streams but Snowden and the journalists haven't picked up on it (not impossible given the apparent incompetence they displayed over "direct access"), 2) it's too sensitive to mention in a self-aggrandising Top Secret overview of upstream and "direct collection" Internet signals intelligence, which probably means it's not in use (or at least not in regular use) for upstream collection or 3) they really don't have it.
A supporting reason to think that they don't have it, or hardly ever use it, is the apparent emphasis on "direct collection" in the PowerPoint. Why go to the hassle of dancing the frenemy minuet with Google and other fairly-anti-surveillance Silicon Valley firms when you can just get what you want from upstream collection at the apparently more-accommodating telcos? This isn't conclusive because even if you could understand all the traffic into and out of someone's Facebook account you'd still like to be able to see the internal state of the account, in particular so that you'd know what they'd been doing before the upstream surveillance began. But I think it's at least as likely that the whole new focus on direct collection is a workaround for the fact that, thanks to SSL and SSH, upstream collection just isn't what it used to be back in the days of ECHELON.
As the slide said, You Should Use Both: direct collection to give you access to US-company servers, probably bypassing the HTTPS problem, and upstream access to give you data, probably only unencrypted data (email!), that passes through the US without going to a US-company server.
(If you want an exotic alternative theory, you could speculate that the PRISM document is a fake, a limited hangout http://en.wikipedia.org/wiki/Limited_hangout by the US spooks, maybe precisely to direct attention away from their ability to decrypt HTTPS streams. But this now seems unlikely, for example because DNI Clapper would surely have to have approved a managed release of a set of documents that both gave away the Verizon metadata surveillance and so also implicated him in perjury.)
I can't remember which interview it was - whether on Democracy Now or in his MIT lecture video - but Bill Binney stated that the NSA in fact does decrypt HTTPS.
If Bill Binney said that, and if he is right, I'd assume the most likely explanation is that NSA can push over some low-security SSL connections of the type jerf describes above https://news.ycombinator.com/item?id=5877362 , but has to rely on "direct access" to get around most or all high-quality (but still widely-used) SSL encryption. (Or, again, that it also has the capacity to break high-grade HTTPS connections, but it's holding that back for really important occasions.)
Given the history of the gov/NSA being effective crypto gods, my money is that they're ahead on decrypting SSL and HTTPS, and even if it's not real-time, they regularly store streams from target endpoints for slower offline decryption.
I wonder why so many people believe this. Many simple and weak ciphers have been around for decades and - although cryptographers consider them very insecure - they certainly can't be decrypted in real time (!) at this scale (!).
This has been talked about many times now. All compromising a CA lets them do is create believable certificates so they can man-in-the-middle connections, but they can't be doing that for a large number of connections, because it's resource-intensive and detectable.
They still don't have the private keys of the sites if they break into the CA.
I find this quite plausible: with or without the knowledge of Page, Zuckerberg et al., the NSA might very well have these companies' private keys. I would not be surprised if the CEOs of these companies choose to remain ignorant of the NSA's methods so they don't have to lie to the public, shareholders, and Congress.
Also, given that the world's best engineers work at either high-tech companies or the NSA, some will have switched between the two, giving the NSA/CIA a head start in getting any information these companies hold through old-fashioned spy tactics.
What about Yacy? (http://yacy.net). I am not sure whether the queries inside a peer network can be decrypted as easily as http requests (I am not a networking specialist, though; it's just an opinion).