I can imagine a Neal Stephenson book which makes a number of 50 year jumps into the future and explains how this ceremony becomes more and more religious in nature. Toss in a few dramatic changes based on either fanciful ideas of new ways to compromise the process or in reaction to actual attack attempts (or successes).
> The only way to move information from the outside world into the laptop/HSM is via USB drive. Accordingly, the key-signing request is loaded into the laptop via USB.
Hmm, how sure are they that the OS only allows USB drives (as opposed to keyboards, network controllers, etc.), and contains no exploitable bugs? Is an image of the media the laptop was booted from publicly available, and has it been audited?
I almost feel silly asking, given all the time spent designing this and how smart the people are, but after all that trouble it seems amateurish to use USB...
I too do not know enough to not feel silly for asking this, but I have a similar feeling, and I am not sure if I have misunderstood the past year's worth of popular articles on USB security, the risk posed by the USB in the context of the ceremony, or both. I had gotten the impression that the firmware vulnerabilities of USB flash drives had proven to be inherent, unavoidable, and utterly devastating; that, basically, any given USB drive could become a general-purpose bus-querying hostile computing device, if once plugged into a machine controlled by a capable and motivated adversary.
If that understanding was correctly received, and given the strength of potential motivation involved here, my thought is that the problem presented by the USB drive could only be combated by (a) ensuring that the USB drive was obtained in a random enough fashion that vast numbers would need to be deliberately compromised with this specific target in mind for that approach to be feasible, and (b) ensuring that the USB drive was neither ever unsupervised nor connected to any USB port (except one on a "known-to-be-clean" computer) after the selection occurred.
The problem that I see with those measures is that if compromising the laptop with a compromised USB drive were possible, then the actual security of the process would be purely dependent on the security of measure (b), as established by a single actor at the ceremony: that is, whoever provides the USB with the key-signing request. This last point just by itself would seem to be a degree of risk well beyond the criteria implied by the established protocol.
> I had gotten the impression that the firmware vulnerabilities of USB flash drives had proven to be inherent, unavoidable, and utterly devastating
My understanding is that it requires cooperation from the kernel. USB itself does not automatically allow DMA, but a driver can instruct the host controller to handle packets via DMA.
So if you blacklist all device types (input devices) except the SD card and the driver for that card does not need DMA then everything should be fine because they check the signature of the key signing request. I assume they transfer it to local storage first before signing it so that a potentially malicious storage device can't pull a switcheroo.
Long story short:
- they need a USB/udev firewall;
- whitelist only devices whose drivers don't need DMA;
- verify the data *after* transferring it off the USB stick.
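A minimal sketch of what such a udev whitelist might look like. This is an illustration only, not anything the ceremony is known to use: the file name and the vendor/product IDs are placeholders, and a serious deployment would more likely use a dedicated tool such as USBGuard.

```
# /etc/udev/rules.d/99-usb-lockdown.rules (hypothetical)
# Deauthorize every newly attached USB device by default...
ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTR{authorized}="0"
# ...then re-authorize only the one expected mass-storage stick,
# matched by placeholder vendor/product IDs. Rules run in order,
# so this later match overrides the default above.
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="1234", ATTR{idProduct}=="abcd", ATTR{authorized}="1"
```

Setting the sysfs `authorized` attribute to 0 keeps the kernel from binding any driver to the device, which is the point: no driver, no DMA setup, no HID surprises.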
> I assume they transfer it to local storage first before signing it so that a potentially malicious storage device can't pull a switcheroo.
In the article they mention it doesn't have local storage. It's too bad they didn't go into more detail about tamper proofing the USB portion of the ceremonies.
> This laptop has no battery, hard disk, or even a clock backup battery, and thus can’t store state once it’s unplugged.
I don't know too much about badusb, but I think some mitigation is possible.
1. Use a USB key that doesn't have updateable firmware.
2. Configure the laptop to only accept a single USB mass-storage device on a dedicated port. No keyboard, no mouse. It should be possible with udev rules.
There may also be the threat of a malicious entity submitting their key-sign request on a malicious USB device, although the article doesn't seem to say whose USB device gets plugged in.
I agree it's probably simple to lock down the possible USB devices (ideally, don't even compile-in support for them in the kernel), but I am wondering if anyone can confirm it's been done, and I would still worry about exploits in the USB stack or hardware. It seems like this is the biggest chink in the armor, so I'd go for hand-keying from paper (a redundant encoding can probably make this not excruciating for the humans involved), another optical disc (CD-R), a floppy disk, a camera for scanning something like QR codes, a simple wired serial link, serial over infrared. I suppose these all have some tradeoffs, but at least some of them don't involve plugging in another electronic device that uses a complex general-purpose protocol just to transfer what probably amounts to a few dozen kilobytes.
I can see it now: A hacker with tiny QR code imprinted cuff links or rings. They brush their hand close to the camera, seemingly by accident, and BOOM! Buffer overflow in the QR decoder :)
The fact that a burnable CD should have no parts that can think actually makes using one for input and one for output probably safer (though you would then need the trusted application to include a CD/DVD burning capability).
As an alternative, a 'raw' interface, such as an actual programmable flash device or maybe an SD card, could work.
I thought that they have their own ARM processors inside together with (updatable?) firmware. You can buy wifi-enabled cards, for instance.
OTOH, I assume they don't have direct access to the USB bus and so can't pretend to be other devices. But that still leaves lots of room to do nasty things with the data 'secured' on it.
I thought a similar thing myself. The USB drive seems to be the weakest spot of this whole thing, especially with how prevalent USB-based infections are (Stuxnet, anyone?), and the fact that sophisticated enough malware can operate completely autonomously.
I also wonder what other things they may have locked down on the laptop itself to try to stymie key exfiltration? I assume they pulled the wifi/bluetooth chip, but what about the speakers? After all - the whole ceremony is being broadcast live, and depending on the audio degradation from the live stream the speakers may actually be stable enough for successful data exfil.
I'm also curious if they attempt to do any power cleaning/shielding to prevent any forms of tempest monitoring?
> I also wonder what other things they may have locked down on the laptop itself to try to stymie key exfiltration?
The private keys are in the HSM, a separate device (according to the article, connected by an Ethernet cable). The laptop doesn't have access to them.
The main risk I can imagine would be a compromised laptop signing a different KSR which has extra keys, and saving it in a hidden area of the USB key, while pretending to sign the original KSR (and presenting the hashes of the original KSR to the operators).
I agree. The interesting thing here is not even code injected into the laptop, but about fake writes.
The article states that Verisign provides their KSR via the USB stick; the KSR is PGP signed, and that signature is checked visually.
But what about when the DNSSEC signatures are written to the USB stick, which is then handed over to Verisign to be deployed? Are those also checked visually? Or could the USB stick (its controller logic) forge the write and silently modify the written results?
I don't understand the purpose of such an attack. The USB stick can't forge a signature (not even the laptop has the key -- it's in the HSM.) If it corrupted the signature by lying about the writes, they could just re-write on new media.
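The "transfer first, then verify" idea discussed upthread can be sketched in a few lines of shell. This is a toy illustration under stated assumptions: the mount-point and file paths are stand-ins (here faked with local temp directories so the script is self-contained), not the ceremony's actual layout.

```shell
#!/bin/sh
# Sketch: copy the KSR off the (untrusted) USB stick into RAM-backed
# storage first, then hash the in-RAM copy, so a malicious drive
# cannot swap contents after the operators have checked the hash.
set -eu

USB_MOUNT=/tmp/fake_usb_mount   # stands in for the mounted USB stick
RAM_DIR=/tmp/fake_ram_copy      # stands in for a tmpfs on the laptop
mkdir -p "$USB_MOUNT" "$RAM_DIR"
printf 'example key-signing request\n' > "$USB_MOUNT/ksr.xml"

# 1. Copy from the untrusted medium into RAM.
cp "$USB_MOUNT/ksr.xml" "$RAM_DIR/ksr.xml"

# 2. Hash the in-RAM copy; this is the value the operators would
#    compare against a hash published out of band. From here on,
#    only the RAM copy is ever read again.
sha256sum "$RAM_DIR/ksr.xml"
```

The key property is that every later read (verification, signing input) touches only the RAM copy, so a drive that serves different bytes on a second read gains nothing.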
> At 21.29, things go awry. A security controller slams the door of the safe shut, triggering a seismic sensor, which in turn triggers automatic door locks. The ceremony administrator and the keyholders are all locked in an 8ft square cage.
Okay, so this tremendously complicated and secure ceremony is performed to essentially sign a "Verisign is allowed to do whatever they deem appropriate to the root of DNS in the next six months" certificate.
This is really just security theatre, as all the power is delegated to Verisign, and it is almost impossible to detect Zone Signing Key abuse.
It's amusing that their labeled plexiglass holder for the OS DVD has OS misspelled as O/S. I believe this misspelling most likely stems from people having seen and slightly jumbled part of the name of the old IBM branded operating system, OS/2.
It is a usage that has been fairly widespread in computing literature since at least the early 1970s, pre-dating OS/2 by at least a decade and a half (possibly more). Such abbreviations punctuated by slash have been acknowledged widely even outside the realm of computing (and for decades longer) in quite a number of manuals of punctuation and style. And they're still acknowledged today. Merriam-Webster's Pocket Guide to Punctuation and New Hart's Rules give examples of slash-punctuated abbreviations including N/A, c/o, w/o, and A/C.
The amusing thing is to see it being mis-categorized as a mis-spelling based upon OS/2 of all things. You could at least have picked OS/360 or something. (-:
I occasionally saw operating system abbreviated that way before OS/2 existed, and obviously OS/2 doesn't do that, so I am not inclined to believe there's a connection.
This silly signing ceremony shows the fragility of a centralized trust model. If bitcoin/blockchain succeeds this hopefully will be relegated to the history books.
DNS is already a distributed chain of trust. I trust that the well-known FTP site will give me a good hints file to get to the root servers. I trust that the root servers will provide me with the proper NS and glue records for the TLD servers. I trust that the TLD servers will provide the correct NS and glue records for the domain I want to resolve. DNSSEC just formalizes the trust with digital signatures.
A traditional CA validates empirically that a customer controls a domain at some point in time. DNSSEC is a stronger validation of control of the domain, because it's a property of the domain itself.
Trusting the domain registry to indicate who controls a domain makes a lot more sense to me than trusting a third party. If I can't trust the DS records, I can't trust the NS records either.
A DS record doesn't indicate a connection between an organization and a domain though, which a traditional CA supposedly might.
> A DS record doesn't indicate a connection between an organization and a domain though, which a traditional CA supposedly might.
Only if you get an EV certificate, no? My understanding is that the only check required for getting a normal certificate issued is to verify that the person holding the key you're signing is in control of the domain. (Verified through methods such as setting particular DNS records, proving control of the email on the WHOIS data, or setting up an HTTP server at a particular DNS address.)
Then again, most sites just use a basic cert, so perhaps DNSSEC provides most of what is needed.
Some of the certificates I've purchased have involved verifying some details of the organization, even though they weren't EV. I believe we needed a Dun and Bradstreet number when I got a certificate from Thawte in the late 90s (although I might be misremembering; something at that company needed that number...). And a more recent issuance wanted some other proof of existence / location: they asked for a lease or utility bill, but ended up issuing based on our location in a state corporation database before I could get a copy of something they would accept. I won't disclose the issuer of the recent cert, but I would put them in the top tier of reputation (and prices).
I would hope an EV process would do a better verification, but I've never needed an EV cert, so I don't know.
DNSSEC is sort of like verifying to everyone that you control the DNS, near the time of use, as opposed to just verifying to a CA at time of issuance. Or in other words, if it's OK for a CA to trust DNS, letting everyone else trust it would be good too.
At least the concept is right; 1024-bit RSA keys are kind of scary. And DNSSEC doesn't address confidentiality, but TLS with SNI also leaks hostnames.
DNSSEC does seem pretty unnecessary at this point for security. It hands more power over the internet to fewer hands, whilst not providing any improved security.
DNSSEC only seems unnecessary under the assumption that you're already making full use of existing security measures everywhere else in the stack. If you live in a world where unencrypted HTTP still exists then it's nice to have some defense against ISPs who like to lie in DNS responses. And even if I am connecting over HTTPS through a shady ISP, I'd prefer not to send any packets at all to the wrong IP rather than wait until it presents the wrong certificate.
> If you live in a world where unencrypted HTTP still exists then it's nice to have some defense against ISPs who like to lie in DNS responses.
As tptacek and others have pointed out numerous times elsewhere on HN (and in the article linked above):
1. DNSSEC doesn't protect against ISPs hijacking DNS responses
2. TLS is easier to deploy than DNSSEC
3. TLS provides more security for the end user than DNSSEC does
3a. If TLS is used, DNSSEC provides essentially no additional security benefits to the end user.
So really, it makes sense to be advocating the use of TLS, which is what projects like Let's Encrypt are all about. DNSSEC is at best a waste of resources that could be better spent on actually securing the Internet through TLS, and at worst actively harmful (because the strongest criticism of TLS is that it centralizes trust in CAs, and DNSSEC centralizes trust even further - in a single entity!)
> I'd prefer not to send any packets at all to the wrong IP rather than wait until it presents the wrong certificate.
I'm not sure what difference it makes to be sending packets to the wrong IP. The whole point of TLS is that it doesn't really matter, because they can't read what you're sending anyway.
Also, the way the Internet works, you're always sending packets through the "wrong" IP addresses, so you should make the assumption that your raw traffic is visible to any eavesdropper (and therefore encrypt your traffic so that this is not an issue).
> 1. DNSSEC doesn't protect against ISPs hijacking DNS responses
DNSSEC protects signed zones by allowing clients to notice a suspicious lack of a valid signature on responses that should have been signed. DNSSEC doesn't protect unsigned zones, but that shouldn't surprise anyone and isn't really an indictment of DNSSEC's capabilities.
> I'm not sure what difference it makes to be sending packets to the wrong IP.
That malicious IP gets to record what kind of connection my computer was trying to make to that domain, even if the connection attempt is aborted relatively early. That's more information being leaked than if my computer had been able to determine that it got a probably-spoofed DNS response and aborted there.
Playing shenanigans with the DNS server is a lot easier than full-scale snooping and tampering on all traffic, which is why ISPs commonly do the former but the latter is usually only done with NSA involvement.
It needs to be hard for ISPs to direct all mistyped domain names to their own advertising (and in the process, implicitly pretending that the Web is the only use for the Internet) or to claim that sites they don't like don't exist. DNSSEC helps with that.
Some clients don't validate. Some do. Everything on my home network is protected because my router's instance of dnsmasq validates. When I'm away from home, there's dnssec-trigger and a Firefox extension.
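For anyone curious, enabling validation in dnsmasq is only a couple of lines. A hypothetical `dnsmasq.conf` fragment (the trust-anchor file path varies by distro, and the file name shown is the Debian/Ubuntu-style location):

```
# Hypothetical dnsmasq.conf fragment enabling DNSSEC validation.
# Load the root trust anchor shipped with the package
# (path varies by distro):
conf-file=/usr/share/dnsmasq-base/trust-anchors.conf
dnssec
# Treat unsigned answers as bogus unless the chain of trust
# proves the zone is legitimately unsigned:
dnssec-check-unsigned
```

With that in place, every client on the LAN gets validated answers whether or not it runs its own validator.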
I really don't understand why the existence of software that doesn't try to take advantage of DNSSEC is being used as evidence that DNSSEC is incapable of doing something.
> I really don't understand why the existence of software that doesn't try to take advantage of DNSSEC is being used as evidence that DNSSEC is incapable of doing something.
It's not that DNSSEC is fundamentally "incapable" of doing this, but that it's literally not what the protocol is designed to do. As noted in the other HN thread on this announcement, the DNSSEC protocol is explicitly not designed for end-user verification, and end-users are discouraged from running their own validators. There's a reason that browsers don't support this out-of-the-box and (more importantly) don't ever plan on it.
If you're concerned about people snooping on your TCP packets, DNSSEC doesn't solve that. If you're concerned about people spoofing DNS responses, DNSSEC doesn't solve that[0]. TLS + HSTS does solve that, by making it impossible to load a forged page, regardless of what DNS records were returned[1].
Again, TLS solves all the problems you describe (including the problem of ISPs redirecting mistyped pages to their own advertising pages). TLS is also supported by every browser, out-of-the-box. It's more secure, easier to deploy, and already widely used.
[0] Again, as explained in the post, DNSSEC is not designed to protect end-users against malicious ISPs. This is literally a matter of what problems DNSSEC is even aimed at solving.
[1] Notice how http://google.com (not even https://) will never redirect to a captive portal. That's not DNSSEC. That's TLS + HSTS. DNSSEC is redundant in this situation.
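For context, HSTS amounts to one response header served over HTTPS; a hypothetical nginx fragment (the max-age value here is arbitrary):

```
# Hypothetical nginx fragment. After the first HTTPS visit, the
# browser refuses plain-HTTP connections to this host for max-age
# seconds, so a spoofed DNS answer can't downgrade it to HTTP.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```

(Preload lists extend this to the very first visit, closing the bootstrap gap.)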
> [...] but that it's literally not what the protocol is designed to do.
> the DNSSEC protocol is explicitly not designed for end-user verification
> This is literally a matter of what problems DNSSEC is even aimed at solving.
Repetition doesn't make that argument any more valid. If DNSSEC wasn't intended to have this capability, it's for the same reason it wasn't intended to be used for on-the-fly signing: it was designed in the mid-'90s, when that was impractical. Nowadays it is practical, and it does in fact work just fine for this purpose. It provides an extra layer of defense in depth, stops some attacks sooner than TLS can, and provides some added security to things that aren't using TLS (because remember, there's more to the Internet than just the WWW, and many of those things don't have the aggressive upgrade cycle that Chrome uses).