The private key is used to negotiate a session key, which is then used as the symmetric key for RC4 or whatever stream or block cipher you are using. Those session keys are ephemeral and per-session, so leaking them is only a problem for those sessions.
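To make the "ephemeral and per-session" part concrete, here's a minimal sketch of that flow — an ephemeral Diffie-Hellman exchange (X25519 and HKDF are my choices here, not anything Juniper-specific) producing a fresh symmetric key per session. The long-term private key's role in authenticating the exchange is omitted; this uses the third-party `cryptography` package and is just an illustration, not ScreenOS/IKE code:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_session_key(my_priv: X25519PrivateKey, peer_pub) -> bytes:
    """Derive a 256-bit per-session key from an ephemeral X25519 exchange."""
    shared = my_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"demo session key").derive(shared)

# Fresh keypairs are generated for every session, so the derived symmetric key
# lives and dies with that session.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

client_key = new_session_key(client_priv, server_priv.public_key())
server_key = new_session_key(server_priv, client_priv.public_key())
assert client_key == server_key   # both ends now share the same ephemeral key
```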
(Also, since RC4 is a stream cipher, the same key can never be reused: XOR two ciphertexts encrypted under the same keystream and you get the XOR of the two plaintexts, which is much easier to crack.)
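A toy demonstration of that two-time-pad failure (plain Python; the random bytes stand in for RC4's keystream output, this isn't any real protocol code):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn, usual codebook, page nine"
p2 = b"retreat at dusk, usual codebook, page two"
keystream = os.urandom(max(len(p1), len(p2)))  # stand-in for RC4 output under a reused key

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The attacker never sees the keystream, yet XORing the two ciphertexts
# cancels it out entirely, leaving p1 XOR p2:
assert xor(c1, c2) == xor(p1, p2)
# Any known or guessable fragment of one plaintext now reveals the other directly.
```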
If it was an open source library that was imported, there would most likely be a link to the CVE affecting that library, and that CVE would've been updated to announce that it affects additional systems (JUNOS/ScreenOS). This would usually not trigger a completely new CVE being issued (e.g. Heartbleed and Shellshock, which kept getting updated for weeks and even months as new affected systems were discovered).
The "unauthorized code" also introduced 2 separate and unrelated vulnerabilities one which allows you to bypass the authentication by some means (logs you in as a SYSTEM user), and another which allows you to decrypt VPN traffic.
The overall phrasing ("knowledgeable attacker"), the fact that a fresh CVE was issued, and the fact that two unrelated but very specific vulnerabilities were introduced into the system make me think this was intentional rather than just an issue with importing code from a 3rd party.
Then all Juniper code should be thought of as tainted. It's really as simple as that. Juniper has announced that everything they have released cannot be trusted.
EDIT:
> If it was an open source library that was imported, there would most likely be a link to the CVE affecting that library
That would only be the case if it was an error in the library itself that caused this, and not the way it was used.
I just do not see Juniper coming out and so casually saying, "Our source code was clearly compromised, and this is the one instance of them changing our released code that we found."
If it was a poor implementation, that's not unauthorized code.
Also, I don't remember the last time "unauthorized code" was used to describe the cause of a vulnerability, even though code being committed without going through the full code review and compliance process is quite a common occurrence, and also a common cause of security vulnerabilities — especially ones that are easily caught by static code analysis.
The phrasing, the very specific nature of the vulnerabilities, the "knowledgeable attacker" requirement (which means you can't just fuzz your way into it like any other zero-day), and the fact that some of the published Snowden documents mention an NSA-specific backdoor for Juniper firewalls all make me think that this wasn't an internal process failure.
If the process had simply failed, we would've gotten an advisory at most, without any specifics. The fact that they intentionally mentioned that unauthorized code managed to get in there is almost like a canary: they've effectively said they were breached without actually saying it.
The Chromebook method (to disable write protect) requires you to open the device and remove a specific screw; it goes beyond just blindly following instructions.
Ha, yes definitely. Thankfully the state of the art in hash-based crypto has progressed a fair bit since the original Lamport signature. The stateless "SPHINCS" signature scheme is looking quite mature already: http://sphincs.cr.yp.to
Although if you are working with an event-sourced architecture, you may as well just implement it with OTS (one-time signature) chains.
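For anyone who hasn't seen one, here's a toy Lamport OTS in Python (hashlib only; my own illustration, not SPHINCS and not production code). It shows why a single key can sign exactly once — the property that OTS chains, Merkle trees, and stateless schemes like SPHINCS are built around:

```python
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Private key: 256 pairs of random 32-byte secrets; public key: their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal exactly one secret from each pair, chosen by the digest bits.
    return [sk[i][bit] for i, bit in enumerate(_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"event #1 appended to the log")
assert verify(pk, b"event #1 appended to the log", sig)
# Signing a second, different message with the same key reveals more of the
# secrets, which is why this is strictly one-time -- and why you chain or tree
# the keys if you want to sign an ongoing event stream.
```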