OpenSSL 3.0 released; relicensed to Apache-2.0 (lwn.net)
161 points by jabo on Sept 20, 2021 | 67 comments


I am surprised by the lack of corporate sponsors [1].

1. https://www.openssl.org/support/acks.html


https://www.openssl.org/community/thanks.html

"The following organizations who contribute staff time to work on the project (alphabetically): Akamai, Cryptsoft, Google, Oracle, Red Hat, Siemens, and Softing."


Uncredited is the NSA.

/s


Another thing to consider here is that many larger corporations are required to comply with FIPS (or internally have hangups about not complying). And as far as I can tell, only a few older versions of OpenSSL are compatible with a legacy FIPS module. Honestly, I've given up on the politics of it, so I may be incorrect, but the few times I've brought it up at my (soon to be former) employer, that has come up.


OpenSSL sells support contracts ranging from $15,000 to $50,000. [1]

[1] https://www.openssl.org/support/contracts.html


I'm not, sadly. It's pretty standard in our industry that companies will use a ton of open source software without giving back at all, or giving back many orders of magnitude less than they get.


Reminds me of a tweet I saw:

"the most consequential figures in the tech world are half guys like Steve Jobs and Bill Gates, and the other half are some guy named Ronald who maintains a tool called 'RUNK' which stands for 'Ronald's Universal Number Kounter' and handles math for every machine on Earth"


A real-life example of this is probably something like zlib [0]. Easily on billions of devices.

[0] https://zlib.net/


It may be an awareness problem too. I know I've personally enjoyed having OpenSSL around, and I just became a GitHub sponsor (under the same HN username).


> I am surprised by the lack of corporate sponsors

You energized me to look into donating. It turns out the OpenSSL Software Foundation is a Delaware non-profit but not a US 501(c)(3), so donations are not tax deductible at the federal level.

https://www.openssl.org/support/donations.html


Rarely does the NSA get added to the list of corporate sponsors though...


This is being discussed on the systemd-devel mailing list as well @ https://lists.freedesktop.org/archives/systemd-devel/2021-Se...


Maybe this is paranoid, but they tell you how to check the hash of the download using openssl itself.

A compromised version of openssl could detect itself and return the "correct" hash.
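
For what it's worth, one way around that particular loop is to hash the download with an independent implementation rather than with openssl. A minimal sketch in Python (the filename and expected digest below are placeholders, not real values):

    import hashlib

    EXPECTED = "0123abcd..."  # digest from the download page (placeholder)

    h = hashlib.sha256()
    with open("openssl-3.0.0.tar.gz", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB pieces
            h.update(chunk)

    print("OK" if h.hexdigest() == EXPECTED else "MISMATCH")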


Don't you check the hash before you begin compiling? If not, you are in serious trouble.


Agreed.

For other HN users, this is the "Reflections on Trusting Trust" issue.

https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...


Use your current version of openssl...


... and if your current version of openssl is already compromised, nobody can help you anyway.


There are always components you cannot fully trust. Ken Thompson's "Reflections on Trusting Trust" [1] comes to mind.

Trust depends on your threat model. If it includes such actors and potential attack vectors, then you should worry; ultimately you are depending on someone else's code for any reasonable abstraction (even ignoring chip-level compromises).

[1] https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...


There is a way out: a simple assembler that can be hand-typed and easily checked, which the user then uses to type in a more advanced one, and so on, until a compiler or interpreter with checked source code can be run and used to bootstrap tools whose sources can be verified by hashes.

It is already doable: https://bootstrappable.org/ https://gitlab.com/janneke/mes https://git.savannah.nongnu.org/cgit/stage0.git/tree/README....
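
For a feel of what the very first stage looks like, here is a toy Python model of stage0's "hex0" idea (illustrative only; the real thing is a few hundred bytes of hand-auditable machine code, not Python):

    # Translate hand-written hex, one byte per pair, into a binary.
    # Small enough to be audited line by line.
    def hex0(text: str) -> bytes:
        out = bytearray()
        for line in text.splitlines():
            line = line.split("#", 1)[0]    # strip comments
            for token in line.split():
                out.append(int(token, 16))  # one byte per hex pair
        return bytes(out)

    # hex0("90 90  # two x86 NOPs\n C3  # ret") == b'\x90\x90\xc3'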


That is not doable, given the threat discussed in the mentioned paper, since the manner of data entry and the manner of "easily checking" can be compromised.

If you are hand typing an assembler in order to bootstrap a more complicated assembler, which creates a compiler or interpreter to check a single hash value, you are engaging in the wrong mitigation. You should instead use an airgapped computer with different software.


AFAIK, the bootstrappable project specifically addresses the Reflections on Trusting Trust problem.


How could it be addressed, given that "the manner of data entry and the manner of 'easily checking' can be compromised"? That is the reflection upon trusting trust.


I can't see where it fails... let's check:

1 - Somebody writes a very simple assembler that can be easily checked.

2 - I type in that assembler from hex using specialized hardware for that purpose. What is typed is exactly what is turned into data, unless the hardware is compromised.

3 - From that assembler, I type in a more advanced assembler that was checked by hand by more people and whose hash was hand-calculated. After that, I check its hash. Software on my machine can only be compromised if the people who wrote this more advanced assembler are compromised; I also check the hash to guarantee it was typed exactly.

4 - From this more advanced assembler I type in the source code of a compiler or interpreter which was checked by more people and whose hash was hand-calculated. Same as before with regard to the chances of being compromised.

5 - I can then bootstrap more advanced software whose source code I trust, and I can check with hashes that the source code was not tampered with.

What step fails?


> What step fails?

I already wrote it out twice, and manquer provided a link to the paper (which is also linked from your page, https://bootstrappable.org/) called Reflections on Trusting Trust. This is not a supply chain issue but a trust issue. This is the very reason Thompson's paper was called Reflections on Trusting Trust.

If you still cannot see this, look at PKI and ask where trust can be broken.


AFAIK, Reflections on Trusting Trust supposes a compromised compiler. I do not see how a compiler whose source code is compromised can be compiled with the described steps. At least its hash would not match.


That's not the case.

From Reflections:

MORAL

The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.
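
To make the mechanism concrete, here is a toy sketch (all names invented; a string transformer standing in for a real compiler) of how a trojaned compiler binary survives being rebuilt from perfectly clean source:

    BACKDOOR = "# backdoor: also accept a magic password\n"
    TROJAN = "# trojan: re-insert these checks when compiling a compiler\n"

    def evil_compile(source):
        if "def login(" in source:       # compiling the login program?
            source = BACKDOOR + source   # inject the backdoor
        if "def compile(" in source:     # compiling a (clean!) compiler?
            source = TROJAN + source     # propagate the trojan itself
        return source                    # then "compile" as usual

    print(evil_compile("def login(user): ...\n"))   # backdoored output
    print(evil_compile("def compile(src): ...\n"))  # trojan survives the rebuild

Neither the login program's source nor the compiler's source contains anything suspicious, so source-level review and source hashes don't catch it.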


That is the point: trust can be reduced to hardware (considering microcode as hardware), which is much better than trusting the hardware plus the packagers plus the tools used by the packagers.


From step 2 to 3 you need some combination of hardware and software that you trust to take the bytes you typed in, run them exactly as you typed them, passing input and output between your program and a data store without modifying them. This involves the basic parts of an OS: filesystem, program loader, whatever you're using to calculate the hash, etc; you have to trust those too.

You also have to trust the CPU to run your program exactly as it appears in memory.


> This involves the basic parts of an OS: filesystem, program loader, whatever you're using to calculate the hash, etc; you have to trust those too.

No. This can be done with a ROM programmer. Trust is now reduced to hardware and the people who checked the software. Assuming many people can check it, it can be considered safe. As I said, compromise now requires the hardware and many people to be compromised. Trusting software is no longer needed.

Typing in a bootloader is how it was done on old PDPs [0] and on the Altair. I can't see how those bootloaders could be compromised except if the hardware was compromised too.

[0] https://www.youtube.com/watch?v=M94p5EIC9vQ


I don't think I'm understanding your threat model. It feels like you are waving away the trust chain required to obtain and execute trusted ROM programmer software. How can you verify that the software you are running is the same as the software that others checked?

But suppose I grant that the ROM programmer software is trusted. I still don't see how that sidesteps the problem of needing to trust the OS environment. Once your assembler is running, how will it get its input and write its output? How will you obtain the output of the assembler and run it in a subsequent step? How will that subsequent program get its input and write its output?


> It feels like you are waving away the trust chain required to obtain and execute trusted ROM programmer software.

Take a look at the video I posted. There, the memory is programmed bit by bit, fully through hardware, directly on the memory; the processor is not used when the ROM is programmed. In other words: the ROM programmer is 100% hardware. There is no ROM programmer software.

> How can you verify that the software you are running is the same as the software that others checked?

The ROM programmer can read any cell of the memory and display it. It can be checked visually if wanted or required.

Other hardware features could be implemented for this specific purpose, but I don't think that is required.

The hardware can still be compromised, but such simple hardware would need a very complicated mechanism to be compromised while hiding it from the user. For chips that don't need fast clocks, I think, it can even be built using only resistors and capacitors.

I once built a programmer (for RAM chips) myself using an Arduino and resistors. It is reasonably safe to assume that the Arduino was not carefully compromised to circumvent this very specific use case.

> But suppose I grant that the ROM programmer software is trusted.

There is no ROM programmer software!

> I still don't see how that sidesteps the problem of needing to trust the OS environment.

There is no OS until that point! Everything to that point will run on bare metal. See the link to stage0 I posted: "The stage0 is the ultimate lowest level of bootstrap that is useful for systems without firmware, operating systems nor any other provided software functionality"

> Once your assembler is running, how will it get its input and write its output?

Directly from hardware.

> How will you obtain the output of the the assembler and run it in a subsequent step?

Directly from hardware. The output of the assembler can go directly to RAM. Running it is just a matter of jumping to its entry point.

> How will that subsequent program get its input and write its output?

These can come from trusted libs or a trusted kernel which I compiled with a trusted compiler on my trusted system. AFAICS, up to that point every piece of software running on such a system can be said to be trusted if at least one person who checked it is trustworthy. If the software is checked by many people, that is a reasonable assumption.

Vulnerabilities can still exist, but those are another kind of problem; they are bugs which may be discovered and fixed later, not something built into the system by the tools. Intentional vulnerabilities can still exist but would have to pass all the checkers, human or automated.

Look, the system now can only be compromised if the people who checked the software are compromised or the hardware is compromised. Before mes and stage0, I had to trust the people who checked the software and the system (hardware and software) on which the software I run was built. If the compiler was compromised, all software generated from it could be compromised.

To illustrate: suppose I trust GCC code as published by the FSF. This is an entirely reasonable assumption. Now, if I use GCC compiled from my distro repositories, I'm trusting not only the packagers but also the tools those packagers used. With stage0 and mes I don't need to trust any pre-existing binary tool that I can't check myself or have checked by other people or static analyzers. An entire huge chain would have to be compromised for my software to be compromised.

Reasonably, the system can only be compromised if hardware is compromised or all the people and tools which checked the code are compromised. Considering many people can check the code and vulnerabilities are hard to hide in the source code of a compiler, assuming not all of the people who checked the code are compromised is reasonable.

Trust is now reduced to hardware only. Without bootstrappable, I had to trust the software tools that were used to build the software I use.


> There is no ROM programmer software.

This was a misunderstanding on my part, when you wrote:

> This can be done with a ROM programmer. Trust is now reduced to hardware and people who checked the software.

I thought you were referring to software related to the ROM programmer (some ROM programmers do have software). I see now that you meant the software of the assembler, etc, and your scenario involves a pure-hardware ROM programmer.

stage0 looks cool. I still maintain that it will need to include OS-like functionality along the way, to have a way to invoke the desired programs with the desired inputs and outputs, allocate memory dynamically, etc. If you're dealing with the hardware directly, it just means you're implementing the OS functionality yourself. But stage0 is cool in the way that it bootstraps from such a small binary payload.

> Trust is now reduced to hardware only.

This is true, as long as you can get hardware that doesn't have any software underneath. That rules out any CPU with microcode, etc.


I think we can end the discussion here. I think I made myself clear, and I thank you for allowing me to do so by asking clear, simple, direct and non-rhetorical questions. It was a discussion without any unneeded friction like defiance, offense or unrelated analogies. It was a bit longer than I wanted, but I liked it nevertheless.

Thank you!


The point is not whether it is possible; it is not going to be practical, so you will always have some level of accepted risk.

Even if you could ensure the compiler is secure, it is hard if not impossible to completely control what actually runs on the chip.


You can do it once in your life on a single machine. From there you compile the software for other machines. It is like a tree whose fruits you trust because you planted it yourself (actually, it is more like you wrote the seed yourself) and use it to plant other trees.

The only remaining vulnerability is the hardware, but as I like to say: security is not binary.


That doesn't protect against untrustworthy chips.


Yes, but trust is delegated to hardware only instead of hardware and software.


General AI existential crisis... how can I trust my sense of trust?


You can reduce the number of parts you have to trust.


This makes sense if you don't check a package's hash before you install it.


Or if you have a supply-chain-hacked version installed by your distro that grabs the hash from the download page.


At that point it's unbelievably too late; it doesn't matter even the slightest.


Recent and related:

OpenSSL 3.0 - https://news.ycombinator.com/item?id=28443714 - Sept 2021 (54 comments)


Since there have been so many TLS security bugs due to its complexity, is there any push to replace it with something simpler, with fewer choices and less attack surface?

Google gave us HTTP/2 and HTTP/3, but doesn't seem to care about fixing TLS.


TLS 1.3 is much better than TLS 1.2, and has fewer options and knobs (e.g. no need to choose cipher suites), but it is not what a modern protocol designed from scratch would look like. For that you should look at WireGuard, or the general Noise Protocol Framework.

For custom protocols, libsodium would be a popular modern approach. If you need compatibility with TLS, try locking down TLS to only version 1.3, or if you can't do that, lock it down to only TLS 1.2 with TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256.
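
As a concrete example of that lockdown, a minimal sketch using Python's ssl module (the host is a placeholder):

    import socket, ssl

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())  # prints "TLSv1.3" on success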


This has been useful advice for many years, although restricting to AEAD is best when possible.

https://hynek.me/articles/hardening-your-web-servers-ssl-cip...


TLS the protocol has been simplified in version 1.3, with the goal of reducing complexity to improve security.

OpenSSL the implementation was forked a few times also with the goal of improving security. Notable forks: LibreSSL, BoringSSL.

PS: for all those confused about why OpenSSL skipped version 2, it seems it's because FIPS builds identified themselves as version 2 (thanks to the poster below!). Also, the changelog explains the new version naming scheme:

"""

Switch to a new version scheme using three numbers MAJOR.MINOR.PATCH.

Major releases (indicated by incrementing the MAJOR release number) may introduce incompatible API/ABI changes.

Minor releases (indicated by incrementing the MINOR release number) may introduce new features but retain API/ABI compatibility.

Patch releases (indicated by incrementing the PATCH number) are intended for bug fixes and other improvements of existing features only (like improving performance or adding documentation) and retain API/ABI compatibility.

"""

Quoted from: https://www.openssl.org/news/changelog.html. So there won't be a 3.0.0a, 3.0.0b, etc. They want to make it clear it will be 3.0.1, 3.0.2, etc.


It's also because the FIPS builds of OpenSSL 1.x identified themselves as 2.x.


I didn't know! Yeah, that seems to be the main reason.


> Google gave us HTTP/2 and HTTP/3, but doesn't seem to care about fixing TLS.

Google is working on BoringSSL / Tink, which I believe is API compatible but supports a lot fewer features. However, I think a better way forward might be rustls, an implementation that is memory-safe. There is already support in curl [1], showing there is a path forward for usage in languages other than Rust.

[1] https://daniel.haxx.se/blog/2021/02/09/curl-supports-rustls/


LibreSSL is an alternative from OpenBSD.


IIRC, HTTP/3/QUIC mandates TLS [1][2], so it still relies on TLS itself. It seems to be set on TLS 1.3 as a baseline, and I would hope the protocol negotiation is forward compatible, but I'll admit I haven't fully read the RFCs.

[1] https://datatracker.ietf.org/doc/html/rfc9000#section-1 [2] https://datatracker.ietf.org/doc/html/rfc9001


Google employs at least one OpenSSL committer and has its own simpler version of the library, BoringSSL.


Do you enjoy Perl-constructed header files? VMS support? Inconsistent error codes across APIs? Then OpenSSL is for you.

None of these problems have been fixed.


> VMS support?

Is it a bad thing to support OpenVMS, given it is still an actively maintained operating system? It isn't used anywhere near as much as it used to be, but it is still used. It has even been ported to x86-64.


Some people (absolutely not me: I refuse to even work with such people) believe that supporting anything other than their computer is not just a waste of resources, but a bug.


I actually have an account on a VAX VMS 7.3 system.

I have an openssl binary there that I found. It is not linked against Multinet TCP, so none of the network functions work.

It is occasionally handy, even given the library problem.



Did they reject the PRs you sent implementing those updates?


I remember stumbling into that Perl codegen years ago and just being horrified. I haven't done any OpenSSL internals work in years, and recently I tried to find it and couldn't. I thought maybe it was just a vivid nightmare.


In what way was using Perl horrifying? Is it that they weren't using whatever flavor-of-the-week stylish language everyone now thinks is "cool", like some ridiculous Node.js stack in TypeScript? If they were using Python--which FWIW is extremely common in the build script space--would you have been any less "horrified"? :/


Considering the age of OpenSSL, using Perl was a smart choice...

Everything else would have been rewritten at least three times by now... even porting from Python 2.x -> 3.x takes a lot of effort...

...and Perl? Just works!


> even porting from Python 2.x -> 3.x takes a lot of effort...

> ...and Perl? Just works!

Agreed! And, if anything, you might be understating it. We went through a Python 2 -> 3 migration recently and it required a massive effort across the entire company; ultimately, we decided to shelve it and port most of our code to Go; this effort has been ongoing to this day.

This choice to not offer backwards compatibility left a very, very bad taste in our mouths, and we'll probably never go back to Python. It's astonishing that the PSF just decided that probably trillions of lines of working, production-quality code at companies around the world would need to be re-written. Perhaps it's "their" code, so it's their prerogative, but Go's backwards-compatibility pledge was a huge factor in our decision to move toward it.

For those of us on the team who are old enough (honestly) to know Perl (as much as anyone can?), well, there are still some very large companies that have massive Perl codebases and they seem to be happy with them.


Yep.

I wouldn't care if they did what Perl did: create Perl 6, which is technically Perl but otherwise a totally new language, and keep supporting Perl 5. Instead, they decided to drop Python 2 support entirely. Combine that with distro maintainers seeing that and "overnight" removing all the available Python 2 software (some of it without suitable replacements) along with Python 2 support itself, and it makes it a pain in the ass to develop anything, and leaves you fearing what Python 4 will bring, and when.

And as I said... Perl code just works.


It's horrifying because it solves a problem most projects solve by using one assembler and having it as a build dependency. It makes reading the code hard and analyzing it tougher.

The header files are their own horror.


It's called "perlasm." First Google hit finds it, if you happen to want to see it again :)

(Though it's not header files, so I wonder if 'wbl was thinking of something else.)


They added it in 3.0.0. The headers go through Perl now as well as the assembler.



