Well, that depends on the binding, right? If you use the "artifact binding" then there's also direct communication between the SP and IdP. I haven't seen it in the wild and I'm no professional, but it's in the 2.0 standard, e.g., see https://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-...
Outdated certificates are actually fine with regards to SAML, oddly enough; the logic being that the trust is handled out of band at metadata level, and the certificate is just a public-key distribution method. (That applies to Shibboleth at least; other implementations may disagree.) This does of course assume that you have a means of safely keeping metadata for the other end of the trust relationship up to date. In an eduGAIN/local federation setting, that's easy enough to do with signed XML metadata feeds and daily fetches, but far less so for bilateral trust.
The XMLDSig stuff is definitely a mess though. There were issues where comments in signed content allowed values to be truncated at the start of the comment, along with some similar weirdness around XML entities. And that's before any of your (entirely valid!) complaints...
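To make the comment-truncation issue concrete, here's a minimal sketch using Python's stdlib DOM parser. The `NameID` value and attacker suffix are made up; the point is that a DOM keeps the comment as a child node, so code that naively reads only the first text node sees a truncated value, even though the canonicalized (comment-free) signed bytes covered the whole string:

```python
from xml.dom import minidom

# A NameID whose text content is split by an empty XML comment.
doc = minidom.parseString(
    '<NameID>admin@example.com<!---->.attacker.com</NameID>')
node = doc.documentElement

# Naive extraction: only the first text node -> truncated at the comment.
naive = node.firstChild.data

# Robust extraction: concatenate every text child.
full = ''.join(c.data for c in node.childNodes if c.nodeType == c.TEXT_NODE)

print(naive)  # admin@example.com
print(full)   # admin@example.com.attacker.com
```

Since XMLDSig canonicalization strips comments by default, the signature still validates either way, which is exactly how the mismatch becomes exploitable.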
I'd say the main difference is that OAuth is granting the SP the ability to "do stuff" as the original user (including reading the user's profile details, as OIDC does), as opposed to SAML's approach of just sending attributes describing them.
For what it's worth, it is certainly possible for SAML SPs to flag that certain attributes should/must be released to them via their metadata, but the actual release is at the whim of the IdP and its operators. It's also possible for a SAML IdP to expose that level of detail to its end users and allow them to agree/disagree to the attribute release, although I'd be surprised if that behaviour was particularly common in practice.
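For reference, the "flag attributes in metadata" mechanism looks roughly like this — a trimmed, illustrative fragment of SP metadata (the OIDs are the standard ones for mail and displayName, the rest of the descriptor is omitted):

```xml
<md:SPSSODescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <md:AttributeConsumingService index="0">
    <md:ServiceName xml:lang="en">Example Service</md:ServiceName>
    <md:RequestedAttribute FriendlyName="mail"
        Name="urn:oid:0.9.2342.19200300.100.1.3"
        NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
        isRequired="true"/>
    <md:RequestedAttribute FriendlyName="displayName"
        Name="urn:oid:2.16.840.1.113730.3.1.241"
        NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
        isRequired="false"/>
  </md:AttributeConsumingService>
</md:SPSSODescriptor>
```

Even with `isRequired="true"`, the IdP is free to ignore the request entirely — it's a hint, not an enforcement mechanism.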
The difference between OIDC and OAuth boils down to exchanging attribute assertions describing a user, as opposed to delegating a specific set of allowed actions, which is what OAuth was designed for. OIDC and SAML are basically the same thing, with OIDC being the somewhat less frightening, more modern protocol.
Reading the user's profile information _is_ the delegated action. OAuth providers were already doing this prior to OIDC but in incompatible ways. OIDC standardized how that information is requested and returned.
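And the standardized return format is the ID token: a JWT whose payload segment carries well-known claims. A minimal sketch with made-up issuer, subject, and claim values (header and signature segments omitted for brevity):

```python
import base64
import json

# Hypothetical ID token payload (the middle segment of a JWT); issuer,
# client id, and claim values are invented for illustration.
claims = {
    "iss": "https://idp.example.com",
    "sub": "248289761001",
    "aud": "client-app",
    "email": "jane@example.com",
    "name": "Jane Doe",
}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")

# Relying-party side: decode the payload segment. A real RP must first
# verify the JWS signature against the issuer's published keys.
padded = segment + b"=" * (-len(segment) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
print(decoded["sub"], decoded["email"])  # 248289761001 jane@example.com
```

Before OIDC, every provider exposed an ad-hoc "get profile" endpoint with its own shape; the standard claim names (`sub`, `email`, `name`, ...) are what made this interoperable.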
> What is an "OAuth key"? Do you mean an OAuth token? No, Golden SAML is worse than stealing an OAuth token, because an OAuth token is valid for 1 user, but Golden SAML can be used to impersonate any user. Also, OAuth tokens expire, but Golden SAML doesn't expire (although if you steal an OAuth refresh token, that won't expire).
Stealing the OAuth token signing key, since then any fake OAuth tokens signed by it would be considered authentic.
There isn't necessarily an OAuth signing key. The OAuth tokens might not be signed. They might be random values, which act like a password, with a hash of them stored in a database so they can't even be stolen from the database.
Even if they are signed, it needn't be as bad as Golden SAML: OAuth access tokens have a short expiration, so the signing key can be rotated frequently and automatically, making any stolen signing key quickly useless. Refresh tokens don't expire quickly, so frequent rotation won't work for them, but you could run a hybrid system where the access tokens use a frequently rotated signing key while the refresh tokens are random values with hashes stored in a database.
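The rotation half of that hybrid can be sketched like this — the token format, TTL, and `kid` scheme here are illustrative, not any particular provider's. Tokens carry a key id, and old keys are only honoured for as long as a token they signed could still be live:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

TOKEN_TTL = 300  # access tokens live five minutes
keys = {}        # kid -> (secret, created_at)

def rotate():
    """Create a new signing key; drop keys too old to back a live token."""
    kid = secrets.token_hex(4)
    keys[kid] = (secrets.token_bytes(32), time.time())
    for old in [k for k, (_, t) in keys.items()
                if time.time() - t > 2 * TOKEN_TTL]:
        del keys[old]
    return kid

def sign(payload, kid):
    body = base64.urlsafe_b64encode(json.dumps({"kid": kid, **payload}).encode())
    mac = hmac.new(keys[kid][0], body, hashlib.sha256).hexdigest()
    return body.decode() + "." + mac

def verify(token):
    body, mac = token.rsplit(".", 1)
    payload = json.loads(base64.urlsafe_b64decode(body))
    entry = keys.get(payload["kid"])  # unknown kid -> key rotated out
    if entry is None:
        return None
    expected = hmac.new(entry[0], body.encode(), hashlib.sha256).hexdigest()
    return payload if hmac.compare_digest(expected, mac) else None

kid = rotate()
token = sign({"sub": "alice", "exp": time.time() + TOKEN_TTL}, kid)
print(verify(token)["sub"])  # alice
```

A stolen key from this scheme is only useful until it ages out of `keys`, which is the whole point: the blast radius is bounded by the rotation interval rather than being a forever-valid Golden SAML-style forgery.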
> But that's the thing: deciding how software is built and which features are shipped to users _is_ under our control. The case with xz was exceptionally bad because of the state of the project, but in a well maintained project having these checks and oversight does help with delivering better quality software. I'm not saying that this type of sophisticated attack could've been prevented even if the project was well maintained, but this doesn't mean that there's nothing we can do about it.
In this particular case, having a static project or a single maintainer rarely releasing updates would actually be an improvement! The people/sockpuppets calling for more/faster changes to xz and for more maintainers to handle them are exactly how we ended up with a malicious maintainer in charge in the first place. And assuming no CVEs or external breaking changes occur, why does that particular library need to change?
Yes, but as MongoDB is a document database, storing and updating giant blobs of JSON as a single operation as opposed to breaking the JSON down into individual fields is intended behaviour. This works in Postgres too, of course, but then you lose the relational database advantages on top of the large-single-field issues.
All this really comes down to is picking the right database type for the problem you're trying to solve.
In fairness, they also gave us the joys of `strcpy(dest_ptr, src_ptr)` and `scanf("%s", str_ptr)`, which, with the benefit of hindsight and many buffer overflows later, were a terrible idea.
Those numbers look like they could be about right for 2020/2021, but using them in a 2023 article is meaningless given the effects of the war in Ukraine on gas and electricity prices. I don't know how much Portugal's grid depends on gas, but I could believe that it's less affected by gas prices than the UK grid is.