
I'm curious, which ultraportable is that?


thought you'd never ask :^)

GPD Win Max 2 with the AI 9 HX 370


It's definitely not all assigned to Singapore or to ap-southeast-1; the ip-ranges.json file reports assignments to other regions, e.g.

3.0.0.0/15      ap-southeast-1  EC2 (Singapore)
3.8.0.0/14      eu-west-2       EC2
3.16.0.0/14     us-east-2       EC2
3.40.0.0/14     eu-west-1       EC2
3.80.0.0/12     us-east-1       EC2
3.104.0.0/14    ap-southeast-2  EC2
3.112.0.0/14    ap-northeast-1  EC2
3.120.0.0/14    eu-central-1    EC2
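If you want to reproduce this breakdown yourself, it can be pulled out of the published file at https://ip-ranges.amazonaws.com/ip-ranges.json. The sketch below runs on a small hand-made inline excerpt in the same format rather than fetching the real file:

```python
import json
from collections import defaultdict

# Hand-made excerpt in the ip-ranges.json format; the real file is at
# https://ip-ranges.amazonaws.com/ip-ranges.json
data = json.loads("""
{"prefixes": [
  {"ip_prefix": "3.0.0.0/15",  "region": "ap-southeast-1", "service": "EC2"},
  {"ip_prefix": "3.8.0.0/14",  "region": "eu-west-2",      "service": "EC2"},
  {"ip_prefix": "3.80.0.0/12", "region": "us-east-1",      "service": "EC2"}
]}
""")

# Group the 3.x EC2 prefixes by region
by_region = defaultdict(list)
for p in data["prefixes"]:
    if p["service"] == "EC2" and p["ip_prefix"].startswith("3."):
        by_region[p["region"]].append(p["ip_prefix"])

for region, prefixes in sorted(by_region.items()):
    print(region, prefixes)
```

Swapping the inline excerpt for the downloaded file gives the full region breakdown above.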


That's great if you don't have any outbound deliverability issues due to IP reputation at your VPS host; under those circumstances it sounds like a fine arrangement.

I think that is not quite the norm, lots of these hosts (and home internet connections) tend to have rather bad reputations, and chasing down the various RBLs can get really old really fast, especially since the most common response is to silently blackhole so you don't get a bounce.


I haven't had any issues with my personal domain in years, ever since I moved it from a random web host to GApps to deal with IP reputation issues, and have SPF+DKIM set up (though my domain is a .net one).


WebAuthn is backward compatible with U2F tokens, but naturally only for use as a second factor. They defined CTAP1 as the U2F protocol for existing tokens, then defined the new CTAP2 for communication with the new tokens, and made both part of the spec.

From a Yubico blog post (https://www.yubico.com/2018/05/what-is-fido2/):

> WebAuthn and CTAP2 are both required to deliver the FIDO2 passwordless login experience, but WebAuthn still supports FIDO U2F authenticators, since CTAP1 is also part of the WebAuthn specification.


Technical Manual at https://support.yubico.com/support/solutions/articles/150000...:

> Like FIDO U2F, the FIDO2 standard offers the same high level of security, as it is based on public key cryptography. In addition to providing unphishable two-factor authentication, the FIDO2 application on the YubiKey allows for the storage of resident credentials. As the resident credentials can store the username and other data, this allows for truly passwordless authentication. YubiKey 5 Series devices can hold up to 25 resident keys. If RSA keys are used, there is a maximum of three RSA with the rest being ECC.

I wonder what the user experience will be like at 25 resident keys; they mention that the YubiKey Manager (ykman) can set/change the FIDO2 PIN and reset FIDO entirely, but nothing about managing individual resident keys/credentials.

It seems like it might be a bit challenging to manage, especially if end-users accidentally register the authenticator multiple times or otherwise run out of the 25 slots, and are then told they need to reset the whole authenticator and go through recovery for all their sites...


My understanding is that for U2F / FIDO2 authentication, keys are not stored but rather regenerated with an HMAC:

https://developers.yubico.com/U2F/Protocol_details/Key_gener...

I'm curious what these resident keys are for.


> keys are not stored but rather regenerated with an HMAC

That's an implementation detail to support an unlimited number of registrations. FIDO doesn't require derivation this way. Keys can be stored if desired. IMHO it would be superior, given that the device/protocol is designed as a first-class web-aware protocol, not a generic abstraction divorced from the reality of the primary use case. So, given that you are going to use the device with a web browser, the browser should assist you in storing the keys in the cloud. (NB: doesn't have to be and shouldn't be the raw key, it can be sealed by the device or even device/browser combination). This way you have a central location to find all of your registrations and can selectively revoke them easily.

Anyway ...

> I'm curious what these resident keys are for.

It was stated in the parent you are replying to:

>> As the resident credentials can store the username and other data


They continue doing the key wrapping with HMAC for U2F. Resident keys are for the "passwordless" authentication method under FIDO2.

U2F requires that the server know exactly which keyHandles to request, based on the username (and probably password) supplied earlier by the user, so that the token can take the keyHandle and derive the key.

In FIDO2 "passwordless" mode, there's no username or other identifier presented, so it's just a generic request for a credential from the server -- the authenticator has to independently figure out which key to present based only on the origin/domain, and maybe even present a list of stored keys (effectively a list of accounts?) to the user for selection. So it needs some local/resident storage of various bits like the origin, maybe a user-chosen account name, and the actual credential, since it can no longer rely on the server to store all of this.
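A hypothetical sketch of why that requires resident storage -- with no username in the request, the authenticator has to look up candidates by origin alone. All the names and fields below are illustrative, not anything from the CTAP2 spec:

```python
# Illustrative resident-credential store: rp_id (origin/domain) -> credentials.
# Nothing here is from the CTAP2 spec; it just shows the lookup direction.
resident_store = {}

def register_resident(rp_id, user, cred_id, key):
    # Store the whole credential on the authenticator, including the
    # user-visible account name, since the server won't supply it later.
    resident_store.setdefault(rp_id, []).append(
        {"user": user, "cred_id": cred_id, "key": key})

def passwordless_candidates(rp_id):
    # The server sent no allow-list of keyHandles, so return every account
    # stored for this origin and let the user pick one.
    return [c["user"] for c in resident_store.get(rp_id, [])]
```

Contrast with U2F, where the lookup runs the other way: the server presents keyHandles and the token derives the key, storing nothing.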


The spec doesn't insist on it, but that's how Yubico devices do it, yes. It's the straightforward thing to do when your scheme eventually relies on ECDH and there's an obvious and performant way to go from a base secret to a specific-use secret (via a KDF, here HMAC) to a public key (via scalarmult). It'd be less straightforward if your key generation is expensive and complicated.
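A loose sketch of that style of derivation, in the spirit of the Yubico key-generation page rather than their exact construction (names and the key-handle layout are assumptions; the final scalar mult P = priv*G is elided):

```python
import hmac
import hashlib
import os

# Per-device master secret; on a real token this never leaves the secure element.
DEVICE_KEY = hashlib.sha256(b"per-device master secret").digest()

def register(app_param: bytes):
    # Derive a fresh key pair without storing anything on the device.
    nonce = os.urandom(32)
    priv = hmac.new(DEVICE_KEY, app_param + nonce, hashlib.sha256).digest()
    # The key handle lets the device re-derive priv later; the MAC binds it
    # to this origin so handles can't be replayed across sites.
    mac = hmac.new(DEVICE_KEY, priv + app_param, hashlib.sha256).digest()
    key_handle = nonce + mac
    # In reality only the public key P = priv*G (scalar mult, elided here)
    # and the key handle leave the device.
    return key_handle, priv

def authenticate(app_param: bytes, key_handle: bytes):
    nonce, mac = key_handle[:32], key_handle[32:]
    priv = hmac.new(DEVICE_KEY, app_param + nonce, hashlib.sha256).digest()
    expected = hmac.new(DEVICE_KEY, priv + app_param, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return None  # handle wasn't issued by this device for this origin
    return priv
```

The nice property is visible in the code: the device holds one secret and zero per-site state, yet supports unlimited registrations.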


If I'm understanding correctly, the resident keys can be used in the case of a non-ECDH scheme, and otherwise they wouldn't be used? How flexible is the FIDO2 specification on crypto schemes?


I don't understand what you're saying. Which resident keys?

WebAuthn adds a number of crypto schemes -- to wit, I think they add RSA. You can certainly deterministically generate RSA keys but it's a lot more of a pain in the neck than x = HMAC(k, "u2f" + custom); P = xG :)


The technical manual linked in the parent comment mentions that 25 resident keys can be stored.

It is now starting to make sense to me why. As jiveturkey pointed out, it allows usernames to be stored. And, as you're pointing out, it's useful for RSA or maybe other crypto. Thanks.


Does anyone have an opinion on the new "three-message modification of the standard DH key exchange" they introduced for calls?

From their API doc: https://core.telegram.org/api/end-to-end/voice-calls#key-ver...

> Party A will generate a shared key with B — or whoever pretends to be B — without having a second chance to change its exponent a depending on the value g_b received from the other side; and the impostor will not have a chance to adapt his value of b depending on g_a, because it has to commit to a value of g_b before learning g_a.

> The use of hash commitment in the DH exchange constrains the attacker to only one guess to generate the correct visualization in their attack, which means that using just over 33 bits of entropy represented by four emoji in the visualization is enough to make a successful attack highly improbable.


I like it. I tried to explain it in slightly simpler terms to some friends in a group chat like this:

> reading about the emoticon generation thingy, it's actually worth a read

> they use a DH KEX[1], but wrapped with something which is interesting. Client A generates a, client B generates b, and g seems to be an already-exchanged finite group generator. That's all standard.

[1] diffie-hellman key exchange

> now before A sends g^a to B, it will send hash(g^a) to B. B responds as normal (with g^b) to which A will respond with what it normally would send first: g^a.

> after receiving g^a, B can check whether the initially received hash(g^a) matches. This means that A can't brute force a specific value of a, so it doesn't matter that it's only 33 bits of entropy in that emoticon thingy. Any brute forcing will change the hash (unless you collide, iirc, sha256) and B will go "dude wtf" and kill the connection

> I tried to summarize in more understandable terms, but if it's too shortened or something, the original thing is here: https://core.telegram.org/api/end-to-end/voice-calls#key-ver...
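The commit-then-reveal flow can be sketched end to end like this (toy 89-bit group for illustration only -- real calls use a large MODP group, and the variable names are mine):

```python
import hashlib
import secrets

# Toy parameters: a small Mersenne prime, NOT secure, purely illustrative.
p = 2**89 - 1
g = 3

def h(n: int) -> bytes:
    return hashlib.sha256(n.to_bytes(12, "big")).digest()

# A picks a and commits to g^a before seeing anything from B.
a = secrets.randbelow(p - 2) + 1
g_a = pow(g, a, p)
commitment = h(g_a)            # A -> B: hash(g_a)

# B picks b only after receiving the commitment, so b can't depend on g_a.
b = secrets.randbelow(p - 2) + 1
g_b = pow(g, b, p)             # B -> A: g_b

# A -> B: g_a. A already fixed a, so a can't depend on g_b either.
assert h(g_a) == commitment    # B aborts on mismatch ("dude wtf")

# Both sides derive the same shared key; the emoji fingerprint is drawn
# from a hash of this value.
key_a = pow(g_b, a, p)
key_b = pow(g_a, b, p)
assert key_a == key_b
```

The point of the commitment is exactly the brute-force argument above: a man-in-the-middle gets one guess at matching the short fingerprint, not billions.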


I haven't personally reviewed it, but based on their previous inability to make principled cryptographic choices and resistance to critical feedback from actual cryptographers… I'm skeptical.


Can you point to some examples where they resisted critical feedback? I've heard people mention this, but I haven't actually seen anything convincing yet.


Read the comments on the Telegram announcement post here on HN, from a couple of years back, IIRC.

PS: on mobile, couldn't search.


Found it [1].

tldr: The first message from TelegramApp has some marketing copy ("ACM winner PhDs") but is not horrible. The TelegramApp user remains calm/careful and mostly polite in every message after that. There are only a few cases of sideways slapping, and they come from HN users. Despite this, the conversation between TelegramApp and the HN users remains an informative debate and discussion.

1. https://news.ycombinator.com/item?id=6913456


Being cordial and polite on Hacker News wasn't my concern. My concern is their repeated dismissal of feedback from experienced cryptographers.

Cryptographers can't seem to make sense of a lot of their design decisions in the MTProto protocol. Their response to criticism has mostly been in the form of: if you can't demonstrate a break directly, then we don't care.

Given how fragile cryptography can be, this is an absurdly irresponsible way to maintain a cryptosystem. Modern cryptographic designs try to be very principled, and steps are taken to prevent any kind of theoretical weakness, even if we don't know how to break it in practice. This is because cryptographic breaks only ever get stronger — never weaker.

As an example, TLS 1.0's authentication of CBC modes with MAC-then-Encrypt was known to be weak, but it was only years later that researchers were able to turn this into a plaintext-leaking break. And MTProto is absolutely littered with unconventional or known-weak constructs, giving attackers a lot of potential levers to break it.

You might argue that it's fine for this to be the case, as long as they respond quickly to protocol breaks. The problem is, the good guys only learned how to break TLS 1.0 CBC when the attack was published. Did the NSA/CIA/GRU/FSB know about these attacks before we did? There's no way to know. But if it had conservatively chosen an Encrypt-then-MAC scheme to begin with, such an attack would have never been possible in the first place.

That's not to throw the TLS 1.0 authors under the bus here. The weaknesses of that type of scheme were not yet widely known. In the case of MTProto, the weaknesses in their use of certain constructs are widely known, and they don't seem to care.
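The difference between the two orderings can be sketched with a toy cipher (the XOR keystream stands in for CBC and is not a real cipher; the function names are illustrative):

```python
import hmac
import hashlib

def xor(data: bytes, key: bytes) -> bytes:
    # Toy "cipher": repeating-key XOR, standing in for CBC. Not secure.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

def mac_then_encrypt(msg, enc_key, mac_key):
    # TLS 1.0 CBC style: MAC the plaintext, then encrypt plaintext||tag.
    # The receiver must decrypt before it can check the tag, which is the
    # lever padding-oracle-style attacks pull on.
    tag = hmac.new(mac_key, msg, hashlib.sha256).digest()
    return xor(msg + tag, enc_key)

def encrypt_then_mac(msg, enc_key, mac_key):
    # Conservative ordering: authenticate the ciphertext, so forgeries are
    # rejected before any decryption happens.
    ct = xor(msg, enc_key)
    return ct + hmac.new(mac_key, ct, hashlib.sha256).digest()

def open_etm(blob, enc_key, mac_key):
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        return None  # rejected without ever touching the plaintext
    return xor(ct, enc_key)
```

With Encrypt-then-MAC, a tampered ciphertext is rejected before decryption, so there's no decryption-error behavior for an attacker to observe.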


I read through the thread linked in my previous post and I think it gives a good idea of the rift between the in-house HN cryptographers and those at Telegram. You'll find the exact same disconnect between cryptographers who have worked under heavy computational constraints and those who haven't. For example, in the satellite TV industry.

The Telegram designers built a protocol in anticipation of certain constraints. But rather than debate the plausibility of the perceived constraints, the HN crowd just dug into whatever they already knew and threw in a lot of snark in their responses to close the door.

> Their response to criticism has mostly been in the form of: if you can't demonstrate a break directly, then we don't care.

I haven't seen that. I went looking for it. If you have the patience and time please dig up a link or quote.

> That's not to throw the TLS 1.0 authors under the bus here. The weaknesses of that type of scheme were yet to be widely known. In the case of MTProto, weaknesses in their use of certain constructs are widely known, and they don't seem o care.

I did see a good bit of discussion about the feasibility of some of the weaknesses pointed out. They responded in a way that seemed to indicate they fully understood the issues but "chose" to take the risk. I'm not sure this means "they don't care". Perhaps it does. But this is where I started to see that the rift here was really about the perception of constraints, not a lack of knowledge or, in my opinion, a lack of care.


https://news.ycombinator.com/threads?id=paveldurov

By the way, it is also worth noting that Nikolai Durov, designer of MTProto, is completely absent from any public discussions of that protocol. Doesn't like talking to the plebs, I suppose.


That just sounds like ZRTP to me, from the short description in your comment.


If it is, it will be interesting to see whether they managed to get ZRTP correct. It's a pretty complicated protocol with many use cases.


They say they are using four emoji drawn from a set of 333 to represent ~33.5 bits.


33.52 bits to be precise, which is where the many-nines claim comes from: 1-(1/(333^4)) = ~0.999999999918
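A quick sanity check of those numbers, assuming four emoji each drawn independently from a set of 333 (the set size claimed above):

```python
import math

# Four emoji, each from a set of 333 possibilities.
combinations = 333 ** 4
bits = 4 * math.log2(333)
guess_fail = 1 - 1 / combinations  # chance a single random guess fails

print(combinations)    # 12296370321
print(round(bits, 2))  # 33.52
print(guess_fail)      # ~0.999999999918
```

So a man-in-the-middle who is constrained to a single guess by the hash commitment has roughly an 8e-11 chance of producing a matching fingerprint.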


> > With SIM cards, users can switch to a new phone by just moving the SIM, or switch to a new provider while keeping their phone (assuming its unlocked) by just replacing the SIM.

> Unlocked phones are still relatively rare in the US so I don't agree with your second point either.

As you point out, where GSM networks are concerned, this observation is mostly specific to the US - swapping phones and swapping SIMs has been a reality in the rest of the world for years.

Instead, the main source of friction is frequency bands. When swapping phones domestically, it's rarely an issue, since locally distributed models are usually the Asia/international variants with broader band compatibility. When swapping SIMs domestically, it's not an issue for the same reason. When swapping SIMs internationally, phone service typically works, but if you want high-speed data _then_ you check for band compatibility.

I'd say that for most of the world, the reduction in friction is real. It's a pity that the US market is so different.


> swapping phones and swapping SIMs has been a reality in the rest of the world for years.

Phone locking is still prevalent here in the UK, although the competition is fierce enough that you can find a vendor that sells phones unlocked.


I'm all for reducing friction, and I believe software SIMs will help. I can conceive of a world where connecting to a 3G+ network is little harder than connecting to a WiFi network.

It wouldn't be good for the carriers but it'd be great for consumers.


If you only have a 3G device, or a 4G device that doesn't support the available FDD-LTE bands, you might be better off getting a China Unicom HK card instead. They have HSPA+ on 2100 MHz. Google services work fine on them.

The cards are hard or impossible to find within the HKIA transit area, so you will want to either pick one up from a street retailer in HK, or order from their English webstore - http://www.cugstore.com/hk_en/. Street prices are usually cheaper.

(No affiliation, just a happy customer from a few months back. I got their "Greater China 30 Days Data SIM" because I was spending time in Macau and Hong Kong as well.)

