
Can anyone explain to me why some implementations only support 512-bit or 1024-bit parameters? Aren't the algorithms the same for all sizes? Why can't a given implementation handle arbitrarily large parameters?
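For what it's worth, the core algorithm really is size-agnostic: square-and-multiply modular exponentiation works on integers of any width. A minimal sketch in Python (illustrative only, not a constant-time or production implementation):

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply; works for operands of any size."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # fold in this bit's contribution
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result

# The same code handles a 64-bit or a 2048-bit modulus; only the speed differs.
print(modexp(7, 65537, 2**61 - 1) == pow(7, 65537, 2**61 - 1))
```

The fixed-size restriction in real implementations comes from engineering choices (preallocated buffers, unrolled loops, hardware limits), not from the math.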


Probably because for performance reasons, much of this is hard coded. Buffers of a fixed length, etc.


"Performance reasons" is one of the better excuses to use if you want to force a committee to approve a weakened version of a standard. It has the air of "meeting everyone's needs" and it is unlikely that complaints about the potential weakening (or downgrade attack) will be listened to.


Probably a good idea to see if the performance issues are real, though. Modular arithmetic is really expensive; to do the two exponentiations used in (non-elliptic) 2048-bit DH, you'll need 6 million cycles on an Intel chip at an absolute minimum. That's 5 milliseconds in wall time (at a mobile-standard 1.2GHz).
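You can get a rough feel for the cost yourself. Here's a hedged sketch using CPython's bignum `pow` (so absolute timings won't match hand-tuned assembly, and the modulus and secrets are arbitrary toy values, not a vetted group like RFC 3526's):

```python
import time

# Assumptions: arbitrary 2048-bit odd modulus (NOT a vetted safe prime)
# and toy secret exponents, purely to time the two exponentiations
# each DH side performs.
p = (1 << 2048) - 1942289
g = 2
a = 0x1234567890ABCDEF1234567890ABCDEF      # Alice's secret (toy value)
b = 0xFEDCBA0987654321FEDCBA0987654321      # Bob's secret (toy value)

start = time.perf_counter()
A = pow(g, a, p)                    # exponentiation 1: Alice's public value
B = pow(g, b, p)                    # Bob's public value (his cost, shown for the check)
shared = pow(B, a, p)               # exponentiation 2: Alice derives the shared secret
elapsed = time.perf_counter() - start

print(f"DH exponentiations at 2048 bits: {elapsed * 1000:.2f} ms")
print(shared == pow(A, b, p))       # both sides must agree on the secret
```

Note that real deployments also use longer (e.g. 256-bit or full-width) secret exponents, which raises the cost further, so the figure above is a floor, not an estimate.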

Imagine that you've got to load code from 8 different domains for a website. Then your CPU is working flat out for 40ms. Probably more like 50ms on ARM.

That's a lot of time.

Don't try to discredit those with perf concerns when the operation they're complaining about is incredibly expensive; they have genuine concerns.


I'm not trying to imply that all of these concerns are fake, just that performance is one of the easier places to hide bullrun-style sabotage. All claims should be checked, of course.

The link in my earlier post to djb's talk also discusses this issue, if you're interested.

As someone who has written an embedded webserver... including the underlying TCP/IP layer and the driver for the "supposedly NE2000-compatible" Realtek chip... on a Z80 clone using only about 4k of flash and ~1.5-2k of RAM, I'm sympathetic to real performance limitations. (That device handled one minimum-size packet on one socket at a time.)

That said, if you have a 1.2GHz chip, you have enough CPU for crypto. 40ms is a trivial cost for crypto, especially as you only use DH and pubkey to negotiate a symmetric key that isn't going to cause the same kind of CPU load.

There isn't anywhere close to a real performance limitation on that kind of platform, and I would regard any complaint about the performance on a >1GHz CPU as highly suspicious. When you have 1/100 or even 1/1000 the CPU cycles, that's something else entirely.


Let's presume that 5ms is a lot. Does fixing the bit size cut this down significantly? If not, then it's better to handle all parameter sizes with a single codebase.
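This is testable: the cost of modular exponentiation grows roughly cubically with the bit width, so small fixed sizes are fast mainly because they are small, not because they are fixed. A rough benchmark sketch (CPython bignums, arbitrary odd moduli chosen just for timing, not real group parameters):

```python
import time

def time_modexp(bits, reps=5):
    """Rough average timing of one full-width modular exponentiation."""
    p = (1 << bits) - 1          # assumption: arbitrary odd modulus, timing only
    g, e = 3, (1 << bits) - 3    # full-width exponent, near worst case
    start = time.perf_counter()
    for _ in range(reps):
        pow(g, e, p)
    return (time.perf_counter() - start) / reps

for bits in (512, 1024, 2048):
    print(f"{bits:4d}-bit: {time_modexp(bits) * 1000:.2f} ms")
```

On typical hardware each doubling of the size costs roughly 8x, which is an argument about parameter choice, not about whether the code path is hard-coded to one size.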


Absolutely not

What's the cost of increasing, let's say, the key size of a webpage serving SSL content? Merely adding SSL has a non-negligible cost for sites with some traffic.

And you're forgetting all the dedicated hardware that needs to deal with that encryption. Sometimes it's a smartcard or a security token, sometimes it's a mobile phone.


So because costs are hypothetically increasing, you prefer to live in a fantasy land where 512-bit keys are actually sane?

I am not arguing that there isn't a cost associated with crypto. For almost all uses[1], the price of crypto is part of the cost of making something that connects to the internet. If you leave it out, you're creating an attractive nuisance and a potential liability for someone. If you use the bad default of 512-bit crypto, I suggest that any claim of a product being "secure" or "using SSL" is a lie.

> smartcard

A smartcard isn't plugging into the internet on its own. Whatever reads the card can wrap everything in proper crypto.

> mobile phone

You have far more CPU than you need.

[1] The exceptions are limited, such as a device that literally cannot do the crypto (I'm thinking of an old 1MHz micro). Note that these devices shouldn't be directly on the internet, either.



