a) It's a pretty large percentage (~50%) of total latency for viewers in my critical target region (the US east coast), but it matters less for worldwide access.
b) Session tickets cut it from 60ms down to about 0.5ms of extra delay (when load-testing with keep-alive HTTPS connections), so this is really only an initial-handshake problem (config sketch below).
c) The localhost full-handshake latency is really a proxy for the real problem: CPU load. TLS/SSL adds a lot of compute cost to every initial connection. That matters because I have to deal with celebrity content, where a single Twitter link can bring in hundreds of thousands of new connections within a few minutes.
TLS/SSL handshake computation really needs to be sped up somehow.
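For anyone wanting to replicate the session-ticket setup: on nginx it looks roughly like this (a sketch; I'm assuming an nginx-style front end, and the cache size and timeout are illustrative values, not my production config):

    # inside the https server {} block
    ssl_session_cache    shared:SSL:10m;   # server-side session cache, shared across workers
    ssl_session_timeout  10m;              # how long cached/ticketed sessions stay resumable
    ssl_session_tickets  on;               # stateless resumption via RFC 5077 session tickets

With tickets on, a returning client skips the expensive key exchange on reconnect, which is where the 60ms -> 0.5ms drop comes from.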
60ms sounds like too much. How are you measuring this, and what does your TLS config look like?
A quick test with ab (-n 1 -c 1) against an nginx instance shows about 5ms for me, on an Intel E3-1245 V2 @ 3.40GHz. This is with a P-384 key, so it would be even lower with P-256 or RSA-2048 (which, IIRC, have fast assembler implementations in OpenSSL).
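If you want to reproduce the numbers, something along these lines separates the observed handshake time from the raw crypto cost (localhost and the key types are just what I used; adjust for your setup):

    # one request on one fresh connection, so the full TLS handshake dominates
    ab -n 1 -c 1 https://localhost/

    # full handshakes per second, forcing a new session each time
    openssl s_time -connect localhost:443 -new -time 10

    # raw sign/verify throughput of the relevant key types
    openssl speed rsa2048 ecdsap256 ecdsap384

If openssl speed shows P-256 and RSA-2048 well ahead of P-384, the assembler point above is probably what you're seeing.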