
It's amazing that not everyone is on http2 yet when it's basically free speed.


It's hardly a surprise. It takes time for people to migrate to new protocols. Not everyone can just leave 20 years of engineering effort behind and switch to HTTP2 because it's a bit faster in some situations.


HTTP2 has multiple optimisations; e.g. I only just realised it compresses XMLHttpRequest request headers, not just responses.

We use CloudFlare, so most of our users get HTTP2 even though our own infrastructure is still HTTP1.1 (however some corporate customers have proxies, which usually downgrade the browser connection to HTTP1.1).

We log whether HTTP2 or HTTP1.1 is used by the browser via JavaScript reading `window.performance.getEntries()[0].nextHopProtocol`, which is supported by most modern browsers.
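For anyone wanting to do something similar, a minimal sketch (the reporting endpoint is made up, and `getEntriesByType('navigation')` is just a more targeted way to grab the page's own entry):

    // Read the protocol the page itself was loaded over and report it.
    // "/log-protocol" is a hypothetical endpoint, not the actual setup above.
    const nav = window.performance.getEntriesByType('navigation')[0];
    if (nav && nav.nextHopProtocol) {
      // Typical values: "h2" for HTTP/2, "http/1.1" for HTTP/1.1, "h3" for HTTP/3
      navigator.sendBeacon('/log-protocol', nav.nextHopProtocol);
    }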


It's not quite free; HTTP/2 is in fact not uniformly superior to HTTP/1.1. Search around and you'll find the reasons; all I'll mention here are the two biggest keywords: WebSockets and head-of-line blocking.

The end result is that HTTP/2 is an improvement for most common workloads, but not all; especially in app-type scenarios with lots of mobile users with suboptimal connections and comparatively few requests (e.g. because you already do batching rather than sending zillions of requests), HTTP/2 can regress typical performance.

WebSockets over HTTP/2 is now specified in RFC 8441; not sure what the implementation status of that is. That solves one of the main problems.

My understanding is that HTTP/3 (with UDP-based QUIC instead of TCP) then resolves all remaining known systemic regressions between HTTP/1.1 and HTTP/2. So yeah, HTTP/1.1 to HTTP/3 should be pretty close to “free speed”.

But even then, it changes performance and load characteristics, and requires the appropriate software support, and that means that many users will need to be very careful about the upgrade, so that they don’t break things. So it’s not quite free after all.


H2 is nearly always better than HTTP/1, BUT it also turns some H1-specific perf optimisation techniques (e.g. domain sharding, sprites, ...) into anti-patterns.


I’m not amazed. While many stacks support it, most organizations still have lift on their end to implement this behind the other “priority” customer change requests.


Of all the micro-optimizations I could think of for web apps, the one with the highest cost and the least benefit would probably be supporting http2 (or QUIC). In almost all cases, there is a fix that will speed up http1.1 to acceptable levels.


What's the "cost" to supporting HTTP 2 as an app developer, though? As far as I know, adding support to nginx requires changing one line of configuration. That's about as close to free as you can get.
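For reference, on a reasonably recent nginx (1.9.5+) terminating TLS it's roughly just the http2 parameter on the listen directive; a sketch with placeholder names and paths:

    server {
        listen 443 ssl http2;                      # "http2" here is the one-line change
        server_name example.com;                   # placeholder
        ssl_certificate     /etc/ssl/example.crt;  # placeholder
        ssl_certificate_key /etc/ssl/example.key;  # placeholder
    }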


For a tiny startup, you might be able to just add http2 support in 10 minutes and everything might be fine, but most of the time it's more complicated. It's a bit like if I said, can I change your app libraries to bleeding edge? It's just a one line change.


Could you be more specific, though? What's more complicated? I'm legitimately curious because I know very little about HTTP 2, but at work (not a tiny startup) we recently enabled it and it turned out to be a trivial change. Unless you're implementing the networking layer of your backend yourself, it seems like a change with practically no cost or tradeoff, as long as your server software supports it.


I haven't implemented it myself, but here are some example scenarios:

Policy: What is allowed architecturally and what isn't? Are there regulatory requirements? Do you have strict enforcement mechanisms?

Instrumentation: Do you need to watch traffic going over the wire? Will your network filters flag it? Do you have application proxies that route traffic based on payload? How is it going to handle multiplexing if existing solutions don't take it into account? Are you using any proprietary stuff?

QA: Every client, server and intermediary may be using different implementations, and that means bugs. Have you certified all the devices in the chain to make sure they operate correctly? (It doesn't matter, until it really matters)

Operation: Each implementation needs to be upgraded one at a time, so the extent of your technology estate will determine how long and how error-prone all this will be. It will be different for each org, but it will definitely take a long time for really big ones.


This all makes sense. I guess ultimately, the more moving parts you have, the more things can go wrong with a change like this. Thanks!


I imagine it's more bureaucratic complexity than technical. This change would require lots of committee meetings, reviews, and discussions at my company. It would probably take a year to decide to do it and 5 days to actually do it. (Get all IT groups into a large war room, make the change on dev servers, and then everyone has to completely test all their apps and sign off on it. Then do it again on staging. Then again on prod. It would probably require a bunch of all-nighters. I wish I was joking.)


How do you get any work done when a one-line change takes so long?


Changing a version number is a special type of one-line change, because it only appears to be a one-line change. In reality, it could mean potentially millions of lines of code changing in dependencies.


We had nginx running as a proxy for one of the apps. It runs on RHEL 7 because that's the standard for the enterprise. The stock nginx available in EPEL did not support http/2.

* There is no chance someone will approve this server to run a nginx instance someone compiled themselves

* There is no chance someone will approve this server to run anything but nginx as that's the company standard for proxy servers.

* There is no chance someone will approve this server to install software from a 3rd-party yum repository. (And even that is more likely than someone allowing the firewall in front of that server to permit outgoing connections to the internet, which would be needed to actually install from 3rd-party repos.)

In the end there were likely two ways to get http/2 support for that service:

* Pay some 3rd party to make it happen and be responsible for that server.

* Wait until nginx in RHEL (the EPEL repository, which was approved and mirrored internally) supported http/2.

We did the latter, which happened many months later.


One thing I've run into is misbehaving native apps that accidentally treat headers as case-sensitive (HTTP/2 sends all header names lower-cased). In particular, the usual HTTP client in iOS handles header case-insensitivity for you, unless you use newer versions of Swift, where it converts the custom header dictionary into a vanilla Swift one that doesn't treat header lookups as case-insensitive :/


Supporting http2 at an nginx reverse proxy doesn't help the problem in the original post, which is mostly about internal connections between microservices, e.g. going from your nginx proxy to your node or rails server.

Putting http2 here is a pain because you probably don't want https internally. You'd have to have nginx decrypt and re-encrypt all the traffic, and you'd have to deal with certificates etc.


Server push can't be free. You need the web server to somehow know what resources will be required by the page, and I still don't understand how it doesn't defeat browser caching, but I presume it must involve some non-trivial configuration.


Server push also isn’t required to reap most of the benefits. In the places where I’ve tried it I’ve not seen any benefit over link preload headers + HTTP2 without push. Many CDNs that support HTTP2 haven’t bothered to support server push at all, I suspect due to the limited advantages compared to the extra complexity.
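For reference, a preload hint is just a response header along these lines (the path is a placeholder):

    Link: </static/app.css>; rel=preload; as=style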


Pretty sure server push is being deprecated - the current implementation is 'only half a feature', as clients lack the ability to tell the server what's already in their cache.

In theory the client can cancel the push for a resource it's already got, but by the time the pushed bytes reach the client it's really too late.


Yeah, and if the client volunteered the list of everything it has in cache, it would result in some massive requests and a kind of quasi-cookie.

I always found this feature interesting but weird.


Random thought, but isn't this a potential use for a Bloom filter?


Yes, Bloom filters were a candidate for Cache Digests; the original prototype for Cache Digests used a Golomb coded set, as its memory representation is smaller than a Bloom filter's.

https://github.com/h2o/h2o/issues/421

But then Cache Digests moved on to Cuckoo filters - https://github.com/httpwg/http-extensions/pull/413
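To illustrate the general idea (a toy sketch in JavaScript, not the actual Cache Digests wire format): hash each cached URL into a small bit set that the client could send, trading occasional false positives for a very compact digest.

    // Toy Bloom filter over cached URLs; sizes and hash count are arbitrary.
    function makeFilter(bits) {
      return { bits, data: new Uint8Array(Math.ceil(bits / 8)) };
    }

    function hash(str, seed) {
      let h = seed >>> 0;
      for (let i = 0; i < str.length; i++) {
        h = Math.imul(h ^ str.charCodeAt(i), 2654435761) >>> 0;
      }
      return h;
    }

    function add(filter, url) {
      for (let seed = 1; seed <= 3; seed++) {        // 3 hash functions
        const bit = hash(url, seed) % filter.bits;
        filter.data[bit >> 3] |= 1 << (bit & 7);
      }
    }

    function mightContain(filter, url) {
      for (let seed = 1; seed <= 3; seed++) {
        const bit = hash(url, seed) % filter.bits;
        if (!(filter.data[bit >> 3] & (1 << (bit & 7)))) return false;
      }
      return true;                                   // "maybe": false positives possible
    }

    // const digest = makeFilter(1024);               // ~128 bytes for the whole cache
    // add(digest, "https://example.com/app.css");
    // mightContain(digest, "https://example.com/app.css");  // true
    // mightContain(digest, "https://example.com/other.js"); // almost certainly false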


Your webapp needs to initiate the push; browsers will interrupt it if they find the resource is already in cache after parsing the HTML.
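With nginx in front, for example, one way to wire that up (a sketch, assuming nginx 1.13.9+) is to have the app emit preload Link headers and let nginx turn them into pushes:

    # Convert "Link: ...; rel=preload" response headers from the upstream into HTTP/2 pushes
    http2_push_preload on;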


It's also a huge amount of complexity.


Well, mod_http2 is incompatible with mpm-itk, and while it's possible to run nginx as a front-end proxy, such a solution has its own complexities, making it not really worth it in most cases where speed is not a top requirement.


Not when you consider that the majority of Linux distributions haven't picked up support for it yet in their versions of nginx, Apache et al.


Like which?


RHEL 7/CentOS 7 is one of the more significant distributions out there; there you can yum install version 1.12 of nginx. It wasn't until 1.13 shipped that nginx picked up any support for http/2.

On the Apache front, mod_http2 didn't ship until 2.4.17; again, CentOS 7 and other RHEL 7-based distributions lag behind on 2.4.6.

Sure, that doesn't mean you couldn't compile / install your own version, but for a lot of people that's just not likely to happen. Sticking with the distribution version keeps you within any support contracts, gets you security patches etc. and all the information you need to keep auditors and the like happy.


nginx is not shipped by RHEL, so you are probably pulling from EPEL. You can also pull the latest stable version directly from nginx's repo http://nginx.org/en/linux_packages.html#RHEL-CentOS which has supported http2 since RHEL 7.4, when they released ALPN support in OpenSSL.

https://ma.ttias.be/centos-7-4-ship-tls-1-2-alpn/

You can also install the latest version of Apache from Red Hat's Software Collections repo, which supports http2, but it throws everything into /opt/rh/rh-httpd24/ which is a bit weird.



