> What about your cache, should we just introduce HTTP into the caching mechanism and hope that one of your clients has the data you wanted to deliver to someone else?
HTTP caching is a standard built explicitly for this, and it's gotten better at precisely these jobs over the years (serving as a substitute for server memory and indexes). So if that "someone" is the user who holds the cache, yes.
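To make that concrete, here's a minimal TypeScript sketch of a client revalidating a cached copy instead of re-downloading it. The URL and the idea of a stored ETag are hypothetical; If-None-Match and 304 Not Modified are the standard machinery I'm talking about:

    // Sketch: revalidate a cached copy rather than re-fetching the whole resource.
    // The URL is made up; If-None-Match / 304 are the real HTTP caching mechanics.
    async function getArticle(cachedEtag: string | null): Promise<Response> {
      const headers: Record<string, string> = {};
      if (cachedEtag) {
        headers["If-None-Match"] = cachedEtag; // "only send it if it changed"
      }
      const res = await fetch("https://example.com/articles/42", { headers });
      if (res.status === 304) {
        // No body was sent and the server did no rendering: the client's copy
        // stands in for server memory, which is exactly the offloading I mean.
        console.log("use the locally cached copy");
      }
      return res;
    }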
> Are your clients going to be responsible for ensuring you don't max out your server with the amount of processes your httpd can handle?
HTTP 307 Temporary Redirect. 429 Too Many Requests (or Twitter's 420 Enhance Your Calm). Keep-alive. Etc. There are many standards and conventions in place to let the client participate in not overloading the server. The server must always be willing to deal with a new actor, but that doesn't mean there aren't systems in place to help clients be nice to overloaded servers without breaking other parts of the spec (caching).
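As a TypeScript sketch of that kind of client-side cooperation (the status codes and the Retry-After header are real; the specific retry policy here is just an illustration):

    // Sketch: a client that backs off when the server signals it is overloaded.
    async function politeFetch(url: string, maxRetries = 3): Promise<Response> {
      for (let attempt = 0; attempt <= maxRetries; attempt++) {
        const res = await fetch(url); // temporary redirects (307) are followed for us
        if (res.status !== 429 && res.status !== 420) return res;
        // Honor the server's Retry-After hint if present, else back off exponentially.
        const hinted = Number(res.headers.get("Retry-After"));
        const waitMs = hinted > 0 ? hinted * 1000 : 1000 * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, waitMs));
      }
      throw new Error(`still overloaded after ${maxRetries} retries: ${url}`);
    }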
> Decades of networking work, by extremely talented individuals, to make sure the client has to do very little lifting at the layers closer to the metal by people much, much smarter than us.
I don't see any evidence that all that networking work is to make sure that clients don't need to do work. Instead, I see many clever systems on both client and server being created to push the boundaries of resource usage past previous limits.
On the server, it sounds like you are familiar with these techniques. Load-balancing, edge-caching, distributed applications, etc.
On the client, it must come about through standards and behavior specifications. If you're imagining a world where servers are placing requirements on clients before they are ready to handle them, I agree, that would be awful. Servers should always have ultimate responsibility for being a good host to any potential connection.
However, once a client behavior standard has been well agreed upon for a significant amount of time, servers should absolutely take advantage of it. Should programmers of web applications skip any of the behaviors above (HTTP caching and REST), or shun JSON APIs or pushState or any of those things?
No way! They should take full advantage, as long as they know what they are doing. And yes, you still have to be a responsible citizen (for your own good as much as anyone's).
For example, if you use pushState, you still must keep in mind how search engines may observe your pages[1]. Using cache expirations aggressively but correctly still takes some careful planning. And so on.
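For illustration, here's a TypeScript sketch of that pushState discipline. The renderArticle function and the /articles/:id route are hypothetical; the point is to only push URLs the server can also answer, so crawlers that don't run JavaScript still get real content:

    // Sketch: navigate with pushState, but only to URLs the server can also answer.
    declare function renderArticle(id: string): void; // hypothetical client-side renderer

    function navigateToArticle(id: string): void {
      const url = `/articles/${id}`;
      if ("pushState" in history) {
        history.pushState({ id }, "", url); // change the address bar without a reload
        renderArticle(id);                  // the client does the rendering work
      } else {
        window.location.assign(url);        // older clients fall back to a full page load
      }
    }

    // Keep back/forward working by re-rendering on history navigation.
    window.addEventListener("popstate", (event: PopStateEvent) => {
      if (event.state?.id) renderArticle(event.state.id);
    });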
But the principle of "offloading", as you put it, brings the possibility of an optimization by a factor of N, where N is your audience size, instead of the constant factor K that many other server-side optimizations are limited to.
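A back-of-envelope illustration of that N-versus-K point, with made-up numbers:

    // All numbers are hypothetical; the point is the shape of the saving, not the values.
    const audience = 1_000_000;   // N: users whose browsers hold a cached copy
    const viewsPerUser = 10;      // repeat views per user per day
    const withoutClientCache = audience * viewsPerUser; // 10,000,000 origin hits/day
    const withClientCache = audience * 1;               // one revalidation each
    console.log(withoutClientCache - withClientCache);  // 9,000,000 requests offloaded
    // A server-side tweak that shaves a fixed fraction off each request is a constant-
    // factor (K) win; the saving above grows linearly with the audience (N).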
This is not "nullifying" anyone's networking work. Clients, under carefully grown and tested behavior standards, _should_ be able to do some lifting.
If you want to talk about user experience, that's a whole other issue. And yes, there is significant evidence (responding to your earlier comment) that things have gotten better -- just a couple years ago, web fonts were almost unusable because of the flicker that happened on load. I am sure some clients still see it, but for my own personal use, the flicker is virtually 100% gone.
If the client can't display the page in a time that's a reasonable replacement for the server doing it, then it's a no-go. But if it can, it's a huge win and IMHO doesn't subtract anything from the (admittedly genius) networking stacks underneath.
Finally, if the technology in these networking layers could so easily be nullified by so-called "ignorant" individuals, wouldn't it be not-so-brilliant to begin with?
[1]: http://stackoverflow.com/a/6194427/143295