1. Long polling can tie up server resources too. There is a process on the other end of that long AJAX request. The mechanism for delivering the streaming connection to the client is orthogonal to the mechanism for handling that connection on the server.
2. You say a streaming connection "ties up a huge amount of server resources," but the whole point of the linked article is that this does not have to be the case; Node.js can (when used correctly) handle loads of connections in a single process, and Phusion Passenger is clearly trying to evolve their model to achieve similar if not fully comparable results.
> Node.js can (when used correctly) handle loads of connections in a single process
Node uses an evented IO layer, which is completely orthogonal (and thus irrelevant) to streaming responses. You can stream responses with blocking IO, and you can buffer responses with evented IO.
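To make the orthogonality concrete, here's a minimal sketch of a Rack-style streaming body in Ruby (the `TickStream` class and its figures are mine, purely illustrative): the body is just an object that responds to `#each` and yields chunks, and nothing about that requires an event loop. Each yield can sit behind plain blocking work.

```ruby
# Sketch: a Rack-style streaming body. The server writes each chunk to the
# socket as it is yielded; the work between yields is ordinary blocking code.
class TickStream
  def initialize(count)
    @count = count
  end

  def each
    @count.times do |i|
      sleep 0.01          # blocking work between chunks -- no event loop needed
      yield "tick #{i}\n" # chunk goes out as soon as it is yielded
    end
  end
end

# A Rack response triple using the streaming body:
response = [200, { "Content-Type" => "text/plain" }, TickStream.new(3)]

# What the server would do with the body:
response[2].each { |chunk| print chunk }
```

The flip side (buffering with evented IO) is equally easy: an evented server can accumulate every chunk in memory and flush the whole thing at the end, so neither property implies the other.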
> Phusion Passenger is clearly trying to evolve their model to achieve similar if not fully comparable results.
If you think that can happen, you're deluding yourself. The Ruby + Rails model means you need one worker (OS-level; whether it's a process or a thread does not matter) per connection. With "infinite" streaming responses, this means each client ties up a worker forever. OS threads may be cheaper than OS processes (when you need to load Ruby + Rails into your process), but that doesn't mean they're actually cheap when you need a thousand or two of them.
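A rough back-of-envelope makes the point. The figures below are assumed ballparks, not measurements, but the shape of the result doesn't depend much on them: with worker-per-connection, memory scales with the number of concurrent open connections rather than with request throughput.

```ruby
# Back-of-envelope with assumed ballpark figures (not measurements):
process_rss_mb = 150   # assumed: one Rails process with the app loaded
thread_rss_mb  = 2     # assumed: extra resident memory per OS thread

clients = 2_000        # concurrent "infinite" streaming connections

puts "process-per-connection: ~#{clients * process_rss_mb / 1024} GB"
puts "thread-per-connection:  ~#{process_rss_mb + clients * thread_rss_mb} MB"
```

Even granting threads a two-orders-of-magnitude advantage over processes, the thread-per-connection figure still grows linearly with clients, which is exactly the scaling an evented server avoids.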