Paying for traffic to be prioritized is the exact opposite of net neutrality. No need to pay lip service to the idea then. The obvious danger here is that small indie studios may not be able to afford to make enjoyable real-time multiplayer games.
How about improving public internet infrastructure instead?
As a small indie developer I don't understand your argument. Building a network like Riot's can only be done by billion-dollar publishers, but I could conceivably use this network.
If you don't want solutions like this, are you also against private networks? In addition to Riot, Google famously cuts the Gordian knot of CAP in Spanner by declaring that their network won't go down.
If it's ok to do it yourself but not outsource it, "think of the little guy" holds no water.
Steam might be a better example here, as Valve already allows devs to use their private network for game servers.
Although it should be noted that all of this effort is only worth it because greedy ISPs are unwilling to peer with certain other networks in most (all?) cases, as the private network still runs through the same cables for most of the way.
Why would Riot give this network to competitors? Only to make money. If that price gets too high, you, as a small indie developer, have just been driven out of the market.
That's the logic of the above post. The answer is NEITHER "outsource it" NOR "do it yourself". Instead, if the public offering is good enough, you don't need either option. That's how the internet has worked and grown so far.
This isn't me, this is the whole concept of net neutrality - the internet is a utility where all access is equal access, at least in terms of priority. (Obviously we have rate limiting, and people generally don't protest that - it's priority that is the problem.)
Many of the complaints in reaction to the article are that the article uses the language of net neutrality to argue for the opposite (pay for access, which includes the ability to pay more for privileged access).
Net neutrality is about content-neutrality and origin/destination neutrality. It is not opposed to giving packets higher priority when there is a technical reason for it.
Having a protocol and routes that guarantee low ping but require low bandwidth makes sense to me. A lot of the current infrastructure is built for TCP: something that can handle losses and actually uses packet loss to maximize its bandwidth use.
Having specific shortcuts that open fast lines does not threaten net neutrality. For instance, I think game devs would love it if you could "reserve" 10 KB/s of the lowest ping you have out of your 500 MB/s bandwidth budget and bind it to a given port.
And note that it is not just games that require it. Telepresence applications are also limited by this problem.
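There's no standard internet-wide way to make that kind of reservation today; the closest existing knob is DSCP marking, which asks (but cannot force) routers along the path to treat a flow as latency-sensitive. A minimal Python sketch, assuming a Linux-style socket API; the server address and port are hypothetical:

    import socket

    EF = 46  # "Expedited Forwarding" DSCP code point for low-latency traffic

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The TOS byte carries the DSCP value in its upper six bits.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
    sock.sendto(b"player input frame", ("203.0.113.7", 27015))

On the public internet these markings are frequently ignored or rewritten at network boundaries, which is exactly why a reservation with teeth would need something more than a hint bit.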
There are also technical reasons why Netflix (or any other company significantly affected by net neutrality) needs to pay Comcast extra in order for their packets to get higher priority, so everyone can watch Netflix without stuttering and drop-outs.
Paying for higher-priority traffic/more bandwidth looks very, very similar to paying for traffic not to be throttled. Some might argue it's the same thing.
IMO the difference is that backbone packet transit is different from end-user ISP transit. Unfortunately net neutrality gets very muddy with, e.g., Verizon being both the backbone and the end-user ISP. Netflix network engineers are keenly aware of transit, which is part of the reason why they run their own CDN.
A protocol like that would be terrific because then each hop along the path would know exactly what is going on without needing to have explicit rules built in or be making it up based on heuristics and DPI.
I don't see how this is any more against network neutrality than regional mirrors / CDNs.
This is an optimisation which can be made by the game developer / host, where the alternative would be prioritisation by the ISP, which would go against net neutrality.
The ISPs charge lots of money for the CDNs and private networks that want to connect to their customer networks at the edge. This is the net neutrality argument; it's almost impossible to get good performance at scale without paying extra for it.
I actually think the problem this is meant to solve is an example of why, from a purely pragmatic sense, net neutrality is effectively impossible to deliver at scale. Someone has to eat the costs somewhere, and the publishers are the ones who benefit the most financially.
A bike courier or a lawyer racing to the courthouse benefits far more from the roads than others do but doesn't pay more. Common carrier regulations were widespread in moving atoms.
Toll roads charge you for travelling over them. They don't charge the business you're travelling to based on how much value you have to them as a customer.
Not really a faster road, as anyone stuck in an express lane behind a Prius will tell you, but a less congested road free of peasants. Not really sure how this analogy syncs up, but just throwing in my $0.02.
That's how it works; no large telco I've ever worked for is willing to put third party gear (or really any gear that hasn't gone through extensive validation) into their head ends. An IXP is just a datacenter that the big ISPs and transit providers use to exchange traffic. The business agreements determine the price paid (which in some cases is zero, but often not, and is negotiated like any other business deal).
> The business agreements determine the price paid (which in some cases is zero, but often not, and is negotiated like any other business deal).
I do not understand this part: why is a business agreement necessary to simply discover BGP paths?
There is an ISP with users that want to go to YouTube or Akamai: the ISP has a router with the full BGP prefix list of the Internet, including the prefixes of YT and A. YT and A are probably in most of the larger IXPs, and presumably the ISP in question is also at a few IXPs.
Why would a business agreement be necessary for the ISP to send traffic over the IXP's switches to the CDN(s)? What's the point of connecting into an IXP if you add the 'overhead' of business agreements?
> Why would a business agreement be necessary for the ISP to send traffic over the IXP's switches to the CDN(s)?
Because that's how distribution works and has always worked under corporate capitalism: the company making money off the content itself pays to distribute it (on the Internet, this is generally understood to be the originator of the packets). So the ISPs charge the CDNs and companies like Netflix to accept their traffic. A common argument is "well I already pay for that as a customer!" Telecom business models are a lot more complicated than that, and if 100% of their revenue came from subscription fees you'd be paying a lot more than you do now.
> What's the point of connecting into an IXP if you add the 'overhead' of business agreements?
Because it's a convenient location to house network gear with easy access to multiple large ISPs and upstream networks? An IXP is just a datacenter. What makes it an IXP is the fact that multiple large telco networks are hosted there.
From the wikipedia article linked above: "The Vancouver Transit Exchange, for example, is described as a 'shopping mall' of service providers at one central location, making it easy to switch providers, 'as simple as getting a VLAN to a new provider'. The VTE is run by BCNET, a public entity."
I am aware of paying for transit, which allows you to access larger swaths of the Internet through someone else's (better connected) network.
But if I am with an ISP A, and I want to watch something on YouTube, then given that I am paying my ISP for connecting me to "the Internet", and YT is on "the Internet", how/why would YT pay the ISP anything?
How exactly would an ISP charge a CDN, YT, or Netflix? The content distributors simply connect to the Internet and advertise via BGP: besides paying their own ISP(s), how would a content provider pay a 'distant ISP'? If a content provider is willing to pay for dark fibre and install their own gear into IXPs, how would any ISP issue an invoice to the CDN(s)?
And the "transit exchange" at VanIX seems to be separate from the open peering option available by simply advertising to the router servers. From the sentence right before the one you quote:
> When these conditions are met, and a contractual structure exists to create a market to purchase network services, the IXP is sometimes called a "transit exchange".
TorIX has the majority of participants doing simple peering:
> The Exchange also offers two BGP Route-Servers, which allow peers to exchange prefixes with each other while minimizing the number of direct BGP peering sessions configured on their routers.[3] Participation is voluntary, with approximately 85 percent of the membership using the free service.
This is not the net neutrality argument. It should be a cost saving for both the CDN and the ISP to connect directly, because this saves both parties needing to pay for IP transit. That the ISP may charge the CDN for this is because, in a regulatory vacuum, they have been allowed to ransom their customers.
The behaviour of discriminating based on application or financial potential is not tolerated (legally or socially) in other common carriers like mail.
> It should be a cost saving for both the CDN and ISP to connect directly, because this saves both parties needing to pay for IP transit.
It almost always is a cost savings -- I've worked on enough interconnect agreements, and the dispute is always over value capture (who gets how much). Transit is expensive, but transit is paid by the originator. The ISPs incur indirect costs that are harder to measure, so it has to be negotiated.
> The behaviour of discriminating based on application or financial potential is not tolerated (legally or socially) in other common carriers like mail.
It's not? Because big companies absolutely get preferential treatment with mail too. USPS does things for Amazon that they do not do for anyone else.
It hasn't always been like this, but the deregulation craze of the '90s/2000s weakened a lot of the protections we had against it.
> Paying for traffic to be prioritized is the exact opposite of net neutrality
This is exactly what peering is a significant chunk of the time. You pay for better peering through the fees you pay to the right server parks. If that's the exact opposite of net neutrality, it has been dead for a very long time.
The last time I heard the term "peering", it didn't describe an exchange of money, just an exchange of bits, where the peering partners set up a link and send data to each other.
Transit, however, is indeed traditionally paid for.
As for paying someone to send data destined for their network, that's weird. Unlike transit, they want that data; why would you pay to deliver it to them?
Improving public internet infrastructure is an excellent idea, but if you're expecting game devs to rely on that solution, you're asking them to please defer being successful at their goal until we've finished boiling the ocean.
We should anticipate some will say no thank you and strike out forging their own waterways.
If you want net neutrality you should be excited by companies building private networks because it means there's a competitive market that no one ISP or government can control.
One main difference is anyone can buy a truck and start a shipment company. Unless you have a ton of cash, you cannot lay down infrastructure for your own ISP and must pay someone else to use theirs.
Also, I am pretty sure a lot of our internet infrastructure was subsidized (via grants or whatnot) by the government for the big telco companies (I have been trying to find legit links to source this, but in my quick, limited search only found Reddit threads). If this is true, then we, the taxpayers, helped pay for the internet we use and should get at least net neutrality.
Just my opinion; it probably won't match everyone else's.
Building a shipment company which can reach everyone in a country is a substantial capital investment, which I suspect is higher than that for a nationwide private network, though which is higher is irrelevant to the argument.
The roads, which the trucks require, are subsidized by the government.
In other words, I don't see the differences you're claiming.
Net neutrality, as an argument, applies to Internet; I've never seen a claim that it should apply to private networks, even those which use IP/TCP/UDP.
That's more like saying that anyone can pick up a piece of mail and become a mail carrier. There are all manner of legal and practical barriers to making an internet work outside of your street.
Where I am, it's a simple matter of setting up some high-power relays and building a mesh network in the permitted radio band. People do it all the time.
Also, I think you meant courier. Mail is quite tightly regulated at the federal level, at least in the United States.
There are a number of feasible things which are not permitted by law. There are significant advantages to ensuring net neutrality exists and is maintained.
This sounds like you are saying that reclassifying ISPs to be regulated under the FCC instead of the FTC would ban traffic shaping.
Obviously not all packets are equal. VOIP is an example of a high priority protocol. BitTorrent is low priority.
Legislating traffic shaping at this level would be absurd. Have I been living in a cave? Are Net Neutrality advocates arguing that it should be illegal for network operators to perform any kind of traffic shaping, even that which would prioritize the traffic for latency sensitive applications?
> Are Net Neutrality advocates arguing that it should be illegal for network operators to perform any kind of traffic shaping, even that which would prioritize the traffic for latency sensitive applications?
At least some of us NN advocates believe it should be illegal for ISPs to perform the kinds of traffic shaping that explicitly identify and prioritize certain ports or protocols over others, because that kind of traffic shaping is not really necessary to offer good QoS, and thus there's no reason to continue allowing it.
> that kind of traffic shaping is not really necessary to offer good QoS
It is nobody on HN's responsibility to educate me on this, but I'd love some good hearty technical reading on this topic, because this is definitely counter-intuitive to me. If anyone has a link to a resource on this topic or feels motivated to type out a technical description of how QoS could work without explicitly identifying and prioritizing certain traffic, I will read it raptly and greatly appreciate the additional education.
For home routers, there was a major breakthrough in 2012 with the CoDel AQM algorithm, which was paired with a flow queuing system to create fq_codel and later Cake. These systems do not have any rule sets of the form "prioritize port N". fq_codel and Cake do look at port numbers and protocols, but only for the purposes of sorting packets into separate bins for separate network flows. Each bin gets the same set of rules applied to it, so in that sense they are Neutral.
CoDel on its own prevents a high-latency queue of packets from building up, but since it operates on a single FIFO queue, it is indiscriminate about which packets get dropped when it's time to drop something. fq_codel and Cake will tend to give priority to new or sparse traffic flows, and when they need to drop a packet they will drop from a flow that has a standing queue, on the assumption that those high-bandwidth flows are likely to be less latency/drop-sensitive and can probably back off on their transmit rate. So any protocol with VOIP-like traffic patterns will tend to get prioritized enough to have minimal added latency and no packet loss (provided it's using a small share of available bandwidth), and the packet drops/ECN markings will hit the network flows that are behaving like TCP bulk file downloads. These heuristics do imbue fq_codel and Cake with a bias toward certain traffic-handling policies, but it's very analogous to the heuristics used by a typical operating system CPU scheduler, and well-grounded theoretically and empirically.
I said at the beginning "for home routers", because these new AQMs have not yet been incorporated into the kinds of ASICs used for carrier-grade equipment. But anywhere that it is practical to deploy these algorithms, they are easier to configure and offer better performance than the now-obsolete QoS strategies that depend on things like trying to decide whether port 53 should go to the head of the line to speed up DNS queries. These new algorithms have proven that ISPs do not need to buy any equipment to do things like detect and throttle bittorrent traffic in order to prevent it from overwhelming their network. They just need to upgrade their routers and gateways to use good general-purpose traffic management techniques.
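To make that concrete, here is a simplified Python sketch of CoDel's drop decision (the real algorithm, RFC 8289, has a more careful state machine, but the shape is the same). Note that nothing in it looks at ports, protocols, or payloads:

    import math
    import time

    TARGET = 0.005    # 5 ms: acceptable standing-queue delay
    INTERVAL = 0.100  # 100 ms: how long delay must persist before dropping

    class CoDelSketch:
        def __init__(self):
            self.first_above = None  # when sojourn time first exceeded TARGET
            self.dropping = False
            self.count = 0           # drops in the current dropping episode
            self.drop_next = 0.0

        def should_drop(self, enqueue_time):
            """Called at dequeue with the packet's enqueue timestamp."""
            now = time.monotonic()
            sojourn = now - enqueue_time  # time this packet sat in the queue
            if sojourn < TARGET:
                # Queue is draining fine; leave the dropping state.
                self.first_above = None
                self.dropping = False
                return False
            if self.first_above is None:
                # Delay just went bad; give it one INTERVAL to recover.
                self.first_above = now + INTERVAL
                return False
            if not self.dropping and now >= self.first_above:
                # Delay stayed bad for a whole INTERVAL: start dropping.
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL
                return True
            if self.dropping and now >= self.drop_next:
                # Control law: drop more often the longer the queue persists.
                self.count += 1
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                return True
            return False

fq_codel essentially runs one small queue like this per flow (hashing the 5-tuple to pick a bin) and round-robins between them, which is where the "new/sparse flows win" behavior comes from.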
> This isn't animal farm, some packets are not more equal than others
Some packets are more equal because they are being greased with payment to transport providers. Like Bitcoin transactions, you can pay more to have it done faster. Or the HOV lane in big cities - pay a fee or bundle the 'packets' to get downtown ahead of the rest of the traffic.
Or the Chicago expressway. You get the convenience of skipping local roads (and a higher likelihood of getting shot than driving through Baghdad, but that's a digression), but it costs $5-6 to use the road.
That sounds like a physical fast lane to me. Pay more, get a faster, more direct route.
But it does mean the USPS truck is less than optimally full while still costing about the same to run, meaning tighter, if not negative, margins.
These tighter margins will (if they weren't government supported) eventually mean that the USPS should reduce the number of trucks they run to better optimize their costs.
Or, at the very end, USPS is not able to compete, as it has fewer packages and its costs are higher. And in the end FedEx keeps all the packages, and once the competition is out, they can raise prices without improving the quality of their service, as the cost of entering to compete is too high for any other company to afford...
The cost would be much less for a company to just take over the USPS infrastructure and run it in a different manner. No need to reinvent the wheel or start from scratch.
And if indeed a monopoly were to develop, it's not as if they could charge whatever they want with impunity. There are many multi-billion-dollar companies that would be more than happy to expand into the logistics field if the potential profits are there. FedEx would need to keep their costs reined in to stave off that threat.
Even if a monopoly could never be assailed (untrue, but for the sake of argument) they still couldn’t do whatever they want. They raise prices, people will search for alternatives. Overnighting a contract too expensive? Companies will start to shift more towards secure verified digital signature and verification methods, and the service providers lose money. Bandwidth being too throttled to play online FPS games or stream HD shows? People will start to look towards more localized or even non-digital options, and the service providers lose money.
Ugggggh. Net neutrality analogies are hard. Why are they so hard?
The point I was making was a lot simpler: you could fly 1, 100, 100000 additional FedEx priority jets around the world and it wouldn't delay a single USPS truck one second longer.
The original point at the top was that low-latency gaming across the net reduces a QoS issue to a net neutrality issue in the end, if we can't add more bandwidth (or airspace) the moment it's needed.
But if the higher latency traffic is asynchronous, like an email, then it simply doesn't need the same prioritization as the low-latency traffic.
And those FedEx planes could slow down the USPS traffic, more FedEx planes means more last mile trucks, more road congestion, FedEx buying more optimally located sorting centers, last mile carriers prioritizing FedEx pickup/drop-off over USPS, etc.
What you're trying to put your finger on is that we are beginning to hit utilization levels of network resources where the previous model of statistical multiplexing is starting to show signs of strain.
You used the FedEx plane as an analogy for new, faster/higher-bandwidth infrastructure. The USPS truck's contents may have been on one of those jets at one point (if the contents are considered to be packets, and the truck was a lower-bandwidth link).
However, network operators nowadays, instead of building up infrastructure capable of ensuring all traffic gets similar QoS (which costs money), resort to traffic shaping, route analysis, and DPI (looking in the box to figure out what it is) to try to eke out QoS on more congested links by at least making the losses explain-away-able.
This is the opposite of the colloquial understanding of net neutrality, which is to faithfully deliver every packet regardless of content and mind your own business in terms of what it contains.
You don't look at the package. You don't shake it. You don't paw at it, poke it, or molest it. Just get it from A to B.
Is the analogy for what Network Next is doing more like adding special trucks to the existing traffic or like building roads that only its trucks can go on (which don't interfere with existing roads)?
FedEx doesn't have the ability to control the stop lights at each intersection along the way, the major peering providers do. This analogy doesn't work.
Except that the marginal cost of each "shipped good" is only a very small handwave away from zero. Which, as I understand it, is fundamentally different from the cost model of shipping physical packages.
Up to a certain point, yes you pay per physical unit.
You get a lot better rate if you can fill a whole intermodal container. You get an even better rate if you buy that container's capacity for an agreed-upon period of time.
There's a huge industry built around optimally buying space in shipping containers and transportation.
Look at it from the perspective of a random game developer: which one do you actually have more control over?
You're essentially asking, "why don't you ignore the thing that definitely works right now, and instead try the thing that might work out in a decade or two if you get lucky?" Why would any sane business choose that option?
No, the actual business question for indie developers is: "why do you expect us to agree to buy a thing that we can't afford, instead of demanding at the societal level that we all improve the public good we actually do have access to?"
That's what the GP's remark of "only big players can afford private networks" was about, and it's the opposite of what you're claiming.
> Steam Datagram Relay (SDR) is Valve's virtual private gaming network. Using our APIs, you can not only carry your game traffic over the Valve backbone that is dedicated for game content, you also gain access to our network of relays. Relaying the traffic protects your servers and players from DoS attack, because IP addresses are never revealed. All traffic you receive is authenticated, encrypted, and rate-limited. Furthermore, for a surprisingly high number of players, we can also find a faster route through our network, which actually improves player ping times.
> every ten seconds, Network Next runs a bid on its marketplace to find the best route for your players across our supplier networks.
> The winning bid carries your player's traffic for the next 10 seconds, then the process is repeated every 10 seconds, for each player
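Purely for illustration, a toy version of one bid round might look like this; the supplier names, prices, and selection rule are all invented, not Network Next's actual marketplace logic:

    SLICE_SECONDS = 10  # re-run the auction this often, per the quote above

    def pick_route(bids, budget_per_slice):
        """Pick the lowest-latency bid that fits the budget; else fall back."""
        affordable = [b for b in bids if b["price"] <= budget_per_slice]
        if not affordable:
            return {"supplier": "public internet", "latency_ms": None, "price": 0.0}
        return min(affordable, key=lambda b: b["latency_ms"])

    # Hypothetical bids from supplier networks for one player's session.
    bids = [
        {"supplier": "carrier-a", "latency_ms": 18, "price": 0.004},
        {"supplier": "carrier-b", "latency_ms": 24, "price": 0.002},
    ]
    route = pick_route(bids, budget_per_slice=0.003)
    print(f"next {SLICE_SECONDS}s via {route['supplier']}")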
Markets in everything! Like realtime ad markets, it's one of those things that's simultaneously an impressive technical achievement and a Randian nickel-and-diming nightmare.
Like surge pricing, I wonder if there's any incentive for some of the actors to manipulate the system to raise prices ...
You could start undercutting bids when your competitor's games come out, hoping that if their game's multiplayer is shitty enough on opening weekend it'll reduce the game's playing population in the long run.
A lot of games already (exclusively!) use private networks these days, and even older games like Dawn of War 1 and Civilization 4* had their Internet multiplayer removed in favor of Steam's (V?)PN.
A lot of the blame is on "I"SPs filtering traffic and sometimes not even giving access to the router so that the host can set up port forwarding!
*Non-Steam Civ4 still has Internet multiplayer. After complaints, Steam also allowed players to revert to the latest official patch using their "beta" feature (for some reason the new Steam patch also breaks mods - even though they are played even more overwhelmingly in single player! - and Civ4 has one of the biggest mod communities thanks to its semi-open-source C++/Python/XML modding).
Technically maybe not (?), but if the end result is that you cannot connect to someone hosting the game without also both of you running Steam (and a Steam version of the game), because you don't even have the option to input their IP - what's the difference ?
> The internet doesn't care about multiplayer games
Good. This is how things should be with regard to dependencies.
Games depend on the internet. If the internet gets rewritten, you'd have to rewrite your games. Low likelihood (of the internet changing), "medium" cost (to rewrite games).
If the internet did care about multiplayer games, then changes to multiplayer games could require us to change the internet too. High likelihood (of games changing), enormous cost (to update the internet).
I think it should care about games and that it already does. A lot of development for computers was made possible because of gaming.
Today streaming is also a huge factor. But the principle is the same: any low-latency application will profit from the expectations consumers create on a large scale.
So I think the premise of the article is wrong. The "net" does care about speed and latency. And I doubt a Balkanization into private networks will be advantageous for anyone besides the respective providers. Premium traffic shouldn't be the goal. Common infrastructure that can handle the load is, and exclusive network access should be minimized to as few applications as possible.
If the needs of the internet-using public change, the networks that serve that public should also change. The internet should absolutely care about how well it is serving the needs of its users.
Yet another network provisioner's marketing department trying to spin the 'network neutrality is bad' spiel, as if life would be so much better if only those pesky regulators would allow ISPs any means to extract the maximum profit out of every packet, even if that means creating artificial scarcity.
It's like hearing Nestle arguing how the availability of drinkable municipal tap water is bad for their bottled water business.
They're not saying net neutrality is bad, but that it isn't well suited for latency sensitive applications like gaming. Which appears to be true; it's the whole reason Riot has their own network, after all.
Thing is, the two are orthogonal concepts.
You can optimize a network for latency or for throughput.
A network optimized for latency can still be net neutral in concept and intention. Even an under-provisioned network that tries to fairly deal with optimizing for both low volume low latency applications and high volume latency tolerant applications at the same time while filtering out bad actors can still be net neutral.
Riot has their own network because they want a different kind of optimization than the traditional ISPs and Internet backbone providers aim for.
Network neutrality was born out of the observation that dominant ISPs (read: quasi-monopolies/duopolies) were actively undermining offers from 'over the top' service providers (meaning services offered over the 'generic' Internet rather than as a dedicated managed offering from the ISP; think Netflix vs. your ISP's own VoD offering) by selectively shaping traffic to degrade those offers. At the time, services such as Skype (pre-Microsoft acquisition) were the primary target, as they impinged on the juicy profits from both traditional PSTN and ISP-owned VoIP offers (ISPs and telcos were/are pretty tight).
The history of telecommunications has been a perpetual battle to rein in rent-seeking natural oligopolies. To pretend no active fairness policies are needed or desirable is denying the obvious.
This by no means negates that lively and continual debates on exactly how to translate policy intentions and objectives into a real network fabric aren't necessary, as fairness is an emergent property relying on very many different punctual choices in configuration and deployment, the impacts of which can be detrimental to the objective both by accident or through deliberate obfuscation.
> Even an under-provisioned network that tries to fairly deal with optimizing for both low volume low latency applications and high volume latency tolerant applications at the same time while filtering out bad actors can still be net neutral.
How? We seem to be discussing QoS, as in ranking and prioritizing different services. Something that is fine for your local network, but that has been used by ISPs to punish torrent users. Possible solutions the wiki page [0] describes are over-provisioning (as in never even needing QoS), followed by a bunch of protocol stuff that would be problematic once being paid for.
I'm very much in favor of regulations like net neutrality for e.g. situations with no effective competition. But at the same time, I'd also accept specific connections with special properties for services needing them. A remote surgery wanting a highly reliable connection with consistent latency being the classic example. And also needing high bandwidth, in case your model only allowed for low bandwidth ever needing high priority.
I don't see a way to allow this while keeping a hard-line net neutrality over everything else stance.
"a bunch of protocol stuff that would be problematic once being payed for."
And there lies the rub. It is not that NN means ISPs aren't allowed to manage their networks and have to let it all melt down into a cesspool of fully congested, non-yielding packet spewers (which is a strawman the industry likes to portray).
If network traffic is shaped transparently and non-discriminatorily across competing service types, there is no NN problem. For example, buffering all latency-robust VoD traffic while letting all eSports gaming packets skip the queue increases overall Quality of Experience (quality as experienced by end users, rather than pure IP-level QoS) and is fine. What is not OK is tilting the playing field between service providers (favoring a YouTube packet over a Vimeo packet because the former cut you a deal) or between customers (dropping Anna's Instagram packets because she did not order the 'social' option for a few dollars more, or, more likely, zero-rating Instagram packets against a volume cap while TikTok traffic is not excused).
Intents in regulation are a thing. Disputes on interpretations and compliance are a normal part of the process.
The speed of light prevents people on the other side of the planet from having low latency. Networks can’t significantly improve latency at those distances. So, games need to avoid latency being an issue through game design or split things into separate instances.
In the end it’s just a question of where the servers are relative to you and nothing else really matters. Private networks tend to make things worse by having fewer paths available increasing the average minimum distance.
"In the end it’s just a question of where the servers are relative to you and nothing else really matters."
Assuming a 'fair' network. In the absence of net neutrality it was "Dear company X, you better pay us or your packets will be delayed/dropped and your customers unhappy", and "Dear customer Y, you better opt for extra services A, B, C, D, ... if you want to game/VoIP/VoD/chat ..." and much worse.
"Private networks tend to make things worse by having fewer paths available increasing the average minimum distance."
This is not how internetwork routing works. Obviously a straight dark-fibre line from A to B will beat any route the generic Internet can come up with. It is not a case of Riot Direct xor 'the rest of the Internet', but of selectively supplementing it by deploying handover points at strategic peering locations in the internetworking fabric, where they can have a large impact.
As you say, Riot Direct depends on the internet. Sure, adding routes can make slight improvements, but outside of high-frequency traders or data centers, users are semi-randomly geographically separated. This puts a hard upper bound on any improvement of reasonable cost, one that quickly approaches just adding more routes on the internet.
Having said that, I admit it’s not pure snake oil and provides minor benefits to their customers.
When a server sends some state to a player, the player can't get a response back without a round trip. The same is true in reverse. Further, the path through fiber optic cables is not straight, nor are fiber optic cables a vacuum. Using lasers through a vacuum tube straight from the user to a server on the other side of the planet might still hit 134 ms of light lag, which is already a significant issue for many games.
PS: LEO satellite constellations like Starlink have a lower theoretical latency around the world than fiber as the signals are traveling in space, but a longer path. Significantly beating that is going to require magic aka a useful fundamental change in the laws of physics as we know them.
It's not spin at all, though. Web devs and high-bandwidth users think the world revolves around them since they consume the majority of traffic, but latency is a huge issue in multiplayer games, and players and providers are absolutely willing to pay a premium for lower latencies. You sometimes see professional e-sports players actually relocating closer to the servers for better ping.
Video streaming is probably the majority of last-mile internet traffic. Along the public backbone is a different story as most video streaming services serve content at the edge.
Their entire business sounds like a way to circumvent network neutrality, by claiming their network of interconnected private networks is something other than the internet.
Well, what they actually seem to be saying is that network neutrality should protect against discrimination based on user, application, content or other details that are irrelevant to network transit. The argument is that performance requirements are something a bit different and there is a need for market to allow for competition on performance without allowing discrimination based on the other aspects of the traffic.
The idea being that if your ISP owns a game company that makes a popular FPS, they shouldn't be able to charge higher rates for packets that need high performance based on the fact that those packets are carrying content for a competing FPS.
No, he is pointing out that 'there is a market for it, therefore it is not a sin' is a shitty argument, because there is a market for blatantly amoral things. You don't understand how analogies work.
I'm not sure I necessarily find this inimical to net neutrality or a problem. Isn't this more like a way to provision on-the-fly peering agreements based on pricing and latency?
Of course, if as a consumer you have a lousy ISP and the problem is occurring in the last few miles, I don't see that this can help.
Netflix is a very special case. When your service comprises 10% of all Internet traffic, a lot of network management things that are normally irrelevant at the application level start to have shaping effects. A typical website won’t be affected at all by, like, Comcast’s peering policies.
One could easily imagine lanes controlled by the end user.
For example, "You pay $60 per month for a 50Mbit cable connection consisting of 1 Gigabyte of low latency traffic and unlimited regular traffic". Then the user can set certain programs to be "low latency" using an app available from the cable company. If they want more low-latency traffic, they can pay extra.
Actual implementation through CGNAT could simply involve having different source port ranges for low latency traffic.
Now the key thing is: does traffic not specially marked as "low latency" get artificial latency imposed on it in order to maintain the value of the low latency offering?
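Hand-waving the details, the user side of that CGNAT scheme could be as simple as binding opted-in flows to a reserved source-port range; the range and the gateway behavior here are pure assumptions:

    import random
    import socket

    # Hypothetical: the ISP's gateway maps flows from this local source-port
    # range into its low-latency queue and bills them against the quota.
    LOW_LATENCY_PORTS = range(50000, 50100)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", random.choice(LOW_LATENCY_PORTS)))  # opt this flow in
    sock.sendto(b"voice frame", ("198.51.100.20", 5004))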
You can imagine all sorts of things that would be great, but you have to acknowledge the reality that market forces don't work brilliantly on ISPs in many places.
That implies a consumer-friendly regulatory regime that can be persuaded to keep up with all the different attempts to game the system. The idea behind net neutrality would be only having to have this fight once to assert that all traffic is of equal value.
It's one of those things where the success is unsung because when it's working you shouldn't notice it's there but when it's not then it's really obvious.
Average network implementations exist; it's just that they work over a smaller range of conditions than top-tier implementations. The enemy of average implementations is actually low populations with a wide geographic spread, so players encounter poor conditions more often. With high populations, average implementations generally do okay because you can match up players that will all experience good conditions.
Then you have all the gameplay smoke and mirrors that can dramatically change the felt impact of poor network conditions.
> the biggest enemy of performant gameplay is lack of servers, or incredibly shitty netcode.
If an ISP were down-prioritizing unknown packets (such as most games since they don't usually use HTTP) then how would you notice that instead of "incredibly shitty netcode"?
There is no such thing as an 'unknown packet'. Your ISP employs Deep Packet Inspection. So they can downgrade your game but not mine because I pay them to.
From my experience of playing way too many computer games in my youth, the Internet seems to work extremely well for multiplayer games... Of course, if you play against someone on the other side of the world, there will be some lag, but that is mostly due to things that are not so easy to change, like the speed of light.
We still have quite a bit of optimizations to make until real-life latencies are limited by the speed of light over such distances.
For any given two points, the shortest distance between them along earth's surface is never going to be longer than half the earth's circumference. Light travels that distance in 67ms:
$ units --verbose 'pi * earthradius / c' milliseconds
pi * earthradius / c = 66.763248 milliseconds
Double that for a full roundtrip time.
In reality, the ping time between me (in Germany) and the opposite side of the earth (in this case, my company's datacenter in Sydney) is more like 300 ms, i.e. we achieve less than 50% of the speed limit imposed by the laws of physics.
Your calculation doesn't seem to account for the fact that light in fibre travels slower than in vacuum, 2/3 if memory serves, so it's a lot closer than you think.
A lot of the latency will be due to the conversion between light and electricity for processing in routers/switches. If we had all-optical processing, that would probably bring the latency down close to the theoretical limit.
67ms each way. My UK-Syd latency is currently 265ms, in theory it could be as low as 112ms if it were in a vacuum. In glass with a refractive index of 1.3-1.5, you’re looking at nearer 150-170ms rtt.
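For reference, the back-of-the-envelope math; the mean earth radius and a refractive index of ~1.47 for silica fibre are the assumptions:

    C_KM_PER_S = 299_792.458
    HALF_CIRCUMFERENCE_KM = 20_015  # pi * mean earth radius (~6371 km)
    FIBRE_INDEX = 1.47              # light in silica travels at roughly c/1.47

    one_way_s = HALF_CIRCUMFERENCE_KM / C_KM_PER_S
    print(f"vacuum RTT: {2 * one_way_s * 1000:.0f} ms")                # ~134 ms
    print(f"fibre RTT:  {2 * one_way_s * FIBRE_INDEX * 1000:.0f} ms")  # ~196 ms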
Starlink and similar LEO satellites might even have lower latency, as they use lasers through vacuum.
Isn't this why we used to get together and have LAN parties, so that we could play on a 10/100mbps network without other Internet traffic in the way? (And so that if a player started trash-talking the rest of us could beat the shit out of them?)
Also, was the sales pitch in the second half really necessary?
Surely this is preferable to the trivial circumvention of your proposal where you just ask a friend to post something for you. A trick that basically silences all Redditors that would otherwise complain about “ads”. I’d rather people just post their own stuff and we judge the content on its own basis.
It is preferable to the extent that the poster is transparent about their intentions.
The user name posting this could be 'NetworkNext' or a brief disclosure could be in the original post -- either of these would provide some transparency. With transparency there would be fewer clicks on the link but greater trust in the content.
I think this is exactly what HN is for. You're free to promote your project. It's just a shame that this project, while a technical achievement, puts them solidly on moral low ground.
In my day it was because the Internet was too slow in the first place, and too expensive. Also, it was nice to get together, play, drink, and smoke weed for a whole weekend.
This reflects the fact that a shadow 'internet' is emerging: alongside the public internet with its tier 1 ISPs, there are global private networks with vast bandwidth, ostensibly for a company's own use, but with opportunities arising for use cases just like this, selling off surplus bandwidth via marketplaces.
The Internet also doesn't care about near-zero latency for things like telerobotic surgery, so this could very well be a step in the right direction.
Granted, regarding the specific details of this solution, I don't think I want to trust my telerobotic surgery to a system that could let a multiplayer game distributor out-bid my knife control packets...
I think at a high level we can all agree that there are two main classes of network traffic. Bandwidth-sensitive: for example, Netflix, or a JS-bloated website. And latency-sensitive: for example, gaming and autonomous vehicles.
My top concern is transparency with respect to what routes are available. I wouldn't want my domain to be excluded.
How the backend economics play out will influence this. Large players will eat the variations (in the beginning at least), and small devs could have access to a better tier of networks.
I just fear pricing models will become even more user hostile however (compared to the current service economy). Think Uber pricing for games. Surging to 4x+ prices after school, for example.
"Net Neutrality" is a poisoned term. All it means is "good prices on MY use case."
People who want free stuff claim "neutrality" means "same price for everything (without defining the units of 'everything'), and no QoS".
"neutrality" means not discriminating on the basis of identity. It means offering the same terms to everyone, and not letting Disney pay a fee to ban Comcast or BitTorrent, not offering only one flat price for all possible services.
I see a bunch of network neutrality confusion here.
This is not anti-net neutrality, nor is it pro-net neutrality.
It's saying:
Rather than accept the unknown path between your customers and you, pay for this service and you can use our optimized backbone reducing the amount of time your traffic is on random networks that aren't optimized for the exact network tuning you need/want.
Network neutrality has to do with public transit networks not favoring certain traffic, generally identified by having _paid_ for preferential treatment.
These folks are _not_ running a public transit network, they're running a _private transit network_ with peering points back to the public internet for access by consumers and producers.
Articles should declare up front if they are sales pieces. I got most of the way through this article without realizing Network Next was a product and I was reading a marketing piece. Not cool.
I wonder if something like a gaming VPN is any improvement on the current way games use networking resources. This could be a service that 3rd party games require, or could be built into Unity etc., such that online multiplayer (at least on PC) could be secure and also fast.
It is useful, or in some cases required, when inbound ports/NATs cannot be opened up. That's why Hamachi was so popular with gamers for a while: you basically bridge your LAN with the other players'. Valve is just doing this in a way that is more integrated into their platform. In Garry's Mod, they call it "peer to peer", but they are using this relay network.
Depends on the game: a real time first person shooter needs better latency than a real time strategy game which needs much better latency than a turn based game.
Nothing neutral about that. The one with the most money gets the better QoS. We don't need yet another market ruining things the real world uses just so some finance type can make bank. I wish ads for hostile ideas like this one would be flagged as such.