Way back then there were competing visions of what the internet might be - some were corporate (and somewhat based around corporate lock-in), like DNA/BNA/SNA/etc.; others came more from a postal/telegraph sort of world, like X.25/OSI. In many ways TCP/IP was an outlier: the fact that it didn't really belong to anyone had a lot to do with why it succeeded (also, its designers understood datagrams, and weren't really worried about how to charge for dropped packets).
I suspect (I wasn't even close to being in the room) that freezing TCP was likely a very pragmatic thing: if you wanted to be accepted as THE internet you had to be perceived as finished, otherwise someone else's many-thousand-person-year project would have won.
One of the great things about IP is that it's extensible: there's still room for protocols other than UDP/TCP. You can still write something new and better, or a fixed TCP, and install it alongside the existing protocols - of course, getting everyone to accept it and use it will be difficult.
> One of the great things about IP is that it's extensible: there's still room for protocols other than UDP/TCP. You can still write something new and better, or a fixed TCP, and install it alongside the existing protocols - of course, getting everyone to accept it and use it will be difficult.
Sadly, this isn't the case anymore since shitty middleboxes have taken over the internet. Using the extensibility features in all kinds of protocols (be it TCP, TLS, UDP, or any other three-letter protocol, really) causes middleboxes to drop traffic. Since we've all decided that shitty middleboxes are the protocol's problem to solve (because end users usually can't control whether or not they're behind shitty middleboxes), every attempt at a new protocol turns into a big puzzle of workarounds now.
Even existing protocols like MPTCP and SCTP are dropped in favour of UDP because shitty firewalls can't deal with anything that's not TCP or UDP anymore. TLS 1.3 packets look like TLS 1.2 packets because shitty middleboxes will just terminate your connection if they see TLS with an unknown version number.
It's quite depressing, really. I'm sort of hoping that at some point some weird proprietary or open source product with a fancy protocol comes out and becomes a hit, forcing shitty middlebox manufacturers to get their shit together (or tell the end user that the product won't work right because of shitty middleboxes and that they should complain to their service provider about it).
I'm all for backwards compatibility, but I'm sick and tired of MitM protocol parsers that refuse to follow spec being respected and worked around when it comes to new protocols. If you can't read the RFC, get out of the network protocol space.
As a silver lining to that particular cloud, the shitty middleboxes made the case for the work resulting from BCP 188 (Pervasive Monitoring Is an Attack) very easy.
It wouldn't be difficult to find engineers at an IETF meeting who aren't very interested in whether Fatemah can send messages to her mother without them being read by her husband - that's a policy problem, not their business. But once they realise that the shitty middleboxes deployed to prevent Fatemah from evading her husband's surveillance also make their beautiful new protocol design undeployable, they're on board with encrypting absolutely everything everywhere all the time.
Terrific example. However, even if Fatemah's husband can't read the messages he can still see who his wife is messaging. And it turns out that is a problem no smaller than the content of the messages themselves, especially if Fatemah's sister is a vegan Jew-loving revolutionary and messages sent to her make the husband suspect Fatemah of wrongdoing.
I’m not familiar with the background of the poster you replied to, but your assumption that they are not simply choosing a common name from their culture is telling.
Encrypting at the protocol layer absolutely everything everywhere all the time doesn't solve that policy problem in any way. It does solve the protocol ossification problem though.
Truth be told, those who do not understand the Internet are condemned to reinvent it, poorly [1]. Remember ATM? (Hint: it's not the machine you withdraw money from.)
The Internet is specifically designed with a narrow waist, like most things in nature, as was posted on HN recently [2].
I think the latest and most promising Internet protocol is the Generic Network Virtualization Encapsulation (GENEVE) protocol, which can hopefully generalize tunneling for the Internet, and it's probably as simple as it can get [3].
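For the curious, here's a rough sketch (Python, with field layout per RFC 8926; none of this comes from the comment above) of what the 8-byte GENEVE base header amounts to - little more than a version, an option length, a protocol type, and a 24-bit virtual network identifier, carried inside a UDP datagram to port 6081:

    import struct

    GENEVE_UDP_PORT = 6081          # IANA-assigned destination port for GENEVE
    ETHERTYPE_TRANS_ETHER = 0x6558  # "Transparent Ethernet Bridging" inner payload

    def geneve_header(vni: int, opt_len_words: int = 0, critical: bool = False) -> bytes:
        """Build the 8-byte GENEVE base header (RFC 8926), no options."""
        ver_optlen = (0 << 6) | (opt_len_words & 0x3F)   # version 0, option length in 4-byte words
        flags = (0 << 7) | ((1 if critical else 0) << 6) # O (control packet) = 0, C (critical options) bit
        vni_field = (vni & 0xFFFFFF) << 8                # 24-bit VNI, low byte reserved
        return struct.pack("!BBHI", ver_optlen, flags, ETHERTYPE_TRANS_ETHER, vni_field)

    # Example: header for virtual network 5001, to be sent in a UDP datagram to port 6081
    print(geneve_header(5001).hex())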
The chicken/egg problem is a hard one to solve. Looking at QUIC and Noise, the future seems to lie in building on top of UDP as a way of sidestepping the adoption problem.
Authentication and encryption: I believe we would be in the same place we are today, with these responsibilities delegated to the application layer, since authentication and encryption features change faster than OS vendors are prepared to support.
E.g., how much longer would we have had to wait for ChaCha20-Poly1305 in web traffic if we had to wait for the major OS vendors to upgrade? And would they have backported this support to earlier OS versions? Probably not.
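As a hedged illustration of doing this at the application layer rather than waiting on the OS: the third-party Python "cryptography" package has shipped ChaCha20-Poly1305 as an AEAD primitive for years, so an application can use it regardless of what the platform's TLS stack supports. A minimal sketch (key and nonce handling simplified):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()      # 256-bit key
    aead = ChaCha20Poly1305(key)
    nonce = os.urandom(12)                     # 96-bit nonce; must never repeat for the same key

    ciphertext = aead.encrypt(nonce, b"hello over the wire", b"associated-data")
    plaintext = aead.decrypt(nonce, ciphertext, b"associated-data")
    assert plaintext == b"hello over the wire"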
> For one we’re limited to TCP and UDP – without a better protocol for media streaming.
What's wrong with UDP for media? Is it the lack of multicast?
> authentication was omitted, resulting in horrifying UX and security holes
While this is painfully obvious now, I don't think the original Internet pioneers ever really thought about the need for authentication, as the ubiquity and threat landscape were very different. Regardless, we're all paying the price now.
That's not quite right. Bellovin et al. in the late 80s/early 90s really made a big push for authentication infrastructure (SPKI), but at the time PKI was being held hostage by ITAR (I remember sending off to get permission to use the RSA library, and getting something back several months later).
So yes, many of the people working on these things didn't have a security mindset (I had Unix accounts at all kinds of random places), but some did... and the US DoD really threw cold water on the whole business.
He's talking about the late 80s, early 90s. Go back a few more years. Definitely before HTTP. FTP authentication was optional and I think sent in plain text or with some really simple encryption (this is before SFTP). Telnet was used everywhere… plain text. And finger… really?
I don't think most of these things should live at the Transport Layer personally. I am certainly a fan of SCTP but SCTP hasn't yet received widespread middlebox adoption. Unfortunately, the state of IP Media is pretty terrible. SIP/RTP/RTMP are all very complicated and fiddly to get working. The WebRTC stack, which wraps some of these protocols up, is its own beast. XMPP media stacks tend to be IMO the "simplest" and even they are quite complicated.
As someone who's rolled a few custom UDP network stacks I would tend to agree.
UDP is pretty low level (yay, MTU discovery), but it gives you most of the tools you need, which is why a good number of SCTP implementations are just built on top of it.
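To make the "yay, MTU discovery" pain concrete, here's a small sketch of what a custom UDP stack ends up doing on Linux - asking the kernel to set the DF bit so oversized sends fail instead of fragmenting. The socket option constants are Linux-specific; the numeric fallbacks below are the Linux values and are an assumption on other platforms:

    import socket

    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)  # Linux value 10
    IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)     # Linux value 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # set DF, never fragment

    # Anything above the path MTU now fails with EMSGSIZE instead of being fragmented,
    # which is how a userspace stack (or an SCTP-over-UDP implementation) learns the MTU.
    # 192.0.2.1 is a documentation address used purely as a placeholder here.
    sock.sendto(b"\x00" * 1200, ("192.0.2.1", 9999))  # 1200 bytes is a common safe floor (QUIC uses it)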
Yes, he kind of does. The article was exactly like that for me: "What things needed to be implemented to mark TCP done?" And in the video he mentions that they split TCP and UDP, and he talks about audio, where if a packet won't arrive in time for the speaker you might as well not send it at all. And he mentions this as just one of several items on the list they had on a board.
Watch the video; it's more complete than the article. After watching it I came to the conclusion that the article cherry-picks stuff from the video to make it more click-bait-ish.
One of the biggest bummers is how much the Internet has collapsed to mostly TCP, and of that, a very large share is HTTP/HTTPS. UDP is still going strong for a handful of important applications. But if it's not one of those two -- good luck getting end-to-end transit.
UDP has always been used in at least VoIP and games. Basically any application where data is time-sensitive, and you'd rather not get a packet at all than have it retransmitted a bunch of times before it comes through. Also DNS.
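A toy sketch of that "rather drop it than retransmit it" pattern: the sender stamps each frame with a sequence number and a timestamp and simply never resends. The address and port below are made-up placeholders, not anything from a real protocol:

    import socket, struct, time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    dest = ("203.0.113.7", 5004)   # hypothetical media receiver (documentation address, RTP-style port)

    for seq, frame in enumerate([b"frame-a", b"frame-b", b"frame-c"]):
        packet = struct.pack("!Hd", seq, time.time()) + frame  # sequence number + send timestamp + payload
        sock.sendto(packet, dest)  # fire and forget: a lost packet is simply never resent

The receiver side just reorders by sequence number and skips anything that arrives after its playback deadline.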
But then there's QUIC, which is a better TCP built on top of UDP. It's new, but there's already some adoption.
> good luck getting end-to-end transit
This isn't a TCP/UDP problem; you need to go down a level. This is because of NATs, which exist because the IPv4 address space can't possibly fit all the networked devices people use these days. IPv6 solves that. Why is its adoption so slow despite widespread support? I have no idea.
Residential ISP policies on port filtering, NAT, and lack of IPv6 uptake are also to blame. The average consumer doesn’t care about any of that though, as long as FB, Netflix, and a handful of other sites load. Today’s internet may as well be interactive TV.
That middleboxes constantly stymie innovation through ossification. At least that's what I would think and I was going to post something very similar. At least we can just put overlay networks up and ignore most of the middleboxes out there.
As someone who doesn't have much background in networking, can you give some examples of what innovation could be possible without those middleboxes? I don't doubt you, just curious what you have in mind.
I'm not sure why you're getting downvoted. There's nothing wrong with asking questions.
Packets on the internet are transferred from host to host until they reach their destination. Your laptop on your local network is probably sending packets to your consumer router, which sends the packet to the ISP's router, and to other routers, until the packet reaches its destination. At each step of the way, routers can choose whether they wish to drop the packet or not. Due to various historically complicated reasons, many routers will drop packets they don't recognize.
To give some examples, a small transit provider may have some really old routers that don't recognize modern versions of TLS, and these older routers usually did packet analysis in hardware, so changes to the protocol need to be reflected in a hardware upgrade, which the transit provider will not do unless absolutely necessary for cost reasons. The provider may then choose to drop the packet because they don't recognize what kind of packet it is.
Why should a middlebox care about what kind of packet is being sent? Many complicated reasons. CGNAT (Carrier-Grade NAT) is one of the iconic examples. Organizations do not want to learn IPv6, administer IPv6, or upgrade their hardware for IPv6, so they instead put layers of NAT in and block end-to-end connectivity on the internet. Another is that some netops will try to blanket drop packets that they don't recognize (think only allowing DNS queries to their specific DNS server, HTTP/TCP traffic, and maybe UDP traffic on a blessed VOIP port) to both defend from malicious users (instead of putting in the work to detect malicious traffic) and to better utilize the network they have and defer any network upgrades until absolutely necessary. Many middleboxes just drop non TCP/UDP traffic altogether, so protocols like SCTP which were designed for media streaming have a really hard time getting off the ground.
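A toy model of that last point, just to make the ossification mechanism explicit: a middlebox with a hard-coded allow-list of IP protocol numbers forwards TCP/UDP/ICMP and silently drops everything else, which is exactly why new transports end up hiding inside UDP. This is an illustrative sketch, not any particular vendor's logic:

    # IANA-assigned IP protocol numbers
    TCP, UDP, ICMP, SCTP, DCCP = 6, 17, 1, 132, 33

    ALLOWED_PROTOCOLS = {TCP, UDP, ICMP}     # what many cheap boxes effectively allow

    def middlebox_forward(ip_protocol: int) -> bool:
        """Return True if the packet is forwarded, False if silently dropped."""
        return ip_protocol in ALLOWED_PROTOCOLS

    assert middlebox_forward(UDP)            # QUIC survives because it hides inside UDP
    assert not middlebox_forward(SCTP)       # a native SCTP packet never reaches the other end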
It's complicated. Combinations of organizational culture, low margins, and lack of consumer understanding of their networks create situations where middleboxes are unreasonably aggressive toward the traffic passing through them. This phenomenon is called protocol ossification, or just ossification in networking speak.
You're correct, of course, but that routing should strictly be done on the network layer. If some downstream router receives a packet for host 2002:abcd::abcd:ef that uses some unknown transport protocol, it should just forward the packet (unless a firewall rule explicitly bans the packet, of course).
What we're seeing today is that routing hardware is also parsing the transport and session layers (and sometimes even more!). TLS 1.3 lies about its TLS version (it pretends to be a TLS 1.2 ClientHello) because some big companies' middleboxes can't deal with unknown version numbers. To "fix" this problem for the future, the version lists in the TLS protocol are now deliberately salted with random GREASE values to ensure no new middlebox ever starts trusting them again.
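Concretely, the TLS 1.3 trick looks roughly like this: the ClientHello's legacy_version field stays pinned at the TLS 1.2 value forever, and the real version (plus throwaway GREASE code points per RFC 8701) only appears in the supported_versions extension. A simplified byte-level sketch, not a full ClientHello:

    LEGACY_VERSION_TLS12 = b"\x03\x03"   # what ossified middleboxes expect to see
    TLS13 = b"\x03\x04"
    GREASE_VERSION = b"\x7a\x7a"         # one of the reserved GREASE code points (RFC 8701)

    # supported_versions extension body: 1-byte length, then the version list in preference order
    version_list = GREASE_VERSION + TLS13 + LEGACY_VERSION_TLS12
    supported_versions_ext = bytes([len(version_list)]) + version_list

    # The ClientHello "legacy_version" field stays 0x0303 (TLS 1.2); only the extension
    # above tells a modern peer that the client actually speaks 1.3.
    print(LEGACY_VERSION_TLS12.hex(), supported_versions_ext.hex())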
Also take HTTP/2: most of the fixes HTTP/2 provides (basically forming a tunnel) are already embedded into smarter transport protocols like SCTP. Because of lack of middlebox support, using such an alternative transport wasn't even an option. Alternatives like SST also had no chance, leaving everything to the TCP-packets-within-TCP HTTP/2 approach because of compatibility.
IP really doesn't care what sort of wire a packet arrives at the router on, or what sort of wire it leaves over. And it doesn't really care what's inside, either; it supports up to 252 different protocols inside.
Now, gathering and distributing the information about where to forward packets is somewhat more tricky, but really IP is just a pair of addresses and some data.
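To illustrate how little that is, here's a minimal sketch that parses the fixed 20-byte IPv4 header: forwarding strictly needs only the destination address, and the 8-bit protocol field merely names what's inside. The example addresses are documentation-range placeholders:

    import socket, struct

    def parse_ipv4_header(raw: bytes):
        ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
            struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "protocol": proto,                 # 6 = TCP, 17 = UDP, 132 = SCTP, ...
            "src": socket.inet_ntoa(src),
            "dst": socket.inet_ntoa(dst),
            "ttl": ttl,
        }

    # Hand-built example header (checksum left as zero for the sketch)
    hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 132, 0,
                      socket.inet_aton("198.51.100.1"), socket.inet_aton("203.0.113.9"))
    print(parse_ipv4_header(hdr))   # a router only really needs "dst" from this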
Except that for many users raw IP is unusable; only TCP and UDP are actually properly handled by most "home" devices. SCTP? Maybe over UDP... definitely not as protocol 132 inside IP. And CGNAT makes things even more complicated.
TCP/IP as we know it today was an ARPANET compromise between various views. A great first-party account of it (assume the RINA views carry some bias, but the paper is a first-principles look at networking):
Architecturally, speed prevailed over security and control. This was great for the web's evolution over the past 20 years.
Now is the time to assess whether alternative architectures such as RINA are better fits for use cases that could benefit from recursive designs (and likely from different business models than today's commercial web), including use cases in which security, control, and quality are of the highest value.
That's always been obvious to me. The Web & Net are built on RFCs: Requests for Comments. They aren't called PRDs (protocol requirement documents) or something along those lines. Kind of beautiful, actually, that it wasn't as top-down as I would have expected.
Reminds me of RINA (Recursive Internetwork Architecture):
> RINA's fundamental principles are that computer networking is just Inter-Process Communication or IPC, and that layering should be done based on scope/scale, with a single recurring set of protocols, rather than based on function, with specialized protocols. The protocol instances in one layer interface with the protocol instances on higher and lower layers via new concepts and entities that effectively reify networking functions currently specific to protocols like BGP, OSPF and ARP. In this way, RINA claims to support features like mobility, multihoming and quality of service without the need for additional specialized protocols like RTP and UDP, as well as to allow simplified network administration without the need for concepts like autonomous systems and NAT.
Just a minor gripe, perhaps. Where does it say that Haverty is an original person behind FTP? RFC 959 seems to give most of the credit to Abhay Bhushan [1] from one of the older RFCs, 114 [2].