The trouble with end-to-end

We’ve all heard that the Internet is an end-to-end network. This means that it’s different in some major way from the telephone network, and different from other types of packet networks that we might very well be dependent upon had it not been for the success of the Internet, which of course is a result of its superior, end-to-end architecture. An end-to-end network is one in which “intelligence” is concentrated at the end points, with the network itself operating as a “magic cloud” that simply delivers packets as the end points dictate. End points control the delivery destination and rate, and the network simply passes packets through.

But is this really true? In many important respects the Internet actually gives the end user much less control than the telephone network does. Routing, for example, is not conducted under end-point control. It could be, if we used the technique known as “source routing,” in which the transmitter of a packet specifies not just where the packet should go but how to get there, hop by hop. The IBM Token Ring used this technique, and the IP specification (RFC 791) defines source routing options, but on the Internet they’ve never been more than experimental. The phone network actually allows the user much more control over the routing of calls than the Internet does. I can choose any long-distance carrier I want for each call I make by dialing a certain prefix before the phone number. So I can use one carrier for regional long distance, another for national long distance, and different ones for each country I dial. That’s end-user control.

If I had that kind of end-to-end control on the Internet, I could select one NSP for bulk data transfers such as BitTorrent that was really cheap, and another NSP for VoIP that offered low, predictable latency.
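The hop-by-hop technique mentioned above does in fact live on in the IP specification: RFC 791 defines a Loose Source and Record Route option (type 131) that lets the sender name the routers a packet should visit. Here’s a minimal sketch of that option’s byte layout in Python; the helper name and the sample addresses are illustrative, not anything from a real stack:

```python
import struct
import ipaddress

def loose_source_route_option(hops):
    """Build the bytes of an IPv4 Loose Source and Record Route option
    (type 131, RFC 791).  `hops` is the ordered list of router addresses
    the sender wants the packet to pass through."""
    route = b"".join(ipaddress.IPv4Address(h).packed for h in hops)
    length = 3 + len(route)      # type byte + length byte + pointer byte + addresses
    pointer = 4                  # 1-based offset of the first route entry
    return struct.pack("!BBB", 131, length, pointer) + route

# Two documentation-range routers as the requested path:
opt = loose_source_route_option(["192.0.2.1", "198.51.100.7"])
# opt[0] == 131 (option type), opt[1] == 11 (3 header bytes + 2 * 4 address bytes)
```

In practice most routers drop or ignore source-routed packets, which is exactly the point of the paragraph above: the mechanism for end-point routing control exists on paper, but the network doesn’t honor it.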

The Internet puts control of network congestion at the end points, but that doesn’t do anything for the user, since the network is all a magic cloud to him. It does compromise the integrity of the network, however, because the health of thousands of internal links – selected by the network and not by the user – depends on good behavior at all of the end points. We’ve talked about how this works before: when a queue overflows, TCP eventually notices packet loss and throttles back its send rate, which eventually alleviates the overload condition. It’s the same logic that’s supposed to operate when the electric grid is overloaded because we’re all air-conditioning like mad. The power company tells us to turn off our air conditioners and enjoy the heat. Some do, and others don’t. TCP’s good-neighbor policy is just as easily defeated as the power company’s, so the good neighbors have to throttle back twice as hard to make up for those who don’t throttle back at all.
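The throttling behavior described above is TCP’s additive-increase/multiplicative-decrease policy: grow the sending window gently while things go well, cut it sharply on a loss. A toy sketch in Python (the function name, constants, and loss schedule are illustrative, not TCP’s actual variables):

```python
def aimd(rounds, loss_rounds, cwnd=1.0, incr=1.0, decr=0.5):
    """Toy additive-increase/multiplicative-decrease loop.
    Each round that goes well, the congestion window grows by `incr`;
    any round that sees a loss, it is cut by the factor `decr`."""
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd * decr)   # back off on the congestion signal
        else:
            cwnd += incr                   # probe for more bandwidth
        history.append(cwnd)
    return history

h = aimd(6, {3})
# Window grows 2, 3, 4, halves to 2 at the loss in round 3, then grows again:
# [2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
```

A sender that simply ignores the loss signal keeps its window growing, which is the “neighbor who leaves the air conditioner on” – nothing in the network forces it to back off.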

So it’s actually quite easy to argue that the Internet has botched the location of major control functions. Routing has great significance to the user and less to the network, yet it’s entirely under network control, while congestion control is just the opposite: it matters to the network but is left to the end points.

This dubious assignment of functions is exactly what net neutrality is meant to protect, and it has real impact on future applications. We use a lot of mobile devices today, a big departure from the way we did things in the 1970s, when the Internet was designed and the PC was not even a pipe dream. Mobile devices – laptops and phones – should be reachable at the same address wherever they’re attached, but the Internet doesn’t allow this because their point of attachment is encoded into the addresses they use. Your IP address isn’t just a device address; it’s a network attachment point address. That’s not a sound way to do things today, but having the network keep track of where you are is a “smart network” feature, a heresy in the religion of end-to-end.
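The attachment-point problem is visible in a few lines of Python’s standard `ipaddress` module; the prefixes below are reserved documentation ranges, chosen only for illustration:

```python
import ipaddress

# An IPv4 address names a point of attachment, not the device itself:
# the leading bits form a network prefix that routers use to deliver
# the packet to one specific network.
addr = ipaddress.ip_interface("203.0.113.42/24")
print(addr.network)   # 203.0.113.0/24 -- where the host is plugged in
print(addr.ip)        # 203.0.113.42   -- the host on that network

# Move the same laptop to a different network and it must take a new
# address; anyone still using the old one can no longer reach it.
moved = ipaddress.ip_interface("198.51.100.42/24")
assert moved.network != addr.network
```

Identity and location are fused into one number, which is why mobility on the Internet requires extra machinery rather than falling out of the architecture.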

These tradeoffs may have appeared sensible in the 1970s, but they don’t any longer, and no religion should force us to accept them indefinitely.

2 thoughts on “The trouble with end-to-end”

  1. We’ve all heard that the Internet is an end-to-end network. This means that it’s different in some major way from the telephone network, and different from other types of packet networks that we might very well be dependent upon had it not been for the success of the Internet, which of course is a result of its superior, end-to-end architecture.

    I have heard many neutralists argue this, but I question how many actually believe it. When you press some neutralists, they eventually say something like: “If I pay for 6 Mbps and a website pays for 6 Mbps, then we should connect at 6 Mbps.”

    Correct me if I am wrong, but that statement seems inconsistent with the actual structure of the Internet. Furthermore, it sounds more like the old “hub-and-spoke” switched telephone system.

  2. That’s one of the great ironies of net neutrality, isn’t it? The phone network, with all of its wicked “intelligence”, was designed to provide a fixed pool of resources – bandwidth – to every call in progress, with no variation. But packet networks simply provide us with an opportunity to grab what we can from a common pool of resources and count on us to use them in a kind and gentlemanly fashion. It’s completely absurd to demand that the packet network behave as nothing more than an amped-up phone network, yet that’s what they seem to want: dedicated bandwidth at the price of pooled bandwidth.

    It’s completely nuts.
