The Internet’s Big Idea

Because of current events, it strikes me that we need to review the fundamental idea that the Internet was based on, packet switching. This goes back to the dawn of time in the 1960s, before any corporations were involved, and is the pure essence of the thing. Only by understanding the basic idea can we see who’s true to it today and who isn’t.

Packet switching is not a hard notion to grasp, as it involves the spirit of cooperation, a commons, and mutual benefit. Recall that communications networks of the earlier sort allocated bandwidth strictly. On the telephone network you always got the same slice of bandwidth, neither more nor less. On rare occasions like Mother’s Day you couldn’t make a call right away, but for all practical purposes it was always there and always the same.

This isn’t a very efficient way to allocate bandwidth, however, because much of it goes to waste. When you’re on a call, you consume just as much bandwidth when you’re speaking as when you’re not, and a great deal of bandwidth is idle for most of the day because it’s simply a reserve for peak calling times. So the designers of the early Internet – it was called ARPANET back then – wondered what would happen if they built a network where bandwidth was a common pool that each user would draw from when he needed it, as he needed it, instead of being strictly divided in the old-fashioned way. In this scheme, during periods of low usage, each user would get tons of bandwidth so the network would appear to be really, really fast, and during periods of high demand it would partition up fairly just like the phone network, or so it seemed. So they launched this great experiment to see what had to be done to make a network that would scale up in performance under light load and scale down to fairness under heavy load. The method is called “packet switching” to differentiate it from the “circuit switching” technology in the phone network, and the ARPANET became the Internet in its second incarnation of protocols.
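
To put some rough numbers on the pooling idea, here’s a toy simulation in Python. The capacities and demand patterns are invented for illustration; the only point is that a shared pool carries far more bursty traffic than fixed per-user slices do.

```python
# Toy comparison of circuit switching vs. packet switching (statistical
# multiplexing). All numbers are made up; the point is only that a shared
# pool carries more bursty traffic than fixed per-user slices.
import random

random.seed(1)
CAPACITY = 100                 # link capacity per tick, arbitrary units
USERS = 10
TICKS = 1000
# Bursty users: each wants to send 50 units in a given tick with 20% probability.
demand = [[50 if random.random() < 0.2 else 0 for _ in range(TICKS)]
          for _ in range(USERS)]

circuit = 0   # circuit switching: each user owns a fixed CAPACITY/USERS slice
packet = 0    # packet switching: every user draws from one shared pool
for t in range(TICKS):
    circuit += sum(min(demand[u][t], CAPACITY / USERS) for u in range(USERS))
    packet += min(sum(demand[u][t] for u in range(USERS)), CAPACITY)

print(f"circuit-switched traffic carried: {circuit:.0f}")
print(f"packet-switched traffic carried:  {packet:.0f}")
```

In this toy run the shared pool carries several times the traffic of the fixed slices, because the reserved slices sit idle whenever their owners have nothing to send.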

Packet switching is the single most important idea in the design of the Internet, even more than universal connectivity; after all, the phone network gave us the ability to reach out and annoy anyone on the planet long ago. Packet switching as a way to manage bandwidth is the Big Idea.

It always strikes me as odd that there’s so little understanding of the Big Idea at the base of the Internet’s design pyramid among our would-be Internet regulators and ISP critics. They’re always complaining about the deceptiveness of “unlimited access” rates and “all you can eat” deals that don’t guarantee any constant or minimum rate. (Duh, we tried that already.) This is an experiment in another direction, where the deal is that it’s going to be faster at some times than at other times, but overall it’s going to be much better and much cheaper than guaranteed bandwidth. And sure enough, it works: you can even make exceptional-quality phone calls over the Internet, anywhere in the world, for peanuts. It’s marvelous.

Well, mostly marvelous. Throughout the Internet’s history, even when it was a closed garden for the research world and long before the great unwashed were allowed on it, the “fairness” problem has proved very difficult to resolve, because each user and each application has a different appetite for bandwidth and a different demand for response time. In the early days, the interactive terminal protocol “telnet” was often stymied by the bulk data transfer protocol “ftp”, and today Skype has to work around BitTorrent.

In theory, it shouldn’t be hard to fit the needs of programs that communicate small chunks of data on a tight timeline around programs that move massive amounts of data with no particular deadline for any one chunk. In theory, we should be able to design networks that do that, either by booking reservations for the call or by giving Skype priority over BitTorrent. And in fact we have a number of experimental protocols that will do just that, especially within the confines of a private network in a business, an organization, or a home. They all depend on a prioritizing or scheduling function having a clear idea of which packets belong to which program, and on the programs being willing to settle for less than they want for various periods of time. And that’s the way things were on the Internet before it went commercial.
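
A scheduler of the kind those experimental protocols rely on can be sketched in a few lines. The class names and the strict-priority policy below are my own illustration, not any particular protocol’s design; the essential assumption is the one noted above, that something has already identified which packets belong to which program.

```python
from collections import deque

class PriorityScheduler:
    """Strict priority: interactive packets always leave before bulk packets."""
    def __init__(self):
        self.queues = {"interactive": deque(), "bulk": deque()}

    def enqueue(self, packet, traffic_class):
        # This only works because something upstream has already decided which
        # program, and therefore which class, this packet belongs to.
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        # Bulk traffic gets whatever transmission slots are left over.
        for cls in ("interactive", "bulk"):
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("skype-frame-1", "interactive")
sched.enqueue("torrent-piece-1", "bulk")
sched.enqueue("skype-frame-2", "interactive")
print(sched.dequeue(), sched.dequeue(), sched.dequeue())
# -> skype-frame-1 skype-frame-2 torrent-piece-1
```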

In the mid-80s, we saw Congestion Collapse (“Internet meltdown”) during periods of heavy ftp usage. The quick fix that was cobbled together required TCP to voluntarily throttle back the amount of data it transmitted when messages weren’t delivered. This overloaded the meaning of a dropped packet, which now signaled one of two things: either the packet had been hit by noise and corrupted, or a network queue was full and the packet had been discarded because there was no more room in the line for it. Error rates were low (there was no WiFi back then), so it was fine to treat every drop as a sign of overload. And we could count on everybody being polite and graciously accepting slow response time until the overall load went down.
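
The throttle-back behavior is easy to sketch. This is roughly the shape of what became TCP’s additive-increase/multiplicative-decrease rule; the constants are picked for illustration, not taken from any real stack.

```python
def adjust_window(cwnd, packet_was_dropped):
    """Return a new congestion window (in segments) after one round trip."""
    if packet_was_dropped:
        # Treat the loss as a congestion signal and back off sharply.
        return max(1.0, cwnd / 2)
    # No loss: probe for more bandwidth, one segment per round trip.
    return cwnd + 1.0

cwnd = 1.0
for rtt, dropped in enumerate([False] * 8 + [True] + [False] * 4):
    cwnd = adjust_window(cwnd, dropped)
    print(f"rtt {rtt:2d}: send window = {cwnd:.1f} segments")
```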

This could have been a fairly crappy solution, since it didn’t distinguish between the requirements of our interactive application and our bulk data application, but implementation did what design failed to do: in practice, telnet data came in much shorter packets than ftp data, and since the packet droppers were looking for space in network queues, you obviously get more space out of dropping long packets than short ones. So voila, in one step you’ve got priority enforcement and congestion control.
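
Here’s a toy version of that accident: when the queue is full, the packet dropper reclaims space from the biggest packets first, so the short interactive packets tend to survive. The packet sizes and the drop policy are invented to illustrate the point, not to describe any particular router.

```python
class ByteLimitedQueue:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.packets = []        # list of (name, size) tuples

    def used(self):
        return sum(size for _, size in self.packets)

    def enqueue(self, name, size):
        # Make room by discarding the largest queued packet(s) first.
        while self.packets and self.used() + size > self.capacity:
            victim = max(self.packets, key=lambda p: p[1])
            self.packets.remove(victim)
            print(f"dropped {victim[0]} ({victim[1]} bytes)")
        if self.used() + size <= self.capacity:
            self.packets.append((name, size))

q = ByteLimitedQueue(capacity_bytes=3000)
q.enqueue("ftp-1", 1500)
q.enqueue("ftp-2", 1500)
q.enqueue("telnet-1", 64)     # forces a drop; a big ftp packet goes first
print("still queued:", [name for name, _ in q.packets])
```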

And it’s all going to be fine until the next generation of protocols comes around and our assumptions once again have to be revised. I’ll go into that tomorrow.

(Updated by removing some material about the blog post that inspired me to write this. That material now has its own post, right below.)

Try this article for a little insight into changes afoot inside the Internet.

A Rather Shoddy Analysis

Former head of AT&T’s dial-up Internet access service Tom Evslin has jumped into the Comcast pile-on, in a particularly disappointing manner. While I’m not surprised when non-technical people like professor of media law Susan Crawford, media regulator Craig Aaron of Free Press/Save the Internet, or lawyer Harold Feld make a hash of the fundamentals of the Internet, I expect better from a fellow who once was in the business of packaging and selling access to it. Evslin simply repeats the same tired claims that Crawford makes about impersonation that isn’t personal, blocking that doesn’t block, and conflicts of interest that don’t exist in the Netflix era, without any bothersome analysis of how BitTorrent behaves relative to other Internet applications or why an ISP might legitimately want to protect interactive response time from background noise.

He goes off on one riff that’s especially odd about a fragmented Internet:

Each of us “sees” the same Internet. Communication becomes much more constrained if each of us sees a different and perhaps incompatible Internet. You can see Google but I can only see Yahoo. I can upload photos to Flickr but you can only upload to dotPhoto. My email can’t get to you; you and I can’t share files (although we can both share with Ellen—today). Gee, almost sounds like mobile phone networks—or cable networks.

This observation, which has no material relevance to the actual case (Comcast lets you download anything from anywhere, and upload as you do it), is an imitation of one of Tim Berners-Lee’s complaints about tiered service from last year, and it’s apparently meant to lend gravity to the situation. I don’t know why he bothered to write this piece.

The underlying theme of all these criticisms seems to boil down to one thing: whatever the broadband Internet access providers do is wrong by definition, and whatever the freedom fighters who wrote BitTorrent do is right by the same a priori assumption. So there’s no use for Comcast to try and defend itself, it’s guilty of crimes against democracy simply because it bills consumers for service instead of being a good guy like Google and billing companies with something to sell.

Traffic Shaping and Net Neutrality: Good Versus Evil

Brian Boyko, editor of Network Performance Daily, has written one of the better and more thoughtful essays on net neutrality:

See, at the core of Network Neutrality issues are appliances or programs which conduct traffic shaping. In traffic shaping, some packets are prioritized, others are held back. This prioritization can be done on the basis of content (what type of data is being transferred,) on the basis of application (what program is transferring the data) or on the basis of IP address (which computer is sending the packet, and which computer is receiving it.)

Now, here’s the rub: Traffic shaping can help improve network performance, decrease latency, and increase bandwidth by delaying those packets deemed to be of a low priority. Sounds good, right?

Not so fast. Traffic shaping can degrade network performance, increase latency, and decrease bandwidth… by the same means.
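
Boyko’s three bases for shaping (content, application, and address) all come down to a classification step that runs before any queueing decision. Here’s a rough sketch of that step; the port numbers, address ranges, and class names are invented for illustration.

```python
def classify(packet):
    """Assign a traffic class to a packet described as a simple dict."""
    # Application-based: well-known ports hint at the program involved.
    if 5060 in (packet["src_port"], packet["dst_port"]):
        return "voice"
    # Address-based: traffic to or from a disfavored subnet.
    if packet["dst_ip"].startswith("10.0.5."):
        return "low-priority"
    # Content-based: deep packet inspection has tagged the payload.
    if packet["payload_hint"] == "bittorrent":
        return "bulk"
    return "best-effort"

print(classify({"src_port": 5060, "dst_port": 40000,
                "dst_ip": "192.0.2.7", "payload_hint": ""}))            # voice
print(classify({"src_port": 51413, "dst_port": 51413,
                "dst_ip": "192.0.2.9", "payload_hint": "bittorrent"}))  # bulk
```

The shaper then holds back whatever the classifier marks as low priority, which is exactly the power Boyko says cuts both ways.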

The arguments for increased regulation of Internet access boil down to the claim that carriers are bad people who can’t be trusted with such an important job as managing Internet traffic; and we know they’re bad people because they routinely lie to us, suppress viewpoints, and cooperate with the government. Yet one could easily make the same charges against the very advocates of net neutrality regulations, and make a strong case that they too are bad people and bad corporate actors. Google and Moveon.org have misbehaved around the Susan Collins ad, and advocates who equate TCP Resets with identity theft are being less than honest.

So you can’t ultimately resolve this issue by identifying the good people and standing with them. There are good and bad people on both sides, so it’s the specific behavior that matters, and how much blind faith you have in telecom regulators to envision the potential of the Internet. And that’s where I have to part company with the pro-regulation crowd, because I’ve never met a regulator who was a man (or woman) of vision; typically, they’re small-minded and vindictive.

Somebody always ends up managing the traffic on a network. If the carriers are forbidden from doing it, the job will ultimately end up in the hands of the largest users of bandwidth, the Googles and Yahoos of the world. And when you put them in control of a cable plant they didn’t have to pay for, do you really suppose they’re going to run it in anybody’s interest but their own?

TLF Podcast on the Comcast Kerfuffle

Stop everything and cruise on down to Technology Liberation Front so you can listen to this stellar podcast on the Comcast Kerfuffle:

Two networking experts join us in the podcast this week to discuss exactly what Comcast is doing and its implications for public policy. The experts are Ed Felten, professor of computer science and public affairs at Princeton University, and Richard Bennett, a network engineer and frequent commenter to the TLF. Also on the show are Adam Thierer of the Progress and Freedom Foundation, James Gattusso of the Heritage Foundation, and Jerry Brito of the Mercatus Center at George Mason University.

And try to ignore the heavy breathing; the NSA was obviously listening in.

You can also download the podcast here, or on BitTorrent here.

Correcting the Record on Comcast

Cynthia Brumfield tries to set the record straight on the wild claims about Comcast, noting some back-pedalling by the AP reporter who carried the net neutrality water on this story:

Svensson’s follow-up article, however, retreats from the notion that Comcast “blocks” P2P uploading. Instead, Comcast “delays” P2P uploads, Svensson now writes, a nuance that the Comcast executives believe is critical. During heavy congestion, Comcast slows down P2P uploads by postponing the transaction. The system will, however, repeatedly keep trying to complete the upload until it has been completed.

Svensson does back-pedal a bit in his latest story, but not enough:

On Tuesday, Mitch Bowling, senior vice president of Comcast Online Services, added a nuance to that statement, saying that while Comcast may block initial connection attempts between two computers, it eventually lets the traffic through if the computers keep trying.

“During periods of heavy peer-to-peer congestion, which can degrade the experience for all customers, we use several network management technologies that, when necessary, enable us to delay — not block — some peer-to-peer traffic. However, the peer-to-peer transaction will eventually be completed as requested,” Bowling said.

The explanation is not inconsistent with the AP’s tests. In one case, a BitTorrent file transfer was squelched, apparently by messages generated by Comcast, only to start 10 minutes later. Other tests were called off after around 5 minutes, while the transfers were still stifled.

He doesn’t quote anybody who thinks Comcast’s actions are legit, only the snake-oil peddlers and alarmists who insist that Comcast engages in identity theft in order to prevent dorm-room innovators from cutting into Comcast’s cable TV revenues. (Frankly, I think they’re more worried about DirecTV’s 70 HD channels.)

What happened to good old-fashioned All-American balance in journalism? The one editorial I’ve seen on this story, in the LA Times, was nothing more than a plagiarized EFF press release, and surely they can do better than that.

Ed Felten’s Alternate Internet

Professor Ed Felten tells Comcast to stay after class and clean erasers:

There are well-established mechanisms for dealing with traffic congestion on the Internet. Networks are supposed to respond to congestion by dropping packets; endpoint computers notice that their packets are being dropped and respond by slowing their transmissions, thus relieving the congestion. The idea sounds simple, but getting the details right, so that the endpoints slow down just enough but not too much, and the network responds quickly to changes in traffic level but doesn’t overreact, required some very clever, subtle engineering.

Indeed, if everybody was nice, polite, and well-behaved, the Internet’s traffic management features would be enough for Comcast and everybody else. And we wouldn’t need jails, or police, or traffic signs because everybody would just be good. That’s the end-to-end world, and it exists nowhere in this universe.

What does exist is a program called BitTorrent that allows the user to set targets for bandwidth consumption in both the upstream and the downstream direction, and strives to reach those targets by any means necessary. If the link is slow, it opens additional connections. If TCP is slow, it uses UDP. If its connection requests are filtered, it encrypts them. If its port is blocked, it uses a different one. It worms through firewalls and works around NATs. Nothing in the conventional arsenal of TCP effectively limits BitTorrent’s appetite for bandwidth; it’s all up to the user. And if he’s a hog, it’s out of control.
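
To see how quickly that adds up, here’s a back-of-the-envelope sketch. This is not BitTorrent’s actual algorithm, just the behavior described above: the client keeps opening connections until it hits the user’s target, no matter what congestion control does to each individual connection.

```python
def connections_needed(target_kbps, per_connection_kbps, max_connections=200):
    """Estimate how many parallel uploads a greedy client opens to hit its target."""
    conns = 1
    while conns * per_connection_kbps < target_kbps and conns < max_connections:
        conns += 1   # each connection is "too slow", so open another
    return conns

# If congestion control squeezes each TCP connection down to 20 kbps,
# a client chasing a 400 kbps upload target simply opens 20 of them.
print(connections_needed(target_kbps=400, per_connection_kbps=20))   # -> 20
```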

The long-term solution to congestion is to increase bandwidth, and there is no cheaper way to do that than to expel bandwidth hogs. Comcast doesn’t always go that far, and for that they get blasted in the blogs. Life is not fair.

Fundamentally, the problem that Comcast addresses with its TCP RSTs isn’t an Internet problem but an Intranet problem: the DOCSIS network inside Comcast doesn’t handle heavy loads of upstream traffic without going unstable. This isn’t a problem that the Internet can address, although TCP does provide Comcast with a knob to turn.

H/T Tech Lib.

Alien Brain Stimulation

This editorial in the WSJ explains the mechanism of alien attack on our freedoms:

Bless Ed Markey, the House telecom subcommittee chairman, but it didn’t enter his head unaided to hold up an iPhone at a hearing last week and — like the ape in the movie “2001” — ponder why he shouldn’t use it with any wireless network he wants rather than just AT&T’s.

He was inspired by an alien epiphany (though he did not throw the iPhone in the air). Under brain stimulation from Jupiter, the movie ape noticed that a bone could be used to club a fellow ape. Under brain stimulation from the Google lobby, Mr. Markey noticed a club with which to bash the wireless industry into changing its business model and adopting one that better suits Google.

What Google and allied special interests want from their pet apes is slightly more intelligible than “2001’s” notoriously psychedelic ending — they want cellular operators to package and sell access to their networks the way landline broadband operators do.

Kind of cute, and essentially correct.

Why Comcast Rations BitTorrent

I found an interesting academic paper on the dilemma that Comcast faces with BitTorrent uploads: “The Interaction Between the DOCSIS 1.1/2.0 MAC Protocol and TCP Application Performance”:

“We have developed a model of the Data over Cable (DOCSIS) 1.1/2.0 MAC and physical layers using the ‘ns’ simulation package [2]. In previous work, we reported on the impact of several DOCSIS operating parameters on TCP/IP performance [3]. In this paper we extend those results by looking in greater detail at the impact that the MAC layer has on TCP performance when using the DOCSIS best effort service. We show that the interaction between DOCSIS and TCP exposes a denial of service vulnerability. By taking advantage of the inefficiency surrounding upstream transmissions, a hacker can severely impact network performance.”

In effect, several BT streams in the DOCSIS return path look like a DoS attack to non-BT users. That’s not cool.
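
The paper’s point is easy to see with a toy model of contention-based upstream requests, which behave roughly like slotted ALOHA under load. The slot counts and uploader counts below are invented; the takeaway is that once enough always-busy uploaders are contending, their colliding requests waste the request slots and everyone’s upstream suffers.

```python
import random

def successful_requests(active_uploaders, request_slots=20, rounds=10000):
    """Average number of collision-free bandwidth requests per contention round."""
    random.seed(42)
    wins = 0
    for _ in range(rounds):
        slots = [0] * request_slots
        for _ in range(active_uploaders):
            slots[random.randrange(request_slots)] += 1
        wins += sum(1 for s in slots if s == 1)   # a collided slot is wasted
    return wins / rounds

for uploaders in (2, 10, 40, 80):
    print(f"{uploaders:3d} uploaders -> "
          f"{successful_requests(uploaders):.1f} successful requests per round")
```

Past a certain point, adding uploaders actually reduces the number of requests that get through, which is why a handful of busy BT clients can look like an attack on everyone else’s upstream.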

BitTorrent vs. the Typical User

It turns out the war between P2P programs like BitTorrent and the typical network user is much larger than the current spat between the bandwidth hogs and Comcast. There’s an entire Wiki article on ways to avoid traffic shaping.

It includes a list of world-wide ISPs who try to keep the weeds out of the garden, and it’s long.

Despite the fact that P2P has some legitimate uses, such as distributing free software like Linux, its primary uses are illegitimate, and even if they weren’t, the bandwidth it sucks out of cable modem networks inherently makes them less responsive for typical users. The answer to the load P2P puts on cable isn’t just “add more bandwidth,” because the design of these networks is inherently asymmetrical: they were built on the assumption that the typical user does more downloading than uploading, and P2P violates that assumption. Adding massive amounts of new bandwidth is enormously expensive.

So the only practical means of ensuring that P2P doesn’t drown out the typical user is to employ traffic shaping, and that gets the P2P freaks hopping mad. But there’s no free lunch, boys and girls, and somebody has to pay if everybody’s going to play.