The Comcast Net Neutrality Controversy: A Discussion

The Heritage Foundation has graciously transcribed the discussion we had a couple of weeks ago on Comcast, featuring Adam Thierer, Jerry Brito, Ed Felten, James L. Gattuso and yours truly.

On October 25, the Technology Liberation Front, a technology policy weblog, hosted an online discussion concerning recent actions by Comcast Corporation to limit certain types of traffic on its network in order to manage demand. This is an edited transcript of that discussion.

The actual podcast is here.

The End of the Stupid Network

Writing in EE Times, Mobidia CEO Fay Arjomandi offers suggestions for managing IP traffic on wireless networks with QoS:

Achieving the best results requires the mobile device to participate with the carrier network as an active and intelligent element so that it can share the responsibilities of network traffic delivery management in a wireless-friendly manner. A service element within the device cooperating with the carrier node would enable the device to handle the following tasks:

* Determine the type of service and identify its nature (e.g., interactive, background or real-time).

* Apply the proper network policy to each different type of service on the device.

* Determine the available network type, its characteristics, the available channel, status and allocation.

* Schedule and prioritize traffic based on the service type.

* Use a network-aware and network-friendly transport layer protocol for data communication over the wireless network link.

In an IP Multimedia Subsystem (IMS) architecture this service element resides underneath the IMS client or can be seen as part of the IMS client framework.

This should all sound familiar to anyone who’s read the recent posts on this blog, of course. Mark Goldberg suggests the “stupid network” model is on its last legs because it doesn’t allow this sort of interaction. While it served us well when networks and the traffic they handled were uniform, as we move to more diverse data streams over less reliable media (wireless), we have to move toward a more cooperative approach. Technology is often unkind to our idealistic illusions.
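
To make the classify-and-prioritize idea concrete, here’s a minimal sketch of a device-side service element along the lines Arjomandi describes. It’s my own illustration, not Mobidia’s code; the service classes come from the article, but the classification rules, port numbers, and names are invented.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Service classes from the quoted article; lower number means higher priority.
PRIORITY = {"real-time": 0, "interactive": 1, "background": 2}

def classify(flow):
    """Guess a service type from coarse flow attributes (rules invented here)."""
    if flow.get("codec"):                       # e.g., a VoIP or video call
        return "real-time"
    if flow.get("dst_port") in (22, 80, 443):   # ssh and web browsing
        return "interactive"
    return "background"                         # bulk transfers, sync, updates

@dataclass(order=True)
class QueuedPacket:
    priority: int
    seq: int
    payload: bytes = field(compare=False)

class DeviceScheduler:
    """Strict-priority transmit queue: real-time drains before background."""
    def __init__(self):
        self._heap = []
        self._seq = count()      # tie-breaker keeps FIFO order within a class

    def submit(self, flow, payload):
        pri = PRIORITY[classify(flow)]
        heapq.heappush(self._heap, QueuedPacket(pri, next(self._seq), payload))

    def next_packet(self):
        return heapq.heappop(self._heap).payload if self._heap else None

if __name__ == "__main__":
    s = DeviceScheduler()
    s.submit({"dst_port": 6881}, b"torrent chunk")   # background
    s.submit({"codec": "opus"}, b"voice frame")      # real-time
    s.submit({"dst_port": 443}, b"web request")      # interactive
    while (pkt := s.next_packet()) is not None:
        print(pkt)   # voice frame, web request, torrent chunk
```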

Tit-for-Tat on BitTorrent over Comcast

Hiawatha Bray writes a nice, balanced column on legislative efforts to revive net neutrality over the Comcast thing. Highlight:

Comcast senior vice president Mitch Bowling said that BitTorrent users absorb a disproportionate amount of network capacity. That makes it necessary to throttle back BitTorrent transfers so that e-mails and other Internet traffic can get through. Bowling also said that BitTorrent slowdowns are based only on the quantity of traffic, not its content, which is not monitored by Comcast.

Snowe said that there may be a legitimate need for Comcast to limit some traffic in order to manage its network, and that the Verizon and AT&T incidents may have been isolated blunders. But she wants hearings to determine whether legislation may be needed to establish guidelines for network operators. “I think we need to glean what the issues are and get more transparency,” Snowe said.

Jim Harper, director of information policy studies at the libertarian Cato Institute, agreed that network operators have made some poor decisions recently. But Harper said Internet performance could suffer if network managers were barred from setting limits on their traffic. “How do you write a law about this?” Harper said. “This is technically challenging and unsettled.”

This is the most moderate net neutrality statement Sen. Snowe has yet made in print, so maybe she’s learning something from the debate.

Telecom Regulators Eye the Internet

(Note: A slightly revised version of this post is in The Register, titled Harold and Kumar Go to Comcastle.)

As expected, the coalition of the frustrated that makes up Save the Internet! has filed a multi-trillion-dollar complaint with the FCC regarding Comcast’s blatant exercise of, um, reasonable network management. The key fact seems to be this: if you live in a world of secret manipulative forces, evil wizards, fire-breathing dragons, scary ghosts and freaky monsters, the actions of ordinary businesses seem peculiar. Appropriately, the complaint was filed on Halloween Eve.

The complaint is a combination of science fiction and group therapy, with generous doses of pure paranoia thrown in. The highlight:

The FCC should act immediately to enjoin Comcast’s secret discrimination and, even before deciding the merits, issue a temporary injunction requiring Comcast to stop degrading any applications.

Secret discrimination? The same complaint describes Comcast’s network management practices as blatantly violating the FCC’s Internet Policy Statement. So which is it, secret or blatant?

And it gets better. The basis of the complaint is the claim that Comcast discriminates against BitTorrent in particular and peer-to-peer file-swapping applications in general. The FCC’s principles say that customers can run any application they want, and BitTorrent is an application. Hence, the argument goes, limiting the bandwidth P2P can use is breaking the law.

There are only two problems with this line of reasoning:

1) P2P applications aren’t the only ones that Comcast throttles. The complaint itself admits that classic ftp has problems when trying to act as a server on the Comcast network, and further charges that Comcast’s management has much wider effects:

While only Comcast knows the algorithm they use to decide when to forge RST packets, it is unlikely that they ever tested the plethora of applications that are potentially broken by that algorithm.

2) BitTorrent isn’t disabled on the Comcast network, not even the seeding mode where it acts as a file server. I’m a Comcast customer, and as I write this I’m seeding several video files from the current season of a certain murder mystery series set in the most dangerous county in England. The key fact about BitTorrent that the reporters have missed is that it typically takes hours for a large file or set of files to transfer, so a five- to ten-minute test doesn’t say anything.

It’s a nonsense complaint.

Should the FCC be issuing injunctions before deciding on the merits of completely meritless complaints? In a way I wish they would, so that Comcast’s customers could see what their network would look like without any traffic management. I’ll predict an outcome: people swapping illegal videos would be thrilled, everybody else would be super-pissed-off. And if that harms Comcast’s business, then it would certainly be reasonable for the complainants to be required to compensate them.

And finally, how is Comcast supposed to handle the situations, arising millions of times a day in the ordinary course of operating a network, in which the traffic bound for a given path exceeds that path’s capacity, without degrading any applications? Surely some applications are going to be degraded, assuming the network’s capacity is, you know, finite.

And this brings us to the funniest part of the complaint, the helpful suggestions about how to manage the Comcast network:

…if Comcast is concerned that the collective set of users running P2P applications are affecting quality of service for other users on a cable loop, they could readily set dynamic quotas for each user on the loop, so as to ensure that there is always bandwidth available for users who are not running P2P applications – and they could do so without interfering in protocol choice

As far as we know, this is exactly what Comcast is doing, setting dynamic upload quotas and enforcing them by preventing excessive numbers of upload streams from starting inside their private network.

The angels apparently believe there’s a magic “quota” knob inside each cable modem owned or rented by each Comcast subscriber, but that’s not the case. These modems can take a hard cap at boot time, but after that they lack a mechanism to prevent them from issuing excessive numbers of upstream transfer requests. That’s undoubtedly a flaw in the definition of the DOCSIS protocol, but it’s one that isn’t going away simply because we may earnestly wish it would.
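
For what it’s worth, here’s a bare-bones sketch of the network-side alternative described above: cap the number of concurrent upload sessions per subscriber and refuse new ones past the cap, since the modem itself can’t be re-throttled after boot. The class, the cap, and the subscriber IDs are all invented for illustration.

```python
from collections import defaultdict

class UploadSessionGate:
    """Illustrative network-side cap on concurrent upload sessions per subscriber.
    A subscriber at the cap has new sessions refused (the 'busy signal') rather
    than rate-limited inside the modem, which offers no such knob after boot."""
    def __init__(self, max_sessions=8):          # invented figure
        self.max_sessions = max_sessions
        self.active = defaultdict(int)

    def try_open(self, subscriber_id):
        if self.active[subscriber_id] >= self.max_sessions:
            return False      # refuse; in practice the new flow sees a TCP RST
        self.active[subscriber_id] += 1
        return True

    def close(self, subscriber_id):
        self.active[subscriber_id] = max(0, self.active[subscriber_id] - 1)

gate = UploadSessionGate(max_sessions=2)
print([gate.try_open("sub-42") for _ in range(3)])   # [True, True, False]
gate.close("sub-42")
print(gate.try_open("sub-42"))                        # True again
```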

So what we have here is a demand that the FCC and Comcast repeal the laws of physics and provide unlimited non-delayed uploads to everybody everywhere on the Comcast network all the time. That’s not going to happen no matter who’s in charge of the agency.

I would like to see the FCC clarify the nature of reasonable network management so forcefully that the coalition is motivated to take its regulatory zeal elsewhere. In particular, that would mean saying that the following are presumed reasonable unless proven otherwise:

* Any network management practice whose effect is to limit traffic load to the capacity of the network.

* Any network management practice whose effect is to prevent a small number of users from dominating limited network resources and increasing delay for others.

* Any network management practice necessary to keep the network operating within its stability parameters, which depend on the technology itself. E.g., Wi-Fi, DOCSIS, and Ethernet are all stable in different ranges of load.

* Any network management practice aimed at containing viruses, worms, denial-of-service attacks, and similar security threats.

I believe we need to ignore the methods and concentrate on the effects, because network equipment doesn’t always offer a choice of traffic-management methods. The regulation advocates suggest, for example, that Comcast use “dynamic caps” to manage BitTorrent traffic rather than busy signals (a TCP Reset is a digital busy signal). Dynamic caps aren’t available in all DOCSIS modems, but the busy signal at the network portal clearly is.
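
Here, for the record, is a minimal sketch of the kind of “dynamic cap” the advocates have in mind: a token-bucket rate limiter. The rate and burst figures are invented purely for illustration, and whether the deployed DOCSIS gear can actually apply anything like this per subscriber is precisely the point in dispute.

```python
import time

class TokenBucket:
    """Toy token-bucket limiter: the sort of 'dynamic cap' the complaint assumes
    every modem can enforce. The figures below are invented for illustration."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s          # long-term cap
        self.capacity = burst_bytes           # how much burst we tolerate
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True                       # forward the packet
        return False                          # delay or drop it; no RST needed

cap = TokenBucket(rate_bytes_per_s=128_000, burst_bytes=64_000)  # ~1 Mbps up
print(cap.allow(1_500), cap.allow(100_000))   # True False
```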

Comcast manages its network according to what works, in other words, not according to what some weird coalition of angry lawyers and sweaty advocates thinks will work.

And I’d seriously like the FCC to consider referring the complaint to the Justice Department for possible perjury charges. Some of the signatories have filled out “penalty of perjury” declarations after making irresponsible charges and claiming to be subject matter experts. I know this is pro forma, but some of their charges are so extreme that it would be wise to make examples of them. I don’t expect that to happen, of course, because there’s a wide latitude for lying in advocacy, but it would thrill me if it did.

The bottom line here is that the Internet, which has succeeded largely because of its lack of regulation, has become the lust-object of a whole slew of regulators. Its existence and success are an affront to the livelihood and reputation of regulators everywhere, and they can’t allow such a rebuke to stand. And sooner or later, the 535 regulators who make up the Congress will side with them. The Internet is too juicy a plum for them not to get their grubby little paws on it.

Net neutrality’s father, Tim Wu, committed a gaffe recently and admitted that his movement is a power struggle for control of the Internet. Indeed it is. We can either allow the Internet to continue operating as it always has, under the control of engineers and users, or we can transfer control to lawyers, bureaucrats, and appliance testers. There are disturbing signs that the bureaucratization of the Internet is well underway. ICANN, the group that oversees domain name registrars and other sundry details, recently elected a lawyer to lead it in place of a former network engineer.

So enjoy it while it lasts.

The Internet’s Big Idea (part 2)

In the first part of this series I tried to explain that pooling communications bandwidth is the central fact of Internet architecture, and that the problem raised by pooling – fairness – hasn’t been resolved in the general sense. The Internet has a mechanism that prevents it from becoming unstable, but that mechanism (TCP backoff) doesn’t ensure that users have equal access to bandwidth. In fact, the amount of bandwidth each user gets is roughly proportional to how much he tries to take – in practice, to the number of simultaneous connections he opens. The greedy flourish, in other words.
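
A quick back-of-the-envelope illustration of that point, assuming (as a rough approximation) that TCP splits a congested link evenly per connection rather than per user; the numbers are made up:

```python
def user_share(my_flows, other_flows, link_mbps):
    """Approximate a user's throughput if a congested link is split evenly
    across TCP connections rather than across users (a rough approximation)."""
    return link_mbps * my_flows / (my_flows + other_flows)

# Ten users share a 100 Mbps link: nine open one connection each,
# while one P2P user opens thirty.
print(round(user_share(1, 39, 100), 1))    # ~2.5 Mbps for each polite user
print(round(user_share(30, 10, 100), 1))   # ~75 Mbps for the greedy one
```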

TCP backoff was a reasonable means of allocating bandwidth in the early days because there were social controls acting on the Internet as well as technical ones. The early users were all academics or network engineers who were well aware of the effects that large file transfers (using ftp) had on the network, and who were sufficiently joined to a community of common interest that they didn’t want to abuse the system. There was also no spam on the Internet in those days (the 1980s) for the same reason that there was no bandwidth hogging; everybody knew everybody, there were no anonymous users, and nobody wanted to get a bad reputation. We’ve obviously come a long way since then.

Ftp is a fairly simple program. The client opens a connection to a server, locates a file to download, starts the process, and waits until it’s done. The download (or upload; the process is the same) runs over a single TCP data connection, with a separate connection carrying the control commands. If the network becomes congested, a router drops a packet, TCP backs off to a slower rate of transfer, and eventually speeds up again if network conditions become more favorable. When the network is congested, ftp is slow and that’s just the way it is. Ftp users are encouraged not to run too many downloads at once, and ftp servers place a hard limit on the number of downloads they’ll provide at any given time. When the limit is reached, the server stops accepting new transfers. Ftp is an example of a good network citizen.
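
That hard server-side limit is worth seeing in miniature. Here’s a toy sketch – not real ftp server code; the port, limit, and reply text are invented for illustration – of a server that simply refuses new transfers once it’s at capacity:

```python
import socket
import threading

MAX_TRANSFERS = 4          # invented limit; real servers make this configurable
active = 0
lock = threading.Lock()

def handle(conn):
    """Refuse the connection outright if we're already at the transfer limit."""
    global active
    with lock:
        if active >= MAX_TRANSFERS:
            conn.sendall(b"421 Too many connections, try again later.\r\n")
            conn.close()
            return
        active += 1
    try:
        conn.sendall(b"220 Welcome; the transfer would run here.\r\n")
    finally:
        conn.close()
        with lock:
            active -= 1

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 2121))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```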

BitTorrent is a product of a different era. The weakness of ftp is its centralization. Clients and servers play very different roles, and the server’s role requires a great deal of bandwidth and processing power. For large-scale distribution, servers have to be capable of handling hundreds of simultaneous connections, driving the bandwidth bill through the roof because wholesale bandwidth is sold by the usage unit, not at a flat rate. BitTorrent (and other P2P applications; they’re all alike in this respect) exploits the fact that broadband consumer accounts are typically flat rate and broadband consumer networks typically have unused bandwidth. It also exploits the fact that software and movie pirates are willing to trade with each other as long as they can remain anonymous (yes, I know there are legitimate uses of P2P as well, but who are we kidding when we ignore the fact that P2P’s primary uses are illegal?).

If ftp is sedate and reserved, BitTorrent is hyperactive and frenetic. It connects to multiple peers for downloading, and is always looking for faster ones. In terms of network overhead, it’s a much less efficient protocol than ftp, because the ratio of protocol-related chatter to actual file data is much, much higher. But in terms of economic overhead, BitTorrent is sweet, trading pay-per-use wholesale data pipes for flat-rate residential ones. That’s its rationale, that’s what it does best, and that’s why it’s a problem for every ISP in the world.

ISPs, like the Internet as a whole, depend on users sharing a common infrastructure in a predictable way, and tend to have problems when they don’t. The prediction that held good until BitTorrent came along was that downloads would happen over flat-rate links and uploads over wholesale metered links; hence the residential network should be asymmetrical, allowing more download than upload. This wasn’t theft or deception; it was (and largely still is) a rational appraisal of network traffic. And it was a system that largely regulated itself, because the wholesale links were the economic limit on the traffic that could enter an ISP. Nobody was able to put traffic on the network for free, but lots of people were able to take it off the network for no additional fee beyond their basic subscription.

So what happens when P2P becomes truly mainstream and uploads are free? I think I’ll take that up in part 3.

Obama Desperate

In a desperate attempt to find an issue that might excite Democratic primary voters enough to overlook his youth and inexperience, Obama pledges Net Neutrality, Ewok Safety (The Register)

Asked whether he’d “re-instate Net Neutrality” as “the Law of the Land”, trailing Presidential Candidate Barack Obama told an audience in Cedar Rapids, Iowa that yes, he would.

He also said he’d protect Ewok villages everywhere, and hoped that Tony Soprano had survived the non-existent bloodbath at the conclusion of The Sopranos.

(So we made the last two up – but they wouldn’t have been any more silly than what the Presidential Candidate really said.)

Andrew’s in fine form, underscoring the fundamental deception in the network neutrality debate: there’s never been a network neutrality law, so there’s no question of “restoring” it.

The Internet’s Big Idea

Because of current events, it strikes me that we need to review the fundamental idea that the Internet was based on, packet switching. This goes back to the dawn of time in the 1960s, before any corporations were involved, and is the pure essence of the thing. Only by understanding the basic idea can we see who’s true to it today and who isn’t.

Packet switching is not a hard notion to grasp, as it involves the spirit of cooperation, a commons, and mutual benefit. Recall that communications networks of the earlier sort allocated bandwidth strictly. On the telephone network you always got the same slice of bandwidth, neither more nor less. On some rare occasions like Mother’s Day you couldn’t make a call right away, but for all practical purposes it was always there and always the same.

This isn’t a very efficient way to allocate bandwidth, however, because much of it goes to waste. When you’re on a call, you consume just as much bandwidth when you’re speaking as when you’re not, and a great deal of bandwidth is idle for most of the day because it’s simply a reserve for peak calling times. So the designers of the early Internet – it was called ARPANET back then – wondered what would happen if they built a network where bandwidth was a common pool that each user would draw from when he needed it, as he needed it, instead of being strictly divided in the old-fashioned way. In this scheme, during periods of low usage, each user would get tons of bandwidth so the network would appear to be really, really fast, and during periods of high demand it would partition up fairly just like the phone network, or so it seemed. So they launched this great experiment to see what had to be done to make a network that would scale up in performance under light load and scale down to fairness under heavy load. The method is called “packet switching” to differentiate it from the “circuit switching” technology in the phone network, and the ARPANET became the Internet in its second incarnation of protocols.
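
A toy simulation of the trade-off, with all of the numbers invented: circuit switching gives each of fifty subscribers a fixed slice of a 100 Mbps link whether they’re using it or not, while packet switching splits the whole link among whoever happens to be active at the moment.

```python
import random

random.seed(1)
LINK_MBPS, SUBSCRIBERS, ACTIVE_PROB = 100, 50, 0.1   # invented figures

# Circuit switching: a fixed slice per subscriber, idle or not.
circuit_slice = LINK_MBPS / SUBSCRIBERS

# Packet switching: the whole link shared by whoever is active right now.
samples = []
for _ in range(10_000):
    active = sum(random.random() < ACTIVE_PROB for _ in range(SUBSCRIBERS))
    samples.append(LINK_MBPS / active if active else LINK_MBPS)

print(f"circuit switching: {circuit_slice:.1f} Mbps, all the time")
print(f"packet switching:  {sum(samples) / len(samples):.1f} Mbps on average, "
      f"{min(samples):.1f} Mbps at the busiest sampled moment")
```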

Packet switching is the single most important idea in the design of the Internet, even more than universal connectivity; after all, the phone network gave us the ability to reach out and annoy anyone on the planet long ago. Packet switching as a way to manage bandwidth is the Big Idea.

It always strikes me as odd that there’s so little understanding of the Big Idea at the base of the Internet’s design pyramid among our would-be Internet regulators and ISP critics. They’re always complaining about the deceptiveness of “unlimited access” rates and “all you can eat” deals that don’t guarantee any constant or minimum rate. (Duh, we tried that already.) This is an experiment in another direction, where the deal is that it’s going to be faster at some times than at other times, but overall it’s going to be much better and much cheaper than guaranteed bandwidth. And sure enough, it works: you can even make phone calls over the Internet of exceptional quality anywhere in the world for peanuts. It’s marvelous.

Well, mostly marvelous. Throughout the Internet’s history, even when it was a closed garden for the research world and long before the great unwashed were allowed on it, the “fairness” problem has proved very difficult to resolve, because each user and each application has a different appetite for bandwidth and a different demand for response time. In the early days, the interactive terminal protocol “telnet” was often stymied by the bulk data transfer protocol “ftp”, and today Skype has to work around BitTorrent.

In theory, it shouldn’t be hard to fit the needs of programs that communicate small chunks of data on a tight timeline around programs that move massive amounts of data with no particular time requirement around any one chunk. In theory, we should be able to design networks that do that, either by booking reservations for the call or by giving Skype priority over BitTorrent. And in fact we have a number of experimental protocols that will do just that, especially within the confines of a private network in a business, an organization, or a home. And they all depend on a prioritizing or scheduling function having a clear idea of which packets belong to which program, and of the programs being willing to settle for less than what they want for various periods of time. And that’s the way things were on the Internet before it went commercial.
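
Here’s a toy sketch of the second approach, reserving a slice of each transmission round for the real-time class and giving bulk traffic the rest. The sizes and rates are invented, and a real scheduler would be far more subtle:

```python
from collections import deque

class TwoClassLink:
    """Toy link scheduler: reserve part of each transmission round for the
    real-time class and give the remainder to bulk traffic. All figures are
    invented for illustration."""
    def __init__(self, bytes_per_round=12_000, reserved_for_voice=3_000):
        self.round_budget = bytes_per_round
        self.reserved = reserved_for_voice
        self.voice = deque()   # queued packet sizes, in bytes
        self.bulk = deque()

    def send_round(self):
        sent, budget, voice_budget = [], self.round_budget, self.reserved
        # Voice goes first, up to its reservation (and the overall budget).
        while self.voice and self.voice[0] <= min(voice_budget, budget):
            size = self.voice.popleft()
            sent.append(("voice", size))
            budget -= size
            voice_budget -= size
        # Bulk gets whatever is left over.
        while self.bulk and self.bulk[0] <= budget:
            size = self.bulk.popleft()
            sent.append(("bulk", size))
            budget -= size
        return sent

link = TwoClassLink()
link.voice.extend([200] * 5)      # small Skype-like frames
link.bulk.extend([1500] * 20)     # BitTorrent-sized segments
print(link.send_round())          # all five voice frames, then seven bulk segments
```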

In the mid-80s we saw congestion collapse (“Internet meltdown”) during periods of heavy ftp usage. The quick fix that was cobbled together required TCP to voluntarily throttle back the amount of data it transmitted when packets weren’t delivered. This “overloaded” dropped packets, giving them two meanings: either a packet was hit by noise and corrupted, or a network queue was full and the packet was discarded because there was no more room in line for it. Error rates were low (there was no WiFi back then), so it was fine to react as if the network was overloaded.
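
That voluntary throttle-back lives on as TCP’s additive-increase/multiplicative-decrease behavior. A stripped-down sketch of the idea – not real TCP code, and the loss pattern is invented:

```python
def aimd(drop_events, cwnd=1.0):
    """Toy additive-increase/multiplicative-decrease: grow the window by one
    segment per round trip, halve it whenever a drop signals congestion."""
    history = []
    for dropped in drop_events:
        if dropped:
            cwnd = max(1.0, cwnd / 2)   # back off: treat the loss as congestion
        else:
            cwnd += 1.0                 # probe gently for more bandwidth
        history.append(cwnd)
    return history

# Eight clean round trips, then two congestion events a few rounds apart.
print(aimd([False] * 8 + [True, False, False, True]))
```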

This could have been a fairly crappy solution, since it didn’t distinguish between the requirements of our interactive application and our bulk data application, but implementation did what design failed to do: in practice, telnet data came in much shorter packets than ftp data, and since the packet droppers were looking for space in network queues, dropping a long packet freed more room than dropping a short one. So voilà: in one step you’ve got priority enforcement and congestion control.
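
A toy simulation of that accidental priority effect, with invented sizes and rates: a byte-limited queue that tail-drops whatever doesn’t fit ends up discarding the long ftp packets far more often than the short telnet ones.

```python
import random

random.seed(7)
QUEUE_LIMIT = 6_000                       # bytes of buffer; invented figure
DRAIN_PER_TICK = 1_000                    # the link empties this much per tick
TELNET, FTP = 64, 1_500                   # rough packet sizes for each protocol

queued = 0
offered = {"telnet": 0, "ftp": 0}
dropped = {"telnet": 0, "ftp": 0}

for _ in range(100_000):
    kind, size = ("telnet", TELNET) if random.random() < 0.3 else ("ftp", FTP)
    offered[kind] += 1
    queued = max(0, queued - DRAIN_PER_TICK)   # the link drains the queue
    if queued + size <= QUEUE_LIMIT:
        queued += size                         # the packet fits in the buffer
    else:
        dropped[kind] += 1                     # no room: tail-drop it

for kind in ("telnet", "ftp"):
    print(kind, f"{dropped[kind] / offered[kind]:.1%} dropped")
```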

And it’s all going to be fine until the next generation of protocols comes around and our assumptions once again have to be revised. I’ll go into that tomorrow.

(Updated to remove some material about the blog post that inspired this one; that material now has its own post, right below.)

Try this article for a little insight into changes afoot inside the Internet.

A Rather Shoddy Analysis

Former head of AT&T’s dial-up Internet access service Tom Evslin has jumped into the Comcast pile-on, in a particularly disappointing manner. While I’m not surprised when non-technical people like professor of media law Susan Crawford, media regulator Craig Aaron of Free Press/Save the Internet, or lawyer Harold Feld make a hash of the fundamentals of the Internet, I expect better from a fellow who once was in the business of packaging and selling access to it. Evslin simply repeats the same tired claims that Crawford makes about impersonation that isn’t personal, blocking that doesn’t block, and conflicts of interest that don’t exist in the Netflix era, without any bothersome analysis of how BitTorrent behaves relative to other Internet applications or why an ISP might legitimately want to protect interactive response time from background noise.

He goes off on one especially odd riff about a fragmented Internet:

Each of us “sees” the same Internet. Communication becomes much more constrained if each of us sees a different and perhaps incompatible Internet. You can see Google but I can only see Yahoo. I can upload photos to Flickr but you can only upload to dotPhoto. My email can’t get to you; you and I can’t share files (although we can both share with Ellen—today). Gee, almost sounds like mobile phone networks—or cable networks.

This observation, which has no material relevance to the actual case (Comcast lets you download anything from anywhere, and upload as you do it), is an imitation of one of Tim Berners-Lee’s complaints about tiered service from last year, and it’s apparently meant to lend gravity to the situation. I don’t know why he bothered to write this piece.

The underlying theme of all these criticisms boils down to one thing: whatever the broadband Internet access providers do is wrong by definition, and whatever the freedom fighters who wrote BitTorrent do is right by the same a priori assumption. So there’s no use in Comcast trying to defend itself; it’s guilty of crimes against democracy simply because it bills consumers for service instead of being a good guy like Google and billing companies with something to sell.

Traffic Shaping and Net Neutrality: Good Versus Evil

Brian Boyko, editor of Network Performance Daily, has written one of the better and more thoughtful essays on net neutrality:

See, at the core of Network Neutrality issues are appliances or programs which conduct traffic shaping. In traffic shaping, some packets are prioritized, others are held back. This prioritization can be done on the basis of content (what type of data is being transferred,) on the basis of application (what program is transferring the data) or on the basis of IP address (which computer is sending the packet, and which computer is receiving it.)

Now, here’s the rub: Traffic shaping can help improve network performance, decrease latency, and increase bandwidth by delaying those packets deemed to be of a low priority. Sounds good, right?

Not so fast. Traffic shaping can degrade network performance, increase latency, and decrease bandwidth… by the same means.
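
Boyko’s both-edges-of-the-sword point is easy to see in miniature. In the toy shaper below (all names invented), the only difference between the helpful configuration and the harmful one is which class of packets the operator chooses to hold back.

```python
def shape(packets, deprioritized_class):
    """Send the favored packets first; hold back the deprioritized class."""
    favored = [p for p in packets if p["class"] != deprioritized_class]
    delayed = [p for p in packets if p["class"] == deprioritized_class]
    return favored + delayed

traffic = ([{"class": "voip", "seq": i} for i in range(3)] +
           [{"class": "bulk", "seq": i} for i in range(3)])

# Same mechanism, opposite outcomes:
print([p["class"] for p in shape(traffic, "bulk")])   # voip first: calls stay crisp
print([p["class"] for p in shape(traffic, "voip")])   # voip last: calls break up
```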

The arguments for increased regulation of Internet access boil down to the claim that carriers are bad people who can’t be trusted with such an important job as managing Internet traffic; and we know they’re bad people because they routinely lie to us, suppress viewpoints, and cooperate with the government. Yet one could easily make the same charges against the very advocates of net neutrality regulations, and make a strong case that they too are bad people and bad corporate actors. Google and Moveon.org have misbehaved around the Susan Collins ad, and advocates who equate TCP Resets with identity theft are being less than honest.

So you can’t ultimately resolve this issue by identifying the good people and standing with them. There are good and bad people on both sides, so it’s the specific behavior that matters, and how much blind faith you have in telecom regulators to envision the potential of the Internet. And that’s where I have to part company with the pro-regulation crowd, because I’ve never met a regulator who was a man (or woman) of vision; typically, they’re small-minded and vindictive.

Somebody always ends up managing the traffic on a network. If the carriers are forbidden from doing it, the job will ultimately end up in the hands of the largest users of bandwidth, the Googles and Yahoos of the world. And when you put them in control of a cable plant they didn’t have to pay for, do you really suppose they’re going to run it in anybody’s interest but their own?