Comcast was right, FCC was wrong

A fellow named Paul Korzeniowski has written a very good, concise piece on the Comcast action at the FCC for Forbes, “Feds And Internet Service Providers Don’t Mix.” He manages to describe the controversy in clear and unemotional language, which contrasts sharply with the neutralists, who constantly use emotionally-charged terms such as “blocking,” “Deep Packet Inspection,” “forgery,” and “monopoly” to describe their discomfort.

What Comcast actually did, and still does today, is simply limit the amount of free upstream bandwidth P2P servers can use to 50% of capacity. This isn’t “blocking” or “censorship”; it’s rational network management:

Cable giant Comcast is at the center of a very important controversy for small businesses. In the summer of 2007, it became clear that the carrier was putting restrictions on how much information selected customers could transmit. BitTorrent, a P2P application-sharing company, had been using lots of bandwidth, so the ISP throttled back some of its transmissions.

“Throttled back some of its transmissions” is correct. Comcast doesn’t throttle back P2P downloads, which you can prove to yourself if you happen to have a Comcast account: download a large file using P2P and notice that it moves faster than it possibly can on any flavor of DSL. My recent tests with Linux have files downloading at 16 Mb/s, the advertised maximum for my account.

Korzeniowski then explains the facts of life:

The reality is that all ISPs are overbooked–they have sold more bandwidth than they can support.

This overbooking has been an issue since the old Public Switched Telephone Network (PSTN) days. In that situation, individuals would receive a busy signal when the network was overloaded. Because the Internet has an antithetical design, ISPs don’t have a busy signal option.

ISPs actually do have a “busy signal option”: it’s the TCP Reset packet that Comcast uses to limit active upstream sessions. But neutrality regulationists call it “forgery” and abhor it.

“Overbooking” bandwidth isn’t a bad thing, and in fact it’s central to the economics of packet-switching. The PSTN forces each caller into a bandwidth ghetto where he is allocated a small chunk of bandwidth, 4 kHz, regardless of how much he currently requires. If you’re on the phone and have to set it down to check on your chili, you have 4 kHz. If you’re blasting files over a modem connection, you have 4 kHz. It doesn’t matter how many other callers are on-line and what they’re doing: you each get 4 kHz. That’s the law.

But packet switching, of which the Internet is an example, allows your bandwidth allocation to float depending on what you need to do and what other people are doing. You share network facilities with your neighbors (and this is true whether you use DSL or cable, you just share at different points on these technologies), so you can get a larger chunk of bandwidth when they’re idle than when they’re banging the net hard.

Overbooking allows you to use very large amounts of bandwidth for short periods of time, which is ideal for web surfing: you click on a link, and a ton of graphics is sent to your computer. While you’re reading, your neighbors get to use the bandwidth that would be wasted if you had PSTN connections. It works for everybody, most of the time. It works so well, in fact, that ISPs haven’t bothered to meter actual bandwidth use: the resource is so abundant, and the demands so few (especially in the upstream direction, where your clicks move), that there’s never been a need to control or meter it.
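To see why overbooking works for bursty traffic, here’s a minimal simulation sketch in Python. The subscriber count, rates, and activity level are invented for illustration; the point is only that a shared link provisioned far below the sum of peak rates almost never congests when users are bursty:

```python
import random

USERS = 50          # subscribers sharing one segment (invented)
PEAK_MBPS = 16      # advertised per-user peak rate (invented)
LINK_MBPS = 160     # shared capacity: 5:1 overbooked versus 50 x 16
ACTIVE_PROB = 0.05  # a web surfer is actually bursting ~5% of the time

random.seed(1)
TRIALS = 10_000
congested = 0
for _ in range(TRIALS):
    # How many users happen to be bursting at this instant?
    active = sum(random.random() < ACTIVE_PROB for _ in range(USERS))
    if active * PEAK_MBPS > LINK_MBPS:
        congested += 1

print(f"instants with demand above capacity: {100 * congested / TRIALS:.2f}%")
# Set ACTIVE_PROB to 1.0 (always-on uploads) and every instant congests;
# that is the P2P problem in one line.
```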

Enter P2P, a method of moving large files across networks that relies on free upstream bandwidth. Now the abundant broadband upstream is constantly occupied, not by an interactive application that sends a click now, a click five seconds from now, and a click a minute from now, but by applications that constantly stream traffic up the wire, to the detriment of the others in the neighborhood. Something has to give.

One approach is to cap upstream traffic:

However, the “all you can eat” model may no longer be viable–a change the government seems to be ignoring. ISPs could use the open salad bar model when users were mainly transmitting small textual data. But with video becoming more common, users increasingly transmit very large high-definition files.

In response, Comcast plans to cap customer usage at 250 GB of data each month. That translates to about 50 million e-mails, 62,500 songs, 125 standard-definition movies, or 25,000 high-resolution digital photos. That amount would seem to meet the needs of most customers, including small and midsize businesses. The only folks affected would be companies such as BitTorrent, that have based their business on the “all you can eat” model, and hackers, who routinely spew out tons of unwanted solicitations and malware.
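Those equivalences check out arithmetically if you assume roughly 5 KB per e-mail, 4 MB per song, 2 GB per standard-definition movie, and 10 MB per photo. Here’s a quick back-of-envelope check in Python; the per-item sizes are my assumptions, not Comcast’s figures:

```python
CAP_MB = 250 * 1000  # 250 GB cap, in decimal megabytes

# Assumed per-item sizes in MB; these are my guesses, not Comcast's.
sizes_mb = {"e-mails": 0.005, "songs": 4, "SD movies": 2000, "photos": 10}

for item, mb in sizes_mb.items():
    print(f"{CAP_MB / mb:,.0f} {item}")
# -> 50,000,000 e-mails, 62,500 songs, 125 SD movies, 25,000 photos,
#    matching the figures quoted above.
```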

Capping has its critics, mostly the same people who object to traffic management as well:

For whatever reason, some believe ISPs should not be able to put any restrictions on the volume of information that any user transmits. That’s absurd. Per-bit and per-byte pricing models have long been used for data transmissions. In trying to build and sustain their businesses, carriers constantly balance their attractiveness and viability versus unlimited usage pricing models. By government decree, they no longer have that option. In effect, the FCC has decided to tell ISPs how to run their networks.

Capping frees up bandwidth for sharing by taking free bandwidth off the table for P2P. But it’s not a technically elegant approach: humans respond to caps month-by-month, but networks experience congestion and overload millisecond-by-millisecond. So the sensible engineering approach is to manage traffic in pretty much the way Comcast does it today: identify the bandwidth requirements of applications, and allocate bandwidth to those that need it most, as we would with any scarce resource, granting transmission opportunities (that’s a technical term we use in network architecture) to highly interactive applications such as VoIP ahead of non-interactive applications such as HDTV file transfers. This is sound practice, but the FCC has now said it’s illegal. The FCC is anti-consumer.
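For readers who want the flavor of it, here’s a minimal sketch of strict two-class priority queueing, the general technique behind granting transmission opportunities by application class. The packet format and class names are invented; this is not Comcast’s code:

```python
from collections import deque

# Two queues: interactive (VoIP-like) traffic drains before bulk traffic.
interactive = deque()
bulk = deque()

def enqueue(packet: dict) -> None:
    """Classify by application class; interactive packets get priority."""
    (interactive if packet["class"] == "voip" else bulk).append(packet)

def grant_transmission_opportunity():
    """The next slot on the wire goes to interactive traffic if any is waiting."""
    if interactive:
        return interactive.popleft()
    if bulk:
        return bulk.popleft()
    return None

# Bulk packets are queued first, but the VoIP packet still transmits first.
enqueue({"class": "p2p", "payload": "chunk-1"})
enqueue({"class": "p2p", "payload": "chunk-2"})
enqueue({"class": "voip", "payload": "20 ms of speech"})
print(grant_transmission_opportunity())  # -> the VoIP packet
```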

Net neutrality supporters have pressured the FCC because they believe cable companies are unfairly monopolizing the Internet access marketplace. This conveniently ignores a couple of factors. First, there is no Internet access monopoly. A small or midsize business can get access from cable companies, telcos or wireless suppliers. True, there are not 50 choices, as you might have when buying a new pair of pants, but there is a reason why so few companies compete in the Internet access arena–it’s not a great business.

In fact, net neutrality advocates have turned a blind eye to the history of the dot-com bubble. Internet access start-ups burned through more cash with fewer positive results than any market sector in memory–and perhaps ever. Providing Internet access requires a lot of capital for the network and support infrastructure, and there’s not a lot of money to be made when customers pay about $20 a month for unlimited access.

The alternative to application-sensitive traffic management is a crude user-based system that treats all of each user’s traffic the same. This means, for example, that your VoIP streams get the same service from your ISP as your web clicks and your file transfers. This is insane.

Each Internet user should be able to multitask. We should be allowed to share files with P2P or any other non-standard protocol of our choice at the same time that we’re video-chatting or surfing the web. The heavy-handed FCC ruling that all packets must be treated the same undermines the economics of packet switching and delays the day when the Internet will make the PSTN and the cable TV systems obsolete.

Comcast was right to take the ruling to the courts to get it overturned. ISPs should be allowed to deploy a traffic-management system that combines elements of the protocol-aware system currently in use at Comcast with the new “protocol-agnostic” system that’s under test, such that each customer has a quota for each class of traffic (sketched below). This is sound network engineering, but the current state of law makes it illegal.

This is not good.
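No such hybrid is specified anywhere yet, but a sketch of the per-customer, per-class quota check might look like this; the classes and numbers are invented:

```python
# Per-customer upstream quotas by traffic class, in Mb/s (invented numbers).
QUOTAS = {"voip": 0.5, "web": 4.0, "p2p": 8.0}

def admit(usage_mbps: dict, traffic_class: str) -> bool:
    """Admit traffic while this customer is within quota for that class.

    Interactive classes keep their headroom even when bulk classes are
    full, so a saturated P2P quota can never starve a phone call.
    """
    return usage_mbps.get(traffic_class, 0.0) < QUOTAS[traffic_class]

usage = {"p2p": 9.0, "voip": 0.1}  # this customer is over his P2P quota
print(admit(usage, "p2p"))         # False: new P2P traffic waits
print(admit(usage, "voip"))        # True: VoIP is unaffected
```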

Cross-posted to CircleID.

UPDATE: See Adam Thierer’s comments on this article at Tech Lib.



Comcast files their compliance plan

Today was the deadline for Comcast to tell the FCC how its existing congestion management system works, as well as how its “protocol agnostic” replacement is going to work. To the dismay of some critics, they’ve done just that in a filing that was hand-delivered as well as electronically filed today. It will be posted to the Comcast web site shortly.

The filing corrects some of the false allegations made by critics with respect to privacy, making it very clear that the existing system simply inspects protocol headers (“envelopes”) and not personal data. David Reed in particular got himself worked into a tizzy over the idea that Comcast was deciding which streams to delay based on content, but this is clearly not the case. Inside the IP envelope sits a TCP envelope, and inside that sits a BitTorrent envelope. User data is inside the BitTorrent (or equivalent) envelope, and Comcast doesn’t look at it.
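The envelope point is easy to demonstrate. This toy classifier (my own construction, not Comcast’s code) reads only the fixed-size IP and TCP headers and never touches the user data that follows them:

```python
import struct

def classify(packet: bytes) -> str:
    """Look only at the IP and TCP 'envelopes'; never read the payload."""
    ihl = (packet[0] & 0x0F) * 4              # IP header length, in bytes
    if packet[9] != 6:                        # protocol field: 6 means TCP
        return "non-TCP"
    src_port, dst_port = struct.unpack_from("!HH", packet, ihl)
    # A header-only guess: 6881-6889 are BitTorrent's traditional ports.
    if 6881 <= src_port <= 6889 or 6881 <= dst_port <= 6889:
        return "looks like BitTorrent, judged by envelope alone"
    return "other TCP"

# Build a fake packet: 20-byte IP header, 20-byte TCP header, then payload.
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 73, 0, 0, 64, 6, 0, bytes(4), bytes(4))
tcp = struct.pack("!HHLLBBHHH", 50000, 6881, 0, 0, 5 << 4, 0x02, 8192, 0, 0)
print(classify(ip + tcp + b"user data that is never inspected"))
```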

The current system sets a bandwidth quota for P2P (about 50% of total upstream bandwidth) and prevents P2P as a group from exceeding that quota with new uni-directional upload (i.e., file-server-like) streams, by tearing down requested new streams with the TCP Reset flag. The system is a bit heavy-handed, but reserving 50% of the network for one class of application seems pretty reasonable, given that no more than 20% of customers use P2P at all.
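Reduced to pseudocode, the admission decision looks something like this sketch; the 50% quota and the uni-directional test come from the description above, but the function shape and names are mine:

```python
P2P_QUOTA = 0.50  # P2P may occupy about half the segment's upstream capacity

def on_new_p2p_stream(p2p_upstream_share: float,
                      is_unidirectional_upload: bool) -> str:
    """Admit or tear down a newly requested P2P stream.

    Existing streams are left alone; only new file-server-like (upload-only)
    streams are refused once P2P as a group is at quota, by sending a TCP
    Reset to both ends: the packet network's version of a busy signal.
    """
    if is_unidirectional_upload and p2p_upstream_share >= P2P_QUOTA:
        return "tear down with TCP RST"
    return "admit"

print(on_new_p2p_stream(0.55, is_unidirectional_upload=True))   # refused
print(on_new_p2p_stream(0.55, is_unidirectional_upload=False))  # admitted
```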

Nonetheless, the new system will not look at any headers, and will simply be triggered by the volume of traffic each user puts on the network and the overall congestion state of the network segment. If the segment goes over 70% utilization in the upload direction for a fifteen-minute sample period, congestion management will take effect.

In the management state, traffic volume measurement will determine which users are causing the near-congestion, and only those using high amounts of bandwidth will be managed. The way they’re going to be managed is going to raise some eyebrows, but it’s perfectly consistent with the FCC’s order.

High-traffic users – those who’ve used over 70% of their account’s limit for the last fifteen minutes – will have all of their traffic de-prioritized for the next fifteen minutes. While de-prioritized, they still have access to the network, but only after the conforming users have transmitted their packets. So instead of bidding on the first 70% of network bandwidth, they’ll essentially bid on the 30% that remains. This will be a bummer for people who are banging out files as fast as they can only to have a Skype call come in: even if they stop BitTorrent, the first fifteen minutes of Skyping are going to be rough. A more pleasant approach would be to let excessive users out of QoS jail with credit for good behavior – if their utilization drops to Skype levels, let them out in a few seconds, because it’s clear they’ve turned off their file-sharing program. This may be easier said than done, and it may raise the ire of Kevin Martin, given how irrational he has been in his anti-cable vendetta.
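Putting the pieces together, the fifteen/fifteen scheme as described boils down to something like this sketch. The thresholds are as summarized above; the structure and names are mine, and the early-release idea in the closing comment is my suggestion, not part of Comcast’s plan:

```python
SEGMENT_TRIGGER = 0.70  # upstream utilization that arms management on a segment
USER_TRIGGER = 0.70     # share of provisioned rate that marks a user "heavy"
WINDOW_MINUTES = 15     # both the sample period and the penalty period

def priority_for_next_window(segment_utilization: float,
                             user_share: float,
                             in_penalty: bool) -> str:
    """One decision per fifteen-minute sample, per the plan as described."""
    if in_penalty:
        # Still in "QoS jail": packets ride behind conforming users' traffic,
        # bidding on the ~30% of capacity those users leave unused.
        return "de-prioritized"
    if segment_utilization > SEGMENT_TRIGGER and user_share > USER_TRIGGER:
        return "de-prioritized for the next window"
    return "normal"

# My suggested refinement (not in the plan): release a user early once his
# traffic drops to interactive (Skype-like) levels, since that's a clear
# signal the file-sharing program has been turned off.
print(priority_for_next_window(0.75, 0.80, in_penalty=False))
```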

The user can prevent this situation from arising, of course, if he wants to. All he has to do is set the upload and download limits in BitTorrent low enough that he doesn’t consume enough bandwidth to land in the “heavy user” classification, and he won’t have to put up with bad VoIP quality. I predict that P2P applications and home gateways are going to incorporate controls to enforce “Comcast friendly” operation to prevent de-prioritization. There are other, more refined approaches to this problem, of course.
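The arithmetic of a “Comcast friendly” setting might look like the following; the 70% trigger is from the plan as described, while the safety margin and the account speed are invented for the example:

```python
def friendly_upload_cap_kbps(provisioned_up_mbps: float,
                             user_trigger: float = 0.70,
                             margin: float = 0.90) -> int:
    """An upload cap that stays safely below the 'heavy user' threshold."""
    return round(provisioned_up_mbps * 1000 * user_trigger * margin)

# e.g., a 2 Mb/s upstream account: cap BitTorrent's upload around 1260 kb/s
print(friendly_upload_cap_kbps(2.0))  # -> 1260
```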

At the end of the day, Comcast’s fifteen/fifteen system provides users with the incentive to control their own bandwidth appetites, which makes it an “end-to-end” solution. The neutralitarians should be happy about that, but it remains to be seen how they’re going to react.

It looks pretty cool to me.

UPDATE: Comcast-hater Nate Anderson tries to explain the system at Ars Technica. He has some of it right, but doesn’t seem to appreciate any of its implications. While the new system will not look at protocol headers (the evil “Deep Packet Inspection” that gets network neophytes and cranks so excited) and won’t use TCP Resets, that doesn’t mean that P2P won’t be throttled; it will be.

That’s simply because P2P contributes most of the load on residential networks. So if you throttle the heaviest users, you’re in effect throttling the heaviest P2P users, because the set of heavy users and the set of heavy P2P users is the same set. So the “disparate impact” will remain even though the “disparate treatment” will end.

But the FCC has to like it, because it conforms to all of Kevin Martin’s rabbit-out-of-the-hat rules. The equipment Comcast has had to purchase for this exercise in aesthetic reform will have utility down the road, but for now it’s simply a tax imposed by out-of-control regulators.

Regulate first, ask questions later

Press reports on the FCC’s vote on the Vuze/Free Press petitions against Comcast suggest a peculiar outcome, where the FCC orders Comcast to stop managing BitTorrent and also to tell the FCC how and when it manages BitTorrent:

The FCC would require Comcast to stop slowing or blocking access to certain online applications, mostly video file-sharing services such as BitTorrent. The company would also be required to provide more disclosure to consumers about its network management practices and provide more details to the FCC about how it’s blocked or slowed traffic in the past.

If the FCC is convinced the management is wrong, why ask for the data? And why only ask for the data after nearly a year of investigating and three raucous public spectacles?

Vuze recently changed its business model, providing search services for piracy sites such as Mininova and The Pirate Bay:

In addition to Vuze.com, the new search box gives users the option to search third-party web sites, with Mininova, Sumotorrent, BTJunkie and Jamendo being preselected. With the exception of Jamendo, all of these also feature unlicensed content. In fact, Mininova was sued by Dutch rights holders just a few weeks ago. But Vuze CEO Gilles BianRosa told me that he doesn’t think his company could run into trouble by searching these sources. “We have considered the existing legal framework and feel comfortable about the addition of this feature to our new release,” he told me, adding that rights holders could use the search to add their platforms to the mix as well.

We have a curious outcome where the FCC is ordering carriers to provide free bandwidth to pirates.

Small wireless ISPs are hit harder by this order than the large corporations: if they can’t manage BitTorrent, they’re out of business. Brett Glass is in that situation.

See my recent FCC Comments here.

More as this develops, but for now enjoy the debate at DSL Reports, where the nefarious scheme to allocate bandwidth fairly first emerged.

John Dunbar’s AP story is here. Pretty straight coverage.

Nate Anderson’s Ars Technica story is not so straight, tilting toward an editorial.

Blog talk is here, thanks to the good folks at Google™.

Adam Thierer gives props to the big-government, pro-regulation team at TLF:

It is a difficult thing for me to say, but I am man enough to do it: I must congratulate our intellectual opponents on their amazing victory in the battle to impose Net neutrality regulations on the Internet. With the Wall Street Journal reporting last night that the FCC is on the verge of acting against Comcast based on the agency’s amorphous Net neutrality principles, it is now clear that the folks at the Free Press, Public Knowledge, and the many other advocates of comprehensive Internet regulation have succeeded in convincing a Republican-led FCC to get on the books what is, in essence, the nation’s first Net Neutrality law. It is quite an accomplishment when you think about it.

Indeed.

Bob Fernandez covers the story in the Philly Inquirer, Comcast’s hometown paper:

Consumer and advocacy groups say action by Martin is necessary to preserve First Amendment protections on the Internet and to protect broadband consumers. Free Press, an advocacy group opposed to media consolidation, filed the complaint with the FCC. It was disappointed that Martin wouldn’t fine Comcast to send a message to the industry.

But others warn that Martin’s decision, announced at a Washington news conference, advances the FCC’s powers on the Internet without new laws.

“This is the foot in the door for big government to regulate the Internet,” said Adam Thierer, a senior fellow at the Progress and Freedom Foundation, a free-market think tank in Washington. “This is the beginning of a serious regulatory regime. For the first time, the FCC is making law around net neutrality.”

Net neutrality refers to the concept that Internet operators should treat all data traffic the same and not interfere with it – a subject hotly debated in recent years on Capitol Hill. Companies say they sometimes interfere with Internet traffic for practical reasons, like easing data jams.

Nobody ever mentions that unmanaged traffic causes more delay for users than managed traffic.

Comcast sets the record straight

In the course of pursuing its grievance with the FCC over broadband traffic management, Free Press and its allies have developed annoying tendencies to overstate the qualifications of their “experts” and to make wild technical assertions unsupported by empirical data. They pass Robb Topolski off as a “network engineer” when he was, while employed, a low-level tester of PC software. David Reed, who was in the design loop for TCP/IP in the 1970s but has gone in other directions since then, is represented as having worked continuously for 35 years on the advancement of Internet protocols. Free Press now employs Topolski and increasingly relies on him for analysis.

Comcast has finally said “enough is enough” and filed a document with the FCC addressing the inaccuracy of Free Press and Topolski’s claims about its management systems:

• First, Comcast’s High-Speed Internet customers can and do access any content, run any application, and use any service that they wish.

• Second, our network management practices are similar to those deployed by other Internet service providers in the United States and around the world, and are reasonably designed to enable, not hinder, the high-quality user experience that the Internet Policy Statement contemplates and that competitive marketplace considerations require.

• Third, although Free Press and its consultants believe they know and understand Comcast’s network and how it manages that network, they do not, and they have made no legitimate effort to gain such an understanding (as others have recently done).

• Fourth, Comcast’s network management practices are not discriminatory and are entirely agnostic as to the content being transmitted, where it is being sent from or to, or the identity of the sender or receiver.

• Finally, Comcast’s customer service agreements and policies have long disclosed that broadband capacity is not unlimited, and that the network is managed for the benefit of all customers. Comcast’s disclosures have always been comparable to — and are now far more detailed than — almost any other Internet service provider’s disclosures.

The bottom line is this: the Internet is a web of shared communication links provisioned according to statistical predictions about traffic. Any application or user that uses more bandwidth than the typical profile takes it away from others. The owner/manager of every link has a responsibility to assure fair access, and allowing applications with enormous bandwidth appetites to gobble up an unfair share of communication opportunities is a failure to own up to this responsibility.
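The provisioning point is simple arithmetic; here’s a sketch with invented numbers:

```python
# Invented example: 100 subscribers provisioned at 16 Mb/s share a 200 Mb/s
# uplink, an 8:1 overbooking ratio that works while average demand is ~1 Mb/s.
users, peak_mbps, link_mbps = 100, 16, 200
print(f"overbooking ratio: {users * peak_mbps / link_mbps:.0f}:1")
print(f"equal-share floor: {link_mbps / users:.1f} Mb/s per user")

# Ten always-on uploaders at full rate claim 160 Mb/s of the 200,
# leaving 40 Mb/s for the other ninety subscribers to share.
heavy = 10
remainder = link_mbps - heavy * peak_mbps
print(f"{heavy} heavy users leave {remainder} Mb/s for the other {users - heavy}")
```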

Comcast has been charged with degrading an innovative new application, but the facts don’t support the charge. Actually, the innovative new application – P2P as presently implemented – has the effect of degrading traditional applications. Hence, P2P has to be managed.

So the only interesting question is how. There are several members in the set of reasonable means of managing P2P traffic. The burden is on the FCC and the petitioners to show that the Sandvine system isn’t one of them, and they haven’t seriously attempted to do so.

Hiding behind wild claims and overblown rhetoric doesn’t help consumers, doesn’t protect free speech, and doesn’t improve the nature of broadband networking.

Sober analysis does, and that’s what we try to do here. Kudos to Comcast for standing up to these bullies.