BBC Breaks the Internet

Here’s a sign of the impending Exaflood, from the UK where the BBC’s iPlayer is breaking the Internet’s congestion controls:

The success of the BBC’s iPlayer is putting the internet under severe strain and threatening to bring the network to a halt, internet service providers claimed yesterday.

They want the corporation to share the cost of upgrading the network — estimated at £831 million — to cope with the increased workload. Viewers are now watching more than one million BBC programmes online each week.

The BBC said yesterday that its iPlayer service, an archive of programmes shown over the previous seven days, was accounting for between 3 and 5 per cent of all internet traffic in Britain, with the first episode of The Apprentice watched more than 100,000 times via a computer.

iPlayer is P2P, which is why the traffic it generates doesn’t bother the BBC. And of course it has an impact on all the regional and core links in the UK, which are not provisioned with tonnes of idle capacity just in case a New Big Thing comes along that nobody anticipated. The impact of the BBC’s P2P is comparable to half of America’s major networks offering P2P delivery of all their recent programming at the same time, for free. It’s considerable. But more is on the way, as it’s not unreasonable to imagine the day coming when IPTV is the primary delivery vehicle for video. How much capacity will that take?

Let’s do some rough math on the bandwidth needed to redirect 60 hours of TV viewing a week to the Internet: for SDTV, 2.5 Mb/s * 60 hours is 67.5 GB/week or 270 Gigabytes per month. For HDTV, we can multiply that by 4, to roughly a Terabyte per month. Consumers today probably use what, a Gigabyte per month?
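That arithmetic is easy to check. Here’s a quick Python version of it, using the 2.5 Mb/s SD rate assumed above, the rough 4x HD multiplier, and a four-week month:

```python
# Back-of-the-envelope check on the figures above (decimal units).
SD_MBPS = 2.5            # assumed SDTV stream rate
HD_MBPS = 4 * SD_MBPS    # rough 4x HD multiplier
HOURS_PER_WEEK = 60

def gb_per_week(mbps, hours):
    """Stream rate in Mb/s and weekly viewing hours -> gigabytes per week."""
    return mbps * hours * 3600 / 8 / 1000  # Mb -> MB -> GB

sd = gb_per_week(SD_MBPS, HOURS_PER_WEEK)
hd = gb_per_week(HD_MBPS, HOURS_PER_WEEK)
print(f"SD: {sd:.1f} GB/week, {sd * 4:.0f} GB/month")          # 67.5, 270
print(f"HD: {hd:.1f} GB/week, {hd * 4 / 1000:.2f} TB/month")   # 270, 1.08
```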

To put it another way, let’s say America’s broadband customers all watched TV over their Internet connections at the same time, with a 50-50 split between SD and HD. The typical cable modem installation would be able to support 5 users instead of the 150 that’s common today, so a thirtyfold node split would be needed. With DOCSIS 3.0, it could support 20, so approximately an eightfold node split would suffice.
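The node-split figures come out of the same kind of division. The channel capacities below are my assumptions (roughly 38 Mb/s usable per downstream channel, four bonded channels for DOCSIS 3.0), so the results land near the figures above rather than exactly on them:

```python
# Rough node-split math; channel capacities are assumptions, not
# anyone's published engineering figures.
PER_USER_MBPS = 0.5 * 2.5 + 0.5 * 10.0   # 50-50 SD/HD mix = 6.25 Mb/s
NODE_SIZE = 150                           # homes per node, common today

for label, capacity in [("DOCSIS 2.0", 38.0), ("DOCSIS 3.0", 4 * 38.0)]:
    viewers = int(capacity // PER_USER_MBPS)     # simultaneous viewers
    print(f"{label}: ~{viewers} viewers per node, "
          f"~{NODE_SIZE / viewers:.0f}x node split needed")
```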

Verizon FiOS would be OK with 20 Mb/s per house at the first and second hops, but would need fatter pipes upstream and at the Tier 1 interface. Verizon owns the Tier 1 network, so they could expand that interface economically, while cable would have to pay Level 3 and their other suppliers.

How much capacity increase will we need in the core? It depends on caching. Without caching, we’re probably looking at a 1000-fold increase; with caching, probably no more than 100-fold, and probably much less. As disks continue to decline in price relative to wholesale broadband connections, this is where the market action will be.
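The effect of caching is easy to see in miniature: only cache misses traverse the core, so the hit rate is the whole story. The hit rates below are illustrative assumptions:

```python
# Only cache misses cross the core, so core load scales with (1 - hit rate).
IPTV_DEMAND = 1000.0   # the no-cache case: ~1000x today's core load

for hit_rate in (0.0, 0.90, 0.99):
    core_load = IPTV_DEMAND * (1 - hit_rate)
    print(f"hit rate {hit_rate:.0%}: core carries ~{core_load:.0f}x today's load")
```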

So we’re looking at a major sales opportunity for CMTS and cache companies, and more P2P from the TV networks, because they clearly don’t want to pay for their end of the pipes.


The MAP Strikes Back

I called out Harold Feld of MAP, one of the FCC petitioners that started the FCC’s broadband circus, for his failure to respond to the BitTorrent/Comcast deal in my latest article in The Register, and he’s pretty upset about it:

There must be something in the air that has turned Comcast from a fighter to a lover. Apparently, Comcast and BitTorrent have kissed and made up, Brian Roberts has stood barefoot in the snow beneath Kevin Martin’s window at Canossa, and all is now supposed to be well in the world. Nothing to see here, move along, these aren’t the droids we’re looking for, and once again the magic of the market solves everything.

I would have written earlier, but I was having a flashback to when AOL Time Warner committed to creating an interoperable instant messenger. Then I was flashing on when AT&T Broadband and Earthlink “solved” the original open access problem by negotiating a contract and thus proving that “the market” would guarantee that independent ISPs would be able to resell cable modem service just like they were reselling DSL. Then I woke up vomiting. I always have a bad reaction to whatever folks smoke to conclude “the free market solves everything” especially when (a) this was the result of a regulatory two-by-four applied directly to Comcast’s scalp, repeatedly; and (b) nothing actually happened except for a real and sincere commitment to yack about stuff — at least until the regulators go away. Still, like Lucy and Charlie Brown, there are some folks for whom this just never gets old.

So while I’m glad to see Comcast forced to play the penitent, confess wrongdoing, and appear to give a full surrender, and while I generally like the idea of industry folks and ISPs getting together to actually do positive stuff on internet architecture issues, I think wild celebrations from the anti-regulators and the expectation that we can declare “Mission Accomplished” and go home is a shade premature. Indeed, the only people who believe this announcement actually solves anything are — by and large — those who didn’t believe there was a problem in the first place. I believe the technical term for such folks is “useful idiots.”

Harold has clearly been drinking the Vuze Kool-Aid, probably from the same cup as FCC chairman Kevin Martin. Chairman Martin may not mind Vuze exploiting the FCC petition process to score free public relations points, but I think it’s an abuse. Here’s my response, cross-posted from Harold’s comment section:

Given that you’ve disavowed any connection between your blog and Media Access Project, it’s interesting that this particular petitioner is still officially silent on the Comcast/BitTorrent deal. Was it just too stunning for MAP comment? I put the question here because I figure their Senior VP would have some insight.

But anyhow, as to what you do say: one big point jumps out from the way you use the term “degrade” when talking about Internet access and the Internet in general. You don’t seem to appreciate that the Internet is a series of shared communication channels that rely on statistical multiplexing. In a system of this type (which is very different from the telecom networks the FCC is used to regulating), every packet “degrades” every other packet.

We don’t have dedicated end-to-end paths through the network so we all share with each other. So in the first analysis we all degrade each other, and the ISPs and NSPs are stuck with the chore of deciding whose traffic goes through immediately, whose waits at any given millisecond, and whose is discarded. And Internet switches drop lots and lots of packets as a part of routine operation. This may upset you (as it apparently upsets Kevin Martin) but it is the way the system was designed. We all hammer the switches as hard as we can and they take what they can and drop the rest. Sorry, the Internet is not a telephone.

So there’s no such thing as an ISP that doesn’t “degrade” traffic in the ways that you and Kevin Martin allege are a unique property of Comcast’s current management system.

And while I like the method Comcast CTO Tony Werner described to me as in development better than the one that’s currently in production, I don’t consider either to be an illegitimate approach to traffic management within the real-world constraints of businesses that have to return profit to their shareholders. The Sandvine system has the unfortunate side-effect of making original seeds slow to take root, but I don’t believe that’s intentional.

I also don’t buy the fiction that Vuze is a true competitor to Comcast and Verizon, and therefore don’t see an anti-competitive motive behind Comcast’s actions intended to affect Vuze. Given that Vuze has a business that relies on other people’s software (open source BitTorrent) moving other people’s content (Hollywood movies and TV) over still other people’s bandwidth (customers of Comcast, AT&T, Verizon, et al.), their problems are much larger than one method of traffic management versus another. Given that Vuze purchases just enough bandwidth to start original seeds, they actually aren’t affected by Comcast’s treatment of robo-seeders in any significant way.

Apparently you have a long-standing beef with the Comcast TV service specifically and a mistrust of capitalism generally. That’s fine, but it’s not immediately relevant to the question of what does and doesn’t constitute rational traffic management on the Internet and its access networks. And frankly, it’s the invocation of animus of that tangential sort that makes me question whether you actually have a framework for deciding questions of this sort.

Comcast has correctly pointed out that some commissioners have vowed to do rule-making on the fly, which won’t stand up to legal scrutiny because it grossly exceeds the Commission’s authority and bypasses formal rule-making. If such an action is taken, it will be struck down by the courts, to the embarrassment of the commissioners’ eventual private sector employers.

And finally, Om Malik is mistaken about the relative market shares of BitTorrent, Inc. and Vuze. BitTorrent, Inc. owns uTorrent, the most popular BitTorrent client, while Vuze simply distributes a client built on the open source Azureus code, an independent implementation of the protocol that BitTorrent founder Bram Cohen created.

Vuze filed their FCC complaint as a publicity stunt. And while it’s understandable that an under-funded startup would resort to this means of free publicity, only the truly credulous believe they have the standing they assert; it’s more like a case of delusions of grandeur.
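For anyone who doubts that drops are routine on a statistically multiplexed link, here’s a toy simulation: five senders offering more traffic than a shared link can drain, through a finite buffer with tail drop. All the numbers are invented for illustration:

```python
import random

random.seed(1)
QUEUE_LIMIT = 20   # packets the switch can buffer
LINK_RATE = 4      # packets the shared link drains per tick
SENDERS = 5        # each offers 0, 1, or 2 packets per tick (avg 1)

queue = delivered = dropped = 0
for tick in range(10_000):
    # Arrivals: everyone hammers the switch as hard as they like.
    for _ in range(SENDERS):
        for _ in range(random.randint(0, 2)):
            if queue < QUEUE_LIMIT:
                queue += 1
            else:
                dropped += 1          # tail drop: routine, not malice
    # Departures: the link forwards what it can this tick.
    sent = min(queue, LINK_RATE)
    queue -= sent
    delivered += sent

loss = dropped / (delivered + dropped)
print(f"delivered {delivered}, dropped {dropped} ({loss:.1%} loss)")
```

Offered load runs about 25 per cent over the link rate, so roughly a fifth of the packets hit the floor. Nobody is being wiretapped; the network is just full.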

Harold tries a little misdirection, but quickly gives up. We can have a technical solution to the P2P traffic glut, or we can have a government mandate, take your pick.

New York Wants Congestion Pricing

If the New York City Council can approve this plan, why can’t we have the same thing on our broadband networks?

Updated, 9:24 p.m. | Shortly before 7:30 p.m., the New York City Council approved a measure urging state lawmakers to vote in favor of Mayor Michael R. Bloomberg’s congestion pricing proposal. The vote was 30 to 20, with one member absent. (See the full vote tally below.) Mayor Bloomberg immediately scheduled a news conference for later this evening with the Council speaker, Christine C. Quinn, to celebrate the passing of his plan, which has left city and state lawmakers sharply divided.

It seems like common sense, more or less, that drivers would have to pay for the privilege of clogging up the downtown areas. Singapore was doing this in the 1980s when I lived there. And yes, I know that the little island nation isn’t a model of free speech or human rights, but it’s in nobody’s interest to have so many cars downtown that none of them actually moves.


Comcast’s and BitTorrent’s Rodney King Moment

Check my latest piece in The Register on the Comcast/BitTorrent detente.

Analysis: So Comcast will stop shaping peer-to-peer seeding sessions with spoofed TCP RST commands. I caught up with the cable giant’s CTO Tony Werner on Thursday for more details.

The move should delight the company’s critics. These innocent control packets have been compared to identity theft, to collusion with dictatorial regimes, and outright hacking of customer systems. BitTorrent may not be an IETF-sanctioned RFC protocol in its own right, but Comcast has decided to bind itself to traffic management practices that don’t produce obviously non-standard packets. Instead, they’re going to install additional equipment that will do real-time traffic accounting by user, feeding back information to the cable infrastructure that will equitably distribute opportunities to offer upstream traffic. In essence, this system makes the cable standard DOCSIS much more sophisticated; now it will behave just like DSL, only faster.

In DSL systems, each customer typically has a dedicated physical cable to a DSLAM, a type of concentrator. The DSLAM aggregates customer data onto a common upstream link according to a fairness algorithm that picks frames from buffers at the heads of these dedicated links in a manner that minimizes bandwidth hogging. In the new Comcast system, the fairness algorithm is deployed in the CMTS (the cable equivalent of a DSLAM), and acts on buffers in each customer’s cable modem. The CMTS is able to do this because DOCSIS data transfers from customer to network are preceded by brief requests for bandwidth. Armed with intelligence about each user’s recent traffic history and the state of the network generally, the newly-intelligent CMTS will schedule bandwidth by customer according to a fairness algorithm of its own, with the same range of choices that exist for DSLAMs.

This was fun to write, and the reaction has so far been very positive, which is unusual among the crowd that comments at The Reg.
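Here’s a minimal Python sketch of the request/grant idea described in the excerpt. Granting the pending request from the lightest recent user is one plausible fairness policy, not Comcast’s actual algorithm; the history window, tie-breaking, and all the names here are mine:

```python
from collections import deque

class Cmts:
    """Toy CMTS: modems request upstream slots; grants favor light users."""
    def __init__(self, modems):
        self.recent = {m: 0 for m in modems}   # bytes granted recently
        self.requests = deque()                # pending (modem, bytes)

    def request(self, modem, nbytes):
        self.requests.append((modem, nbytes))

    def grant_next(self):
        """Grant the pending request from the lightest recent user."""
        if not self.requests:
            return None
        best = min(self.requests, key=lambda r: self.recent[r[0]])
        self.requests.remove(best)
        modem, nbytes = best
        self.recent[modem] += nbytes
        return modem

cmts = Cmts(["hog", "light"])
for _ in range(8):                 # a seeder keeps the request queue full
    cmts.request("hog", 1500)
cmts.request("light", 1500)        # one request from a quiet neighbor
print([cmts.grant_next() for _ in range(3)])   # ['hog', 'light', 'hog']
```

The quiet neighbor’s request is granted second despite arriving ninth, which is the whole point: fairness is enforced per user, without inspecting protocols or payloads.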

New and Improved Traffic Shaping

Comcast’s CTO Tony Werner was kind enough to give me a few minutes today on the changes afoot in the cable giant’s Internet access network, and I like what I learned. I’ll do a longer post on this later with some diagrams, but for now I’d like to sketch out the high points. This is just the Comcast side of the agreement; BitTorrent is also committed to making some changes on their end, and I don’t have the details on those yet. BitTorrent will be making a presentation at the P4P Forum on its commitments.

Here’s what Comcast is going to do, pending how well it shakes out in the lab:

* Stop injecting TCP RSTs. This technique has been maligned far more than it deserves; it has a long history, and middlebox devices (Sandvine’s being just one) have been doing it for at least a decade, drawing the ire of the IETF all along. It’s not technically necessary in a DOCSIS network, so they’re going to stop doing it. This should make the “Comcast is Impersonating You and Stealing Your Credit Card Numbers!!!” crowd happy.

* Start using CMTS scheduling to allocate bandwidth fairly among all users of a first-hop segment when the network is heavily loaded. The DOCSIS protocol permits packet scheduling, since every user has to request bandwidth for every upstream transfer, so all the CMTS has to do is implement fair scheduling to prevent bandwidth hogs from getting more than a fair share. There may be some limits to the delay the scheduler can impose (my conjecture, not Tony’s), and that’s why field testing is important.

* Investigate longer-term solutions that will allow users to control how different traffic streams are handled. There are a number of IETF standards that relate to this problem, and their evaluation will be long-term work items for the industry forums.

CMTS scheduling puts Comcast on the same footing as the DSL providers. While Comcast customers share a first hop and DSL customers generally don’t (they do when repeaters are used), all of them share a second hop, access to which is mediated by a fair queuing discipline. So Comcast is simply implementing its queuing discipline on the first hop, which makes good sense for its technology. There’s no need to look at protocols and headers; it’s all just traffic, and transmission opportunities can be managed with per-user fairness.

So the bottom line is this: the IETF protocols failed to deliver a scheme for per-user fairness, so Comcast will implement one on their first hop network. That’s what we call progress, and the only question is why it took them so long to do it.

Comcast & BitTorrent Announcement

You may have noticed that BitTorrent and Comcast have agreed to work together to improve co-existence. The Wall St. Journal is reporting the significant detail:

Rather than slow traffic by certain types of applications — such as file-sharing software or companies like BitTorrent — Comcast will slow traffic for those users who consume the most bandwidth, said Comcast’s Mr. Werner. Comcast hopes to be able to switch to a new policy based on this model as soon as the end of the year, he added. The company’s push to add additional data capacity to its network also will play a role, he said. Comcast will start with lab tests to determine if the model is feasible.

No details are out on the changes to be made on the BitTorrent side so far.

This is a huge announcement, as it cuts off FCC chairman Kevin Martin at the knees, and does so in advance of his scheduled lynching in Palo Alto. If Comcast and BitTorrent can work together to resolve their problems with technical solutions (as I advocated at the FCC hearing at Harvard), there’s no predicate for new regulations or enforcement actions.

It’s a setback for the public interest lobby, and they’re taking it pretty hard. Public Knowledge in particular still calls for heavy regulation and increased FCC involvement in network management practices. It’s almost laughable:

Even in the best-case scenario for a Comcast/BitTorrent partnership, Comcast is not the only one engaging in this sort of behavior. The FCC must make it clear that these types of practices are unlawful and against public policy, and that they will not be tolerated, now or in the future.

Free Press is also quite upset:

This agreement does nothing to protect the many other peer-to-peer companies from blocking, nor does it protect future innovative applications and services. Finally, it does nothing to prevent other phone and cable companies from blocking. Innovators should not have to negotiate side deals with phone and cable companies to operate without discrimination. The Internet has always been a level playing field, and we need to keep it that way.

You’d think BitTorrent had been bought off by Comcast, and no longer deserves to be adored as an “innovative new application.”

BitTorrent, Inc. is willing to modify their code to make it more manageable, but the pirates who use the open source BitTorrent clients and Vuze won’t be, so the cat-and-mouse game between legal and illegal uses of P2P will continue. BitTorrent, Inc. won’t be affected, however, and that’s progress.

The agreement shows once again that technical solutions to technical problems are better than political ones. It’s unfortunate for the public interest lobby that this issue is no longer a cause for grandstanding, but they’ll recover. And in the meantime, the 60% of American broadband consumers who use cable to connect to the Internet will have a faster pipe that they’ll be able to use without being hogged out by their neighbors.

It’s a win for everybody.

Go to CNet for Declan McCullagh’s detailed interview with Comcast’s Joe Waz.

The FCC commissioners have all issued statements, the best of which is this one from Commissioner McDowell:

The private sector is the best forum to resolve such disputes. Today’s announcement obviates the need for any further government intrusion into this matter.

Amen.

UPDATE: See this post for some detail on what’s afoot.

Nagle’s Answer

Slashdot picked up George Ou’s latest piece on the problems with TCP and Peer-to-Peer congestion that I’ve been writing about lo these many months, attracting one interesting comment in a sea of chaff:

As the one who devised much of this congestion control strategy (see my RFC 896 and RFC 970, years before Van Jacobson), I suppose I should say something.

The way this was supposed to work is that TCP needs to be well-behaved because it is to the advantage of the endpoint to be well-behaved. What makes this work is enforcement of fair queuing at the first router entering the network. Fair queuing balances load by IP address, not TCP connection, and “weighted fair queueing” allows quality of service controls to be imposed at the entry router.

The problem now is that the DOCSIS approach to cable modems, at least in its earlier versions, doesn’t impose fair queuing at entry to the network from the subscriber side. So congestion occurs further upstream, near the cable headend, in the “middle” of the network. By then, there are too many flows through the routers to do anything intelligent on a per-flow basis.

We still don’t know how to handle congestion in the middle of an IP network. The best we have is “random early drop”, but that’s a hack. The whole Internet depends on stopping congestion near the entry point of the network. The cable guys didn’t get this right in the upstream direction, and now they’re hurting.

I’d argue for weighted fair queuing and QOS in the cable box. Try hard to push the congestion control out to the first router. DOCSIS 3 is a step in the right direction, if configured properly. But DOCSIS 3 is a huge collection of tuning parameters in search of a policy, and is likely to be grossly misconfigured.

The trick with quality of service is to offer either high-bandwidth or low latency service, but not both together. If you request low latency, your packets go into a per-IP queue with a high priority but a low queue length. Send too much and you lose packets. Send a little, and they get through fast. If you request high bandwidth, you get lower priority but a longer queue length, so you can fill up the pipe and wait for an ACK.

But I have no idea what to do about streaming video on demand, other than heavy buffering. Multicast works for broadcast (non-on-demand) video, but other than for sports fans who want to watch in real time, it doesn’t help much. (I’ve previously suggested, sort of as a joke, that when a stream runs low on buffered content, the player should insert a pre-stored commercial while allowing the stream to catch up. Someone will probably try that.)

John Nagle

I actually suggested the technique John proposes directly to Comcast engineering: drop packets before the first hop. They didn’t appear to have considered it before, but it actually is the answer. Unfortunately, the cable modem is not an IP device, so it doesn’t currently understand when and how to do this; that becomes a piece of housekeeping for the DOCSIS 3.0 upgrade.
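Here’s roughly what John’s per-IP weighted fair queuing looks like in miniature: a deficit round robin pass over per-source queues, which is one common way to implement it. The queues, weights, and quantum are all illustrative:

```python
from collections import defaultdict, deque

QUANTUM = 1500  # bytes of credit added per round, per unit of weight

class WfqLink:
    """Toy deficit-round-robin scheduler keyed by source IP."""
    def __init__(self):
        self.queues = defaultdict(deque)        # src -> packet sizes (bytes)
        self.deficit = defaultdict(int)
        self.weight = defaultdict(lambda: 1.0)  # per-IP QoS weight

    def enqueue(self, src, size):
        self.queues[src].append(size)

    def do_round(self):
        """Each backlogged source may send up to weight * QUANTUM bytes."""
        sent = []
        for src in list(self.queues):
            q = self.queues[src]
            if not q:
                continue
            self.deficit[src] += int(self.weight[src] * QUANTUM)
            while q and q[0] <= self.deficit[src]:
                self.deficit[src] -= q[0]
                sent.append((src, q.popleft()))
            if not q:
                self.deficit[src] = 0   # idle sources don't bank credit
        return sent

link = WfqLink()
link.weight["voip-user"] = 2.0       # a low-latency tier gets extra weight
for _ in range(10):
    link.enqueue("seeder", 1500)     # bulk P2P upload
link.enqueue("voip-user", 200)
link.enqueue("web-user", 1000)
print(link.do_round())   # seeder moves one packet; the others clear entirely
```

In one round the seeder moves a single packet while the VoIP and web users clear their queues. The hog still gets service; it just can’t crowd everyone else out, and no protocol inspection is required.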

Fine Kettle of Links

Some interesting reading for you:

George Ou tells the story of the unfairness of TCP and offers some suggestions.

Adam Thierer takes on Jonathan Zittrain’s paean to programmability, The Future of the Internet: And How to Stop It.

And finally, Harold Feld explains the 700 MHz auction and what it does and doesn’t mean. Harold is a communist, but he’s a much smarter communist than his comrade Susan Crawford, who tries to read the same tea leaves and doesn’t say much.

FCC Hearing in Lessig Territory

Unsatisfied with the outcome of the FCC hearing on Comcast held in the maw of the Berkman Center, Kevin Martin turns to Larry Lessig for help: FCC Announces Stanford Hearing.

When you’re being investigated by Congress, anything to keep the eyes off the ball is helpful.

Seriously, this is happening because the Commission lost the records of the Cambridge hearing; no, that’s not serious, but it might be.

Japan to Ban P2P Piracy

Net Neutrality folks like to tout Japan as the model of a fine and healthy Internet access ecosystem, despite the VoIP blocking. They’re going to have a major fit when they learn P2P piracy is about to be banned in Japan:

The nation’s four Internet provider organizations have agreed to forcibly cut the Internet connection of users found to repeatedly use Winny and other file-sharing programs to illegally copy gaming software and music, it was learned Friday.

The move aims to deal with the rise in illegal copying of music, gaming software and images that has resulted in huge infringements on the rights of copyright holders.

Resorting to cutting off the Internet connection of copyright violators has been considered before but never resorted to over fears the practice might involve violations of privacy rights and the freedom of use of telecommunications.

The Internet provider organizations have, however, judged it possible to disconnect specific users from the Internet or cancel provider contracts with them if they are identified as particularly flagrant transgressors in cooperation with copyright-related organizations, according to sources.

How can they do that, you ask? Well, it’s pretty easy. We can’t ban piracy in the US because critics can say “just upgrade the pipes like they’ve done in Japan and it’s not a problem.” That dodge obviously doesn’t fly over there.

Japan has 100 Mb/s connections to the home that are over 95% occupied at the busiest times of the day, a completely unacceptable situation. So they’re taking sensible action in the absence of a technical solution to bandwidth-hogging.

They’re not stupid, you see.

UPDATE: Count Sweden in as well:

STOCKHOLM, Sweden (AP) — Swedish courts will soon be able to force the country’s Internet providers to produce information on suspected file-sharers in a move to crack down on piracy, the culture and justice ministers said Friday.

File-sharing can be traced by tracking the IP addresses of the computers that download or distribute a file.

…along with France and the UK. I’m sensing a trend here, where Kevin Martin is the only opposition.

UPDATE 2: George Ou comments on the story.