BBC Breaks the Internet

Here’s a sign of the impending Exaflood, from the UK where the BBC’s iPlayer is breaking the Internet’s congestion controls:

The success of the BBC’s iPlayer is putting the internet under severe strain and threatening to bring the network to a halt, internet service providers claimed yesterday.

They want the corporation to share the cost of upgrading the network — estimated at £831 million — to cope with the increased workload. Viewers are now watching more than one million BBC programmes online each week.

The BBC said yesterday that its iPlayer service, an archive of programmes shown over the previous seven days, was accounting for between 3 and 5 per cent of all internet traffic in Britain, with the first episode of The Apprentice watched more than 100,000 times via a computer.

iPlayer is P2P, which is why the traffic it generates doesn’t bother BBC. And of course it has an impact on all the regional and core links in the UK, which are not provisioned with tonnes of idle capacity just in case a New Big Thing comes along that nobody anticipated. The impact of BBC’s P2P is comparable to half of America’s major networks offering P2P at the same time for all recent programming, for free. It’s considerable. But more is on the way, as it’s not unreasonable to imagine the day coming when IPTV is the primary delivery vehicle for video. How much capacity will that take?

Let’s do some rough math on the bandwidth needed to redirect 60 hours of TV viewing a week to the Internet: for SDTV, 2.5 Mb/s * 60 hours is 67.5 GB/week or 270 Gigabytes per month. For HDTV, we can multiply that by 4, to roughly a Terabyte per month. Consumers today probably use what, a Gigabyte per month?
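The arithmetic above can be checked in a few lines. The 2.5 Mb/s SD rate, 60-hour viewing week, and 4× HD multiplier are the assumptions from the text:

```python
# Rough monthly traffic for redirecting TV viewing to the Internet.
# Assumptions from the text: 2.5 Mb/s for SDTV, 60 viewing hours/week,
# 4 weeks/month, HDTV at roughly 4x the SD bitrate.
SD_MBPS = 2.5
HOURS_PER_WEEK = 60
WEEKS_PER_MONTH = 4

sd_gb_week = SD_MBPS / 8 * HOURS_PER_WEEK * 3600 / 1000  # Mb/s -> MB/s -> GB/week
sd_gb_month = sd_gb_week * WEEKS_PER_MONTH
hd_gb_month = sd_gb_month * 4

print(f"SD: {sd_gb_week} GB/week, {sd_gb_month} GB/month")   # 67.5 GB/week, 270 GB/month
print(f"HD: about {hd_gb_month / 1000} TB/month")            # about 1.08 TB/month
```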

To put it another way, suppose America’s broadband customers all watched TV over their Internet connections at the same time, split 50-50 between SD and HD. The typical cable modem installation would be able to support 5 users instead of the 150 that’s common today, so each node would have to be split 30 ways. With DOCSIS 3.0, a node could support 20 users, so roughly an 8-way split would be needed.
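The node-split figures fall out of the same kind of division. The 150-homes-per-node figure and the 5 and 20 simultaneous-viewer capacities are the assumptions from the text:

```python
# Node-split arithmetic: if a cable node that serves 150 homes today can
# only carry N simultaneous TV-over-IP viewers, each node must be split
# into 150 / N smaller nodes. Figures are the text's assumptions.
HOMES_PER_NODE = 150

def splits_needed(viewers_supported):
    return HOMES_PER_NODE / viewers_supported

print(splits_needed(5))    # current DOCSIS: 30-way node split
print(splits_needed(20))   # DOCSIS 3.0: 7.5, roughly the 8-way split above
```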

Verizon FiOS would be OK with 20 Mb/s per house at the first and second hops, but would need fatter pipes upstream and at the Tier 1 interface. They own the Tier 1 network, so they could expand that interface economically, while cable would have to pay Level 3 and their other suppliers.

How much capacity increase will we need in the core? It depends on caching. Without caching, we’re probably looking at a 1000-fold increase; with caching, no more than 100-fold, and probably much less. As disks continue to decline in price relative to wholesale broadband connections, this is where the market action will be.

So we’re looking at a major sales opportunity for CMTS and cache companies, and P2P for the TV networks because they clearly don’t want to pay for their end of the pipes.


Google falling

The inability to retain key employees is the first clear sign of a company in decline, so this news has to be disturbing to Google shareholders:

Facebook hires away Google’s top chef

Is it “poaching” when a company steals a rival’s chef? At Google, executive chef Josef Desimone scrambled cruelty-free eggs by the truckload. Now Facebook has hired him to replace steam-heated trays of takeout with the kind of free food Googlers are used to. For engineers, Facebook is the new dreamland, and a company cafeteria is the kind of perk they’ve come to expect.

The end is near for the search monopoly.

The MAP Strikes Back

I called out Harold Feld of MAP, one of the FCC petitioners that started the FCC’s broadband circus, for his failure to respond to the BitTorrent/Comcast deal in my latest article in The Register, and he’s pretty upset about it:

There must be something in the air that has turned Comcast from a fighter to a lover. Apparently, Comcast and BitTorrent have kissed and made up, Brian Roberts has stood barefoot in the snow beneath Kevin Martin’s window at Canossa, and all is now supposed to be well in the world. Nothing to see here, move along, these aren’t the droids we’re looking for, and once again the magic of the market solves everything.

I would have written earlier, but I was having a flashback to when AOL Time Warner committed to creating an interoperable instant messenger. Then I was flashing on when AT&T Broadband and Earthlink “solved” the original open access problem by negotiating a contract and thus proving that “the market” would guarantee that independent ISPs would be able to resell cable modem service just like they were reselling DSL. Then I woke up vomiting. I always have a bad reaction to whatever folks smoke to conclude “the free market solves everything” especially when (a) this was the result of a regulatory two-by-four applied directly to Comcast’s scalp, repeatedly; and (b) nothing actually happened except for a real and sincere commitment to yack about stuff — at least until the regulators go away. Still, like Lucy and Charlie Brown, there are some folks for whom this just never gets old.

So while I’m glad to see Comcast forced to play the penitent, confess wrongdoing, and appear to give a full surrender, and while I generally like the idea of industry folks and ISPs getting together to actually do positive stuff on internet architecture issues, I think wild celebrations from the anti-regulators and the expectation that we can declare “Mission Accomplished” and go home is a shade premature. Indeed, the only people who believe this announcement actually solves anything are — by and large — those who didn’t believe there was a problem in the first place. I believe the technical term for such folks is “useful idiots.”

Harold has clearly been drinking the Vuze Kool-Aid, probably from the same cup as FCC chairman Kevin Martin. Chairman Martin may not mind Vuze exploiting the FCC petition process to score free public relations points, but I think it’s an abuse. Here’s my response, cross-posted from Harold’s comment section:

Given that you’ve disavowed any connection between your blog and Media Access Project, it’s interesting that this particular petitioner is still officially silent on the Comcast/BitTorrent deal. Was it just too stunning for MAP comment? I put the question here because I figure their Senior VP would have some insight.

But anyhow and to what you do say, there’s one big point that jumps out from the way you use the term “degrade” when talking about Internet access and the Internet in general. You don’t seem to appreciate that the Internet is a series of shared communication channels that rely on statistical multiplexing. In a system of this type (which is very different from the telecom networks the FCC is used to regulating) every packet “degrades” every other packet.

We don’t have dedicated end-to-end paths through the network so we all share with each other. So in the first analysis we all degrade each other, and the ISPs and NSPs are stuck with the chore of deciding whose traffic goes through immediately, whose waits at any given millisecond, and whose is discarded. And Internet switches drop lots and lots of packets as a part of routine operation. This may upset you (as it apparently upsets Kevin Martin) but it is the way the system was designed. We all hammer the switches as hard as we can and they take what they can and drop the rest. Sorry, the Internet is not a telephone.
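The shared-channel point can be illustrated with a toy model. This is a sketch with entirely hypothetical numbers, not any real switch’s behavior: a link forwards a fixed number of packets per tick, buffers a few more, and tail-drops the rest, no matter whose packets they are.

```python
from collections import deque

# Toy statistical multiplexer: the link forwards at most CAPACITY packets
# per tick, buffers up to QUEUE_LIMIT, and tail-drops the overflow.
# Every sender's traffic "degrades" every other sender's, because they
# all contend for the same queue. All numbers are hypothetical.
CAPACITY = 3
QUEUE_LIMIT = 3

queue = deque()
dropped = 0
arrivals = [["A", "B", "C", "D", "E"], ["A", "A"], []]  # offered load per tick

for tick in arrivals:
    for pkt in tick:
        if len(queue) < QUEUE_LIMIT:
            queue.append(pkt)
        else:
            dropped += 1  # routine tail drop, not malice
    forwarded = [queue.popleft() for _ in range(min(CAPACITY, len(queue)))]
    print(f"forwarded {forwarded}, dropped so far: {dropped}")
```

In the first tick, five packets arrive at once and two are simply discarded; that is routine operation, not discrimination against any particular sender.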

So there’s no such thing as an ISP that doesn’t “degrade” traffic in the ways that you and Kevin Martin allege is a unique property of Comcast’s current management system.

And while I like the method Comcast CTO Tony Werner described to me as in development better than the one that’s currently in production, I don’t consider either to be an illegitimate approach to traffic management within the real-world constraints of businesses that have to return profit to their shareholders. The Sandvine system has the unfortunate side-effect of making original seeds slow to take root, but I don’t think that’s an intentional bug.

I also don’t buy the fiction that Vuze is a true competitor to Comcast and Verizon, and therefore don’t see an anti-competitive motive behind Comcast’s actions intended to affect Vuze. Given that Vuze has a business that relies on other people’s software (open source BitTorrent) moving other people’s content (Hollywood movies and TV) over still other people’s bandwidth (customers of Comcast, AT&T, Verizon, et al.), their problems are much larger than one method of traffic management versus another. Given that Vuze purchases just enough bandwidth to start original seeds, they actually aren’t affected by Comcast’s treatment of robo-seeders in any significant way.

Apparently you have a long-standing beef with the Comcast TV service specifically and a mistrust of capitalism generally. That’s fine, but it’s not immediately relevant to the question of what does and doesn’t constitute rational traffic management on the Internet and its access networks. And frankly, it’s the invocation of animus of that tangential sort that makes me question whether you actually have a framework for deciding questions of this sort.

Comcast has correctly pointed out that some commissioners have vowed to do rule-making on the fly, which won’t stand up to legal scrutiny because it grossly exceeds the Commission’s authority and bypasses formal rule-making. If such an action is taken, it will be struck down by the court, to the embarrassment of the commissioners’ eventual private sector employers.

And finally, Om Malik is mistaken about the relative market shares of BitTorrent, Inc. and Vuze. BT owns uTorrent, the most popular BT client, while Vuze simply distributes a client based on the open source Python-language client that BitTorrent founder Bram Cohen wrote a long time ago.

Vuze filed their FCC complaint as a publicity stunt. And while it’s understandable that an under-funded startup would resort to this means of free publicity, only the truly credulous believe they have the standing they assert; it’s more like a case of delusions of grandeur.

Harold tries a little misdirection, but quickly gives up. We can have a technical solution to the P2P traffic glut, or we can have a government mandate, take your pick.

New York Wants Congestion Pricing

If the New York City Council can approve this plan, why can’t we have the same thing on our broadband networks?

Updated, 9:24 p.m. | Shortly before 7:30 p.m., the New York City Council approved a measure urging state lawmakers to vote in favor of Mayor Michael R. Bloomberg’s congestion pricing proposal. The vote was 30 to 20, with one member absent. (See the full vote tally below.) Mayor Bloomberg immediately scheduled a news conference for later this evening with the Council speaker, Christine C. Quinn, to celebrate the passing of his plan, which has left city and state lawmakers sharply divided.

It seems like common sense, more or less, that drivers would have to pay for the privilege of clogging up the downtown areas. Singapore was doing this in the 1980s when I lived there. And yes, I know that the little island nation isn’t a model of free speech or human rights, but it’s in nobody’s interest to have so many cars downtown that none of them actually moves.


Comcast’s and BitTorrent’s Rodney King Moment

Check my latest piece in The Register on the Comcast/BitTorrent detente.

Analysis So Comcast will stop shaping peer-to-peer seeding sessions with spoofed TCP RST commands. I caught up with the cable giant’s CTO Tony Werner on Thursday for more details.

The move should delight the company’s critics. These innocent control packets have been compared to identity theft, to collusion with dictatorial regimes, and outright hacking of customer systems. BitTorrent may not be an IETF-sanctioned RFC protocol in its own right, but Comcast has decided to bind itself to traffic management practices that don’t produce obviously non-standard packets. Instead, they’re going to install additional equipment that will do real-time traffic accounting by user, feeding back information to the cable infrastructure that will equitably distribute opportunities to offer upstream traffic. In essence, this system makes the cable standard DOCSIS much more sophisticated; now it will behave just like DSL, only faster.

In DSL systems, each customer typically has a dedicated physical cable to a DSLAM, a type of concentrator. The DSLAM aggregates customer data onto a common upstream link according to a fairness algorithm that picks frames from buffers at the heads of these dedicated links in a manner that minimizes bandwidth hogging. In the new Comcast system, the fairness algorithm is deployed in the CMTS (the cable equivalent of a DSLAM), and acts on buffers in each customer’s cable modem. The CMTS is able to do this because DOCSIS data transfers from customer to network are preceded by brief requests for bandwidth. Armed with intelligence about each user’s recent traffic history and the state of the network generally, the newly intelligent CMTS will schedule bandwidth by customer according to a fairness algorithm of its own, with the same range of choices that exist for DSLAMs.
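One classic fairness algorithm a DSLAM or CMTS might use is deficit round-robin. The sketch below is a hypothetical illustration of the idea, not Comcast’s or any vendor’s actual scheduler; the user names, packet sizes, and quantum are invented:

```python
from collections import deque

# Deficit round-robin across per-user upstream queues: each round, every
# backlogged user's deficit grows by a fixed quantum, and a user may send
# packets until the deficit is spent. A hog's backlog waits while light
# users get through promptly. All numbers here are hypothetical.
QUANTUM = 500  # bytes of credit added per round

queues = {
    "hog":   deque([1500] * 6),   # six full-size packets queued
    "light": deque([200, 200]),   # two small packets
}
deficits = {user: 0 for user in queues}
sent = {user: [] for user in queues}

while any(queues.values()):
    for user, q in queues.items():
        if not q:
            deficits[user] = 0    # idle users don't bank credit
            continue
        deficits[user] += QUANTUM
        while q and q[0] <= deficits[user]:
            pkt = q.popleft()
            deficits[user] -= pkt
            sent[user].append(pkt)

print(sent)  # the light user's packets clear in the first round
```

The light user finishes in the first round while the hog pays out one large packet every few rounds, which is the whole point: nobody is blocked, but nobody monopolizes the link either.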

This was fun to write, and the reaction has so far been very positive, which is unusual among the crowd that comments at The Reg.

WordPress 2.3.3

This is for all you bloggers who use WordPress

WordPress 2.3.3 is an urgent security release. If you have registration enabled a flaw was found in the XML-RPC implementation such that a specially crafted request would allow a user to edit posts of other users on that blog. In addition to fixing this security flaw, 2.3.3 fixes a few minor bugs. If you are interested only in the security fix, download the fixed version of xmlrpc.php and copy it over your existing xmlrpc.php. Otherwise, you can get the entire release here.

The security hole allows spammers to infect your site with their crappy ads. When doing my backup I found 40 directories full of images and spam pages in a directory called “img” inside my pictures directory, and in another spot within my wp-content. These freeloaders are a scourge.
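A quick way to hunt for that kind of injected junk is to walk the upload tree and flag directories you didn’t create. This is just a sketch; the “img” name comes from my own cleanup and isn’t any standard indicator, so treat hits as candidates for manual review:

```python
import os

# Walk a WordPress content tree and flag directories whose names match
# known spam drops. SUSPECT_NAMES is an assumption drawn from one
# incident, not an authoritative blocklist -- review hits by hand.
SUSPECT_NAMES = {"img"}

def find_suspect_dirs(root):
    hits = []
    for dirpath, dirnames, _filenames in os.walk(root):
        for d in dirnames:
            if d in SUSPECT_NAMES:
                hits.append(os.path.join(dirpath, d))
    return hits

if __name__ == "__main__":
    for path in find_suspect_dirs("wp-content"):
        print("inspect:", path)
```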

On the plus side, my code’s up-to-date.

UPDATE: WordPress 2.5 is now released, and it’s very pretty, but it seems to be much slower.

New and Improved Traffic Shaping

Comcast’s CTO Tony Werner was kind enough to give me a few minutes today on the changes afoot in the cable giant’s Internet access network, and I like what I learned. I’ll do a longer post on this later with some diagrams, but for now I’d like to sketch out the high points. This is just the Comcast side of the agreement; BitTorrent is also committed to making some changes on their end, and I don’t have the details on those yet. BitTorrent will be making a presentation at the P4P Forum on its commitments.

Here’s what Comcast is going to do, pending how well it shakes out in the lab:

* Stop injecting TCP RSTs. This technique has been maligned way more than it deserves to be, because it has such a long history. Middleware devices (of which Sandvine is only one) have been doing this for at least a decade, drawing the ire of the IETF for it all along. It’s not necessary in a DOCSIS network for technical reasons, so they’re going to stop doing it. This should make the “Comcast is Impersonating You and Stealing Your Credit Card Numbers!!!” crowd happy.

* Start using CMTS scheduling to allocate bandwidth fairly among all users of a first-hop segment when the network is heavily loaded. The DOCSIS protocol permits packet scheduling, since every user has to request bandwidth for every upstream transfer, so all the CMTS has to do is implement fair scheduling to prevent bandwidth hogs from getting more than a fair share. There may be some limits to the delay the scheduler can impose (my conjecture, not Tony’s), and that’s why field testing is important.

* Investigate longer-term solutions that will allow users to control how different traffic streams are handled. There are a number of IETF standards that relate to this problem, and their evaluation will be long-term work items for the industry forums.

CMTS scheduling puts Comcast on the same footing as the DSL providers. While Comcast customers share a first hop and DSL customers usually don’t (they actually do when repeaters are used), all of them share a second hop, access to which is mediated by a fair queuing discipline. So Comcast is simply implementing their queuing discipline on the first hop, which makes good sense for their technology. There’s no need to look at protocols and headers; it’s all just traffic, and transmission opportunities can be managed with per-user fairness.
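Here is a toy version of what a usage-aware grant scheduler might look like. This is my sketch of the idea, not Comcast’s implementation, and the user names, usage figures, and grant budget are all invented: when the segment is congested, pending requests from the users with the lightest recent usage are granted first.

```python
# Toy CMTS grant scheduler: under congestion, pending bandwidth requests
# are granted to users with the least recent upstream usage first.
# All names and numbers are hypothetical; this illustrates the concept,
# not Comcast's actual algorithm.
recent_usage = {"alice": 50, "bob": 900, "carol": 120}  # MB in last window

requests = ["bob", "alice", "bob", "carol"]  # pending upstream requests
GRANTS_THIS_CYCLE = 2                        # congested: can't grant all

# Order requests so light users go first, then grant what fits.
ordered = sorted(requests, key=lambda user: recent_usage[user])
granted = ordered[:GRANTS_THIS_CYCLE]
deferred = ordered[GRANTS_THIS_CYCLE:]

print("granted:", granted)    # the light users get through
print("deferred:", deferred)  # the heavy user waits a cycle
```

Nothing in this scheme needs to inspect protocol headers or payloads; recent byte counts per user are the only input, which is exactly the protocol-agnostic property described above.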

So the bottom line is this: the IETF protocols failed to deliver a scheme for per-user fairness, so Comcast will implement one on their first hop network. That’s what we call progress, and the only question is why it took them so long to do it.

Comcast & BitTorrent Announcement

You may have noticed that BitTorrent and Comcast have agreed to work together to improve co-existence. The Wall St. Journal is reporting the significant detail:

Rather than slow traffic by certain types of applications — such as file-sharing software or companies like BitTorrent — Comcast will slow traffic for those users who consume the most bandwidth, said Comcast’s Mr. Warner. Comcast hopes to be able to switch to a new policy based on this model as soon as the end of the year, he added. The company’s push to add additional data capacity to its network also will play a role, he said. Comcast will start with lab tests to determine if the model is feasible.

No details are out on the changes to be made on the BitTorrent side so far.

This is a huge announcement as it cuts off FCC chairman Kevin Martin at the knees, and does so in advance of his scheduled lynching in Palo Alto. If Comcast and BitTorrent can work together to resolve their problems with technical solutions (as I advocated at the FCC hearing at Harvard), there’s no predicate for new regulations or enforcement actions.

It’s a setback for the public interest lobby, and they’re taking it pretty hard. Public Knowledge in particular still calls for heavy regulation and increased FCC involvement in network management practices. It’s almost laughable:

Even in the best-case scenario for a Comcast/BitTorrent partnership, Comcast is not the only one engaging in this sort of behavior. The FCC must make it clear that these types of practices are unlawful and against public policy, and that they will not be tolerated, now or in the future.

Free Press is also quite upset:

This agreement does nothing to protect the many other peer-to-peer companies from blocking, nor does it protect future innovative applications and services. Finally, it does nothing to prevent other phone and cable companies from blocking. Innovators should not have to negotiate side deals with phone and cable companies to operate without discrimination. The Internet has always been a level playing field, and we need to keep it that way.

You’d think BitTorrent had been bought off by Comcast, and no longer deserves to be adored as an “innovative new application.”

BitTorrent, Inc. is willing to modify their code to make it more manageable, but the pirates who use BitTorrent open source and Vuze won’t be, so the cat-and-mouse game between legal and illegal uses of P2P will continue. BitTorrent, Inc. won’t be affected, however, and that’s progress.

The agreement shows once again that technical solutions to technical problems are better than political ones. It’s unfortunate for the public interest lobby that this issue is no longer a cause for grandstanding, but they’ll recover. And in the meantime, the 60% of American broadband consumers who use cable to connect to the Internet will have a faster pipe that they’ll be able to use without being hogged out by their neighbors.

It’s a win for everybody.

Go to CNet for Declan McCullagh’s detailed interview with Comcast’s Joe Waz.

The FCC commissioners have all issued statements, the best of which is this one from Commissioner McDowell:

The private sector is the best forum to resolve such disputes. Today’s announcement obviates the need for any further government intrusion into this matter.

Amen.

UPDATE: See this post for some detail on what’s afoot.

Harden and Brown Crush the Sox

Game 2 of the budding major league season was quite an enjoyable affair, as the Mighty A’s sent the Bosnia Red Sox home in disgrace. Harden no-hit the pretenders through 3, and struck ’em out at will. Emil Brown redeemed himself with a game-winning three-run homer and Crosby continued his hitting tear.

The A’s aren’t a bad team this year, but if I were a member of Bosnia Nation I’d be worried about the pitching. The Sox look a lot like some Yankee teams of recent memory, all bats and no curveballs.

Beckett’s injured, Schilling’s out for the season, Matsuzaka is still erratic, and Lester looks like a batting practice patsy. The middle of the Sox lineup is as strong as anyone’s, but it’s going to take a lot of runs to overcome the help Sox pitching will give opposing teams this year. The A’s aren’t a strong hitting team, but they scored 5 runs in each of the two games in the Tokyo Dome.

The Yankees are dealing with their pitching woes with a youth movement, and the Sox response is something like an injury movement. Not good, Boston Nation.