The People’s Movement (for Google)

I did a podcast with the folks at The Technology Liberation Front on Google’s edge-caching system:

This week we saw a new kerfuffle of sorts develop over the revelation in a Monday front-page Wall Street Journal story that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content. What exactly is it that Google is proposing, and does it mean – as the Wall Street Journal and some others have suggested – that Google is somehow going back on its support for Net neutrality principles and regulation? More importantly, what does it all mean for the future of the Internet, network management, and consumers? That’s what we discussed on the TLF’s latest “Tech Policy Weekly” podcast.

Google’s genius at creating a citizens’ movement to boost its bottom line deserves more praise than it gets, but this is a start.

Virgin Media serves the people, not the pirates

The Register broke a story today about the plan by the UK’s cable company, Virgin Media, to dump neutrality and target BitTorrent users:

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The move will represent a major policy shift for the cable monopoly and is likely to anger advocates of “net neutrality”, who say all internet traffic should be treated equally. Virgin Media currently temporarily throttles the bandwidth of its heaviest downloaders across all applications at peak times, rather than targeting and “shaping” specific types of traffic.

Virgin Media’s CEO Neil Berkett has previously described net neutrality as “a load of bollocks*,” a sentiment that I can relate to if not specifically endorse.

UPDATE: Wired Blogs reports Virgin is denying the veracity of El Reg’s story, but read the world’s finest tech pub tomorrow for the real story. In the meantime, a quick perusal of Virgin’s traffic policy indicates that they already reserve extensive traffic shaping powers.

Blogger Tom Evslin has jumped on the story with some instant analysis. The problem this story causes for American Liberals is cognitive dissonance: Britain is a virtuous European nation with a National Health Service, a left-wing government, and a commitment to the Kyoto Protocol, yet it permits more traffic shaping than the FCC will allow Comcast. This sort of contradiction causes my friends on the left to drink heavily, or to blog obsessively.

*American translation: BS.

My Google piece in The Register

Thanks to the miracle of trans-Atlantic collaborative journalism, here’s my quick take on Google’s caching scheme:

Network Neutrality, the public policy unicorn that’s been the rallying cry for so many on the American left for the last three years, took a body blow on Sunday with the Wall Street Journal’s disclosure that the movement’s sugar daddy has been playing both sides of the fence.

The Journal reports that Google “has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content.”

Google claims that it’s doing nothing wrong, and predictably accuses the Journal of writing a hyperbolic piece that has the facts all wrong. Google is essentially correct: it’s doing nothing that Akamai doesn’t already do, and nothing that the ISPs and carriers don’t plan to do to reduce the load that P2P puts on their transit connections.

A lot of questions remain about Google’s public policy flexibility and how wise their server farm strategy has been, and we’ll deal with them as Google answers our questions.

Google Gambles in Casablanca

I’m shocked.

Google has been caught red-handed negotiating deals with ISPs to host servers inside their facilities, just as Akamai does. The semi-technical press thinks this is some sort of game-changing event:

The celebrated openness of the Internet — network providers are not supposed to give preferential treatment to any traffic — is quietly losing powerful defenders.

Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal. Google has traditionally been one of the loudest advocates of equal network access for all content providers.

At risk is a principal [sic] known as network neutrality: Cable and phone companies that operate the data pipelines are supposed to treat all traffic the same — nobody is supposed to jump the line.

Oh my goodness, where do I begin? Google already has a fast lane to most Internet users today thanks to its network of mega-data centers around the world, which I’ve written and spoken about at some length. These systems are wired directly to public Internet exchange points in high-population areas and effectively give Google’s traffic higher priority than all but three competing routes: Akamai’s servers hosted inside ISP premises, Limelight’s private network wired directly to ISP networks, and the ISPs’ own content. Google’s desire to host servers (or routers, it could be either) inside ISP networks is a move calculated to improve the ROI of its existing network of server farms and to blunt the Akamai advantage. It makes more sense to wire directly to the ISPs through private arrangements than to stress the public Internet infrastructure any further.
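
To make the mechanics concrete, here’s a toy sketch of why hosting a cache inside the ISP beats serving from a distant data center: a warm cache pays only the short hop to the edge box, while a miss pays the long haul to the origin as well. The latency figures are illustrative assumptions of mine, not measurements of Google’s or anyone else’s network.

```python
# Toy model of edge caching: a request served from a box inside the
# ISP pays only the short hop; a miss pays the long haul to the
# origin data center too. All numbers are illustrative.

EDGE_RTT_MS = 5      # assumed round trip to a server hosted at the ISP
ORIGIN_RTT_MS = 80   # assumed round trip to a distant origin data center

class EdgeCache:
    def __init__(self):
        self.store = {}

    def fetch(self, url):
        """Return (payload, latency_ms) for one request."""
        if url in self.store:                    # hit: short hop only
            return self.store[url], EDGE_RTT_MS
        payload = f"<content of {url}>"          # miss: go to origin
        self.store[url] = payload                # warm the cache
        return payload, EDGE_RTT_MS + ORIGIN_RTT_MS

cache = EdgeCache()
print(cache.fetch("http://example.com/video")[1])  # 85 ms: cold cache
print(cache.fetch("http://example.com/video")[1])  # 5 ms: warm cache
```

Akamai’s in-ISP servers, Limelight’s private wiring, and Google’s proposed co-located boxes are all bids for the warm-cache path.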

One thing that this deal doesn’t do is change the Internet infrastructure. Arrangements like this already exist, predating the kerfuffle over fast lanes created out of thin air by public interest advocates three years ago.

The Internet is not a network, it’s a complex set of agreements to interconnect independently owned and operated networks in various ways. There is no standard agreement, and this story doesn’t report on a new one. What it simply shows is that money buys performance in the technology space, and that should come as no surprise to anyone. Google has to do something like this to avoid being clobbered by ISP-friendly P4P as well as by Akamai.

Yes, Virginia, network neutrality is a myth, and it always has been.

UPDATE: Google’s response to the WSJ piece does nothing but muddy the waters. Net Neutrality advocates have insisted on a wall of separation between content and infrastructure, and this deal, if it happens, brings down that wall. I’m happy with that, because I don’t see the prohibition on expedited delivery as a good thing. But Google should admit they’ve come around to my way of thinking about the Internet instead of insisting nothing has changed. See my write-up in The Register.

UPDATE 2: The spin that Google’s supporters are producing around this issue is a marvel for those of us who appreciate the major league curveball. This subtle piece of nuanced distinction by Dave Isenberg deserves some sort of prize:

The concern of Network Neutrality advocates is not with access but with delivery. The fear is that Internet connection providers would charge for expedited delivery of certain content to the end user, and in so doing would put themselves in the business of classifying which content gets enhanced delivery.

Wow. Caching speeds up delivery; otherwise there would be no reason to do it. Google has, in effect, paid for expedited delivery of its content, regardless of the spin. What counts is bits on the wire, and Google is out to ensure theirs are better situated than yours.

Don’t be fooled by the spin, this is a distinction without a difference.


Internet routing around US

Here’s one more reason not to believe the claims of slow growth in Internet traffic in the US: the rest of the world is optimizing its routes.

America is losing its position at the centre of the internet, according to a new study.

The survey by communications analysts TeleGeography Research, based in Washington DC, shows a rapid growth in internet capacity around the rest of the world over the past year – particularly in Latin America and Asia.

As a result, America’s traditional role as the internet’s traffic policeman is drifting away as other parts of the world become less reliant on it.

“The US used to be a primary hub for many regions,” said Eric Schoonover, a senior analyst at TeleGeography. “A lot of data still comes through the US, and a lot of content there is served out to other countries … but its importance is declining, though it has by no means gone away.”

In the aggregate, Internet traffic is probably growing faster than it has in years. You read it here.

BitTorrent net meltdown delayed

See The Register for my follow-up on the BitTorrent meltdown story:

The internet’s TCP/IP protocol doesn’t work very well. As the internet’s traffic cop, it’s supposed to prevent applications from overloading the network, but it’s at a loss when it comes to managing P2P applications. This deficiency, generally known to network engineers but denied by net neutrality advocates, has been a central issue in the net neutrality debate. BitTorrent Inc has now weighed in on the side of the TCP/IP critics.

The next official release of the uTorrent client – currently in alpha test – replaces TCP with a custom-built transport protocol called uTP, layered over the same UDP protocol used by VoIP and gaming. According to BitTorrent marketing manager Simon Morris, the motivation for this switch (which I incorrectly characterized in The Register earlier this week as merely another attempt to escape traffic shaping) is to better detect and avoid network congestion.

Morris also told the media this week that TCP only reduces its sending rate in response to packet loss, a common but erroneous belief. Like uTP, Microsoft’s Compound TCP begins to slow down when it detects latency increases. Even though TCP is capable of being just as polite as BitTorrent wants uTP to be, the fact that it hides its delay measurements from applications makes it troublesome for P2P clients with many paths to choose from. But it’s sensible to explore alternatives to TCP, as we’ve said on these pages many times, and we’re glad BitTorrent finally agrees.

We strive to be fair and balanced. The nut is that we don’t actually know whether BitTorrent’s new protocol is going to work any better than TCP, as there’s no hard data available on it.
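
For readers who want the flavor of delay-based congestion avoidance, here’s a minimal sketch in the LEDBAT style that uTP is described as pursuing: the sender treats rising delay as a congestion signal and backs off before any packet is lost. This is not BitTorrent’s code; the target delay and gain constants are assumptions chosen purely for illustration.

```python
# Sketch of LEDBAT-style delay-based rate control, the general approach
# uTP is described as taking. Not BitTorrent's actual code; constants
# are illustrative assumptions.

TARGET_DELAY_MS = 100.0   # queuing delay the sender tries not to exceed
GAIN = 1.0                # how aggressively the window reacts
MIN_WINDOW = 1.0

def update_window(cwnd, base_delay_ms, current_delay_ms):
    """Adjust the congestion window from a delay measurement.

    base_delay_ms    -- smallest delay seen (assumed empty queue)
    current_delay_ms -- latest measured delay
    """
    queuing_delay = current_delay_ms - base_delay_ms
    # Positive when the queue is short of target (speed up),
    # negative when delay reveals a standing queue (back off).
    off_target = (TARGET_DELAY_MS - queuing_delay) / TARGET_DELAY_MS
    return max(MIN_WINDOW, cwnd + GAIN * off_target)

cwnd = 10.0
for delay in (60, 90, 150, 220, 180):   # made-up delay samples in ms
    cwnd = update_window(cwnd, base_delay_ms=50, current_delay_ms=delay)
    print(f"measured {delay} ms -> window {cwnd:.2f}")
```

Note that the window starts shrinking as soon as the delay samples climb, which is precisely the early backoff that loss-based TCP variants lack.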


Note about UDP

One of the more amusing criticisms of my article on BitTorrent over UDP is that I’m a clueless dork for saying UDP was designed for real-time applications, since there was no such thing as VoIP back in the day. This is generally accompanied by the charge that I don’t know the first thing about the Internet, etc. So for the record, here’s a statement of the design goals for UDP by one of the people involved, the lovable David Reed:

A group of us, interested in a mix of real-time telephony, local area networks, distributed operating systems, and communications security, argued for several years for a datagram based network, rather than a virtual circuit based network…[UDP] was a placeholder that enabled all the non-virtual-circuit protocols since then to be invented, including encapsulation, RTP, DNS, …, without having to negotiate for permission either to define a new protocol or to extend TCP by adding “features”.
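
Reed’s point is easy to see in code: UDP is barely more than IP with port numbers. A datagram goes out with no handshake, no delivery guarantee, and no congestion control; RTP timing, DNS retries, and uTP’s delay management are all jobs for the layer above. Here’s a minimal loopback sketch (the address and port are test assumptions):

```python
# Minimal UDP exchange: no handshake, no retransmission, no congestion
# control. Whatever discipline a real-time or P2P protocol needs must
# be built on top. Runs against the local loopback interface.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 9999))   # fire and forget

data, addr = receiver.recvfrom(1500)           # one datagram, or nothing
print(data, addr)
```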

Any questions?

Reaction to BitTorrent story

My article in The Register yesterday about BitTorrent and UDP got some attention. It was a primary article on Techmeme and was Slashdotted. Here’s the Techmeme link set: Slyck, DSLreports, TorrentFreak, Ars Technica, Icrontic, Joho the Blog, TMCnet, GigaOM, Industry Standard, TechSpot.

While most of the discussion went to questions of motivation – I’m alleged to be a telco shill for criticizing a system the telcos are OK with – some was actually quite substantial. It’s good to get these issues under the microscope.

More links: Canadian Broadcasting Company, Slashdot, Tales of the Sausage Factory; a few hundred more at Google.

I talked to a couple of the BitTorrent guys today – chief engineer and nemesis Stanislav Shalunov not among them, unfortunately – and they vehemently denied any intention of evading the Bell Canada traffic-shaping system. The reports from Canada that motivated me to write the piece say the system does in fact evade Bell Canada’s filters, which will have to be updated as the use of uTorrent 1.9 becomes more widespread, or replaced with more capable equipment.

It remains to be seen whether that upgrade will also catch VoIP and gamers in the throttling net. It’s interesting that the author of the reports on Canada, Karl Bode, is now playing dumb, all the better to be left out of the counter-PR campaign.

Alarming Title: BitTorrent declares war on the Internet

See The Register for my analysis of the latest tweak in BitTorrent:

Gamers, VoIP and video conference users beware. The leading BitTorrent software authors have declared war on you – and any users wanting to wring high performance out of their networks. A key design change in the P2P application promises to make the headaches faced by ISPs so far look like a party game. So what’s happened, and why does it matter?

Upset about Bell Canada’s system for allocating bandwidth fairly among internet users, the developers of the uTorrent P2P application have decided to make the UDP protocol the default transport protocol for file transfers. BitTorrent implementations have long used UDP to exchange tracker information – the addresses of the computers where files could be found – but the new release uses it in preference to TCP for the actual transfer of files. The implications of this change are enormous.

As BitTorrent implementations follow uTorrent’s lead – and they will, since uTorrent is owned by BitTorrent Inc, and is regarded as the canonical implementation – the burden of reducing network load during periods of congestion will shift to the remaining TCP uses, the most important of which are web browsing and video streaming.
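
The burden-shift argument is simple to simulate. In the toy model below, an AIMD sender (TCP-like) halves its rate whenever the link is overloaded, while a naive constant-rate UDP sender never adjusts, so the AIMD flow absorbs the entire backoff. It’s a caricature with made-up numbers, and, as the updates below note, uTP does implement its own congestion response; the sketch only shows why an unresponsive UDP flow would be a problem.

```python
# Toy simulation of the burden shift: an AIMD (TCP-like) flow halves
# its rate on every congestion event, while a naive constant-rate UDP
# flow ignores loss entirely. Capacity and rates are made up.

LINK_CAPACITY = 100.0          # arbitrary units per tick
udp_rate = 70.0                # naive UDP sender never adjusts
tcp_rate = 50.0                # AIMD sender starts here

for tick in range(8):
    offered = udp_rate + tcp_rate
    if offered > LINK_CAPACITY:             # congestion: both see drops...
        tcp_rate = max(1.0, tcp_rate / 2)   # ...but only TCP backs off
    else:
        tcp_rate += 5.0                     # additive increase when clear
    print(f"tick {tick}: tcp={tcp_rate:.1f} udp={udp_rate:.1f}")
```

The UDP flow holds its 70 units throughout while the TCP flow oscillates around whatever capacity is left over.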

Several commenters are upset with the article, mostly because ISPs don’t provide them with unlimited bandwidth. There’s not much I can do for those folks.

A few others claim that BitTorrent over UDP has a congestion control algorithm which they feel is in some way equivalent to the TCP algorithm, but this argument is flawed on a couple of levels. For one, many routers have tweaks in their discard logic that prefer UDP over TCP. This is a key problem with the widespread use of UDP for purposes other than those for which it was intended.

UPDATE: The explicit goal of this new tweak is actually to create a more friendly variant of the existing TCP congestion avoidance algorithm, and it was the pirates at Broadband Reports who said otherwise.

What remains to be seen, however, is what the actual effects will be in large-scale deployment, and whether the genie can be forced back in the bottle if they’re undesirable.

UPDATE 2: I’ve added “Alarming Title” to the title. This piece is getting a lot of people excited.


A good synopsis of the Internet

Catherine Rosenberg, a professor with the University of Waterloo’s Department of Electrical and Computer Engineering, has written a great synopsis of the Internet for our cousins to the North:

The founding principle of the Internet is resource sharing and hence to deliver an appropriate end-to-end service, some level of co-ordination and traffic control is needed to ensure network performance does not collapse. This is even more true now as the last few years have seen massive increases in Internet traffic due in large part to the proliferation of “bandwidth hungry” applications such as games, peer-to-peer file transfers and increasingly complex, enriched web pages. Added to this is the “all you can eat” economic model promoted by the ISPs, an approach that entices users to always consume more, and of course the fact that the number of Internet users keeps on increasing.

So what does controlling the traffic mean? It means keeping the traffic entering the network under a certain threshold to avoid performance collapses that would affect everyone. And this is what traffic shaping does, by, for example, limiting the bandwidth available for certain types of applications that are less time sensitive in order to keep more bandwidth available for other applications that are more time sensitive, and used by the greater number of subscribers.

While some would argue that this is done “naturally” with Transmission Control Protocol, the reality is that TCP alone is not enough to avoid congestion and spread the burden of congestion as fairly as possible to all those using the congested area.
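
A token bucket is the textbook mechanism for the kind of threshold Rosenberg describes: each class of traffic drains tokens at a configured rate, and packets that arrive when the bucket is empty get dropped or queued. The sketch below is generic, with made-up rates and classes rather than any ISP’s actual policy.

```python
# Minimal token-bucket shaper: traffic is held under a configured
# threshold, with a different threshold per application class.
# Rates and the class names are illustrative assumptions only.

import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0     # refill rate in bytes/second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Admit the packet if tokens remain; otherwise drop/queue it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# Time-insensitive bulk transfer gets a tighter bucket than VoIP.
shapers = {"p2p": TokenBucket(256_000, 32_000),
           "voip": TokenBucket(1_000_000, 8_000)}

def admit(app_class, packet_bytes):
    return shapers[app_class].allow(packet_bytes)
```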

It’s so refreshing to read something like this after slogging through all the nonsense that our law professors have written about the Internet for our net neutrality debate. I highly recommend you read the Whole Thing.

H/T Brett Glass.
