Digital Britain and Hokey Tools

It’s helpful to see how other countries deal with the typically over-excited accusations of our colleagues regarding ISP management practices. Case in point is the Digital Britain Interim Report from the UK’s Department for Culture, Media and Sport and Department for Business, Enterprise and Regulatory Reform, which says (p. 27):

Internet Service Providers can take action to manage the flow of data – the traffic – on their networks to retain levels of service to users or for other reasons. The concept of so-called ‘net neutrality’ requires those managing a network to refrain from taking action to manage traffic on that network. It also prevents giving the delivery of any one service preference over the delivery of others. Net neutrality is sometimes cited by various parties in defence of internet freedom, innovation and consumer choice. The debate over possible legislation in pursuit of this goal has been stronger in the US than in the UK. Ofcom has in the past acknowledged the claims in the debate but has also acknowledged that ISPs might in future wish to offer guaranteed service levels to content providers in exchange for increased fees. In turn this could lead to differentiation of offers and promote investment in higher-speed access networks. Net neutrality regulation might prevent this sort of innovation.

Ofcom has stated that, provided consumers are properly informed, such new business models could be an important part of the investment case for Next Generation Access.

On the same basis, the Government has yet to see a case for legislation in favour of net neutrality. In consequence, unless Ofcom find network operators or ISPs to have Significant Market Power and justify intervention on competition grounds, traffic management will not be prevented.

(Ofcom is the UK’s equivalent of the FCC.) Net neutrality is, in essence, a movement driven by fears of hypothetical harm that might be visited upon the Internet under a highly unlikely set of circumstances. Given that 1.4 billion people use the Internet every day, and that the actual instances of harmful discrimination by ISPs can be counted on one hand (and pale in comparison to the harm caused by malicious software and deliberate bandwidth hogging in any case), Ofcom’s stance is the only one that makes any sense: keep an eye on things, and don’t act without provocation. This position would have kept us out of Iraq, BTW.

Yet we have lawmakers in the US drafting bills full of nebulous language and undefined terms aimed at stemming this invisible menace.

Are Americans that much less educated than Brits, or are we just stupid? In fact, we have a net neutrality movement in the US simply because we have some well-funded interests manipulating a gullible public and a system of government that responds to emotion.

A good example of these forces at work is the freshly released suite of network test tools hosted on some of Google’s servers. Measurement Lab checks how quickly interested users can reach Google’s complex in Mountain View, breaking the process down into hops. As far as I can tell, it’s essentially a dolled-up version of the Unix “traceroute,” one that speculates about link congestion and takes a very long time to run.
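For readers who’ve never run it, traceroute maps a path by sending probes with successively larger time-to-live values and timing the ICMP errors each router along the way sends back. Here’s a bare-bones sketch of the idea in Python – an illustration, not Measurement Lab’s actual code – assuming a Unix host with root privileges for the raw ICMP socket; the destination and the classic probe port 33434 are arbitrary:

```python
# Bare-bones traceroute: UDP probes with increasing TTL, timed against
# the ICMP "time exceeded" replies that routers send back.
# Requires root for the raw ICMP receive socket.
import socket
import time

def traceroute(dest, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        recv.settimeout(timeout)
        start = time.time()
        send.sendto(b"probe", (dest_addr, port))
        try:
            _, addr = recv.recvfrom(512)        # router that killed the probe
            rtt = (time.time() - start) * 1000
            print(f"{ttl:2d}  {addr[0]:15s}  {rtt:6.1f} ms")
            if addr[0] == dest_addr:            # reached the destination
                break
        except socket.timeout:
            print(f"{ttl:2d}  *")
        finally:
            send.close()
            recv.close()

traceroute("www.example.com")
```

Everything beyond this – the congestion guesses, the pretty graphs – is interpretation layered on top of timings like these.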

The speed, latency, and consistency of access to Google is certainly an important part of the Internet experience, but it’s hardly definitive regarding who’s doing what to whom. But the tech press loves this sort of thing because it’s just mysterious enough in its operation to invite speculation and sweeping enough in its conclusions to get users excited. It’s early days for Measurement Lab, but I don’t have high expectations for its validity.

Doubts about Broadband Stimulus

The New York Times has a front-page story today on the broadband stimulus bill that features an extensive quote from Brett:

Critics like Mr. Glass say the legislation being developed in Congress is flawed in various ways that could mean much of the money is wasted, or potentially not spent at all — arguably just as bad an outcome given that the most immediate goal of the stimulus measure is to pump new spending into the economy.

An “open access” requirement in the bill might discourage some companies from applying for grants because any investments in broadband infrastructure could benefit competitors who would gain access to the network down the line.

Meeting minimum speed requirements set forth in the House version could force overly costly investments by essentially providing Cadillac service where an economy car would be just as useful. And some worry that government may pay for technology that will be obsolete even before the work is completed.

“Really the devil is in the details,” Mr. Glass said. “Yes, there is $9 billion worth of good that we can do, but the bill doesn’t target the funds toward those needs.”

The bill is still very rough. Some critics cite its preference for grants to large incumbents; others highlight the amorphous “open access” provisions and the arbitrary speed requirements as weaknesses. The only interest groups that appear altogether happy with it are Google’s boosters, such as Ben Scott of Free Press. This is a flip-flop for Free Press, which only last week was urging members to call Congress and ask that the bill be killed.

A particularly odd reaction comes from friend of the blog Jeff Jarvis, who took time out from pitching What Would Google Do?, his love letter to Google, to tear into the article’s sourcing:

I found myself irritated by today’s story in the New York Times that asks whether putting money from the bailout toward broadband would be a waste. The question was its own answer. So was the placement of the story atop page one. The reporter creates generic groups of experts to say what he wants to say (I know the trick; I used to be a reporter): “But experts warn…. Other critics say…. Other supporters said…”

I wish that every time he did that, the words “experts,” “critics,” and “supporters” were hyperlinked to a page that listed three of each.

It’s an obvious case of a story with an agenda: ‘I’m going to set out to poke a hole in this.’

The odd bit is that five people are named and quoted, and the terms “expert” and “critic” clearly refer to these named sources. It’s boring to repeat names over and over, so the writer simply uses these terms to avoid the tedium. It’s clear that Brett and Craig Settles are the critics and experts. Jeff seems not to have read the article carefully and simply goes off on his defensive tirade without any basis.

It’s a given in Google’s world that massive government subsidies for broadband are a good thing because they will inevitably lead to more searches, more ad sales, and more revenue for the Big G. But while that’s clearly the case, it doesn’t automatically follow that what’s good for Google is good for America, so it behooves our policy makers to ensure that the money is spent wisely, without too many gimmicks in favor of one technology over another or too many strings attached that don’t benefit the average citizen.

Raising questions about pending legislation and trying to improve it is as American as baseball, and the article in the Times is a step in the right direction. It may not be what Google would do, but it’s good journalism.

I want to make sure that the broadband money is spent efficiently, so I would bag the open access requirement (nobody knows what it means anyway) and give credit for all improvements in infrastructure that increase speed and reduce latency.

The bill needs to support all technologies that have utility in the Internet access space – wireless, coax, and fiber – but should encourage the laying of new fiber where it’s appropriate, and high-speed wireless in less-populated areas. Eventually, homes and businesses are pretty much all going to have fiber at the doorstep, but that doesn’t need to happen overnight.

What recession?

So here’s your recession-proof business, ladies and gentlemen:

Netflix, the company which mails out DVD rentals and also offers streamed programming via the internet, saw a 45% jump in profits and 26% rise in consumers to 9.4 million in the fourth quarter.

This was the quarter in which Netflix released Watch Instantly on non-PC platforms. It’s so ubiquitous now that I have it on three platforms: a home theater PC, a TiVo HD, and a Samsung BD-P2500 Blu-ray player. It looks best on the Samsung, thanks to its HQV video enhancement chip.

Internet Myths

Among my missions in this life is the chore of explaining networking in general and the Internet in particular to policy makers and other citizens who don’t build network technology for a living. This is enjoyable because it combines so many of the things that make me feel good: gadgetry, technology, public policy, writing, talking, and education. It’s not easy, of course, because there are a lot of things to know and many ways to frame the issues. But it’s possible to simplify the subject matter in a way that doesn’t do too much violence to the truth.

As I see it, the Internet is different from the other networks that we’re accustomed to in a couple of important ways: for one, it allows a machine to connect simultaneously to a number of other machines. This is useful for web surfing, because it makes it possible to build a web page that draws information from other sources. So a blog can reference pictures, video streams, and even text from around the Internet and put it in one place where it can be updated in more-or-less real time. It enables aggregation, in other words.

Another thing that’s unique about the Internet is that the underlying transport system can deliver information at very high speed for short periods of time. The connection between a machine and the Internet’s infrastructure is idle most of the time, but when it’s active it can get its information transferred very, very quickly. This is a big contrast to the telephone network, where information is constrained by call setup delays and a very narrow pipe.
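To make the aggregation point concrete, here’s a minimal Python sketch of one client fetching from several servers at once over concurrent connections; the URLs are just placeholders standing in for a page’s embedded resources:

```python
# One client, several servers, concurrent connections -- the pattern
# behind a web page that aggregates content from around the Internet.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

# Placeholder URLs standing in for a page's embedded resources.
urls = [
    "https://example.com/",
    "https://example.org/",
    "https://example.net/",
]

def fetch(url):
    # Each fetch runs over its own connection to a different server.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return url, len(resp.read())

# The "page" assembles results from all of its sources as they arrive.
with ThreadPoolExecutor(max_workers=len(urls)) as pool:
    for url, size in pool.map(fetch, urls):
        print(f"{url}: {size} bytes")
```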

Briscoe explains Re-ECN in plain English

See the current issue of IEEE Spectrum for a nice description of Bob Briscoe’s Re-ECN, “A Fairer, Faster Internet Protocol”:

Refeedback introduces a second type of packet marking—think of these as credits and the original [ECN] congestion markings as debits. The sender must add sufficient credits to packets entering the network to cover the debit marks that are introduced as packets squeeze through congested Internet pipes. If any subsequent network node detects insufficient credits relative to debits, it can discard packets from the offending stream.

To keep out of such trouble, every time the receiver gets a congestion (debit) mark, it returns feedback to the sender. Then the sender marks the next packet with a credit. This reinserted feedback, or refeedback, can then be used at the entrance to the Internet to limit congestion—you do have to reveal everything that may be used as evidence against you.

Refeedback sticks to the Internet principle that the computers on the edge of the network detect and manage congestion. But it enables the middle of the network to punish them for providing misinformation.

The limits and checks on congestion at the borders of the Internet are trivial for a network operator to add. Otherwise, the refeedback scheme does not require that any new code be added to the network’s equipment; all it needs is that standard congestion notification be turned on. But packets need somewhere to carry the second mark in the “IP” part of the TCP/IP formula. Fortuitously, this mark can be made, because there is one last unused bit in the header of every IP packet.

This is a plan that will allow interactive uses of the Internet to co-exist happily with bulk data transfer. It’s quite brilliant and I recommend it as an alternative to a lot of nonsense that’s been floated around this space.
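To make the credit-and-debit accounting concrete, here’s a toy simulation in Python. Everything in it – the marking probability, the instantaneous feedback, the simple per-flow policer – is an illustrative assumption rather than Briscoe’s actual protocol, but it shows why free-riders get caught:

```python
import random

class Packet:
    def __init__(self, credit=False):
        self.credit = credit   # stamped by the sender: re-inserted feedback
        self.debit = False     # stamped by a congested router: ECN-style mark

def congested_link(pkt, mark_prob=0.1):
    """A link that debit-marks packets with some probability when congested."""
    if random.random() < mark_prob:
        pkt.debit = True
    return pkt

class BorderPolicer:
    """Discards traffic from flows whose debit marks outrun their credits."""
    def __init__(self):
        self.balance = 0
    def admit(self, pkt):
        self.balance += int(pkt.credit) - int(pkt.debit)
        return self.balance >= 0

random.seed(1)

# An honest sender: credits a few packets up front to cover feedback lag,
# then one more credit for every congestion mark its receiver reports.
policer, owed, dropped = BorderPolicer(), 5, 0
for _ in range(1000):
    pkt = Packet(credit=(owed > 0))
    owed -= int(pkt.credit)
    pkt = congested_link(pkt)
    if pkt.debit:
        owed += 1          # receiver feedback (instantaneous, in this toy)
    if not policer.admit(pkt):
        dropped += 1
print(f"honest sender: {dropped} of 1000 packets discarded")

# A free-rider that never credits gets throttled almost immediately.
policer2, dropped2 = BorderPolicer(), 0
for _ in range(1000):
    if not policer2.admit(congested_link(Packet(credit=False))):
        dropped2 += 1
print(f"free-rider: {dropped2} of 1000 packets discarded")
```

The honest sender sails through because its credits always cover its marks; the free-rider goes into deficit at the first congestion mark and stays there.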


Internet routing around US

Here’s one more reason not to believe the claims of slow growth in Internet traffic in the US: the rest of the world is optimizing its routes.

America is losing its position at the centre of the internet, according to a new study.

The survey by communications analysts TeleGeography Research, based in Washington DC, shows a rapid growth in internet capacity around the rest of the world over the past year – particularly in Latin America and Asia.

As a result, America’s traditional role as the internet’s traffic policeman is drifting away as other parts of the world become less reliant on it.

“The US used to be a primary hub for many regions,” said Eric Schoonover, a senior analyst at TeleGeography. “A lot of data still comes through the US, and a lot of content there is served out to other countries … but its importance is declining, though it has by no means gone away.”

In the aggregate, Internet traffic is probably growing faster than it has in years. You read it here.

BitTorrent net meltdown delayed

See The Register for my follow-up on the BitTorrent meltdown story:

The internet’s TCP/IP protocol doesn’t work very well. As the internet’s traffic cop, it’s supposed to prevent applications from overloading the network, but it’s at a loss when it comes to managing P2P applications. This deficiency, generally known to network engineers but denied by net neutrality advocates, has been a central issue in the net neutrality debate. BitTorrent Inc has now weighed in on the side of the TCP/IP critics.

The next official release of the uTorrent client – currently in alpha test – replaces TCP with a custom-built transport protocol called uTP, layered over the same UDP protocol used by VoIP and gaming. According to BitTorrent marketing manager Simon Morris, the motivation for this switch (which I incorrectly characterized in The Register earlier this week as merely another attempt to escape traffic shaping) is to better detect and avoid network congestion.

Morris also told the media this week that TCP only reduces its sending rate in response to packet loss, a common but erroneous belief. Like uTP, Microsoft’s Compound TCP begins to slow down when it detects latency increases. Even though TCP is capable of being just as polite as BitTorrent wants uTP to be, the fact that it hides its delay measurements from applications makes it troublesome for P2P clients with many paths to choose from. But it’s sensible to explore alternatives to TCP, as we’ve said on these pages many times, and we’re glad BitTorrent finally agrees.

We strive to be fair and balanced. The nut is that we don’t actually know whether BitTorrent’s new protocol is going to work any better than TCP, as there’s no hard data available on it.
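For readers wondering what delay-based congestion control looks like in practice, here’s a toy sketch in the LEDBAT/uTP spirit – grow the send window while measured queuing delay sits below a target, shrink it when it rises above, no packet loss required. The constants are illustrative, not uTP’s actual parameters:

```python
# Toy delay-based congestion control: react to queuing delay, not loss.
TARGET_MS = 100.0   # queuing delay the sender is willing to add
GAIN = 1.0          # how strongly to react to being off target

def update_window(cwnd, base_delay_ms, current_delay_ms, bytes_acked, mss=1452):
    """Return the new congestion window (in bytes) after one ACK."""
    queuing_delay = current_delay_ms - base_delay_ms
    off_target = (TARGET_MS - queuing_delay) / TARGET_MS
    # Positive off_target (delay under target) grows the window;
    # negative (delay over target) shrinks it -- no loss required.
    cwnd += GAIN * off_target * bytes_acked * mss / cwnd
    return max(cwnd, mss)

cwnd = 10 * 1452.0
for delay_ms in (60, 80, 120, 180, 150, 90):   # measured one-way delays
    cwnd = update_window(cwnd, base_delay_ms=50, current_delay_ms=delay_ms,
                         bytes_acked=1452)
    print(f"delay {delay_ms:3d} ms -> window {cwnd / 1452:5.1f} packets")
```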


Note about UDP

One of the more amusing criticisms of my article on BitTorrent over UDP is that I’m a clueless dork for saying UDP was designed for real-time applications since there was no such thing as VoIP back in the day. This is generally accompanied by the charge that I don’t know the first thing about the Internet, etc. So for the record, here’s a statement of the design goals for UDP by one of the people involved, the lovable David Reed:

A group of us, interested in a mix of real-time telephony, local area networks, distributed operating systems, and communications security, argued for several years for a datagram based network, rather than a virtual circuit based network…[UDP] was a placeholder that enabled all the non-virtual-circuit protocols since then to be invented, including encapsulation, RTP, DNS, …, without having to negotiate for permission either to define a new protocol or to extend TCP by adding “features”.
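You can see the “placeholder” quality in how little a UDP exchange involves – no connection setup, no retransmission, no ordering. A minimal sketch (localhost and port 9999 are arbitrary):

```python
# UDP in a nutshell: just datagrams. No connection, no retransmission,
# no ordering -- everything else is left to the protocol built on top.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 9999))   # fire and forget

data, addr = server.recvfrom(1024)             # one datagram, exactly as sent
print(data, "from", addr)

server.close()
client.close()
```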

Any questions?

Reaction to BitTorrent story

My article in The Register yesterday about BitTorrent and UDP got some attention. It was a primary article on Techmeme and was Slashdotted. Here’s the Techmeme link set: Slyck, DSLreports, TorrentFreak, Ars Technica, Icrontic, Joho the Blog, TMCnet, GigaOM, Industry Standard, TechSpot.

While most of the discussion went to questions of motivation – I’m alleged to be a telco shill for criticizing a system the telcos are OK with – some was actually quite substantial. It’s good to get these issues under the microscope.

More links: Canadian Broadcasting Company, Slashdot, Tales of the Sausage Factory; a few hundred more at Google.

I talked to a couple of the BitTorrent guys today – chief engineer and nemesis Stanislav Shalunov not among them, unfortunately – and they vehemently denied any intention of evading the Bell Canada traffic shaping system. The reports from Canada that motivated me to write the piece say the system does in fact evade Bell Canada’s filters, which will have to be updated as the use of uTorrent 1.9 becomes more widespread, or replaced with more capable equipment.

It remains to be seen whether that upgrade will also catch VoIP and gamers in the throttling net. It’s interesting that the author of the reports on Canada, Karl Bode, is now playing dumb, all the better to be left out of the counter-PR campaign.

Alarming Title: BitTorrent declares war on the Internet

See The Register for my analysis of the latest tweak in BitTorrent:

Gamers, VoIP and video conference users beware. The leading BitTorrent software authors have declared war on you – and any users wanting to wring high performance out of their networks. A key design change in the P2P application promises to make the headaches faced by ISPs so far look like a party game. So what’s happened, and why does it matter?

Upset about Bell Canada’s system for allocating bandwidth fairly among internet users, the developers of the uTorrent P2P application have decided to make the UDP protocol the default transport protocol for file transfers. BitTorrent implementations have long used UDP to exchange tracker information – the addresses of the computers where files could be found – but the new release uses it in preference to TCP for the actual transfer of files. The implications of this change are enormous.

As BitTorrent implementations follow uTorrent’s lead – and they will, since uTorrent is owned by BitTorrent Inc, and is regarded as the canonical implementation – the burden of reducing network load during periods of congestion will shift to the remaining TCP uses, the most important of which are web browsing and video streaming.

Several commenters are upset with the article, mostly because ISPs don’t provide them with unlimited bandwidth. There’s not much I can do for those folks.

A few others claim that BitTorrent over UDP has a congestion control algorithm they feel is in some way equivalent to the TCP algorithm, but this argument is flawed on a couple of levels. For one, many routers have tweaks in their discard logic that prefer UDP over TCP. This is a key problem with the widespread use of UDP for purposes other than those for which it was intended.
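To illustrate why that bias matters, here’s a toy drop policy in Python that sheds TCP packets before UDP ones when its buffer fills. Real router logic is far more sophisticated; the point is only that any such preference shields UDP flows from the congestion drops that TCP senders rely on as a signal:

```python
class BiasedQueue:
    """A toy drop policy that sheds TCP before UDP when the queue is full."""
    def __init__(self, capacity=4):
        self.q, self.capacity = [], capacity

    def enqueue(self, pkt):             # pkt = (protocol, seq)
        if len(self.q) < self.capacity:
            self.q.append(pkt)
            return True
        for i, (proto, _) in enumerate(self.q):
            if proto == "tcp":          # evict a TCP packet to admit the arrival
                del self.q[i]
                self.q.append(pkt)
                return True
        return False                    # nothing but UDP queued: tail-drop

q = BiasedQueue()
for pkt in [("tcp", 1), ("tcp", 2), ("tcp", 3),
            ("udp", 1), ("udp", 2), ("udp", 3)]:
    q.enqueue(pkt)
print([proto for proto, _ in q.q])      # -> ['tcp', 'udp', 'udp', 'udp']
```

Under load, the TCP flows see the drops and back off; the UDP flows never get the message.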

UPDATE: The explicit goal of this new tweak is actually to create a friendlier variant of the existing TCP congestion avoidance algorithm; it was the pirates at Broadband Reports who said otherwise.

What remains to be seen, however, is what the actual effects will be in large-scale deployment, and whether the genie can be forced back into the bottle if they’re undesirable.

UPDATE 2: I’ve added “Alarming Title” to the title. This piece is getting a lot of people excited.
