Lessons from Internet2

David Isenberg tried to explain his dramatic turn-around on net neutrality recently:

In June of 1997, when I wrote the essay, it seemed reasonable (to a Bell Labs guy steeped in telco tradition) that a stupid network might incorporate low level behaviors analogous to taxis or tropism to automatically adapt to the needs of the data. But research results from Internet2 [.pdf] now show that network upgrades to accommodate even extremely demanding applications, such as Hi-Def video conferences, can be achieved more effectively, cheaply and reliably by simply adding more capacity.

The research results he cites were delivered to Congress by this eminent researcher:

Gary Bachula is the Vice President for External Relations for Internet2. Gary has substantial government and not-for-profit experience, with an extensive history of leadership in technology development. Most recently, Gary served as Acting Under Secretary of Commerce for Technology at the US Department of Commerce where he led the formation of government-industry partnerships around programs such as GPS and the Partnership for a New Generation of Vehicles. As Vice President for the Consortium for International Earth Science Information Network (CIESIN) from 1991 to 1993, Gary managed strategic planning and program development for the organization designated to build a distributed information network as part of NASA’s Mission to Planet Earth. From 1986 to 1990, he chaired the Michigan Governor’s Cabinet Council, and from 1974 to 1986 Gary served as Chief of Staff to U.S. Representative Bob Traxler of Michigan where he advised on appropriations for NASA, EPA, the National Science Foundation and other federal R&D agencies. Gary holds undergraduate and law (J.D.) degrees from Harvard University. A native of Saginaw, Michigan, Bachula served at the Pentagon in the U.S. Army during the Vietnam war.

So now we have a new definition for net neutrality: in the past, networks were designed by engineers, but under net neutrality they’ll be designed by lawyers and lobbyists. Great.

But to be fair, there was an actual study, performed by two guys using the Internet2 Abilene network from 1998 to 2001, which determined that QoS wasn’t practical to implement with the routers of that era, primarily because they had to use software to distinguish high-priority from low-priority packets. As these routers performed most packet-forwarding operations in hardware, this was a big slow-down:

Some router vendors have elected to include complex QoS functionality in microcode running on their interface cards, rather than in custom ASICs that add to the power consumption and cost of a card. This is a non-starter. Our experience has been that this approach can result in a drop of maximum packet-per-second forwarding rates by 50% or more. Such a CPU cycle shortage hurts all traffic, including Premium, making deployment very hard to justify.

The trend among newer, higher-speed routers seems to be towards less QoS functionality, not more. As circuit costs are responsible for an ever decreasing portion of network capital expenditures, and interface costs are responsible for an ever increasing share of network capital expenditures, the market pressure for dumb, fast, and cheap router interfaces is ever greater.

Are we prepared to pay the price difference for extra QoS features? If so, is there enough of a customer base for feature-rich routers to make their development worthwhile for router vendors?

Contrary to the Internet2 predictions, modern routers do in fact have more QoS functionality in hardware. So if Internet2 were a serious research organization they’d repeat the study with up-to-date systems and abandon the rank speculation. But they won’t, of course.
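
To make concrete what was actually at stake in that study, here’s a toy sketch (in Python, purely illustrative; the field names, ports, and DSCP value are my assumptions, not anything taken from the Internet2 paper) of the per-packet classification decision the Abilene engineers were worried about. Doing a match like this in the forwarding CPU’s microcode for every packet is what cost them half their forwarding rate; newer gear does the equivalent match in dedicated hardware.

```python
# Toy classifier: decide, for each packet, whether it gets "Premium"
# treatment. Field names, ports, and the rule set are illustrative only.
DSCP_EF = 46  # DiffServ "expedited forwarding" code point

def classify(packet: dict) -> str:
    if packet.get("dscp") == DSCP_EF:
        return "premium"
    if packet.get("proto") == "udp" and packet.get("dst_port") in (5060, 5004):
        return "premium"  # e.g. SIP signalling or RTP media
    return "best_effort"

print(classify({"dscp": DSCP_EF, "proto": "udp", "dst_port": 5004}))  # premium
print(classify({"dscp": 0, "proto": "tcp", "dst_port": 80}))          # best_effort
```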

To those of us who’ve been around a while, the idea that you can learn ultimate lessons about the Internet from academic experiments is laughable. Early experiments with the Internet in academic settings taught us, for example, that there was no need for spam protection, virus removal, or control of bandwidth hogging. In the academy, social systems control this behavior, but in the real world it’s up to the network. We haven’t always known that, but we do now.

My question for Isenberg is this: what kind of engineer abandons well-known findings on the say-so of a lobbyist touting one ancient experiment conducted without control or follow-up? We know that an over-built network doesn’t need much in the way of QoS to support moderately demanding applications like VoIP.

The problem with over-provisioning the entire Internet is that applications will emerge to consume the excess bandwidth, putting the user back at square one.

Oops.

UPDATE: To clarify the Internet2 “research” just a bit, bear this in mind: the fears that NN advocates have about web site slowdowns and censorship presume that routers are capable of discriminating for and against all the traffic they pass without slowing down Premium customers. The Internet2 “research” says this is impossible.

So who are you going to believe, Internet2 or Internet2?

Defining away the uncertainty

David Isenberg is jumping on the cluetrain to defend Dr. David Weinberger from charges of fuzziness. According to the Davids, there’s no uncertainty about Network Neutrality:

As a proponent of Network Neutrality, I cringe when I hear, “We do not even know what Network Neutrality means.” We DO know. Such statements are true ONLY in the sense that we don’t know the precise dividing line between a shelf and a table, or that we can’t say precisely how a tree grows, or that there’s sometimes fuzziness in whether a death is a murder.

It is in the telcos’ and cablecos’ interest to keep Network Neutrality amorphous and undefinable. If we don’t even know what it is, we can’t pass a law against it, right?

We DO know what Net Neutrality is. There are several excellent definitions of Network Neutrality, e.g., by the Annenberg Center, by savetheinternet [.pdf] [disclosure: I work as an unpaid volunteer with the savetheinternet folks], and, perhaps the clearest statement of all, since it is stated as proposed legislation, by Congressman Ed Markey (D-MA) [actual 2006 Bill here .pdf].

The unifying element is the prohibition of deliberate discrimination.

Isenberg should read his own writing.

Net neutrality proponents are sharply divided about what constitutes “discrimination”, and one can find this rift in Isenberg’s essay “The Rise of the Stupid Network,” where he describes something called “idiot-savant behaviors” that allow the network to tailor transport services to the needs of applications:

[In] the Stupid Network, because the data is the boss, it can tell the network, in real time, what kind of service it needs. And the Stupid Network would have a small repertoire of idiot-savant behaviors to treat different data types appropriately. If the data identified itself as financial data, the Stupid Network would deliver it accurately, no matter how many milliseconds of delay the error checking would take. If the data were two-way voice or video, the Stupid Network would provide low delay, even at the price of an occasional flipped bit. If the data were entertainment audio or video, the Stupid Network would provide wider bandwidth, but would not necessarily give low delay or absolute accuracy. And if there were a need for unique transmission characteristics, the data would tell the Stupid Network in more detail how to treat it, and the Stupid Network would do what it was told.
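
For concreteness, here’s roughly what “the data telling the network what it needs” amounts to in today’s terms: the application marks its own traffic with a DiffServ code point and hopes the network honors it. This is a minimal sketch of my own, not Isenberg’s design; the address, port, and DSCP value are placeholders, and IP_TOS support varies by platform.

```python
import socket

DSCP_EF = 46              # low-delay "expedited forwarding" class
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the upper six bits of the old TOS byte

# Mark an outgoing UDP socket so that QoS-aware equipment downstream
# can give its packets low-delay handling, if it chooses to honor the mark.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"voice frame", ("203.0.113.10", 5004))  # placeholder address/port
```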

This is a sort of “discrimination” and a violation of Strict Stupidity. But not doing it, simply treating all packets the same, is also a form of discrimination, because it favors applications that care about bulk data transfer over those that need timely service.

So we can either wave our hands about “discrimination” or address application realities; either way, we have to choose. It’s not that clear at all, and Isenberg knows it.

In the US we practice a form of social discrimination called “Affirmative Action” which many advocates claim is not discrimination at all because it’s for a good reason. In the UK, they’re a lot more honest, calling it “Positive Discrimination.” Network neutrality either bans or endorses positive discrimination depending on whose definition you use.

That’s reality. On the Internet, we have a number of different behaviors that require “discrimination” on the part of the carrier: fraud, theft, and bandwidth hogging among them. To require strict passivity on the part of the carrier when we know these things go on is simply to hide one’s head in the sand.

Weinberger’s Net Neutrality Gaffe

A gaffe is when a politician accidentally says what he really thinks. Net neutrality advocate David Weinberger committed one recently when he wrote:

…I recently spent a day, sponsored by an activist think tank, with a dozen people who understand Net tech deeply, going through exactly which of the 496 permutations would constitute a violation of Net neutrality. Caching packets within a particular application area but not according to source? Caching application-based non-cached application-based packets? Saying “Hi” to all passing packets, but adding, “Howya doin’?” to only the ones you like? Patting all packets on the back but refusing to buy some lunch? The whole thing makes my brain hurt.

I put that quote into Wikipedia’s Net Neutrality entry, and now Weinberger’s crying foul:

FWIW, I agree that the paragraph that cites me should be edited out. It is unencyclopedic. It also is used to make a point that it in fact does not support. The fact that it’s challenging to work out the precise application of NN in some instances doesn’t mean that the meaning of the principle itself is unclear. It’s tough to figure out exactly how to apply, say, affirmative action, gay rights, or the end-to-end principle, but it’d be highly misleading to start an article on them by saying the principles are unclear. It’s the nature of principles to require thought, argument and politics in their application. So, I hope someone removes that paragraph.

The fact of the matter is that nobody knows what net neutrality is, how to detect it, or how to regulate it, so the whole matter of laws protecting it is premature. Weinberger accidentally told the truth, and now he doesn’t want it to get out. There are plenty of people who think net neutrality is a great thing but that we don’t know enough about it to regulate it; Doc Searls and Tom Evslin, for example. But the Wikipedia crowd isn’t real keen on sharing that point of view with the public.

That’s the way it goes.

Recruited again

This is funny:

Hi Richard,

My name is XXX XXXXXX and I am a Recruiter for the Google.com Engineering team. While searching the internet, I came across your name. We currently have positions available at Google that may be a good match for you. If you are open to exploring these opportunities further, please send an updated version of your resume in Word, HTML, or PDF to me as soon as possible.

All positions involve working in our infrastructure team, known as Google.com Engineering (which is different from our Operations group). Our Google.com engineers hold the beating heart of Google and are very well respected. They are responsible for keeping the google.com website up and running as well as building new automation infrastructure. We are seeking Extraordinary Developers, UNIX (Linux) System Administrators, and Managers/Directors to add to our exciting team and growing organization.

*****We have multiple openings located in various places in the US (Mountain View, CA, Kirkland, WA, Santa Monica, CA, Mountain View, CA, New York, NY ) and Internationally (Dublin, Ireland and Zurich, Switzerland).****

I hope you are not bothered by my networking attempt. If you are not interested or available, but would like to forward my name and contact information to your friends or colleagues, I would be most delighted.

For more information, go to:

http://www.google.com/support/jobs/bin/answer.py?answer=23594 (see various locations)
http://www.google.com/support/jobs/bin/answer.py?answer=23591 (see various locations)

Thank you and hope to hear from you soon.

XXX

P.S. If this is not a good time or if you are not interested, please reply and let us know. We will update our database and you will not be contacted again in the future.

I know very few people read this blog, but it surprises me that despite all the mean things I’ve said about Google (their collusion with the government of China, their evil attempt to stifle the development of the Internet through Net Neutrality legislation, their lack of originality, etc.) I’m still getting such queries on a regular basis. I guess they’re less organized than I thought.

Sigh.

Every time a moron dies, two more take his place

The most idiotic analysis of net neutrality you’ll ever want to read has been prepared by obscure consultancy Ramp^Rate:

With the permanent barriers that the removal of net neutrality will erect for [gamers], the worst-case scenario includes three waves of change:

* One or more mainstream ISPs will introduce excessive lag that will effectively prohibit their users from participating in online games. The move will not be aimed at restricting usage per se, but rather to extract a fee from the game operator…

* Hardcore users will write strongly worded messages to their ISPs, who will classify them as unreasonable malcontents using more than their share of bandwidth.

For those who think this cannot happen, here’s a recent example: For years before the Web as we know it existed, Usenet was a core part of the Internet landscape. It was a factory for online discussion, exchange of ideas, and, ultimately, one of the better bulletin boards for content of all shapes and forms. However, as the Internet became mainstream, Usenet users were marginalized (typically with “cease and desist” letters citing excessive use of “unlimited” internet packages). Their Usenet services were then unceremoniously dumped by their providers (AOL and Comcast being two of the more notorious).

Where there was a substitute for Usenet through services such as Google or BitTorrent, there is no close substitute for online gaming.

Wow, that’s heavy. Let’s take on the history part first. Usenet is a bandwidth hog for ISPs even if none of their customers use it, because maintaining a Usenet (NNTP) server requires the ISP to process all the new posts on all the Usenet groups as they’re made. As Usenet reached the end of its useful life, it became a vehicle for copyright theft and the distribution of malicious code. So at a certain point, AOL decided not to carry it any more. Comcast still provides Usenet service, so that part of the article is simply false.

As for the paranoid conspiracy claim that ISPs will introduce latency in order to extract fees: they’d hardly need to. Latency and jitter increase on any packet network as load increases; that’s how these networks work. So if more people are downloading video files while their neighbors are seeking The Sword of a Thousand Truths, and the ISP isn’t willy-nilly adding more bandwidth to accommodate them, everybody’s latency and jitter will increase automatically, no conspiracy required.
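
A little queueing arithmetic shows why. The numbers below come from the textbook M/M/1 model, used here purely for illustration rather than as a measurement of any real ISP link, and the service rate is made up; the point is only that average delay grows explosively as a link’s utilization approaches 100%.

```python
# Textbook M/M/1 delay formula: W = 1 / (mu - lambda). "service_rate" is
# an invented link capacity; the shape of the curve is what matters.
service_rate = 1000.0  # packets per second the link can forward

for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
    arrival_rate = utilization * service_rate
    avg_delay_ms = 1000.0 / (service_rate - arrival_rate)
    print(f"load {utilization:.0%}: average delay ~ {avg_delay_ms:.1f} ms")
```

At 50% load the queue adds a couple of milliseconds; at 99% it adds a hundred, and jitter rises along with it.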

The best way around this is some sort of usage-sensitive pricing in which users who place heavy loads on the network pay enough to fund increases in network bandwidth. Make no mistake about it: every network has finite capacity, and heavy users aren’t just taking bandwidth away from the ISP, they’re taking it away from the other people who use the network. So those who need low latency should be able to pay for it, those who need massive file transfers should be able to pay for that, and the average, normal, garden-variety web surfer shouldn’t have to subsidize them.

But net neutrality legislation forbids usage-sensitive pricing. The common provision in the five NN bills is a ban on service plans that provide packet prioritization for a fee, and that ban itself is the main threat to gaming. Anyone who understands how we ensure QoS for quirky applications like VoIP, gaming, and yes, real-time video streaming and conferencing, knows that prioritization is the key element.
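
For readers who want to see what “prioritization” means mechanically, here is a minimal sketch of a strict-priority scheduler. The traffic classes and their rankings are invented for illustration, and real routers do this in hardware with far more sophisticated queueing disciplines; the point is simply that when a link is busy, a small voice or gaming packet can be sent ahead of queued bulk transfers, which is exactly the service the bills would forbid selling.

```python
import heapq

# Invented class priorities; lower numbers are served first.
PRIORITY = {"voip": 0, "gaming": 0, "web": 1, "bulk": 2}

queue, seq = [], 0

def enqueue(kind, payload):
    global seq
    heapq.heappush(queue, (PRIORITY[kind], seq, payload))
    seq += 1  # sequence number keeps ordering stable within a class

enqueue("bulk", "file chunk 1")
enqueue("bulk", "file chunk 2")
enqueue("voip", "20 ms voice frame")

while queue:
    _, _, payload = heapq.heappop(queue)
    print(payload)  # the voice frame goes out first, then the file chunks
```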

The best solution to the dilemma that gamers pose to ISPs is to allow the ISPs to charge them more than normal web surfers and in return to provide them with the appropriate QoS. It’s ridiculous to demand a wholesale upgrade of the entire Internet access network to support this one application while refusing to allow broadband carriers to recoup their investment in the upgrades that are actually needed.

This report didn’t advance the debate on NN; it simply reinforced the ignorance and mendacity that’s motivated it so far.

UPDATE: See Adam Thierer at the PFF Blog for a more detailed economic analysis of gaming, and read the comment by reader MnZ:

Napster made the problem of network jitter go through the roof. My roommates called Time Warner to complain several times. The Time Warner representatives said that they were trying to add more capacity, but Napster was filling up capacity as quickly as Time Warner could add it.

One overly honest representative at Time Warner told them something interesting. While online gamers were some of the first adopters of cable modems, they were a relatively small fraction of total cable modem subscribers. Moreover, online gamers were some of the most difficult subscribers for Time Warner to satisfy. Finally, online gamers paid no more than any other subscriber for cable modem service. In other words, Time Warner had no incentive to fix the network jitter that online gamers were experiencing.

That’s the size of it.

Netroots Legislative Agenda

I like a good fight, no matter who’s fighting. Matt Stoller, the MyDD blogger who’s wasted so many electrons on the dubious cause of net neutrality, wrote a post immediately after the recent election in which he declared that the “netroots” legislative agenda begins and ends with his pet cause. A somewhat more serious thinker, Bob Fertik, quickly listed 140 agenda items and asked his readers to vote on them; his list includes things like raising the minimum wage, signing Kyoto, restoring habeas corpus, and all that sort of trivia. Net neutrality came in at number 14. Here’s the explanation:

Bloggers who work mainly with text and photos (and that’s most political blogs) could blog without net neutrality; it would mainly affect video bloggers since they consume far more bandwidth, and that’s what the monopoly gatekeepers want to tax.

But Bloggers couldn’t do what we do without the First Amendment…

Now that seems awfully sensible, especially for somebody who drinks the Kool-Aid. Why is it that Stoller has such a hard time keeping things in perspective?

Oh joy

The Citizen Journalist meets the Citizen Engineer and soon we’ll be drowning in data:

The new NewAssignment.net site launches today and Tom Evslin writes about a very real networked journalism project to find whether there are the smoking guns of network (non)neutrality lurking in our ISP wires.

We’ve already seen network neutrality discrimination claims made by Craig Newmark that turned out to be caused by the odd configuration of his equipment, discrimination claims that turned out to be temporary service outages, and, in Canada, discrimination claims that turned out to be service offerings. When the citizen engineer/jour-analyst starts looking at packet delay data, no doubt every traffic-related variation in delivery times will be linked to the latest Evangelical gay sex scandal, Saddam’s WMD program, Ed Whitacre’s sexual preferences, and the price of soybean futures.

The trouble with citizen efforts at skilled professions isn’t a dearth of data; it’s the inability to interpret the data according to rational standards.

This is going to be fun to watch.

Deregulator’s Essay

The Progress and Freedom Foundation has published an essay based on the comment that the great Alfred Kahn originally left on their blog. It’s eminently worth reading, as we’ve said before, and here’s the conclusion:

Why all the hysteria? There is nothing “liberal” about the government rushing in to regulate these wonderfully promising turbulent developments. Liberals of both 18th and 20th–and I hope 21st–century varieties should and will put their trust in competition, reinforced by the antitrust laws–and direct regulation only when those institutions prove inadequate to protect the public.

There is no need to rush in and start regulating the Internet based on nothing but suspicion that bad things are in the offing. When and if we see some actual bad practices on the part of the telcos (or on the part of Google and Yahoo, let’s be fair), Congress can take appropriate action, whatever that is. Acting on the basis of suspicion, and with a heavy regulatory hand, will only harm the Internet. And we don’t want to do that, right? So chill, people.

The great deregulator speaks on net neut

Alfred Kahn deregulated airlines and trucking in the US, and he’s not feeling the love for net neutrality regulations:

Some 25 years ago, I thought it was logical to try to prevent cable television companies, as beneficiaries of exclusive territorial franchises, from discriminating against unaffiliated suppliers of programming in favor of their own by prohibiting broadcasters holding a financial interest in the programs they carried. I eventually recognized, however, the public benefits from the especial incentives of the several broadcasters to produce programming of their own, as well as to bid for independent programming, in competition with one another; and that that competition sufficiently protects independent providers from discrimination or exploitation. If Google and eBay depend upon the telephone and cable companies for reaching their audiences, that dependence is mutual: what would happen to the willingness of subscribers to sign up for DSL or cable modem service if one or the other of those suppliers decided not to carry Google or eBay?

Demonstrably, those broadband facilities have to be created by investments — especially huge ones by the telephone companies — and applications requiring priority transmission can entail lower priority transmission of others. Except as broadband service is subsidized by governments — a possibility I do not exclude — those costs must be collected from users — subscribers to broadband services, on the one side, providers of programming or content on the other, or some combination of the two — just as in the case of newspapers or television stations.

Why all the hysteria? There is nothing “liberal” about the government rushing in to regulate these wonderfully promising turbulent developments.

If you’re interested in the Internet’s future, read the whole thing; it’s a comment on the Progress and Freedom Foundation’s blog.

Microsoft out of It’s Our Net, for now

Broadcasting and Cable has this statement from Microsoft about that company’s dropping out of the ironically named “It’s Our Net, Not Yours” regulatory coalition:

“Microsoft has withdrawn its name from the It’s Our Net website for the pendency of the AT&T-Bellsouth merger proceeding based on a company decision not to engage the proceeding,” the company said in a statement. “However, we continue to support and will pursue other opportunities to obtain meaningful Network Neutrality policies.”

Google and its minions are trying to use the Justice Department to advance their anti-democratic net neutrality program, and even for Microsoft that’s going too far. Let’s hope they never re-join.

Scott Cleland and PFF had noticed Microsoft’s name was gone from the It’s Our Net website, and this is why.