Net Neutrality is Intelligent Design for the Left

Traditional values are under attack. The old ways are in decline; people insist on more freedom of choice than their grandparents had. The received wisdom about the very structure and organization of our world has been taken apart by science, and the only way to put it back together is to use the language of science to prop it up. So obscure terms and concepts enter public discourse with new meanings, and an alternative history is created. Fear runs rampant, men of science are assaulted for their moral bankruptcy, and an alternative science is injected into the schools.

While this paragraph clearly fits the “intelligent design” attack on the biological sciences in general and evolution by natural selection in particular, it also applies to “network neutrality”, the idea that only the government can be trusted to dictate policies embedded in Internet access networks. One of the starkest presentations of this viewpoint was written by Jeff Chester for the left’s holiest organ, The Nation magazine:

Absent net neutrality and other safeguards…[b]roadband connections would be governed by ever-vigilant network software engaged in “traffic policing” to insure each user couldn’t exceed the “granted resources” supervised by “admission control” technologies. Mechanisms are being put in place so our monopoly providers can “differentiate charging in real time for a wide range of applications and events.” Among the services that can form the basis of new revenues, notes Alcatel, is online content related to “community, forums, Internet access, information, news, find your way (navigation), marketing push, and health monitoring.”

Missing from the current legislative debate on communications is how the plans of cable and phone companies threaten civic participation, the free flow of information and meaningful competition. (ed: emphasis added)

Note the use of the terms in scare quotes: traffic policing, granted resources, and admission control. These are technical terms from the world of network engineering, and we can be sure Chester doesn’t use them because he wishes to illuminate their importance in the engineering context. He doesn’t bother to define or explain them, but takes it as given that any such words can only be destructive to “civic participation.”

The juxtaposition of network engineering language with social policy language is deceptive and inane. It’s like arguing that an electrical grid that provides alternating current to the home is responsible for politicians who flip-flop between positions depending on what audience they’re addressing. There’s a superficial similarity, indeed, but that’s where the connection ends.

We use admission control and policing on WiFi networks with something called Wi-Fi Multimedia (WMM) so that telephone calls and live video streams can happily coexist with web surfing on the same wireless network. Wireless networks don’t have unlimited bandwidth, so we have to use some finesse to provide a satisfactory experience to as many people as possible over a common network channel. Network engineers don’t do this in order to stifle democracy and curb “the free flow of information;” on the contrary, enabling as many people as possible to use the network as they wish has the effect of enhancing free information flow. When we use “admission control” and its related priority system on a WiFi network, we can handle four times as many phone conversations as we could without them. Doesn’t a phone call to a politician count as “civic participation” any more?
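Admission control is simple enough to sketch in code: before a new voice call is admitted, the network checks whether enough capacity remains to carry it without degrading the calls already in progress. Here’s a minimal sketch of the idea; the capacity and per-call numbers are illustrative, not WMM’s actual parameters:

```python
# Sketch of call admission control: admit a new voice stream only if the
# channel can still carry it alongside the streams already admitted.
# Numbers are illustrative, not actual WMM parameters.

CHANNEL_KBPS = 1000   # usable channel capacity
VOICE_KBPS = 90       # bandwidth one voice call consumes

class AdmissionController:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def request_call(self):
        """Admit the call if capacity remains; otherwise refuse it."""
        if self.reserved + VOICE_KBPS <= self.capacity:
            self.reserved += VOICE_KBPS
            return True   # admitted, with its resources guaranteed
        return False      # refused -- better than degrading every call

ac = AdmissionController(CHANNEL_KBPS)
admitted = sum(ac.request_call() for _ in range(20))
print(admitted)  # 11 calls fit; the rest are politely refused
```

The refused callers get a busy signal instead of a garbled conversation, which is the whole point: without the check, the twelfth call ruins all twelve.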

Have we now come to a state where the aesthetics of engineering language guide public policy more than the effects of engineering practice? I hope not, but we’re getting closer.

Chester admits that he wants the government to own Internet access networks, or failing that, to control them:

That means we would become owners of the “last mile” of fiber wire, the key link to the emerging broadband world. For about $17 a month, over ten years, the high-speed connections coming to our homes would be ours–not in perpetual hock to phone or cable monopolists.

Chester says, in essence, that the government is more likely than anyone else to control the Internet in such a way that the public can engage in criticism of government actions and policies. I see no empirical evidence that would encourage me to accept this article of faith, and plenty that argues for its rejection.

In our recent experience, we got to see head-to-head competition between a highly-regulated DSL service and a minimally-regulated Internet access system using cable TV systems, and the less-regulated option emerged as the clear technical winner.

Chester uses the language of science to urge us to ignore experimental data, and that’s as fundamentally unscientific and irrational as it gets.

For extra bonus points, his Center for Democracy and Technology minion Mike Godwin urges network neutrality regulation on the premise that the New York City taxi system is a transportation utopia. That’s just silly, and Matt Sherman explains why.

Wikiality of Net Neutrality

The current Wikipedia entry for Net Neutrality has a pretty good intro, but you never know how long these things will last:

Columbia University law professor Tim Wu popularized the phrase network neutrality as a term designating a network that does not favor one application (for example the World Wide Web) over another (such as online gaming or Voice over IP).[1] Wu claims that the Internet is not neutral “as among all applications” as it favors file transfer over real-time communication.

The concept of network neutrality has since taken the form of various regulations proposed to govern Internet communications, including commercial interconnection agreements between Internet Service Providers (ISPs), carriers, on-line service providers, and broadband users, usually on the basis of principles of public service obligations associated with special access to public rights of way. In this sense, network neutrality means a state in which Internet providers provide interconnection services on a uniform basis, or “without discrimination”, although there is considerable disagreement about how this principle applies to applications with different needs.

Network neutrality is sometimes used as a technical term, although it has no history in the design documents (RFCs) describing the Internet protocols. In this usage, it is claimed to represent a property of protocol layering in which higher-layer protocols may not communicate service requirements to lower-layer protocols, a highly idiosyncratic interpretation of protocol engineering. (In conventional network engineering practice, each protocol in a layered system exposes Service Access Points to higher layers that can be used to request a level of service appropriate to the needs of higher-layer protocols.)

Network neutrality also designates a contemporary controversy local to the United States regarding the role that government should take relative to Internet access providers providing multiple levels of service for different fees. This controversy, which emerged following regulatory developments in the United States, is extremely complex, as it mixes technical, economic, ideological and legal arguments. In essence, network neutrality regulations proposed by Senators Snowe and Dorgan and Representative Markey bar ISPs from offering Quality of Service enhancements for a fee.

This framing is showing up in some of the recent essays on the subject, such as this one from Tech News and this one by Susan Davis for Hosting News.

High-traffic articles in Wikipedia tend to degrade over time and require reformulation as entropy increases.

UPDATE: That didn’t take long. A Google sympathizer going by the name “Wolfkeeper” tried to erase the summary and replace it with a pithy personal opinion. See the history page.

How much bandwidth is enough?

Reader Dave Johnson argues that QoS isn’t necessary on big packet networks because the carrier can simply provision the network adequately to carry all possible traffic at once:

If an ISP has ample internal and outgoing bandwidth, as in more than enough to provide for the sum total of their customers allocations, then where does that leave the latency issue? The only way packets are delayed or refused is because there is not enough capacity on the destination wire. If there is enough capacity then *where* is the problem? Customers are by definition limited in the amount of data (under any protocol) that they can send, so all quantities are known.

As idiotic as this sounds, it’s a common Urban Myth, originally expressed in the works of David Isenberg, if not elsewhere. Bandwidth is free, you see, so just hook me up and don’t worry about it.

OK, let’s play along. How much bandwidth does an ISP have to have on its internal network to allow all of its customers to use all the bandwidth in their hookups all the time? Verizon’s FIOS customers have 100 megabit/sec connections, and there are 375,000 of them. So all Verizon needs for “ample” bandwidth inside its own network is a 37.5 terabit/sec switch (a terabit being a million million bits per second), and a similar-sized connection to the public Internet.

Of course, that kind of bandwidth doesn’t exist.

Will it in the future? Maybe, but by then, instead of worrying about 375,000 customers on the Verizon network, we’ll be worrying about 200 million Americans with 100 megabit/sec each. That adds up to 20,000 terabits/sec. I don’t see any switches capable of handling that load on the horizon, of course. This is a ridiculous exercise, and I only do it because the argument from the hyper-regulation side is so lame.

Now let’s assume that ISPs can cap each user’s transport at some level per day, week, or month. Does that alter the arithmetic above? Actually, no, because you still have to design for peak load. If everybody wants to download American Idol at the same time, you need to accommodate that, so that’s where we are.

The fastest datalinks we have today run at 40 gigabits/sec. So let’s take a bunch of them and bond them together to get a 20,000 terabit/sec pipe; we only need 500,000 of them. Supposing we can build a switch that handles 20 such pipes (not really practical today, because of limitations on bus speeds, but let’s be generous), we need 25,000 of them. But now how do we interconnect these switches to each other? We could wire them into a big mesh, but then we’re playing with probabilities again, betting that no combination of users will over-use the path between one switch and another. So we’ll have to add another level of switches to enable each end-user to reach each end-user through any intermediate switch, and there will be a lot of these. Somebody has to pay for all these switches, because even if they were cheap (and they aren’t), they’re not free.

This is why QoS is needed: “more bandwidth” only works up to the economic and physical constraints on bandwidth, both of which are real.

So here’s the lab problem in summary: the fastest pipes we have are 40 gigabits/second. How many of them, and in what topology, do you need in order for 100 million users to transmit 100 megabits/second of traffic with no delay?
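The arithmetic behind the lab problem is easy to check for yourself:

```python
# Back-of-the-envelope check of the "just add bandwidth" numbers above.
GBIT = 1e9
MBIT = 1e6

# Verizon FIOS today: 375,000 customers at 100 megabit/sec each
fios_total = 375_000 * 100 * MBIT
print(fios_total / 1e12)      # 37.5 terabits/sec

# The future: 200 million Americans at 100 megabit/sec each
national_total = 200e6 * 100 * MBIT
print(national_total / 1e12)  # 20000.0 terabits/sec

# Bonded 40 gigabit/sec links needed to carry that load
links = national_total / (40 * GBIT)
print(int(links))             # 500000 links

# 20-port switches needed just to terminate those links
print(int(links / 20))        # 25000 switches
```

And none of this counts the second tier of switches needed to mesh the first tier together.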

Carriers Seek IP QOS Peers

Ha ha hee hee ho ho, here we go on our way to a better Internet:

Peering isn’t just for VOIP anymore.

Carriers are beginning to form peering arrangements by which they mutually honor each other’s QOS requirements at the transport layer. (See VOIP Cuts Out Middlemen.)

That’s one of the findings of a new Heavy Reading report titled “VOIP Peering & the Future of Telecom Network Interconnection.”

“They need to go beyond VOIP peering to support their customers, principally their enterprise customers,” says John Longo, the Heavy Reading analyst who wrote the report. “They want to provide voice services as well as their own advanced applications end-to-end for their customers.”

See, it wasn’t so hard after all.

H/T Matt Sherman.

The Rise of the Self-Contradictory Network

Re-reading my Berkman Center slanderer David Isenberg’s seminal paper The Rise of the Stupid Network, I was struck by the contradictory nature of the two paragraphs at the heart of the polemic.

First, he says his “Stupid Network” is aware of the types of messages presented to it, and handles each with appropriate service:

[In] the Stupid Network, because the data is the boss, it can tell the network, in real time, what kind of service it needs. And the Stupid Network would have a small repertoire of idiot-savant behaviors to treat different data types appropriately. If the data identified itself as financial data, the Stupid Network would deliver it accurately, no matter how many milliseconds of delay the error checking would take. If the data were two-way voice or video, the Stupid Network would provide low delay, even at the price of an occasional flipped bit. If the data were entertainment audio or video, the Stupid Network would provide wider bandwidth, but would not necessarily give low delay or absolute accuracy. And if there were a need for unique transmission characteristics, the data would tell the Stupid Network in more detail how to treat it, and the Stupid Network would do what it was told.

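The behavior Isenberg describes here is, as it happens, a mechanism IP already has: the DiffServ field lets an application mark its own packets with the handling it wants from the network. A minimal sketch, assuming a Linux host and a UDP socket for voice traffic:

```python
# The data "telling the network what kind of service it needs" already
# exists in IP: applications can set a DiffServ code point on their packets.
import socket

EF = 0x2E  # "Expedited Forwarding" -- the DSCP conventionally used for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the DSCP in its upper six bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
# Every datagram sent on this socket now asks the network for low-delay
# treatment; whether the routers along the path honor the request is
# another matter entirely.
```

Whether carriers may charge for honoring such markings is precisely what the neutrality bills would regulate.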
Yet this is the very behavior that Net Neutrality laws would ban carriers from embedding in their fee agreements, and moreover it contradicts the very next paragraph:

You would not have to ask your Stupid Network provider for any special network modifications – its only function would be to, “Deliver the Bits, Stupid.”

Network neutrality advocates say true neutrality is simply delivering the bits, first-come, first-served, while delivering the bits in ways that are sensitive to application needs is blasphemy, monopoly, price-gouging, and extortion. See Susan Crawford for an example of the “bits is bits” point of view:

There are lots of people out there saying “we need to treat all VoIP alike, all video alike, and all blogs alike.” For them, that’s network neutrality. That’s not what I hope we’ll end up meaning by net neutrality. That would require a heavy-handed regulator enforcing a provider’s determination of what packets are “like” other packets. I am not in favor of that approach. I have a different vision. I hope, someday, we’ll treat broadband access like the utility it is. That would mean separating transport from other activities, and separating access from backbone and backhaul transport. That doesn’t require a great deal of discretion to repose in any particular actor.

Yesterday’s debate at PDF seemed to be focused on the fuzzier definition of network neutrality (“treat all VoIP alike”). That definition plays directly into the arguments of the telcos. It would give the FCC an enormous amount of discretion and power.

(UPDATE) In a subsequent re-write of this article, Isenberg came over to the Crawford side and abandoned the special treatment idea:

Intelligent Network advocates point out that networks need to treat different data types differently. Right now, they’re absolutely correct. There is a network for telephony, another network for TV, and proprietary leased-line networks for financial transactions – and none of these are ideal for public Internet traffic. You need to have low delay for voice telephony, the ability to handle megabit data streams with ease for TV, and low error rates and strong security for financial transactions.

Quality of Service (QOS) is an intermediate step in the journey from separate networks to a single, simple Stupid Network…

But suppose technology improves so much that the worst QOS is perfectly fine for all kinds of traffic, without a repertoire of different data handling techniques. Suppose, for example, that everyday normal latency becomes low enough to support voice telephony, while at the same time allowing enough capacity for video, plus data integrity strong enough for financial transactions. This would be a true Stupid Network – one treatment for all kinds of traffic.

Why would anybody build a network to transport raw bits without packet inspection? Isenberg can see no reason:

One thing about the Stupid Network is clear – the physical elements that comprise the network would be neither expensive nor scarce. There would be little profit margin in shipping dumb bits. There would be lots of high value Business Ideas supported by the Stupid Network, above and beyond transport.

As I read that, he’s justifying the Telco program to pay for the network by selling services. That’s a “high value Business Idea” instead of a low-profit transport business.

And indeed, Isenberg has come to recognize that nobody will build a high-speed, stupid network simply to carry bits, as there would be no money in it:

The best network is the hardest one to make money running.

So this realization ultimately leads to the real end-goal of network neutrality: broadband Internet access networks should not be built by private companies, they should be built by government and maintained as public utilities. The goal of network neutrality legislation, then, should be to discourage private investment in broadband networks, the quicker to energize local governments to jump into the networking business.

The end of Evslin’s talk was all about doing that in Santa Barbara, CA, where Doc Searls is on the case.

How to do VoIP over the Internet

Tom Evslin’s talk yesterday at the Harvard Law School’s Berkman Center (download MP3 here) on net neutrality was interesting for a couple of reasons. I’m apparently Public Enemy Number 1 among the champions of freedom. No sooner did Tom mention my name (the first part of his talk was an attempt to rebut my concerns about the Snowe-Dorgan regulations) than some yahoo jumped up and started screaming that I’m nothing but a troll, not even an engineer, and just a puppet of the phone company. I believe said yahoo was David Isenberg, the creator of the “Stupid Network” meme and a genius at self-promotion. He interrupted Tom several more times with some fairly crazy ideas, and Tom finally had to shut him up.

I’ve been critical of Isenberg and this “stupid network” idea for at least three years, so this whole “tool of the Telcos” thing is actually quite hilarious. Am I an engineer? Well, I have a philosophy degree and some graduate work in network engineering, have my name on several networking standards and patents, have started, contributed to, and led several networking vendor groups, and my job title has the word “engineer” in it, so I’d have to say actually, yes, I am an engineer. (I wonder how many engineers were in that audience.) As to the “troll” charge, that’s in the eye of the beholder. I don’t tend to follow the herd, if that’s what it means, but I believe I offer substantial arguments most of the time. I’d certainly compare the corpus of my engineering work to Isenberg’s any old time.

In the course of the talk, Tom maintained that his success with ITXC, the first VoIP wholesaler, proved that the Internet doesn’t need any special mechanisms to provide high-quality voice delivery. And indeed, this is sometimes true.

While the Internet is not a neutral network, having been designed to transfer files, at the margins it can do some real-time carriage, sometimes reliably, especially if only one or two companies are exploiting it. But according to ITXC’s own claims, the general user can’t do what they do; you need their patented BestValue Routing:

ITXC achieves high quality of service through the use of its patented BestValue Routing applications. These applications were specifically designed and developed by ITXC to deliver consistent, high quality call completion over the Internet. Without BestValue Routing applications, carrier class quality of service over the Internet would not be possible.

Using proprietary routing and re-routing algorithms, equipment placed throughout the Internet, and sophisticated network overview software, ITXC is able to maintain high quality on the public Internet.

The use of BestValue Routing applications differentiates ITXC from clearinghouses and telecom commodity brokers that are merely financial intermediaries between network providers. Committed to quality, ITXC does not treat its customers’ minutes as commodities, but instead as a precious resource.
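ITXC’s actual algorithms are patented and proprietary, but the general idea of quality-aware route selection is easy to sketch: probe the candidate gateways with test traffic, then send each call over whichever one is currently delivering the lowest latency. The gateway names and the random probe below are invented for illustration; a real system would measure live paths:

```python
# Illustrative sketch of quality-aware route selection: pick the gateway
# with the best measured latency right now. Gateways and measurements here
# are hypothetical stand-ins, not ITXC's actual system.
import random

GATEWAYS = ["gw-newark", "gw-london", "gw-frankfurt"]

def probe_latency_ms(gateway):
    """Stand-in for a real probe of current latency via this gateway."""
    return random.uniform(20, 200)

def pick_route(gateways):
    """Route the call over whichever gateway is performing best right now."""
    measurements = {gw: probe_latency_ms(gw) for gw in gateways}
    return min(measurements, key=measurements.get)

best = pick_route(GATEWAYS)
print(best)  # whichever gateway measured lowest this time
```

Note what this is: intelligence placed in the network to route around the public Internet’s indifference to latency.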

And what happens to you if you try and use these special methods that put intelligence into the network? I’m very glad you asked:

PRINCETON, N.J. –(Business Wire)– May 7, 2004 — ITXC Corp. (NASDAQ:ITXC) today announced that it has filed a lawsuit against Acceris Communications Technologies, Inc. and Acceris Communications, Inc. for infringement of a number of ITXC’s patents relating to voice over internet protocol (“VoIP”) technology. The suit was commenced in the United States District Court for the District of New Jersey, and alleges infringement of five of ITXC’s United States Patents, numbers 5,889,774, 6,026,087, 6,404,864, 6,628,760 and 6,661,878, directed generally to the transmission of telephone calls over the Internet and the completion of telephone calls by switching them off the Internet and onto a public switched telephone network (PSTN).

That’s right, you’re busted. So don’t try this at home.

The Internet is not a neutral network; it was designed to do one thing well. We now know how to build networks capable of supporting more than one type of application well. So if you think the Internet generated great innovation, just imagine what you could get from four Internets in one. But we can’t deploy multi-services technology if the Snowe-Dorgan bill passes, and that’s why I fight it.

And yes, I also think David Isenberg is an asshole and a moron, so beating him is part of the fun, but only a small part.

UPDATE: Jim Lippard points out that Acceris fired the first shot to defend their patent on VoIP. Here’s that patent in all its glory.

Tom Evslin answers my questions

I left some questions for Tom Evslin on the talk he gave today at Berkman on neutrality regulations. Tom has a technical background, and while he’s not come out in favor of new regulations, he appears sympathetic to the arguments for them. Check his responses at Fractals of Change.

I’ll post some reactions later, but suffice it to say his first comment is the most interesting. It goes into what seems to me like a very gray area for end-to-end: selecting routes through the public Internet for low latency. In general, the end-to-end network structure doesn’t allow that, as routes are supposed to be the network’s business. We need to understand that and what it implies.

UPDATE: I’ve responded to Tom’s take on “application neutrality”. He gave a VoIP service he started as an example of using “the stupid network” to accomplish different things than what its architects envisioned, but it seems to me it proves the opposite case: a multi-service network promotes innovation better than a single-service one does. My response is here.

The great debacle

The Register has published a number of reader comments on the Net neutrality debacle, er, debate, one of whom takes me to task:

TCP/IP’s ‘end-to-end’ nature (what used to be called a “connectionless network layer”) – far from being the consensual fashion of the time – was ferociously denounced by communications experts. They basically said it was just the sort of rubbish you’d expect with a communication system invented by computer people: it obviously wouldn’t scale.

Indeed, there have always been protocol wars, and they’ve generally pitted computer people against networking people. The TCP Internet was a radical departure from networking principles as they were understood in the 1970s, and it turns out most of the criticisms from networking people were sound. The TCP Internet is great for handling one type of traffic – store and forward file transfers – and not so great for handling other types of traffic, such as real-time voice and video. As the latter is now more important than the former, the Internet needs to be tweaked again, and not for the first time.

Read the comments at The Reg, some are quite interesting.

Subsidizing Google

Net Neutrality, as conceived in the Snowe-Dorgan bill and similar measures, is a subsidy to large bandwidth consumers such as Google and Yahoo. This is not to say that these companies don’t pay anything for Internet bandwidth today so much as it is to point out that whatever they pay (and it’s far below the market rates that you and I pay) doesn’t cover Quality of Service. Let me back up a little and explain what this is about.

The Telcos – principally Verizon, but to a much lesser extent AT&T and Bell South – are in the process of transitioning from old-fashioned DSL to systems that will enable them to offer cable-rivaling triple-play service. This means TV, phone, and Internet over a common wire. This can be done as the cable companies have done it, by partitioning the cable into slices of frequency for the different services; it can be done using strict time-division multiplexing a la the old days; or, even better, it can be done by running IP across the entire bandwidth of the cable and segregating services with packet priorities. The last means is the best way because in principle it lets you use your entire cable for Internet downloads when you aren’t using higher-priority services.
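Segregating services with packet priorities comes down to a scheduler on the wire: voice goes first, TV next, and bulk Internet traffic soaks up whatever the others leave idle. A toy model of such a strict-priority link:

```python
# Toy model of one wire carrying triple-play traffic segregated by
# packet priority: lower number = higher priority.
import heapq

PRIORITY = {"voice": 0, "tv": 1, "internet": 2}

class PriorityLink:
    def __init__(self):
        self.queue = []
        self.seq = 0   # tie-breaker preserves FIFO order within a class

    def enqueue(self, service, packet):
        heapq.heappush(self.queue, (PRIORITY[service], self.seq, packet))
        self.seq += 1

    def transmit(self):
        """Send the highest-priority packet waiting on the wire."""
        _, _, packet = heapq.heappop(self.queue)
        return packet

link = PriorityLink()
link.enqueue("internet", "web page")
link.enqueue("tv", "video frame")
link.enqueue("voice", "phone sample")
print([link.transmit() for _ in range(3)])
# → ['phone sample', 'video frame', 'web page']
```

When no voice or TV packets are queued, the Internet class gets the whole wire, which is exactly the advantage over carving the cable into fixed frequency slices.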

The natural assumption is that they would be entitled to charge fees based on the service level a customer chooses, just as the cable company does today. I can buy phone, TV, and Internet from Comcast separately or together. If the technology that Comcast uses to segregate these services changes, the economics don’t, inherently, so they should be allowed to continue pricing these services separately even if they’re all delivered over some form of Internet Protocol.

Google doesn’t think so, because they want their video and phone service to perform just as well as Comcast’s or Verizon’s without their having to pay anything more than base level Internet connection service prices.

So net neutrality amounts to this: even though Comcast or Verizon may charge their customers more for TV and phone service than they do for basic Internet service, Google should never have to pay more than a basic Internet connection fee for accessing Comcast customers at the highest level of service. Google wants to pay for basic cable and get HBO for free.

That’s what “net neutrality” really means. Mike McCurry wrote an op-ed for the Baltimore Sun that expresses this idea in a less-than-clear fashion, implying that Google doesn’t even pay for basic cable, and the boyz at Techdirt whacked him around for it:

[McCurry’s] written up an editorial for the Baltimore Sun that doesn’t bother to mention his lobbying duties, or who has funded them. McCurry tries to make it seem as though the whole net neutrality thing is simply a ploy by Google to get “free” bandwidth. He notes, derisively, that “a $117 billion company like Google wants legislation that would drive Internet prices higher.” Of course, he doesn’t happen to mention that his viewpoint is funded by AT&T, who at close of business on Monday appears to be worth (oh, look at that) $117 billion as well.

Leaving aside the “grass-roots purity” angle, which is silly in a fight where Google’s side is represented by paid bloggers at Save the Internet, Techdirt has at best a technical point, not a substantial one. Google is seeking a subsidy, make no mistake about it.

And if that subsidy becomes law, we can more or less forget about any significant upgrades in the last mile, because there will be nobody to pay for them.

UPDATE: For the record, here’s the relevant part of Snowe-Dorgan:

(5) only prioritize content, applications, or services accessed by a user that is made available via the Internet within the network of such broadband service provider based on the type of content, applications, or services and the level of service purchased by the user, without charge for such prioritization;

Emphasis added.