An article of faith

The Net Neutrality crowd is motivated by a desire to promote innovation. They firmly believe that such things as blogs, blog aggregators, on-line auctions and dating services like MySpace are innovative. While it’s not fruitful to argue with this point of view, its background is questionable.

The early Internet, you see, was justified as a playpen for experimentation with packet switching. Kahn and Cerf argued that keeping traffic control out of the network interior would enable researchers to experiment with protocols that could be easily implemented outside the network. They were motivated by the belief that Ethernet made traffic management moot, so the only field where they could hope to distinguish themselves was in the protocols and applications that used the Ethernet. They wanted the Internet to be Ethernet on a larger scale.

This dubious assumption gave rise to a religious dogma: “The ‘smarter’ the network, the less innovation on top of it.” I recently encountered this on the Lessig Blog, a veritable wonderland of Kool-Aid consumption, and you can find similar sentiments on David Isenberg’s blog, where he promotes his idea of “Stupid Networks” as great stimulators of innovation. Isenberg once worked for a phone company, but not in a technical capacity.

This article of religious faith isn’t provable. Let me show you why with an example.

The old Ethernet was a simple, dumb network. It provided a single speed and a single priority, leaving access control to the end-user nodes through a totally distributed CSMA/CD scheme. As I said, it was the architectural model that Kahn and Cerf borrowed for TCP/IP.
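For the curious, the heart of that distributed CSMA/CD scheme is binary exponential backoff. Here’s a toy sketch of it (the names are mine, not from any standard’s source): after the n-th collision a station waits a random number of slot times drawn from [0, 2^min(n,10) − 1], and gives up after 16 attempts.

```python
import random

MAX_ATTEMPTS = 16  # classic Ethernet drops the frame after 16 tries

def backoff_slots(collision_count):
    """Slot times to wait after `collision_count` consecutive collisions."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    exponent = min(collision_count, 10)       # backoff window caps at 2^10
    return random.randrange(2 ** exponent)    # pick a slot in [0, 2^e - 1]
```

Note that every station runs this independently, with no knowledge of what anyone else wants to send; that randomness is the “dumbness” at issue.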

But the marketplace was offered a choice with the New Ethernet, the one that used active switches, twisted-pair and fiber optic cable, and multiple speeds. It centralized access to the network inside network switches instead of in end-user nodes. It offered VLAN overlays. The New Ethernet killed the Old Ethernet, completely and utterly.

Along comes WiFi, offering still more intelligent network services than even the New Ethernet. It does things that make the Internet shudder, such as mobility, and uses obscure features of the IP suite to prioritize traffic.

Does WiFi nurture innovation? Clearly it does, as it makes the entire realm of mobility-enabled applications possible and does cool things for voice and video.

So a careful look at the historical record says, no, dumb networks don’t promote innovation, they circumscribe it to the class of applications they can support. Like anything else in engineering, we should place network controls where they can do the most good, not where they can do the least harm.

Religion is not a good guide to engineering, folks, logic and evidence work much better.

Architecture of the Internet

Fans of network regulation wax teary-eyed over the perfection of Internet architecture as they understand it, but the inventors themselves are much less sentimental. Here’s RFC1958, Architectural Principles of the Internet, an argument for constant change:

In searching for Internet architectural principles, we must remember that technical change is continuous in the information technology industry. The Internet reflects this. Over the 25 years since the ARPANET started, various measures of the size of the Internet have increased by factors between 1000 (backbone speed) and 1000000 (number of hosts). In this environment, some architectural principles inevitably change. Principles that seemed inviolable a few years ago are deprecated today. Principles that seem sacred today will be deprecated tomorrow. The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely.

The purpose of this document is not, therefore, to lay down dogma about how Internet protocols should be designed, or even about how they should fit together. Rather, it is to convey various guidelines that have been found useful in the past, and that may be useful to those designing new protocols or evaluating such designs.

A good analogy for the development of the Internet is that of constantly renewing the individual streets and buildings of a city, rather than razing the city and rebuilding it. The architectural principles therefore aim to provide a framework for creating cooperation and standards, as a small “spanning set” of rules that generates a large, varied and evolving space of technology.

Some current technical triggers for change include the limits to the scaling of IPv4, the fact that gigabit/second networks and multimedia present fundamentally new challenges, and the need for quality of service and security guarantees in the commercial Internet.

As Lord Kelvin stated in 1895, “Heavier-than-air flying machines are impossible.” We would be foolish to imagine that the principles listed below are more than a snapshot of our current understanding.

Damn that must make Alyssa Milano’s head hurt.

Yoo vs. Crawford on Regulation

PBS’s smarmy NOW program has a mini-debate on its web site between law professors Christopher Yoo and Susan Crawford, in which they were asked five questions about Internet regulation. It’s pretty interesting to note that Crawford’s answers are highly emotional while Yoo’s are factual and well-reasoned. Some might say that’s the feminine style vs. the masculine style, but I think it’s a question of competence. Crawford is bluffing because she doesn’t understand the technical issues, while Yoo is telling it like it is.

Rethinking the design of the Internet

Rethinking the design of the Internet: The end to end arguments vs. the brave new world by David D. Clark and Marjory S. Blumenthal deals with the end-to-end concept of network architecture and its relevance to future networks.

The simple service model of the Internet (called “best effort delivery”) makes no guarantee about the throughput that any particular application will achieve at any moment. Applications such as file transfer, Web access, or e-mail are tolerant of fluctuations in rate—while a user may be frustrated by a slow delivery, the application still “works.” Today, a new set of applications is emerging, typified by streaming audio and video, that appear to demand a more sophisticated Internet service that can assure each data stream a specified throughput, an assurance that the best effort service cannot provide. Different approaches are possible, beginning with (re)design of applications to operate using only the current best effort service, perhaps by dynamically adjusting the fidelity of the transmitted information as the network throughput varies. At least some application designers reject this limitation on what they could design. Another approach would be to add new data transport services in the core of the network that provide predictable throughput and bounded delays, and there have been proposals along these lines. However, the Internet Service Providers (see below) have not so far been willing to provide these new services. As a result, application builders have adopted the strategy of installing intermediate storage sites that position the streaming content close to the recipient, to increase the chance of successful delivery. Thus, unlike a simple end to end structure, the design of these new applications depends on a two-stage delivery via these intermediate servers.

Yeah, those pesky real-time applications “appear” to have special needs, and the lazy old ISPs haven’t been willing to meet them.

Ahem.
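The first approach Clark and Blumenthal mention, applications dynamically adjusting fidelity to whatever throughput best-effort delivery happens to provide, can be sketched roughly like this (the rate ladder and function name are hypothetical, for illustration only):

```python
# Available encodings in kbit/s, richest first -- a made-up ladder.
LADDER_KBPS = [2000, 1000, 500, 250]

def pick_rate(measured_kbps, headroom=0.8):
    """Choose the richest encoding that fits within a fraction of
    the throughput the network is currently delivering."""
    budget = measured_kbps * headroom
    for rate in LADDER_KBPS:
        if rate <= budget:
            return rate
    return LADDER_KBPS[-1]  # degrade to the lowest rung rather than stall
```

This is exactly the design burden the quoted passage says some application designers reject: the network offers no guarantees, so the application must perpetually guess.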

Open Letter to the Neutrality Regulators

Dear Neuts,

Telco abuse of Internet users is hypothetical, not real. There has been only one documented instance of arbitrary service blocking in the USA, the Madison River case, which the FCC promptly stopped. In response to that case, the FCC devised its “four freedoms”.

These principles are part of the COPE Act, which also gives the FCC the authority to levy fines up to $500,000 per infraction. The Stevens Senate bill directs the FCC to do a study of provider abuses and report back.

The sensible approach to regulation, in this sphere or in others, is to draft broad principles, set up an enforcement regime, and let case law evolve.

Once we see some actual abuse that isn’t covered by these provisions, Congress will still be in the business of drafting laws and we’ll be able to go ask for whatever approach is necessary to solve the real problems. What you people are proposing is pre-emptive legislation that will most likely do to the Internet what Bush’s pre-emptive war did to Iraq.

I appreciate your sentiments, and firmly believe that you have good intentions. But I’ve worked with legislative bodies before and have seen the unintended consequences that can flow from legislation that’s guided by too much emotion and not enough information.

There’s no immediate crisis here, so the best course of action is simply to gather information. We all want a “neutral” network that enables innovation to flourish, and the fact that we’ve never really had one shouldn’t discourage us.

Networks are technical artifacts that improve with time, and even the Internet is not so perfect that we should freeze it.

In fact, the architects of the Internet made several design mistakes because of the model they chose to imitate, the early Ethernet. That system tried to do bandwidth management in a fully distributed manner with a clunky scheme of medium sensing, collision detection, and backoff. The Internet analogues are TCP’s slow start, congestion avoidance, and backoff.
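The TCP side of that analogy can be sketched in a few lines. This is a simplification (the real algorithm, standardized in RFC 5681, adds states like fast recovery), but it shows the same blind probe-and-back-off character: grow the window until something breaks, then retreat.

```python
def next_cwnd(cwnd, ssthresh, loss):
    """Return (cwnd, ssthresh) after one round trip, in segments."""
    if loss:
        ssthresh = max(cwnd // 2, 2)   # remember half the window at loss
        return 1, ssthresh             # collapse and restart slow start
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh      # slow start: exponential growth
    return cwnd + 1, ssthresh          # congestion avoidance: linear growth
```

Like CSMA/CD, the sender has no direct knowledge of the network’s state; it infers congestion only from its own losses.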

The early Ethernet model doesn’t work well under high load, and was abandoned in the late 80s after my colleagues and I on the IEEE 802.3 standards committee devised a scheme that ran Ethernet over twisted pair wiring into a hub or switch. It turns out that you can manage bandwidth better from a central point that knows who wants to do what when than you can in a totally random, distributed system. The system we devised is the Ethernet that we all use today.
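The contrast with central arbitration is easy to see in miniature. Here’s a toy model (my own illustration, not switch firmware) of a switch that knows every port’s queue and hands out transmit slots round-robin, with no collisions and no random backoff at all:

```python
from collections import deque

def schedule(queues, slots):
    """Serve `slots` transmit opportunities round-robin across per-port
    queues; `queues` maps port name -> list of pending frames."""
    order = deque(sorted(queues))          # fixed polling order over ports
    sent = []
    while slots > 0 and any(queues.values()):
        port = order[0]
        order.rotate(-1)                   # advance to the next port
        if queues[port]:                   # skip ports with nothing to send
            sent.append((port, queues[port].pop(0)))
            slots -= 1
    return sent
```

Because the arbiter sees all demand, load stays high and latency stays bounded even at saturation, which is precisely where the random distributed scheme falls apart.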

When we re-designed the Ethernet, we faced the same aesthetic criticisms that the neutrality people are heaping on the phone companies today: our system wasn’t democratic, it wasn’t reliable, it couldn’t ensure fairness, and it wasn’t going to be cool with the FCC. But all those criticisms turned out to be groundless, and we now have 40 Gigabit Ethernet running on switch-based systems.

We fought the same battle when we designed the WiFi system. One faction wanted an Access Point-based system and another wanted an Aloha system that was fully distributed and all that. Once again, the network engineering work showed that an Access Point provided better performance to everyone than a distributed system that was constantly flailing for access to the network.

It’s about time that the architecture of the Internet was updated to reflect modern practices in network design where traffic is classified and moderated in points of intelligence that are distributed around the network. This sort of re-design, which was started by the MPLS and DiffServ people, will ultimately produce a network that can do more things better for more applications than the dated Vint Cerf design where the guy with the fattest pipe controls the network.
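On the DiffServ side of that re-design, classification starts with a code point in each packet’s header. As a minimal illustration: DSCP 46, “Expedited Forwarding,” is the standard code point for voice, and it occupies the top six bits of the old IPv4 TOS byte, so an application can mark its traffic like this on a typical Linux host.

```python
import socket

DSCP_EF = 46  # Expedited Forwarding: the standard low-latency class

def mark_voice(sock):
    """Mark all packets sent on `sock` as Expedited Forwarding by
    writing the DSCP into the top six bits of the IPv4 TOS byte."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_voice(sock)
```

Routers along the path can then queue EF-marked packets ahead of bulk traffic, which is the “points of intelligence” model in its simplest form.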

The original Internet was a fine piece of work given the limited knowledge of packet-switched networks in 1980, but we’ve learned a lot since then, and consumers will benefit if some of this learning can be passed on in the form of new networks with better performance and richer choices.

I think my networks are better than the old collision-based Ethernet that was the model for the TCP Internet, and a new Internet based on new LANs and WLANs would be better than the old one.

I’m biased, but so are that Cerf character and his fellow-traveler Sir Berners-Lee.

Whether you agree with me or not, I think the moral and decent thing to do is to offer consumers the opportunity to see which one they like better. That’s what we did with Ethernet, and the market responded strongly. There’s no shame in designing a system that works pretty well in its day but is ultimately replaced by a better one.

That’s the whole story of technology, so let’s not pre-judge the outcome.

Best wishes,

The Network Bunny

PS: Go read the Heritage Foundation’s paper on this question. It’s very edifying.

This is what started it all

FYI, here’s the article that put a bee in the bonnet of the special interests who’re trying to shackle the Internet with their so-called net-neutrality regulations:

William L. Smith, chief technology officer for Atlanta-based BellSouth Corp., told reporters and analysts that an Internet service provider such as his firm should be able, for example, to charge Yahoo Inc. for the opportunity to have its search site load faster than that of Google Inc.

Or, Smith said, his company should be allowed to charge a rival voice-over-Internet firm so that its service can operate with the same quality as BellSouth’s offering.

Network operators can identify the digital “packets” of content moving through their wires from sites and services and can block some or put others at the head of the stream.

But Smith was quick to say that Internet service providers should not be able to block or discriminate against Web content or services by degrading their performance.

The complaint from regulation-happy special interests is that the telcos want to make the Internet like cable TV, to which I will simply say that the Internet has already internalized that model, and quite successfully. The dominant web services companies make their money the same way TV does, from a combination of ads, subscriptions, and pay-per-view. Google sells ads just like ABC does, iTunes is pay-per-view, and the New York Times sells subscriptions to Tom Friedman and Maureen Dowd. So where does all this high-and-mighty “we’re so much better than TV” crap come from, some delusion that Internet access is a gift from the Easter Bunny?

There’s a real public benefit to an accelerated service offering inasmuch as it’s a cheap way to level the playing field between rich incumbents like Google, with fat pipes and server farms all over the net, and startups with lean budgets. If people like Gigi Sohn were really the friend of the entrepreneur — and had even a basic knowledge of how the Internet actually works and not just a sentimental fantasy about a democratic net — they would applaud BellSouth’s ideas.

The 21st Century Network

Adam Thierer has a first-person account of Verizon’s new network, the one the hippies are screaming about:

Folks, this is serious broadband competition. For those critics who say that the rivalry between two competitors will not be intense, I say come visit my neighborhood. You’ll see Verizon showering people with free gifts (flowers for women, baseballs for kids, and even free gas at local gas stations!) to try to build name recognition and win new subscribers. And you’ll see Cox responding with flyers and e-mails about new services that are coming in an effort head-off this threat. And then you’ll see Verizon flyers and ads responding to those Cox flyers and ads. And you’ll see both of them cutting prices left and right to get customers or win them back.

(Meanwhile, wireless lurks as an alternative that could decimate both cable and telco wireline providers if they can just get the broadband part of the puzzle solved. Rupert and the boys over at DirecTV are in the process of rolling out an ambitious HDTV plan. Can more robust, reliable satellite broadband services be far behind? If DirecTV ever merges with EchoStar and combines all that satellite capacity, look out. That’s when things will get really exciting.)

The real question now is not whether broadband competition works, it is whether or not it is sustainable among more than two players per region. I am one of just 3 or 4 people in my neighborhood who have signed up for FIOS so far. Verizon is going to need to get A LOT more subscribers AND SOON. If they get caught up in a price war with cable in the short term they could be in serious trouble because the fixed deployment and installation costs associated with FIOS are killing them. They need customers and they need them now. At a minimum, Congress needs to enact local franchising relief and make sure that burdensome state or local regulation does not stand in the way of Verizon and other telcos rolling out these exciting new services. The market challenge they face is stiff enough that they don’t need such artificial regulatory impediments to success standing in their way.

It sounds like Verizon doesn’t have its act together on customer service, with all those union workers, but the service is something I’d buy if I had the chance. Unfortunately, fascist forces want to deprive me of the opportunity.

Neutral Net? Who Are You Kidding?

Wired has a half-good article on Internet regulation today, Neutral Net? Who Are You Kidding?

“Net neutrality” has many meanings, but in the broadest sense refers to a cooperative principle whereby everyone on the net is supposed to make the same effort to help deliver everyone else’s traffic. In fact, pushing bits through the network-of-networks that makes up the internet is an anarchic business and frequently an ugly one. ISPs must often fight to get their data carried on neighboring networks, and those who are willing to pay extra reap immediate benefits in the form of faster and better service. Vast amounts of traffic are rerouted and blocked every day. The system, while successful overall, seems to ride on the very edge of chaos, insiders say.

They ultimately get around to discussing the priority thing, but they more or less mangle it, so here we go again:

A truly neutral network provides each application with the type of service it requires: low latency for voice, low jitter for video, and high throughput for downloads, file transfers, and web browsing. The Internet can only accomplish this by assigning priorities to these different traffic types. These priority assignments don’t have the effect of making some web sites faster than others, as all web traffic has the same priority.
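The point about web sites is worth making concrete. Here’s a toy strict-priority scheduler (my own sketch, with made-up class names): packets carry a traffic class, the link always serves the highest-priority class first, and within a class packets stay in arrival order, so no web site ever jumps ahead of another.

```python
import heapq

PRIORITY = {"voice": 0, "video": 1, "web": 2}  # lower number = served first

def transmit_order(packets):
    """Return (class, payload) pairs in the order a strict-priority
    link would send them; `packets` is a list of (class, payload)."""
    heap = [(PRIORITY[cls], seq, cls, payload)
            for seq, (cls, payload) in enumerate(packets)]  # seq keeps FIFO order
    heapq.heapify(heap)
    popped = [heapq.heappop(heap) for _ in range(len(heap))]
    return [(cls, payload) for _, _, cls, payload in popped]
```

All “web” packets share one priority value, so prioritization reorders traffic between classes, never between competing sites within a class.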

This does not have anything to do with blocking or degrading specific web sites, really it don’t, and it’s quite consistent with the meaning of “net neutrality” originally used by Prof. Tim Wu in the paper cited below.

Priority traffic is not the end of the world; in fact, it’s a step toward a more application-neutral Internet.

Saving the Internet from Web tedium

What’s really going on with “net neutrality” regulations is that the Big Content companies are afraid the Internet will become more a medium of communication than a means of delivering canned content, and that will eat away at their profits. So they’ve concocted this whole FUD campaign that dishonestly makes Quality of Service enhancements for communications applications a threat to web access.

The Internet is more than the Web, and beefing up the communications side makes the content less compelling. Instead of whining about the Telcos, Big Content should try to be more interesting.

John Carroll is not a moron

He’s pretty clear on why “net neutrality” as imagined by leftwing blogs and pandering politicians is a big, steaming load of crap:

That’s a bit of an exaggeration. Broadband providers, most notably AT&T, aren’t suggesting that they will “control what we access over the Internet.” Rather, they are saying that some content may be given “fast track” access into the home, access to which is contingent on a fee paid either by the provider of content or the consumer.

To consider why this might be useful, imagine 5-10 years down the road when the average size of the datapipe into the home is in the 25-50 megabit range (placing us still behind the South Koreans, but oh well). VoIP service might start to get enhanced by video that is not just the simple 320 by 240 image we are used to in the IM world, but in the Standard Definition, or even the High Definition range.

Given the time criticality of such data, not to mention the fact that such usage will vastly outweigh, from a network load standpoint, the simple text and image traffic that constitutes a large percentage of the current Internet, a fast track might not be just useful, but fair to boot. We charge ten ton trucks more for access to a toll road than standard-sized automobiles because of the extra wear and tear they cause on these roads. Why shouldn’t we make “bandwidth hogs” pay more?

Net neutrality = shifting communications costs from content retailers to consumers. Brilliant.