Regulation and the Internet

Here’s a little speech I gave to members of the EU Parliament in Brussels on Oct. 14th. The cousins are contemplating a set of Internet access account regulations that would mandate a minimum QoS level and also ban most forms of stream discrimination. This explains why such rules are a bad (and utterly impractical) idea.

The Internet is a global network, and regulating it properly is a matter of global concern. I’d like to share a view of the technical underpinnings of the question, to better inform the legal and political discussion that follows and to point out some of the pitfalls that lie in wait.

Why manage network traffic?

Network management, or more properly network traffic management, is a central focus of the current controversy. The consumer-friendly statements of policy, such as the Four Freedoms crafted by former FCC Chairman Michael Powell (now Senator McCain’s technology adviser), represent lofty goals, but they’re constrained by the all-important exception for network management. In fact, you could easily simplify the Four Freedoms as “you can do anything you want except break the law or break the network.” Network management prevents you from breaking the network, which you principally do by using up network resources.

Every networking technology has to deal with the fact that the demand for resources often exceeds supply. On the circuit-switched PSTN, resources are allocated when a call is set up, and if they aren’t available your call doesn’t get connected. This is a very inefficient technology: it allocates bandwidth in fixed amounts, regardless of the consumer’s need or his usage once the call is connected. A modem connected over the PSTN sends and receives at the same time, for example, but people talking generally take turns. Nor does this network allow you to save up bandwidth and use it later. Telecom regulations are based on the PSTN and its unique properties. In network engineering, we call it an “isochronous network” to distinguish it from technologies like the old Ethernet, the model link-layer technology when the DoD protocol suite was designed.

The Internet uses packet switching technology, where users share communications facilities and bandwidth is allocated dynamically. Dynamic bandwidth allocation, wire-sharing, and asynchrony mean that congestion appears and disappears on random, sub-second intervals. Packets don’t always arrive at switching points at the most convenient times, just as cars don’t run on the same rigorous schedules as trains.

Packet switching technology is radically different from circuit-switching, and the differences create the political friction that’s ignited the network neutrality debate. What network neutrality tries to do is impose telecom regulations on packet networks.

Mapping the telecom model to the Internet

The first attempt to do this used an “all bits are created equal” standard, harking back to my friend David Isenberg’s concept of the Internet as a “Stupid Network” that knows nothing about the packets it delivers, but this simplification doesn’t go very far. It hasn’t been literally true on the Internet since the late 80s, when the Jacobson algorithm was adopted, making TCP senders responsible for Internet congestion management while other protocols, such as UDP, were exempted from it.
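
To make that asymmetry concrete, here is a minimal sketch of the additive-increase/multiplicative-decrease behavior that Jacobson’s work added to TCP. The constants and the loop are illustrative only, not a model of any real TCP stack; the point is that a TCP sender backs off when it infers congestion from packet loss, while a plain UDP sender carries no such obligation.

```python
# Illustrative AIMD (additive increase, multiplicative decrease) loop, the core
# idea of Jacobson-style TCP congestion control. All constants are arbitrary.

def aimd(loss_events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Return the congestion window after each round trip.

    loss_events: iterable of booleans, True where a loss was detected.
    """
    history = []
    for lost in loss_events:
        if lost:
            cwnd = max(1.0, cwnd * decrease)   # back off sharply on loss
        else:
            cwnd += increase                   # probe gently for more bandwidth
        history.append(cwnd)
    return history

if __name__ == "__main__":
    # A loss every 8th round trip produces the familiar TCP "sawtooth".
    pattern = [(i % 8 == 7) for i in range(32)]
    print(aimd(pattern))
```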

Bits are parts of packets, which are created by applications, which don’t all have the same requirements. So it’s obviously sensible for networks to prioritize some packets over others. VoIP has more stringent delivery-time requirements than file sharing, for example. So the more subtle definition is that all packets of the same type have to be treated equally, which is more in line with notions of “positive discrimination” than pure neutrality. But this isn’t much better.

For example, the Internet Freedom Preservation Act of 2008, HR 5353 by Markey and Pickering, prohibits unreasonable discriminatory favoritism for, or degradation of, content by network operators based upon its source, ownership, or destination on the Internet. This standard, which does not actually appear anywhere in US law, was applied to Comcast by the FCC to ban their method of managing peer-to-peer traffic.

The standard doesn’t fit, however, since Comcast’s method applied a quota system across applications, which doesn’t violate the “source, ownership, or destination” clause. And there are technical reasons why the Comcast system was sensible, both for peer-to-peer users and for those (the majority, by the way) who don’t use it.

So network neutrality is not a principle of network management or of free expression, it’s a prohibition of the sale of a particular service, high-priority transport. It’s reasonable to ask why anyone would want to do that, given that applications aren’t created equal.

The Network Neutrality Controversy

Contrary to popular belief, the Internet is not a level playing field. Content Delivery Networks, such as Akamai and Limelight, are overlaid on it, and they provide a fast lane for content. They do this by placing content close to the consumer, substantially bypassing the public Internet core. This doesn’t simply benefit the content supplier by reducing the intrinsic delay of light traveling a shorter distance; it also exploits a quirk in the design of TCP that allocates more bandwidth to sessions with short Round-Trip Times than to those with longer ones running over the same wires. The rate of TCP throughput is inversely proportional to Round-Trip Time, other things being equal.
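
One standard way to see this is the steady-state TCP throughput approximation of Mathis et al. (1997); the constant depends on the loss model and ACK behavior, but the inverse dependence on Round-Trip Time is what matters here.

```latex
% Steady-state TCP throughput approximation (Mathis et al., 1997).
% MSS = maximum segment size, RTT = round-trip time, p = packet-loss rate,
% C = a constant near 1 that depends on the loss model and delayed ACKs.
\[
  \text{throughput} \;\approx\; \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}
\]
% Halving the RTT roughly doubles what a single TCP session can carry over the
% same path at the same loss rate, which is the edge a CDN buys by terminating
% connections close to the consumer.
```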

One of the principal investors in CDNs is Google. They’ve built some 40 data centers – server farms – around the world, like this one in Oregon:

Google Server Farm in Oregon

That’s the Columbia River, home of a complex of dams that generate a tremendous amount of hydropower. These dams used to supply the electricity that runs the Bay Area Rapid Transit system, but the contract has expired. This is a location with an abundance of optical fiber, built after the 1996 Telecommunications Act was passed in anticipation of hosting CLECs. The center houses some 6,000 CPUs, and speculation has it that Google has a half-million CPUs world-wide in complexes like this one. Google spent $1.5B on such facilities in 2006 alone. See Markoff and Hansell’s article in the New York Times from June 14th, 2006.

When Bill Smith, then the chief technology officer of BellSouth (since absorbed into AT&T), speculated back in 2005 that his company could provide fast-lane access to its customers for a fee, Google took notice. A fast-lane service at the ISP would nullify the advantage they’d secured for themselves with their multi-billion-dollar investment, and they had capitalistic reasons to be concerned. So they ramped up their lobbying effort, hired Washington lobbyist Rick Whitt, and sent Vint Cerf and others out on the public-relations trail to protect the Internet’s traditions.

Perhaps the most notable net neutrality advocate is law professor Larry Lessig, whose protege, Tim Wu, coined the term “net neutrality.” Wu, also a law professor, imported it from European telecommunications law. You won’t find this term in any of the 5000 RFCs that define the Internet protocols, of course.

Lessig’s contribution to the debate included this image that he showed to the FCC at their Stanford public hearing:

Lessig's Argument

It’s meant to portray Comcast and the other telecom and cable carriers. From this you can appreciate the level of the debate in the US.

So why is it legitimate for companies like Google to buy fast-lane access to consumers, but not legitimate for ISPs to sell a similar service to smaller companies who don’t have Google’s budget? Policy needs to treat all means of producing a given result as equivalent, side-effects included.

Content Delivery Networks

NBC used a fast-lane system to deliver the Olympics over the Internet in the US this year. They contracted with Limelight, a content delivery network that has 800 private-wire connections between its regional data centers and ISP networks around the world. These private wires are 10 Gigabit/second optical Ethernet links that aren’t open to non-Limelight traffic. This arrangement prevented the HDTV content delivered to ISP networks from stressing the Internet core. But it’s an arrangement that would certainly run afoul of a non-discrimination rule. It’s a fast lane to the front porch of the ISP that bypasses the core of the public Internet, and for a very good reason – to deliver a service deterministically, with bounded delay, something the public Internet can’t do in its present state.

Limelight

The Akamai CDN takes the fast-lane concept a step further, locating content on some 34,000 servers inside ISP networks. Akamai claims to deliver 20% of all Internet traffic, and they do it in a way that’s friendly to the core and responsive to consumers. But they’re in the fast lane, and their traffic has effective priority over generic Internet traffic.

Peer-to-peer takes the fast-lane concept to the limit, making every copy of a piece of shared content available to the consumers located closest to it. Instead of 40 data centers or 34,000 servers, all 1.3 billion Internet-connected devices can be peer-to-peer servers. P2P doesn’t do this efficiently today – it lacks a standard content-locator service. Its overwhelming use for sharing pirated content discourages the development of such a system, which would amount to an admission of guilt for piracy. But the IETF is defining one anyway, with the full support of the ISPs and carriers.

The point is simply this: there are numerous examples of services and technologies that deliver content and communications faster, cheaper, or more reliably than generic Internet service allows. ISPs compete with each other on speed, service, and price, and should not be discouraged from such competition by irrational bans on arbitrarily selected forms of acceleration. When the government picks technology winners and losers, the people suffer.

Don’t worry about preserving the historical Internet

My remarks may lead you to believe that I think the net neutrality debate is simply commercially-motivated. But it’s actually much larger than a spat between Google and AT&T. Some of the most emotional advocacy stresses values that were supposedly engineered into the Internet by its creators, through an End-to-End principle of practically mythic significance. The Internet’s success as the greatest engine of innovation, freedom, democracy, and progress is alleged to be a direct consequence of E2E. The net neutrality movement positions itself as the preserver of E2E and the protector of the Internet’s purity from corruption by vulgar tradespeople.

This is an exaggeration. The Internet, like all the packet network architectures that were bandied about in the 70s and 80s, has a structure that can be decomposed into a series of layers, each of which relates to a particular category of engineering problem.

The Internet runs across wires and radio beams of various types, each of which directly connects two devices. So there is a “data link layer” that deals with direct connection problems. It’s a mesh, where each packet is relayed by a number of routers between its source and destination, so there’s a network layer that deals with choosing paths. Across the mesh, there are endpoints where communication begins and ends, so there’s an end-to-end layer that deals with their interaction. Properly understood, none of these layers is any more important than any other. At most, the Internet can be said to have an end-to-end bias in this sense: if a given function can be performed at multiple levels, as many can, the bias of Internet designers is to provide it at the end-to-end layer. Hence, TCP, the most widely used end-to-end protocol, is responsible for Internet congestion control. This choice has serious consequences for the performance of the system as a whole.
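
Putting congestion control at the end-to-end layer means the behavior of the shared network emerges from each sender reacting to its own round-trip measurements, which is where the RTT bias described earlier comes from. The toy simulation below is a sketch for intuition only, not a model of any real TCP stack: two AIMD flows share one bottleneck, and the short-RTT flow, which updates its window ten times as often, captures most of the capacity.

```python
# Toy model of two TCP-like (AIMD) flows sharing one bottleneck link. Time
# advances in fixed ticks; each flow adjusts its window once per its own RTT.
# Purely illustrative numbers, intended only to show the RTT bias.

def simulate(rtts_ms, capacity=100.0, ticks=20000, tick_ms=10):
    windows = [1.0] * len(rtts_ms)      # per-flow congestion windows
    delivered = [0.0] * len(rtts_ms)    # per-flow cumulative goodput (arbitrary units)
    for t in range(ticks):
        rates = [w / (rtt / 1000.0) for w, rtt in zip(windows, rtts_ms)]
        offered = sum(rates)
        congested = offered > capacity
        share = min(1.0, capacity / offered) if offered else 0.0
        for i, rtt in enumerate(rtts_ms):
            delivered[i] += rates[i] * share * tick_ms / 1000.0
            if t % max(1, rtt // tick_ms) == 0:      # one window update per RTT
                windows[i] = max(1.0, windows[i] / 2) if congested else windows[i] + 1.0
    return delivered

if __name__ == "__main__":
    short, long_ = simulate([20, 200])   # 20 ms vs. 200 ms round trips
    print(f"short-RTT flow: {short:.0f} units   long-RTT flow: {long_:.0f} units")
```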

But it’s actually a fallacy to assert that the Internet adheres to an “end-to-end principle” or any inherent values. At the end of the day, the Internet is a machine; machines don’t have values, just uses. We can and should build our networks to do what we want them to do, not what they were constrained to do in some romantic past by technology limits that have long since been overcome.

Regulating the Internet

I’m in favor of government regulation of the Internet as long as it’s done in a sensible, clear-headed, and rational way. This means that we have to largely abandon telecom regulatory models and begin to create new models of regulation that conform to the new networking technologies. We can’t very well go on speaking out of both sides of our mouths. If the Internet is a great engine of innovation, free expression, and democracy that’s unprecedented in human history, which I think it is, then it can’t very well be stuffed inside a regulatory paradigm that was devised for ox carts, the telegraph system, and analog telephony. We need to stop dragging the corpse of the old analog telephone system behind us.

A new approach is required, one that will stimulate technical innovation, consumer awareness, and investment. This is the way to abundance and progress. Government should accept a humble role in the development and application of this new framework, deferring to industry groups and experts to draft working principles and specific rules. The IETF, the Wi-Fi Alliance, the ISO, the DCIA and many others conduct ongoing standards activities that can be used to substantially inform government regulatory activities.

Standards and More Standards

But the technical experts are not enough. Consumers need the ability to access information about their accounts, for comparative purposes as well as operational ones. This information is hard to convey at the level of detail that’s satisfactory to expert analysts as well as to ordinary people who don’t have specialized knowledge of networking. Bridging the gap between consumer knowledge and the complexity of networking is a huge challenge, and should be first on any government regulatory agenda.

Proceed with Caution on QoS

It’s my understanding that the EU is pondering a regulation that demands a minimum QoS guarantee and also bans most forms of packet discrimination. This is an interesting idea, but it’s more an engineering problem than low-hanging fruit ripe for the regulator. QoS implies signaling of some sort between applications and the network, and the best means we have today involve marking packets with a requested QoS level and measuring delay end-to-end. Non-deterministic networks can’t really “guarantee” much of anything, and applications that work best with QoS also have to work without it. Internet QoS, in the current state of the protocols, is a statistical exercise. To make it more than that, all the business relationships between ISPs and Internet core transport providers would have to change. Perhaps they should, but the transition will take time and money.
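
For the packet-marking half of that signaling, here is a minimal sketch of what an application can do today: set the DSCP code point (carried in the old IP TOS byte) on its outgoing traffic and hope the networks along the path honor it. The socket option shown is the standard Linux/Unix one; the address and port are placeholders, and nothing here creates a guarantee: whether the marking means anything is up to every operator the packets cross, which is exactly the business-relationship problem just described.

```python
# Mark a UDP socket's outgoing packets with DSCP 46 (Expedited Forwarding,
# RFC 3246), the class conventionally requested for voice traffic. Marking is
# a request, not a guarantee: each network along the path decides whether to
# honor it.

import socket

EF_DSCP = 46                 # Expedited Forwarding code point
TOS_VALUE = EF_DSCP << 2     # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)  # Linux/Unix socket option
sock.sendto(b"voice frame", ("192.0.2.10", 5004))  # placeholder documentation address and RTP-style port
```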

A QoS guarantee would in any case need to be limited to a fairly small fraction of the gross capacity of the subscriber’s connection, owing to the math of shared asynchronous pipes, and it would require discrimination. We can provide statistical QoS only if we can redistribute delay, shifting it from delay-sensitive streams onto delay-tolerant ones.
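
A back-of-the-envelope queueing calculation shows why the guaranteed fraction has to stay small. The numbers below use the textbook M/M/1 model, which is only a rough stand-in for a real router queue, but the shape of the curve is the point: average delay grows without bound as utilization approaches 100%, so a delay bound can be promised for only a modest slice of the link.

```python
# Textbook M/M/1 queueing delay as a rough stand-in for a shared router queue:
# average time in system = service_time / (1 - utilization). Delay explodes as
# the link fills up, which is why a hard delay bound can cover only a small
# fraction of gross capacity.

def mm1_delay_ms(service_time_ms, utilization):
    return service_time_ms / (1.0 - utilization)

if __name__ == "__main__":
    for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
        print(f"utilization {rho:4.0%}: average delay {mm1_delay_ms(1.0, rho):6.1f} ms")
```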

This is even more true for wireless networks than for wireline, because roaming seriously complicates the math on QoS.

You can have a ban on packet discrimination or you can have some form of QoS, but you can’t have both. Done correctly, packet discrimination is pro-consumer in any event, because consumers multi-task. The old model of the solitary computer connected to the ISP network has given way to the reality that consumers run home networks that connect to other networks. The Internet is more than ever a “network of networks” where management of the first and last network is in the hands of consumers. Over-broad regulation limits consumer choice.

Conclusion

The Internet has survived a number of calamities in its 35 years – Internet meltdown, the World Wide Wait, and the rise of P2P. We’ve been able to engineer around them all. But overzealous regulation, informed by pie-in-the-sky ideals, is the most serious threat we’ve seen so far, and it’s the reason I take time out from my duties as a network engineer to speak to groups like this one. I hope you proceed with caution and let well-informed consumers make their choices about service plans and features.

Thanks again for coming and I welcome your questions and comments.
