The anguish of regulation

Note: This post isn’t clear. I’m trying to say that the notion of “layering” in network protocol design doesn’t mean there’s some kind of firewall of ignorance between layers. In layered architectures, protocol layers advertise services to their higher-layer consumers, and notions of regulation built on the notion of layering have to take that fact into account. Crawford misunderstands protocol layering and attempts to build a regulatory framework on the back of her mistaken idea.

Some of the fans of network neutrality regulations are sincere but misguided, such as law professor Susan Crawford. She’s in a lot of anguish about how to sell the regulators’ agenda*:

If the only economic and cultural justifications you have for the need for a layered approach to internet regulation (an approach that treats transport differently from applications) are (1) the explosive innovation that competition among applications would produce and (2) the appropriate mapping between the “actual” architecture of the internet and the regulatory approach to be taken to it, you’ll lose.

But she never questions whether the “layered approach to regulation” is a good thing or even a well-understood thing. I see this a lot among legal academics, who seem to base most of their regulatory thinking on a defective model of protocol layering. Lessig is the prototype for this misunderstanding, as he wants to extract architectural features from the Internet of the Past and use them to constrain the development of the Internet of the Future.

I work with layered protocols, and have for more years than I can remember, so please allow me to explain what layering means in real network systems. We divide network functions between abstract layers (physical, link, network, session, application) so we can mix and match combinations for real systems. So the IP network layer can run on the Ethernet link layer or the WiFi link layer, and work pretty much the same. And we can run Ethernet over a fiber-optic physical layer or a copper-pair physical layer, and have it work pretty much the same.

The key here is understanding what “pretty much the same” means. Each protocol at each layer has its own constraints, and higher layers have to be able to accommodate them. For example, Ethernet packets can’t carry more than 1500 bytes, WiFi packets can be larger (up to 2304 bytes), and ATM packets (cells) are much smaller (53 bytes). So IP needs to know what the size constraints of the link layer are so it can adjust to them and operate efficiently.
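
To make that concrete, here’s how a program on a Linux box asks the link layer for its MTU through the standard ioctl interface. This is a minimal sketch; the interface name “eth0” is just an assumption for the example:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct ifreq ifr;

        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* assumed interface name */

        /* Ask the kernel what the link layer below this interface can carry. */
        if (fd >= 0 && ioctl(fd, SIOCGIFMTU, &ifr) == 0)
            printf("Link-layer MTU on %s: %d bytes\n", ifr.ifr_name, ifr.ifr_mtu);

        close(fd);
        return 0;
    }

IP uses exactly this kind of answer to decide when to fragment.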

The way this is done is through a service interface between the network layer and the link layer that allows the higher layer protocol to discover the capabilities of the lower layer protocol and behave accordingly. So while these two layers are defined and built separately, they’re intimately connected through a shared interface that allows them to operate together smoothly.
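
Here’s a toy sketch of what such a service interface looks like. The names are invented for illustration, since real stacks differ in detail, but the shape is the same: the link layer advertises its capabilities upward, and the network layer adapts.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical service interface a link layer exposes to the network
       layer above it. Invented names; real stacks differ in detail. */
    struct link_service {
        const char *name;        /* e.g. "ethernet", "wifi" */
        unsigned    mtu;         /* largest payload the link can carry, bytes */
        unsigned    qos_classes; /* bitmask of QoS classes the link offers */
        int (*send)(const void *payload, size_t len, unsigned qos_class);
    };

    /* The network layer queries the interface and adjusts its behavior. */
    static void network_layer_attach(const struct link_service *link)
    {
        printf("attached to %s: fragmenting anything over %u bytes\n",
               link->name, link->mtu);
    }

    static int ethernet_send(const void *payload, size_t len, unsigned qos_class)
    {
        (void)payload; (void)len; (void)qos_class;
        return 0; /* stub: a real driver transmits a frame here */
    }

    int main(void)
    {
        struct link_service ethernet = { "ethernet", 1500, 0x1, ethernet_send };
        network_layer_attach(&ethernet);
        return 0;
    }

The point of the abstraction is what the struct hides: the network layer never sees how Ethernet frames get onto the wire, only what services the link offers.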

At the link layer, many protocols have the ability to offer different services, each appropriate to a different set of applications. WiFi, for example, has a voice service that handles short packets needing to be transmitted and received at regular intervals differently from long packets that are less sensitive to delay but more sensitive to corruption and loss. The network lingo for this selection of services is Quality of Service, or QoS. Note that it’s not really correct to say that Voice QoS is “better” than the bulk-data QoS called “Best Effort”; it’s simply different. It would not be in your interest to use Voice-grade QoS for downloading files from Netflix, even if those files contained movies, because it actually constrains total bandwidth. You essentially trade off moving a lot of data for moving a little very quickly.
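
For a taste of how an application asks for a QoS class today: the Berkeley sockets API lets you mark a socket’s packets with a DSCP code point, and WiFi gear commonly maps those markings to its access categories. A minimal sketch, assuming Linux and a network that honors the marking:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        /* DSCP "Expedited Forwarding" (46) is the conventional marking for
           voice traffic; it occupies the top six bits of the old TOS byte,
           hence the shift. Whether the link layer honors it (for example,
           by mapping it to WiFi's voice access category) depends on the
           driver and the network. */
        int tos = 46 << 2; /* 0xB8 */

        if (sock >= 0 && setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof tos) == 0)
            printf("socket marked for voice-class treatment\n");

        close(sock);
        return 0;
    }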

The tragedy of the Internet is that the IP layer doesn’t have good facilities for selecting QoS options from the layers below it, and this makes it difficult for applications to get the service they need from the network top-to-bottom and end-to-end. So in real systems we bypass IP through something called a “Control Plane” and tell the link layer how to fit QoS around the data that need it.
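
To show what I mean by a control plane, here’s a toy admission-control routine in the spirit of WiFi’s WMM admission control or RSVP-style reservations: the application describes its traffic to the link layer directly, outside the IP data path, and the link layer decides whether it can honor the request. All names and numbers are invented for illustration; real control planes are far more involved.

    #include <stdio.h>

    /* Hypothetical traffic description an application hands to the
       control plane. Invented for illustration. */
    struct traffic_spec {
        unsigned mean_rate_bps;  /* average data rate the flow needs */
        unsigned packet_bytes;   /* nominal packet size */
        unsigned interval_us;    /* how often a packet must be delivered */
    };

    /* Assumed budget: 10 Mb/s of link capacity set aside for QoS flows. */
    static unsigned reserved_budget_bps = 10 * 1000 * 1000;

    static int admit_flow(const struct traffic_spec *spec)
    {
        if (spec->mean_rate_bps > reserved_budget_bps)
            return -1;           /* refuse, so existing guarantees stay honest */
        reserved_budget_bps -= spec->mean_rate_bps;
        return 0;
    }

    int main(void)
    {
        struct traffic_spec voice_call = { 64000, 200, 20000 }; /* ~64 kb/s, 20 ms cadence */
        printf("voice call admitted: %s\n",
               admit_flow(&voice_call) == 0 ? "yes" : "no");
        return 0;
    }

The refusal path is the whole point: a service guarantee is only worth something if the network can say no to requests it can’t honor.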

But the main point is that the segregation of functions into protocol layers doesn’t mean each layer is ignorant of what the other layers are doing. In fact, the layers must know what their options are and how to use them, even though they don’t need to know how the other layers make those options available. So the layered approach to protocol design doesn’t preclude diversity of services; it actually facilitates diversity by sharing the important information and hiding the unimportant details.

In the real world, a layered approach to regulation would begin by identifying service options and the means for requesting them. The neuts don’t get this and begin by banning service-level communication between layers. That’s what “just move the bits, stupid” means. It’s bad network design and it’s bad regulation.

*Crawford blocks referrals from this blog. She’s at: http://scrawford.blogware.com/blog.

3 thoughts on “The anguish of regulation”

  1. This is the same post where she also wrote:

    By the way, watch out for people who want to say that “access” (the last mile) is separate from “transport” (the backbone), or that there’s a “control plane” that is separate from “applications.” All of these distinctions are dangerous.

    Apparently distinctions that reflect current reality are “dangerous.”

  2. That statement of hers was so jaw-droppingly stupid it made me chuckle for hours. Some other dangerous ideas: the Earth moves around the sun, and all life has a common ancestor. Don’t tell the children!

  3. I read Ms. Crawford’s article on NN, now buried under a lot of other blather about returning to teaching, and how her students are struggling to learn to run a blog site. Her ridiculous comments on internet security further expose how clueless she really is about the future of the internet. I digress.

    First she tells everyone who argues various points against NN that ‘you will lose’, but doesn’t back up her assertions about why those arguments against NN are somehow wrong. Then she winds up her article by saying that the future of the internet is really about ‘human relationships’ (oh … my … gawad), and not about the profits of content producers, carriers, and service providers.

    If there isn’t going to be profit in it, then where is all the new bandwidth going to come from for all that peer-to-peer video she is advocating to support those ‘human relationships’? I can’t imagine a ‘human relationship’ without video these days. Nobody has the patience to tap, tap, tap that keyboard all day long on e-mail or IM when there are video cameras embedded in every computer monitor and mobile phone. People will pay to see and hear each other, even if it’s only screwing around at “work” on the company’s dime.

    If her NN dream comes true, there will not be enough bandwidth for her ‘human relationships’, because no content producer, carrier, or service provider will be able to make a buck on premium services. And, what little bandwidth there is will be mostly used up by spammers, who LOVE the idea of an internet that won’t prioritize anybody else’s packets over theirs.
