Note: This post isn’t clear. I’m trying to say that the notion of “layering” in network protocol design doesn’t mean there’s some kind of firewall of ignorance between layers. In layered architectures, protocol layers advertise services to their higher-layer consumers, and notions of regulation built on the notion of layering have to take that fact into account. Crawford misunderstands protocol layering and attempts to build a regulatory framework on the back of her mistaken idea.
Some of the fans of network neutrality regulations are sincere but misguided, such as law professor Susan Crawford. She’s in a lot of anguish about how to sell the regulators’ agenda*:
If the only economic and cultural justifications you have for the need for a layered approach to internet regulation (an approach that treats transport differently from applications) are (1) the explosive innovation that competition among applications would produce and (2) the appropriate mapping between the “actual” architecture of the internet and the regulatory approach to be taken to it, you’ll lose.
But she never questions whether the “layered approach to regulation” is a good thing or even a well-understood thing. I see this a lot among the legal academics, who seem to base most of their regulatory model on a defective model of protocol layering. Lessig is the prototype for this misunderstanding, as he wants to extract architectural features from the Internet of the Past and use them to constrain the development of the Internet of the Future.
I work with layered protocols, and have for more years than I can remember, so please allow me to explain what layering means in real network systems. We divide network functions between abstract layers (physical, link, network, transport, session, application) so we can mix and match combinations for real systems. So the IP network layer can run on the Ethernet link layer or the WiFi link layer, and work pretty much the same. And we can run Ethernet over a fiber-optic physical layer or a copper pair physical layer, and have it work pretty much the same.
The key here is understanding what “pretty much the same” means. Each protocol at each layer has its own constraints, and higher layers have to be able to accommodate them. For example, Ethernet frames can’t carry more than 1500 bytes of payload, but WiFi frames can be bigger and ATM packets (cells) are smaller. So IP needs to know what the size constraints of the link layer are so it can adjust to them and operate efficiently.
The way this is done is through a service interface between the network layer and the link layer that allows the higher layer protocol to discover the capabilities of the lower layer protocol and behave accordingly. So while these two layers are defined and built separately, they’re intimately connected through a shared interface that allows them to operate together smoothly.
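To make the service-interface idea concrete, here’s a minimal Python sketch. The class and function names are hypothetical (not from any real stack): the point is only that the network layer queries the link layer’s advertised capabilities and adjusts, without knowing how the link is implemented.

```python
# Hypothetical link-layer classes, each advertising its own size constraint.
class EthernetLink:
    def mtu(self):
        return 1500   # classic Ethernet payload limit, in bytes

class AtmLink:
    def mtu(self):
        return 48     # ATM cell payload, in bytes

def fragment(payload: bytes, link) -> list:
    """Network-layer logic: split a payload to fit whatever link it rides on."""
    size = link.mtu()
    return [payload[i:i + size] for i in range(0, len(payload), size)]

data = bytes(4000)
print(len(fragment(data, EthernetLink())))  # 3 fragments over Ethernet
print(len(fragment(data, AtmLink())))       # 84 cells over ATM
```

The layers are still built separately; only the `mtu()` interface is shared between them, which is exactly the “intimately connected through a shared interface” point above.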
At the link layer, many protocols have the ability to offer different services, each appropriate to a different set of applications. WiFi, for example, has a voice service that handles short packets needing transmission and reception at regular intervals differently from long packets that are less sensitive to delay but more sensitive to corruption and loss. The network lingo for this selection of services is Quality of Service, or QoS. Note that it’s not really correct to say that Voice QoS is “better” than the bulk data QoS called “Best Effort”; it’s simply different. It would not be in your interest to use Voice grade QoS for downloading files from Netflix, even if those files contained movies, because it actually constrains total bandwidth. You essentially trade off moving a lot of data for moving a little very quickly.
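That trade-off can be sketched in a few lines of Python. The class names and numbers here are illustrative assumptions, loosely inspired by WiFi’s WMM access categories, not real protocol parameters:

```python
# Hypothetical QoS classes: "voice" and "best_effort" are different services,
# not better/worse ones. Voice trades throughput for bounded delay.
QOS_CLASSES = {
    "voice":       {"max_packet_bytes": 200,  "target_delay_ms": 20},
    "best_effort": {"max_packet_bytes": 1500, "target_delay_ms": None},
}

def pick_qos(app_kind: str) -> str:
    # A bulk movie download wants total throughput, not tight delay bounds,
    # so it should NOT request the voice class.
    return "voice" if app_kind in ("voip", "gaming") else "best_effort"

print(pick_qos("voip"))            # voice
print(pick_qos("movie_download"))  # best_effort
```

The small `max_packet_bytes` on the voice class is the bandwidth constraint mentioned above: asking for voice treatment for a Netflix download would just slow it down.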
The tragedy of the Internet is that the IP layer doesn’t have good facilities for selecting QoS options from the layers below it, and this makes it difficult for applications to get the service they need from the network top-to-bottom and end-to-end. So we bypass IP in real systems through something called a “Control Plane” and tell the Link Layer how to fit QoS around the data that need it.
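One real-world mechanism in this spirit (not necessarily the specific control plane meant above) is for the application to mark its own packets with a DiffServ code point via the standard `IP_TOS` socket option, and leave it to the network’s control-plane configuration to map that mark onto a link-layer service class:

```python
import socket

EF_DSCP = 46              # Expedited Forwarding: the DiffServ class used for voice
TOS_VALUE = EF_DSCP << 2  # DSCP occupies the top 6 bits of the legacy TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
# Packets sent on this socket now carry the EF marking. Whether any given
# link honors it depends entirely on configuration below the IP layer.
sock.close()
```

This is exactly the pattern the paragraph describes: the QoS request rides alongside IP rather than through it.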
But the main point is that the segregation of functions into protocol layers doesn’t mean that each layer doesn’t know what the other layers are doing. In fact, the layers must know what their options are and how to use them, even though they don’t need to know how the other layers make these options available. So the layered approach to protocol design doesn’t preclude diversity of services, it in fact facilitates it by sharing the important information and hiding the unimportant details.
In the real world, a layered approach to regulation would begin by identifying service options and the means for requesting them. The neuts don’t get this and begin by banning service level communication between layers. That’s what “just move the bits, stupid” means. It’s bad network design and it’s bad regulation.
*Crawford blocks referrals from this blog. She’s at: http://scrawford.blogware.com/blog.