Symmetry, Control, and Progress

A friend asked me what I thought about Doc Searls’ latest essay on the evolution of the Internet, and as I happened to be reading it already, I’ve written a few disjointed notes. The short version of my reaction is that it’s sad that everybody with an axe to grind about technology, politics, or business these days seems to think that the Internet has an immutable, Platonic form that’s somehow mystically responsible for all that’s good in the technology business over the past twenty years, and that any alteration of this form will screw it up. According to this way of thinking, stuff like Napster that exists solely for the purpose of illegal activity is good (even though it’s new), but DRM (which isn’t really a Net deal anyhow) would be inscrutably bad.

This is sort of a “natural law” argument that’s supposed to persuade business and government to turn a blind eye to abuses of the Net, leaving its regulation to self-appointed do-gooders free of commercial interest. It’s a flawed argument because it ignores the fact that the Internet is actually a tool, not a spiritual essence from some higher reality, and like all tools it either adapts to human needs or is discarded. The strongest proponent of this view is Larry Lessig, whose book “The Future of Ideas” I’ve just read, and the others who argue this line (Searls, Weinberger, Gillmor) take their lead from him. I’ll write a review of Lessig’s book in the next few days, and it’s not going to be pretty. But back to Searls, and the theory of immaculate conception:

The Internet is not simply a network, it’s a means of interconnecting networks. It won out over competing technologies because it was heavily subsidized by the government and simpler than the alternative, the ISO/OSI protocol suite. OSI was a complicated set of international standards devised by committees with membership as diverse as the UN but in some ways even less rational. It contains a myriad of options, many of them unusable, and is hard to understand, let alone implement. In the heyday of OSI, we had a series of “OSI Implementors’ Workshops” to hash out subsets of the protocols to implement for purposes of demonstration, and even that was very painful. Internet protocols weren’t designed by committees, but by individuals paid by ARPA to keep things simple. OSI was intended to take the place of proprietary protocols from IBM, Xerox, and DEC, providing end-to-end applications, whereas the Internet was simply intended to interconnect diverse networks with a basic level of end-to-end capability.

Make a side-by-side comparison of any early Internet protocol with the competing ISO candidate and you see that the Internet offering can be implemented in less memory, with fewer CPU cycles, and with fewer man-hours of programming effort than the alternative. As if that weren’t enough to ensure victory, the government paid contractors to write reference implementations of Internet protocols and then gave them away for free.

Misunderstanding the Internet

The mistaken idea that Internet architecture is “End-to-End” has cropped up again, this time on the Doc Searls blog, with a reference to some orders to the FCC from Larry Lessig, who’s not especially empowered to make them.

While there are many problems with using the FCC to impose this view (like, um, the fact that they can’t), and with Searls’ desire to make a political litmus test out of it, the most important is that it’s simply not true. While it may be argued that the Internet has an “end-and-end” architecture that concentrates as much intelligence as possible in the endpoints and has precious little in the middle, a truly “end-to-end” architecture would allow the ends to control the flow of messages through the middle, and the current architecture can’t do that.

An end-to-end architecture, in other words, would allow a voice application to tell the network “I need a narrow stream of bandwidth connecting me to this other end, but I need that stream to be free of jitter. I don’t need retransmission of packets dropped to relieve congestion, but I do need to know I’m getting through, and I’m willing to pay 25 cents a minute for that.” Or it would allow a caching media application to say “I need lots of bandwidth for a 4 gigabyte transfer, but I don’t want to pay a lot for it, and you can work it around other applications that need small chunks because I don’t care about jitter.” Or it would allow an email application to say “Send this the cheapest way, period.” And it would allow a teleconferencing application to say “Send this to my group of these 14 end points without jitter and with moderately high bandwidth and we’ll pay you a reasonable fee.”
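To make the idea concrete, here’s a sketch of what such an application-facing request might look like as a data structure. This is purely hypothetical — no such network API exists today, which is exactly the point of the passage — and every name in it (FlowRequest, the fields, the numbers) is invented for illustration:

```python
# Hypothetical sketch only: what an application could hand the network
# if the ends really could program the middle. No real protocol offers
# this interface; all names and values here are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowRequest:
    """A service contract an application might request from the network."""
    bandwidth_kbps: int          # sustained rate the application needs
    jitter_sensitive: bool       # must the network bound delay variation?
    reliable_delivery: bool      # retransmit dropped packets, or just drop them?
    max_price_cents_per_min: Optional[float] = None  # what the app will pay

# The voice call from the text: narrow, jitter-free, no retransmission, 25c/min.
voice = FlowRequest(bandwidth_kbps=64, jitter_sensitive=True,
                    reliable_delivery=False, max_price_cents_per_min=25.0)

# The caching media transfer: big, cheap, and jitter doesn't matter.
bulk = FlowRequest(bandwidth_kbps=10_000, jitter_sensitive=False,
                   reliable_delivery=True, max_price_cents_per_min=0.1)

# The email: send it the cheapest way, period.
email = FlowRequest(bandwidth_kbps=8, jitter_sensitive=False,
                    reliable_delivery=True, max_price_cents_per_min=0.0)
```

The point of the sketch is that each application states its own trade-off among bandwidth, jitter, reliability, and price, and the network is free to schedule accordingly — which today’s IP cannot do.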

The network would then deal with congestion by dropping the spam and the e-mail until conditions improve, and by delaying the honking media files, but it would endeavor to deliver as many of the voice and real-time media packets as possible. It therefore wouldn’t allow spam to step on VoIP, as it does now. Most of us are able to see that this would be progress, but we see the Internet as a tool, and not as a socio-political metaphor.
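The congestion policy described above — shed the email and bulk traffic first, protect the real-time packets — can be sketched as a toy scheduler. This is not any real router’s algorithm, just an illustration of the priority ordering the paragraph describes:

```python
# Toy sketch (not a real router algorithm) of the congestion policy in
# the text: when the queue exceeds capacity, drop the least time-critical
# traffic classes first so voice and real-time media get through.
VOICE, MEDIA, BULK, EMAIL = "voice", "media", "bulk", "email"
DROP_ORDER = [EMAIL, BULK, MEDIA, VOICE]   # least-valued classes shed first

def schedule(queue, capacity):
    """Given a list of (traffic_class, packet) and a link capacity in
    packets, return (forwarded, dropped)."""
    if len(queue) <= capacity:
        return list(queue), []
    remaining, dropped = list(queue), []
    for victim_class in DROP_ORDER:
        while len(remaining) > capacity:
            idx = next((i for i, (cls, _) in enumerate(remaining)
                        if cls == victim_class), None)
            if idx is None:
                break               # no more packets of this class to shed
            dropped.append(remaining.pop(idx))
        if len(remaining) <= capacity:
            break
    return remaining, dropped
```

Under congestion, a mixed queue of voice, bulk, and email packets squeezed into a smaller capacity loses the email first, then the bulk transfers, and the voice packets survive — the opposite of today’s indiscriminate drops.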

There are a number of kludges that have been adopted in TCP to approximate a truly end-to-end capability, but none of them really make it a reality because there’s not enough smarts in IP and its various kludgy cousins (ICMP, IGMP) to make this work. So freezing the architecture at this stage would be a serious mistake, which is why you never see network architects arguing for the things that Searls (a Public Relations man), Lessig (a law professor) or Dave Weinberger (a philosophy professor) want.
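One of those kludges is visible today: an endpoint can mark its packets with the IP TOS/DSCP byte to hint at the service class it wants, but nothing in IP obliges any router along the path to honor the marking — which is precisely the gap described above. A minimal sketch, using the real `IP_TOS` socket option where the platform exposes it:

```python
# An existing kludge: mark outgoing packets with a DSCP value (here
# "Expedited Forwarding", conventionally used for voice). The marking is
# only a hint -- routers in the middle are free to ignore it, which is
# why this falls short of a true end-to-end service contract.
import socket

DSCP_EF = 0x2E              # Expedited Forwarding code point
TOS_VALUE = DSCP_EF << 2    # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS is available on Linux and most Unixes; availability varies by platform.
if hasattr(socket, "IP_TOS"):
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.close()
```

Note what’s missing compared to the sketch of a real end-to-end request: there is no bandwidth reservation, no jitter bound, no price, and no acknowledgment from the network that the request will be honored.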

The story of how the Internet came by its odd architecture (one it doesn’t share with the much better-designed ARPANET, with coherent architectures like SNA and DECnet, or with the extant PDNs) is a story of ambitious professors, government grants, and turf wars among contractors, not at all a tale of the best design winning out; but more on that later. This “end-to-end” fantasy is simply historical revisionism, and we need to nip it in the bud before it does any more damage.

UPDATE: Weinberger gets defensive about his creds at the wanky Supernova conference:

Up now, David Weinberger brings the Cluetrain ethos to the new areas of digital identity and DRM, professing his end-user ignorance as his unique qualification for speaking for normal users and articulating the rights they would want to protect.

Heh heh heh.