Symmetry, Control, and Progress

A friend asked me what I thought about Doc Searls’ latest essay on the evolution of the Internet, and as I happened to be reading it already, I’ve written a few disjointed notes. The short version of my reaction is that it’s sad that everybody with an axe to grind about technology, politics, or business these days seems to think that the Internet has an immutable, Platonic form that’s somehow mystically responsible for all that’s been good in the technology business over the past twenty years, and that any alteration of this form will screw it up. According to this way of thinking, stuff like Napster that exists solely for the purpose of illegal activity is good (even though new), but DRM (which isn’t really a Net deal anyhow) would be inscrutably bad.

This is sort of a “natural law” argument that’s supposed to persuade business and government to turn a blind eye to abuses of the Net, leaving its regulation to self-appointed do-gooders free of commercial interest. It’s a flawed argument because the Internet is not a spiritual essence from some higher reality; it’s a tool, and like all tools it adapts to human needs or is discarded. The strongest proponent of this view is Larry Lessig, whose book “The Future of Ideas” I’ve just read, and the others who argue this line (Searls, Weinberger, Gillmor) take their lead from him. I’ll write a review of Lessig’s book in the next few days, and it’s not going to be pretty. But back to Searls, and the theory of immaculate conception:

The Internet is not simply a network, it’s a means of interconnecting networks. It won out over competing technologies because it was heavily subsidized by the government and simpler than the alternative, the ISO/OSI protocol suite. OSI was a complicated set of international standards devised by committees with membership as diverse as the UN but in some ways even less rational. It contains a myriad of options, many of them unusable, and is hard to understand, let alone implement. In the heyday of OSI, we had a series of “OSI Implementors’ Workshops” to hash out subsets of the protocols to implement for demonstration purposes, and even that was very painful. Internet protocols weren’t designed by committees but by individuals paid by ARPA to keep things simple. OSI was intended to take the place of proprietary protocols from IBM, Xerox, and DEC, providing end-to-end applications, whereas the Internet was simply intended to interconnect diverse networks with a basic level of end-to-end capability.

Make a side-by-side comparison of any early Internet protocol with the competing ISO candidate and you’ll see that the Internet offering can be implemented in less memory, in fewer CPU cycles, and with fewer man-hours of programming effort than the alternative. As if that weren’t enough to ensure victory, the government paid contractors to write reference implementations of Internet protocols and then gave them away for free.

Broadband hearings

The House Energy and Commerce committee is holding hearings today on broadband regulation aimed at harmonizing DSL and cable access policies:

While these two forms of broadband directly compete in many cities, they are currently regulated in different ways. Phone companies are required to provide Internet service providers non-discriminatory access to their networks, while cable companies can pick and choose among ISPs.

There is an urban legend that these regulations have something to do with a nefarious plot by the cablecos and telcos to hijack Google results and steer customers to the provider’s preferred sites. This is not true, of course; the real issue is under what conditions independent ISPs can use DSL and cable plants to reach new customers.

Independent ISPs, of course, want to be able to serve customers across the cable company’s lines for a minimal price, but cable companies want to be able to continue dictating terms of such access as they see fit. Telcos would like the same degree of flexibility in their business models that cable companies have.

The public has two interests here, and they don’t necessarily harmonize all that well. On the one hand, we want the choice between DSL and cable Internet available to as many homes as possible, which is to say, all of them. We want these services to keep improving over time, which requires the companies to buy more gear from Cisco and friends. And we want the prices low.

At the same time, we want to be able to use ISPs that are more competent and less restrictive than SBC and Comcast, and we want to be able to do that for a reasonable price.

So if we set public policy that cablecos and telcos are only allowed to sell use of their lines for basic packet switching, and that all ISP functions (assigning IP addresses, handling e-mail and Usenet, and providing DNS) have to be unbundled, their profit margin may not be enough to encourage them to buy lots of gear.

On the other hand, how much does it take?

Synergy

This is cute:

TiVo Inc. subscribers will be able to program their digital video recorders remotely by logging on to America Online under a new service TiVo and AOL plan to announce today.

One of the coolest things about Replay is the ability it gives its customers to program recording lists on the web, so it’s nice to see TiVo catching up, and it’s also smart of them to cultivate their AOL connection:

So maybe what AOL/Time-Warner needs to do is forget about the Internet and broadband, and get themselves some nice Tivo-type property to really make the synergy work. Then they can upgrade the book value of their “good will” instead of sending out bad vibes and like, bumming everybody out, you know.

…as somebody said.

The Open Internet

Joe Lieberman has a moderately interesting paper on his web site called “Growing the Innovation Economy: A New Strategy For A New Prosperity” that deals with Internet openness in broad terms:

Ensure that the Internet continues to provide an open platform for innovation: The Internet is different from the phone network and radio and broadcast television in important ways. It is easier for individuals and small organizations to be producers as well as consumers of information. The Internet allows for “many to many” communication as opposed to the “one to many” communication of broadcast television. Innovation can occur at the edge of the network. A student, an independent software developer, or a small high-tech company can come up with an idea for a new application, protocol, or kind of content. If enough people find it useful or worthwhile, this idea can spread like wildfire. Even as the Internet evolves, it is important to ensure that it continues to provide an open platform for rapid and decentralized innovation, and for the exchange of ideas.

It seems sensible enough, praising the Internet, entrepreneurship, and openness, and there’s nothing to which I would take exception in any of it.

Unfortunately, the tortuously confused Larry Lessig posted this excerpt to his blog, somehow managing to read it as an endorsement of the dubious “end-to-end” architecture that’s spawned a whole cargo cult of misguided followers:

End to End has gone presidential.

So now I’m getting email from people wanting me to comment on Lessig’s reasoning in his book The Future of Ideas. I haven’t read Lessig’s tome, and I doubt I will unless somebody pays me to review it, but I nonetheless tried to accommodate my correspondents by posting this explanation:

I don’t doubt that Lessig means well, but he frankly doesn’t know what he’s talking about. There is a legitimate, if obscure, fear in some quarters that ISPs may someday censor specific types of content, either in their customers’ interest or in their own economic interests. Porn filters, for example, discriminate based on content, but many customers would consider this a value-added service and it’s not worth getting excited about.

But let’s take it a step further, and suppose that an ISP filters video packets, ostensibly because it wants to control your video experience through its ownership of your cable TV franchise. This would be a bad thing, of course, and I don’t argue otherwise.

But the question we have to ask as network architects is whether there’s any relationship between the Internet’s present or future architecture and this sort of censorship, and the answer to that is clearly no. Video packets are easy to identify on the net because they’re carried by a limited set of protocols and clearly marked; a censor doesn’t care whether the marking is at the IP layer or at the RTP layer or at the UDP layer; they each have a signature, and unless they’re encrypted, they can be found.

Now the question has to be asked as to whether the Internet’s current architecture can hope to compete with cable TV and DBS as a practical alternative for carrying audio and video data, and whether this should be a goal. In the early days of IP, it clearly wasn’t a goal, and therefore an architecture was developed that blocked transport-layer access to the isochronous services in the data link and medium access control protocols that would make it practical. This architecture now has the effect of keeping the voice and data networks separate, to the advantage of telcos and cablecos who would like to bill you extra for providing voice and video services.

So far from advocating an architecture that frees the consumer from the big media and telephone companies, the end-to-end cargo cultists are promoting the exact thing that keeps them dependent, and they do so out of ignorance of the technical issues in network architecture.

To put it simply, you wouldn’t trust me to explain constitutional law to you, so why would you trust someone with Lessig’s background to explain my business, network architecture, to you?
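To make the protocol-signature point in that note concrete, here’s a minimal sketch in Python of how a censoring middlebox could flag unencrypted video traffic without any deep content inspection. The port heuristic and the sample payload type are my own illustrative assumptions, not any ISP’s actual filter:

```python
# Minimal sketch of signature-based classification; not anyone's real filter.
def looks_like_rtp(udp_payload: bytes) -> bool:
    """Heuristic: does this UDP payload start with an RTP fixed header?"""
    if len(udp_payload) < 12:           # RTP's fixed header is 12 bytes
        return False
    return (udp_payload[0] >> 6) == 2   # deployed RTP is always version 2

def classify(dst_port: int, udp_payload: bytes) -> str:
    # RTP convention puts media on even-numbered high ports; combined
    # with an RTP-shaped header, that's signature enough for a censor.
    if dst_port >= 1024 and dst_port % 2 == 0 and looks_like_rtp(udp_payload):
        payload_type = udp_payload[1] & 0x7F    # 7-bit RTP payload type
        return f"probable A/V stream (RTP payload type {payload_type})"
    return "other traffic"

# A 12-byte header with version 2 and dynamic payload type 96 (often video):
print(classify(5004, bytes([0x80, 0x60]) + bytes(10)))
```

Encrypt the stream and the signature vanishes, which is the real limit on this kind of censorship; none of it depends on whether the network core is dumb or smart.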

If you want a robust Internet that’s capable of carrying voice and video as well as data, you have to abandon end-to-end architecture and go with a smarter network layer; this doesn’t mean you have to abandon openness, because openness and end-to-end aren’t related.

OK?

Misunderstanding the Internet

The mistaken idea that Internet architecture is “End-to-End” has cropped up again, this time on the Doc Searls blog, with a reference to some orders for the FCC from Larry Lessig, who’s not especially empowered to issue them.

While there are many problems with using the FCC to impose this view (like, um, the fact that they can’t), and with Searls’ desire to make a political litmus test out of it, the most important is that it’s simply not true. While it may be argued that the Internet has an “end-and-end” architecture that concentrates as much intelligence as possible in the endpoints and has precious little in the middle, a truly “end-to-end” architecture would allow the ends to control the flow of messages through the middle, and the current architecture can’t do that.

An end-to-end architecture, in other words, would allow a voice application to tell the network “I need a narrow stream of bandwidth connecting me to this other end, but I need that stream to be free of jitter. I don’t need retransmission of packets dropped to relieve congestion, but I do need to know I’m getting through, and I’m willing to pay 25 cents a minute for that.” Or it would allow a caching media application to say “I need lots of bandwidth for a 4 gigabyte transfer, but I don’t want to pay a lot for it, and you can work it around other applications that need small chunks because I don’t care about jitter.” Or it would allow an email application to say “Send this the cheapest way, period.” And it would allow a teleconferencing application to say “Send this to my group of 14 endpoints without jitter and with moderately high bandwidth and we’ll pay you a reasonable fee.”
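Nothing in today’s IP can carry requests like these, which is the point; but purely as an illustration, a flow-request interface of the kind described above might look something like this sketch. Every field name and price here is invented for the example:

```python
# Hypothetical flow-request records for a truly end-to-end network.
# Nothing like this interface exists in IP today; all names and prices
# are invented purely to illustrate the paragraph above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FlowRequest:
    endpoints: List[str]            # far end(s) the flow must reach
    min_kbps: int                   # bandwidth floor the application needs
    max_jitter_ms: Optional[int]    # None means jitter doesn't matter
    retransmit_drops: bool          # redeliver packets dropped en route?
    max_price_cents_per_min: float  # what the application will pay

voice = FlowRequest(["far-end-phone"], 64, max_jitter_ms=5,
                    retransmit_drops=False, max_price_cents_per_min=25.0)
bulk  = FlowRequest(["cache-server"], 10_000, max_jitter_ms=None,
                    retransmit_drops=True, max_price_cents_per_min=1.0)
email = FlowRequest(["mail-relay"], 1, max_jitter_ms=None,
                    retransmit_drops=True, max_price_cents_per_min=0.1)
conference = FlowRequest([f"member-{n}" for n in range(14)], 512,
                         max_jitter_ms=10, retransmit_drops=False,
                         max_price_cents_per_min=5.0)
```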

The network would then deal with congestion by dropping the spam and the e-mail until conditions improve, and by delaying the honking media files, but it would endeavor to deliver as many of the voice and real-time media packets as possible. It therefore wouldn’t allow spam to step on VoIP, as it does now. Most of us are able to see that this would be progress, because we see the Internet as a tool and not as a socio-political metaphor.
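Here’s a rough sketch of that congestion policy as a queueing discipline; the traffic classes, capacity, and drop rule are illustrative, not a description of any deployed router:

```python
# Sketch of the drop policy described above: under congestion, shed
# e-mail and bulk traffic first and protect real-time packets.
REALTIME, BULK, BEST_EFFORT = 0, 1, 2    # lower value = higher priority

class SmartQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queues = {REALTIME: [], BULK: [], BEST_EFFORT: []}

    def enqueue(self, packet, prio: int) -> bool:
        if sum(len(q) for q in self.queues.values()) >= self.capacity:
            # Congested: drop from the lowest class that has traffic,
            # so spam and e-mail yield to voice instead of stepping on it.
            for victim in (BEST_EFFORT, BULK):
                if self.queues[victim]:
                    self.queues[victim].pop(0)
                    break
            else:
                return False        # only real-time left; refuse the arrival
        self.queues[prio].append(packet)
        return True

    def dequeue(self):
        # Serve real-time first; bulk waits, best-effort waits longest.
        for prio in (REALTIME, BULK, BEST_EFFORT):
            if self.queues[prio]:
                return self.queues[prio].pop(0)
        return None
```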

There are a number of kludges that have been adopted in TCP to approximate a truly end-to-end capability, but none of them really make it a reality because there’s not enough smarts in IP and its various kludgy cousins (ICMP, IGMP) to make this work. So freezing the architecture at this stage would be a serious mistake, which is why you never see network architects arguing for the things that Searls (a Public Relations man), Lessig (a law professor) or Dave Weinberger (a philosophy professor) want.
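For a flavor of how thin the existing hooks are, consider the IP type-of-service byte, redefined these days as the DiffServ code point: an endpoint can mark its own packets for expedited forwarding, but that’s a request, not an instruction, and no router on the path is obliged to honor it. A minimal Python sketch (the address is illustrative):

```python
# The closest thing the current architecture offers: mark your own
# packets with the DiffServ "Expedited Forwarding" code point and hope.
import socket

EF_DSCP = 46                    # Expedited Forwarding code point
TOS_BYTE = EF_DSCP << 2         # DSCP sits in the top six bits of TOS

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
sock.sendto(b"voice sample", ("192.0.2.7", 5004))   # illustrative address
```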

The story of how the Internet came by its odd architecture, which it doesn’t share with the much better-designed ARPANET, coherent architectures like SNA and DECNet, and extant PDNs, is a story of ambitious professors, government grants, and turf wars among contractors that’s not at all a tale of the best design winning out, but more on that later. This “end-to-end” fantasy is simply historical revisionism, and we need to nip it in the bud before it does any more damage.

UPDATE: Weinberger gets defensive about his creds at the wanky Supernova conference:

Up now, David Weinberger brings the Cluetrain ethos to the new areas of digital identity and DRM, professing his end-user ignorance as his unique qualification for speaking for normal users and articulating the rights they would want to protect.

Heh heh heh.