Misunderstanding the Internet

The mistaken idea that Internet architecture is “End-to-End” has cropped up again, this time on the Doc Searls blog, with a reference to some orders to the FCC from Larry Lessig, who’s not especially empowered to make them.

While there are many problems with using the FCC to impose this view (like, um, the fact that they can’t), and with Searls’ desire to make a political litmus test out of it, the most important is that it’s simply not true. It may be argued that the Internet has an “end-and-end” architecture that concentrates as much intelligence as possible in the endpoints and leaves precious little in the middle, but a truly “end-to-end” architecture would allow the ends to control the flow of messages through the middle, and the current architecture can’t do that.

An end-to-end architecture, in other words, would allow a voice application to tell the network “I need a narrow stream of bandwidth connecting me to this other end, but I need that stream to be free of jitter. I don’t need retransmission of packets dropped to relieve congestion, but I do need to know I’m getting through, and I’m willing to pay 25 cents a minute for that.” Or it would allow a caching media application to say “I need lots of bandwidth for a 4-gigabyte transfer, but I don’t want to pay a lot for it, and you can work it around other applications that need small chunks because I don’t care about jitter.” Or it would allow an email application to say “Send this the cheapest way, period.” And it would allow a teleconferencing application to say “Send this to this group of 14 endpoints without jitter and with moderately high bandwidth, and we’ll pay you a reasonable fee.”
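
A sketch of what such a service interface might look like, in Python. To be clear, nothing like this exists in today’s IP stack; every name here (FlowRequest, ServiceClass, and the rest) is invented for illustration.

```python
# Hypothetical only: a flow-setup interface of the kind described above.
# No such API exists in the current IP stack; all names are invented.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ServiceClass(Enum):
    ISOCHRONOUS = "isochronous"   # low jitter, no retransmission (voice, live media)
    BULK = "bulk"                 # big transfers, jitter-irrelevant (caching)
    BEST_EFFORT = "best_effort"   # cheapest path, no guarantees (email)

@dataclass
class FlowRequest:
    service: ServiceClass
    bandwidth_kbps: int
    max_jitter_ms: Optional[float]   # None means "don't care"
    offered_cents_per_minute: float  # what the application will pay

# The examples above, restated as requests the network could accept,
# reject, or reprice under congestion:
voice = FlowRequest(ServiceClass.ISOCHRONOUS, 64, 1.0, 25.0)
cache_fill = FlowRequest(ServiceClass.BULK, 10_000, None, 0.1)
email = FlowRequest(ServiceClass.BEST_EFFORT, 0, None, 0.0)
```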

The network would then deal with congestion by dropping the spam and the e-mail until conditions improve, and by delaying the honking media files, but it would endeavor to deliver as many of the voice and real-time media packets as possible. It therefore wouldn’t allow spam to step on VoIP, as it does now. Most of us are able to see that this would be progress, but we see the Internet as a tool, and not as a socio-political metaphor.

There are a number of kludges that have been adopted in TCP to approximate a truly end-to-end capability, but none of them makes it a reality, because there’s not enough smarts in IP and its kludgy cousins (ICMP, IGMP) to support it. So freezing the architecture at this stage would be a serious mistake, which is why you never see network architects arguing for the things that Searls (a Public Relations man), Lessig (a law professor) or Dave Weinberger (a philosophy professor) want.

The story of how the Internet came by its odd architecture (one it doesn’t share with the much better-designed ARPANET, with coherent architectures like SNA and DECNet, or with extant PDNs) is a story of ambitious professors, government grants, and turf wars among contractors. It’s not at all a tale of the best design winning out, but more on that later. This “end-to-end” fantasy is simply historical revisionism, and we need to nip it in the bud before it does any more damage.

UPDATE: Weinberger gets defensive about his creds at the wanky Supernova conference:

Up now, David Weinberger brings the Cluetrain ethos to the new areas of digital identity and DRM, professing his end-user ignorance as his unique qualification for speaking for normal users and articulating the rights they would want to protect.

Heh heh heh.

30 thoughts on “Misunderstanding the Internet”

  1. Damn if you didn’t just describe ATM, circa 1997. Ah, the things we could be doing.

    PS Your link goes to some post about a hotel internet center.

  2. the way I see it, network architects HAVE argued, and ARE arguing, for similar things, and that’s the origin of all the hubbub:

    http://citeseer.nj.nec.com/83743.html

    note that you imply that Searls says that Internet architecture *IS* “End-to-End”…I don’t see that. what I do see is Searls asking for support for an “end-to-end” Internet. big difference.

  3. it also sounds like you are making a huge leap of faith that somehow network devices will be able to correctly identify spam in real time and decide whether to push it out of the way. even doing it 99.99% correctly is not good enough. I’m not sure you should hold your breath on that happening anytime soon.

  4. Joe,

    I think it’s pretty clear from Richard’s post that, in his “end-to-end” system, the spammer would be expected to pay a higher price for sending e-mails that were not postponed or interrupted. Choosing the “uninterrupted” option would destroy the spammer’s business.

    There would not be the need to distinguish between spam and any other data traveling over this “end-to-end” architecture; all data (regardless of content) would be subject to the same pricing structure.

    And charging just 1 cent for every e-mail sent (or for every non-time-critical data transmission where jitter was irrelevant) would bankrupt most spammers. (A back-of-envelope sketch follows at the end of this comment.)

    If I’m wrong about this stuff, Richard, please correct me.

    Of course, Joe, as to how this “end-to-end” architecture gets built and how soon, you’re probably right to be skeptical of rapid results. (I say probably because I’m not much of a techie.)
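
    A back-of-envelope sketch of that postage arithmetic, with the spammer’s volume and margins as illustrative assumptions rather than data:

    ```python
    # Illustrative numbers only: they show the shape of the argument,
    # not measured spammer economics.
    postage_per_message = 0.01      # dollars: the 1-cent charge above
    messages_per_day = 10_000_000   # assumed bulk-mail volume
    response_rate = 0.0001          # assumed: one sale per 10,000 messages
    profit_per_sale = 10.00         # assumed dollars per sale

    daily_postage = postage_per_message * messages_per_day
    daily_revenue = messages_per_day * response_rate * profit_per_sale
    print(f"postage ${daily_postage:,.0f}/day vs revenue ${daily_revenue:,.0f}/day")
    # -> postage $100,000/day vs revenue $10,000/day: the business dies
    ```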

  5. it’s really the term ‘spam’ that I wanted to be clear on. If what he means by ‘spam’ is email, or more specifically any network service (protocol or traffic) that *isn’t* video (or whatever requires lower latency), then fine, but that’s not how it reads.

    I think that the main problem in many people’s minds about deviating from what is known as ‘end-to-end’ (many people will argue over the definition) is that it gives priority to the content of communications based on some metric (cost, etc.)…all of Richard’s arguments against e2e (the way I hear it) seem to be about providing a way for media that need resources (e.g. video without jitter) to get those resources, and to diminish other media, if need be, given a finite amount of bandwidth to be shared.

    The people who love the idea of e2e like the idea that my email to my cousin about baked beans has the same priority as that of an encrypted email sealing a large financial deal, which might have the same priority as an email from a Texan to his Senator urging for legislation. Another example would be that the video of a barbeque streaming from a friend’s house would have the same “priority” on the Internet as Oracle’s monthly management video conference.

  6. Joe, the professors who wrote “End-to-End Arguments in System Design” weren’t network architects; they were systems guys who wanted to make sure that they got as much network code as possible inside the system, the better to get grants to tinker with it.

    There’s no evidence at all in that paper or anywhere else that they were the least bit cognizant of what it takes to run a wide-area network with the richness of services and the robustness that’s needed to support a broad range of applications (where broad ranges from email to voice).

    I’m suggesting, and this isn’t hard to do, that the IP layer needs to have a service interface whereby user applications can request a class of service. While the protocol supports such an interface already (sketched at the end of this comment), it doesn’t have the ability to honor it in any meaningful way.

    Spammers would be out of business immediately if they had to pay for the resources they use, of course, which brings us to another problem with Internet architecture: the assumption that use of the net is always going to be free. It would be great progress if we could move from bulk pricing (like you have with your ISP today) to a per-packet pricing model, even if the cost remained about the same for most people. The reasons for this are something I’ll explain later.
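
    The class-of-service interface that exists but isn’t honored is the IP TOS byte, nowadays read as the DiffServ code point. Here is a minimal sketch of requesting the low-jitter class on a Linux host; the Expedited Forwarding code point is defined in RFC 3246. Setting the mark is the easy part; nothing in IP obliges any router along the path to honor it, which is exactly the complaint above:

    ```python
    # Mark a socket's traffic as Expedited Forwarding (EF), the DiffServ
    # class intended for low-jitter voice.
    import socket

    DSCP_EF = 46                # Expedited Forwarding, RFC 3246
    TOS_VALUE = DSCP_EF << 2    # DSCP occupies the top 6 bits of the TOS byte

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    # Every datagram sent on s now carries the EF mark in its IP header.
    ```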

  7. some points:

    the assumption that use of the net is always going to be free has nothing to do with the architecture of the internet. we could have the same architecture and still have to charge per packet/service/QoS. adding pricing to what we have is trivial.

    I’m not going to argue whether those authors were ‘network architects’ with you…the fact of the matter is that they all (in addition to many others) participated in the discussions that resulted in what we see today, if not all of the ideas in that paper, then some of them. when networks are seen as resources, then it’s not just network architects who should be making decisions about them on their own.

    I’m well aware of what you’re suggesting. You are not commenting at all on my last and main point…that the people who believe in what ‘e2e’ commonly means only like it because it reflects the concept that my data transmission is no more important (NOTE: not faster, not slower, not more continuous, not less jittery, just no more important) than yours. The basis of some arguments against e2e is that there is a finite amount of bandwidth, so priorities need to be assigned. well, one of these days that limitation will go away, the speed of light being the only limit.

    that’s all.

  8. The Internet that we have today, Joe, is the result of a network that was created and funded for the express purpose of experimentation being rushed into production status before the results of the experiment could be folded into the design. What’s happened since is some 3,000 RFCs, 95% of which were lame attempts at fixing the defects in the experimental architecture.

    The people who talk about traffic priorities today as if they translated into “content importance” are clearly not competent to discuss network architecture, let alone dictate it.

  9. indeed this history is true. and it’s SO terrible, huh? I think that we can talk all day long about the shortcomings of the Internet and its protocols, and you will get no argument from me on many of them, but there *is* value in what we have right now, lame or not. You yourself may or may not experience this value, but it’s there. I’m not one to be romantic about some idealized utopia of the Internet, but I’m not so naive as to ignore the fact that a garage band from Fresno can distribute and stream their mp3s just the same as Madonna (given the same bandwidth), and a Comp/Lit student at Tufts can publish/distribute (i.e. make available) her new book just the same as Larry Lessig.

    again, you’re only criticizing *people* who are expressing their fears (Lessig, Searls) about losing what feels like a very fair system of getting content across, where priority is not just given to the people with the most money. instead, can you comment on what is being said? the fears themselves? and not who is saying it? Thus far, you’re not being clear on your opinion.

    Do you think their fears are unfounded? and that there will always be a ‘democratic’ system of getting content thru the Internet?

    Or do you think that their fears are valid, and you just don’t care about any of that?

    again…to make sure I’m clear on what I’m getting at:

    you talk about the technical means of bringing a medium (video) to the Internet that demands a certain level of continuity/reliability. Let’s say for argument that it’s done. Will all video have the same price? or, as in my previous post, will Oracle’s monthly company meeting cost the same as my friend’s backyard barbeque video, and both have the same quality?

  10. Let’s pretend, just for the sake of argument, that the telcos and cablecos all go away and the last mile is owned and managed by consumer co-ops, like rural phone companies. So you and all your neighbors have to pay for the router or the switch or the voice mux at the CO to connect all your homes to the voice network, the Internet, and the cable tv network. Now you happen to have in your neighborhood some folks who only use voice, some who use a mix, and a couple of guys with a big porn site that consume ten times as much bandwidth as the other neighbors combined.

    Does everybody pay the same amount of money for the service, and does the lady across the street who needs to call 911 have to wait for a dial tone until the porn site finishes servicing its showering-dorm-girls customers? Does the equipment needed to give everybody a dial tone have to go on the back-burner in order for the co-op to buy more equipment for porn uploads? (A toy scheduling sketch follows at the end of this comment.)

    If nothing changes in Internet architecture, we have huge problems going to the next level of services, and the E2E boys want the architecture frozen where it is today.

    The architecture and the protocols of the Internet need to progress and advance, so I’m simply arguing against stagnation.
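
    The allocation question can be made concrete with a toy strict-priority queue. This is a sketch, not a real forwarding plane, but it shows mechanically what “dial tone before uploads” means:

    ```python
    # Toy strict-priority transmit queue: voice always drains before bulk.
    import heapq
    from itertools import count
    from typing import Optional

    PRIORITY = {"voice": 0, "bulk": 1}   # lower number is served first
    queue, seq = [], count()             # seq preserves arrival order per class

    def enqueue(kind: str, payload: str) -> None:
        heapq.heappush(queue, (PRIORITY[kind], next(seq), payload))

    def transmit_one() -> Optional[str]:
        return heapq.heappop(queue)[2] if queue else None

    enqueue("bulk", "porn upload, chunk 1")
    enqueue("voice", "911 call setup")
    enqueue("bulk", "porn upload, chunk 2")
    print(transmit_one())   # -> "911 call setup": the lady gets her dial tone
    ```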

  11. I’m not sure what point your analogy is intended to make, since there is such a mixture of hypotheticals and media. for one, there would not be such a situation (if a design has 911 calls in network contention at all, it’s a flawed design)…and second, you’re misunderstanding my point.

    My point is simple:

    given enough bandwidth, every service/protocol can (and will) have enough of its essential *acceptable* characteristics satisfied. this means no jitter for video and audio, while real-time voice/email will have speed and reliability, limited only by physics. when that happens (and it will, no matter how we get there) all I care about is ensuring that no one else gets better priority than I do because of actual content.

    I do know that I don’t see anyone who is “pro-e2e” wanting to ‘freeze’ the architecture; they only mean to keep some of the values that create a “level-playing-field” intact.

    If all you are arguing is that the protocols of the Internet should progress, and not stagnate, then non-network architect people like Searls and Lessig would agree with that, as would people like Lawrence Roberts and Radia Perlman. (who *were* involved with arpanet and decnet, and who have spoken about e2e)

    It’s not the actual e2e design philosophy that is important, it’s the values it’s supposed to bring. E2e supporters are only opposed to, at the base level, a hierarchical system of communication based on cost or caste. Nothing more. That is not too much to ask for, IMHO, because it’s possible.

    personally, I don’t care if it’s tcp/ip. or decnet. or f*king banyan vines. I don’t care if there is a separate network for video, another for voice, another for email. I want the Internet(s) of the future to pay as much attention to my child’s birthday party photos as to anything else, and I can make that demand because at some point (maybe not now) there will be bandwidth enough to satisfy certain levels of services, at an acceptable level of quality. period.

    if that’s called end-to-end, then sure. if it’s not, that’s fine, too.

  12. …given enough bandwidth, every service/protocol can (and will) have enough of its essential *acceptable* characteristics satisfied

    Bandwidth isn’t free, and that’s why I brought up the thought experiment with the co-op. Assuming that it’s not free, and that it’s finite, the question of how to manage it arises. This is a question that the World of Ends people don’t want to deal with.

    Radia Perlman, BTW, has had one nice if somewhat obvious idea in her long career (one which certainly helped spawn 15 years of useless discussion about switches vs. routers), so I don’t know that I’d put all that much stock in her assessments.

  13. re: Perlman…just wanted to prove that just because someone is a network architect doesn’t mean that they aren’t pro-e2e. (or pro- the values e2e is _supposed_ to provide)

    true, bandwidth isn’t free. I also don’t see proponents of e2e treating the management of bandwidth and its associated costs as either (1) something to ignore, or (2) something that is or will be easy. I assume that if you think they “don’t want to deal with” that, then you’re not reading far enough and might be stuck on being angry-guy.

    but… you have still managed to avoid answering my original question above…which I’ll try to make clearer:

    Do you, or do you *not*, value the ability to either:
    -send an email
    -ftp a file somewhere
    -serve your blog from a webserver
    -etc.

    with (basically) the same priority as someone who has more money to spend on doing those things than you do, or someone whose emails/ftp’d files/blog are of wildly different content than yours?

    I’ll check back for any answer/comment on it.

  14. Your question makes no sense because it assumes so many things that aren’t true.

    People who have more money to spend on bandwidth than I do can ensure that their emails, ftps, and https move faster than mine through the Internet. They do this by purchasing high-speed connections from a major network company, like MCI. Their network company, or ISP if you prefer, has peering arrangements with other major networks to ensure that traffic moves faster through their portions of the Internet than traffic coming from discount ISPs like mine. That’s how things are today, and it has nothing to do with content.

    Content-neutrality simply guarantees that my audio over IP has no more priority than spam, and only an idiot sees that as a good thing.

  15. EXACTLY…thank god it has nothing to do with content!

    sorry for not being clear. I’m talking about discrimination based on content. that (from what I can see) is the MAIN concern of people who support e2e. At this point, I thought you’d understand that.

    Content-neutrality would guarantee that MY audio is just the same as YOURS, and yes, just the same as spam. but spam is different than other content, and we can fix spam other ways. To throw out content neutrality to fix spam would be like using an H bomb to solve a termite problem. Don’t make this discussion about spam, it’s not. Baby/bath water is a perfect cliche for what you are suggesting.

    ‘Content neutrality’ HAS value, as well as raises questions. Those questions can be answered while still maintaining as much of that value as possible. The people supporting e2e are not idiots, and they are not blind to any arguments about the issue, either. If you think they are, then I think I’m understanding just how strongly you want to be ‘angry guy’. look, maybe it’s more fun for you to complain about it and ‘famous’ people on your blog than to understand what they believe and why. I won’t rain on the parade, this will be my last comment on the matter.

    Again: we can (and will) have audio and video with no jitter, non-spam emails, and other Internet traffic, and still have no content discrimination. The only thing that could stop that from coming true is the paranoid, spoiled, unimaginative, and selfish people who don’t fully understand what content neutrality is meant to be.

    thanks for the go-round.

  16. > a truly “end-to-end” architecture would allow the ends to control the flow of messages through the middle

    Perhaps, but that is not what the end-to-end argument says. Rather, it simply guides the placement of functionality. Sometimes functionality is better placed in the network, if it can be correctly implemented there. It turns out that much of the functionality is often better placed at the ends (e.g. encryption, flow control and error control); the “careful file transfer” sketch at the end of this comment is the classic example.

    Later in a comment you say:

    > If nothing changes in Internet architecture, we have huge problems going to the next level of services, and the E2E boys want the architecture frozen where it is today.
    >
    > The architecture and the protocols of the Internet need to progress and advance, so I’m simply arguing against stagnation.

    Actually the architecture has changed, significantly. NATs, firewalls, proxies, caches and so on have broken many of the assumptions of the original architecture. Lessig and others who espouse e2e may really be arguing for what they see as something that is being eroded or, in some cases, not being exploited. Barriers are being placed in the way of making full use of the architecture (e.g. countries filtering UDP ports that are often used for VoIP services).

    John
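
    PS: the canonical example from that paper is “careful file transfer”. A minimal sketch of the idea (the function names and the checksum choice are illustrative, not the paper’s): however reliable each hop is made, corruption in a host buffer or on a disk slips past, so the endpoints must verify, and if need be retry, themselves.

    ```python
    # End-to-end check on a file transfer: the network may retransmit
    # per hop, but only the endpoints can confirm the whole transfer.
    import hashlib
    import queue

    def send_file(data: bytes, channel: queue.Queue) -> None:
        digest = hashlib.sha256(data).hexdigest()
        channel.put((data, digest))     # the check travels with the data

    def receive_file(channel: queue.Queue) -> bytes:
        data, digest = channel.get()
        if hashlib.sha256(data).hexdigest() != digest:
            raise IOError("end-to-end check failed; request a resend")
        return data                     # verified end to end

    ch = queue.Queue()                  # stands in for the network path
    send_file(b"the payload", ch)
    assert receive_file(ch) == b"the payload"
    ```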

  17. As a matter of historical fact, the only serious question about functionality addressed by the Internet architecture concerns the placement of the reliable stream service. In previous long-haul networks (ARPANET, X.25), the network layer provided it, but in LANs (Ethernet) it was up to the end system to provide it. There were arguments that the network-based reliable stream still had to be duplicated in the end system for one reason or another (wrong arguments, as it turned out).

    End-to-end, if it means anything (and “content-neutrality” is a feature of all data networks), means making the whole network look to the computers attached to it like a large Ethernet, or trying to make it look that way. Only you can’t, because the Internet is so much larger and more complex, with multiple routes, multiple links with varying economics, and transient traffic patterns that have to be balanced and metered, etc.

    But the Internet assumption, or really the Ethernet assumption because that’s where it comes from, is that if you have enough total bandwidth you can make any application work without managing the bandwidth. We know this works at the level of LANs, where we can now get gigabits for nominal prices.

    Scaling Ethernet architecture to the whole planet is a somewhat different story, however.

  18. An expensive reliable system can be built out of inexpensive unreliable systems. This is done in storage technology with RAID (Redundant Arrays of Independent Disks). There is an additional price to pay, but it’s not mandatory if you have an application that doesn’t require it, and both reliable and unreliable systems gain economies of scale from being built from similar components. (The arithmetic is sketched at the end of this comment.)

    What is not possible to do is to build cheap, “good enough” systems out of expensive reliable systems.

    If the Internet is not good enough, there are many, more expensive systems available for whatever purpose you deem requires the additional “reliability” that generally is the source of the cost.

    Like putting valuables in an armoured car rather than building an armoured road.

    The option that is removed by building a “spam”-free, QoS-enabled (quality of whose service, and of which service, is always the question begged by that one, and even Dave Farber doesn’t believe there is an answer), high-speed network is the cheap and cheerful “good enough” Internet.

    It is that option that has led to the burst of innovation and growth, which may have been mistaken by some as leading inevitably to “new and improved,” but for the vast majority, IMHO, it is enough and we don’t want to pay more so a few can have their QoS “99.999%” reliability delusions gratified.

    Those quaint and ancient protocols that lost in the marketplace, and that you quote with such authority, still exist; go get ’em.

    If it’s not free, it’ll be enough if it’s fixed-cost; and if it’s not finite, I’ll be satisfied around 100 Mb/s, which is not far off at an affordable price, sooner (a la Japan) when the incumbents are shifted off their comfortable stove-piped monopoly over the last mile.

    Hamish.

    PS. “they were systems guys who wanted to make sure that they got as much network code as possible inside the system, the better to get grants to tinker with it.” On what do you base this particularly snide observation?
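
    PPS. The RAID arithmetic mentioned above, with an illustrative failure rate and idealized independence between disks:

    ```python
    # Reliability from unreliable parts: a mirrored pair loses data only
    # if both disks fail. The rate is illustrative; independence and
    # instant rebuilds are idealizations.
    p_disk = 0.03            # assumed annual failure probability per disk
    p_pair = p_disk ** 2     # both halves of the mirror fail
    print(f"single disk: {p_disk:.1%} per year")    # 3.0%
    print(f"mirrored pair: {p_pair:.2%} per year")  # 0.09%
    ```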

  19. What is not possible to do is to build cheap, “good enough” systems out of expensive reliable systems.

    That’s exactly what the Internet is. Most of the low-level data links that connect Internet routers run telephony protocols like T1, SONET, and ATM. These data links are eminently capable of providing jitter-free service, since that’s what they were designed to do, and that’s what they do when they aren’t hamstrung by IP.

    It strikes me as completely absurd that these datalinks are forced to run inefficiently by the defective IP protocol, when a null IP layer that simply allowed connection-oriented transports to map directly to connection-oriented datalinks would be not only simpler but more effective, not to mention cheaper.

    That’s the difference between good network design and bad network design.

  20. 1) there’s a technical term for this — it’s ‘Quality of Service’ (QoS).

    2) You’re generally confused. First, you’re complaining about the wider internet and how it doesn’t let you do QoS. That is because you are a retail user and there isn’t any demand for hard QoS at the retail level. Second, you’re complaining about the TCP stack, which is orthogonal to the idea of QoS and certainly doesn’t stop it from happening.

    3) If you think that you have the magical idea which would dwarf the success of the most amazing network — no, the most amazing application of technology — since the dawn of man, then please inform us as to the specifics.

  21. Felix says: “you’re complaining about the TCP stack, which is orthogonal to the idea of QoS and certainly doesn’t stop it from happening”

    Sorry, Felix, but you’re wrong about that. In order to provide QoS guarantees for jitter-free, real-time flows, the network needs to offer an isochronous service to the transport layer. This is done by X.25, ARPANET, SONET, ATM, etc.

    IP has some header bits that can be used to signify QoS (with or without DIFFSERV), but it lacks a protocol mechanism that can ensure hop-by-hop behavior consistent with a particular QoS.

    The Internet itself is trying to grow a QoS capability, but it can only do so by bypassing IP and mapping the network layer directly onto the underlying isochronous services in a cell-wise fashion.

    I’m sure you understand this, and you’re just being rascally today.

    As to the most amazing application of technology since the dawn of man, I’d vote for the earthmover, with air-conditioning a close second.

  22. ok, I lied. Here I am with another comment:

    ensuring the quality of particular media (video with no jitter, audio with no skipping, etc.) via technical means CAN be quite different than ensuring the reliability/availability of the content that that media contains.

    If we can make video and audio work (and I don’t care whether it’s the ends or the middle that change to do it), fine…but ONLY as long as the network pays no attention to what the video and audio say.

    would you agree with that, Richard?

  23. I agree there’s no relationship between content discrimination and rational transport mechanisms for real-time data, if that’s what you mean. I should caveat that to say that the present architecture makes it easier for ISPs to screw voice and video than it should be, but that can be corrected.

  24. thank you.

    it’s *precisely* content discrimination that is, when it comes right down to it, the main concern of e2e supporters. nothing more. AGAIN, so you’re clear: e2e supporters are vocal about it because content neutrality/discrimination is the end, not the means (pun not intended).

    the e2e theory is just a theory that appears to support that, and that’s why you have so many non-technical (and lawyer) supporters of it.

    If you claim to know what E2E supporters say and think, then why aren’t you trying to educate yourself on what they are actually saying, instead of just reading the Cluetrain Manifesto and Lessig’s blog? Boy…that’s like claiming you know the history of Israel from seeing “Jesus Christ Superstar”.

    what boggles my mind is that I’ve been trying to argue what people like Lessig are saying with someone who has never read Lessig’s book on the subject.

  25. The seminal document on the E2E inanity is a paper called “World of Ends” by Doc Searls and Dave Weinberger, j, and I’ve read it and commented on it at length. Since this paper was written, there’s been an effort on the part of the E2E people to redefine their complaint along the lines you’re using, namely as in favor of non-filtering of content.

    Searls and Weinberger have repeatedly invoked a paper by three MIT professors on system architecture to support their theories, and that paper has nothing to do with content filtering and everything to do with moving connection-oriented services to the transport layer from the place where they were traditionally done, the network layer. This has not turned out to be a good move.

  26. Richard, thanks for your continuing thoughts re: imposing QoS. However, they continue to be wrong-headed. There is no measurable retail demand for the kind of ‘hard’ QoS you are Quixotically championing. Once you get up to the level of applications for which QoS is an actual requirement, you are already spending so much money that joining a switched fabric with embedded QoS where your TCP pipe is an unshared virtual pipe on an isochronous connection is a cost that is lost in the noise.

    According to the old school 45-layer-cake network engineers, a group into which I am assigning you, such a thing as real time voice and video over TCP/IP, much less the broader Internet, is just not remotely possible. Yet millions of people do both every day. Clearly the 45-layer-cake definition of acceptable performance is not the reigning definition.

    As ethernet gets faster, and we all have terabit pipes into our houses, such a concept as isochronous will be as quaint and humorous as the concept of bitmasks for flag values, compressed filesystems, v.32bis and crank-starting your biplane.

    I would argue that we are already well past the point where QoS is fine right now, today, in TCP.
    And it’s not getting any less fine as t -> oo.

  27. I’ll tell you what, Felix – as soon as one percent of American homes have gigabit pipes capable of handling, say, two HDTV-quality video streams and a couple of real-time voice calls at Telco quality, I promise to revisit the issue of isochronous IP for RTP (TCP is a dinosaur, after all). Until then, you and I must agree to disagree. (The arithmetic is sketched at the end of this comment.)

    IPv6 appears to have most of the hooks I’m looking for, so the future looks good and rosy from where I sit.
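
    The arithmetic behind that benchmark, using ATSC’s roughly 19.4 Mb/s MPEG-2 rate for HDTV and G.711’s 64 kb/s for telco-quality voice as reference figures:

    ```python
    # Two HDTV streams plus two telco-grade calls against a gigabit pipe.
    HDTV_MBPS = 19.4       # ATSC MPEG-2 HDTV transport stream, approximate
    VOICE_MBPS = 0.064     # G.711 telco-quality voice, 64 kb/s
    demand = 2 * HDTV_MBPS + 2 * VOICE_MBPS
    print(f"{demand:.1f} Mb/s demanded of a 1000 Mb/s pipe")  # ~38.9 Mb/s, under 4%
    ```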

  28. re: e2e: Add the address space and derivative constructs to the short list (temporal guarantees, CONS).

    re: S/L/W: It is always amazing when lusers assert lack of clue is superclue. I had the same problem in P3P, with “privacy advocates” who couldn’t locate state or …
