“Rethinking the design of the Internet: The end to end arguments vs. the brave new world,” by David D. Clark and Marjory S. Blumenthal, examines the end-to-end principle of network architecture and asks whether it still holds for future networks. The authors write:
The simple service model of the Internet (called “best effort delivery”) makes no guarantee about the throughput that any particular application will achieve at any moment. Applications such as file transfer, Web access, or e-mail are tolerant of fluctuations in rate—while a user may be frustrated by a slow delivery, the application still “works.” Today, a new set of applications is emerging, typified by streaming audio and video, that appear to demand a more sophisticated Internet service that can assure each data stream a specified throughput, an assurance that the best effort service cannot provide. Different approaches are possible, beginning with (re)design of applications to operate using only the current best effort service, perhaps by dynamically adjusting the fidelity of the transmitted information as the network throughput varies. At least some application designers reject this limitation on what they could design. Another approach would be to add new data transport services in the core of the network that provide predictable throughput and bounded delays, and there have been proposals along these lines. However, the Internet Service Providers (see below) have not so far been willing to provide these new services. As a result, application builders have adopted the strategy of installing intermediate storage sites that position the streaming content close to the recipient, to increase the chance of successful delivery. Thus, unlike a simple end to end structure, the design of these new applications depends on a two-stage delivery via these intermediate servers.
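To make the first option concrete (an application that dynamically adjusts the fidelity of what it sends as the best-effort throughput varies), here is a minimal sketch in Python. The bitrate ladder and safety margin are illustrative assumptions of mine, not anything from the paper.

    # Hypothetical encoding ladder; a real application would know its own.
    BITRATES_KBPS = [300, 750, 1500, 3000, 6000]
    SAFETY_MARGIN = 0.8  # assume only 80% of the measured rate is dependable

    def pick_bitrate(measured_kbps: float) -> int:
        """Choose the highest encoding the current best-effort rate sustains."""
        budget = measured_kbps * SAFETY_MARGIN
        viable = [b for b in BITRATES_KBPS if b <= budget]
        return max(viable) if viable else min(BITRATES_KBPS)

    # As throughput fluctuates, fidelity degrades gracefully instead of the
    # stream stalling outright.
    for measured in (5000, 2000, 400):
        print(f"{measured} kbps measured -> {pick_bitrate(measured)} kbps stream")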
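And the two-stage delivery the authors describe is, from the client’s side, just a preference-ordered fetch: try the intermediate server positioned near the recipient, and fall back to the distant origin only if that fails. A sketch, with hypothetical host names:

    import urllib.request

    # Hypothetical stages: a cache near the recipient, then the origin server.
    STAGES = [
        "http://cache.nearby.example/stream/clip.ts",
        "http://origin.faraway.example/stream/clip.ts",
    ]

    def fetch(stages=STAGES, timeout=2):
        """Try each delivery stage in order; the nearer server gets first shot."""
        for url in stages:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError:
                continue  # unreachable or a miss; fall back to the next stage
        raise RuntimeError("all delivery stages failed")

Notice that neither workaround asks anything new of the network core: the application routes around the missing service.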
Yeah, those pesky real-time applications “appear” to have special needs, and the lazy old ISPs haven’t been willing to meet them.
Ahem.