One piece of required reading on Internet Architecture is the literature on End-to-End arguments. The canonical paper is End-to-End Arguments in System Design by Saltzer, Reed, and Clark, which was substantially clarified by Clark’s follow-up with Marjory Blumenthal, Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world. Tim Moors offered a thoughtful critique, A Critical Review of “End-to-end arguments in system design”, which makes this argument about congestion control:
In today’s Internet, congestion control is primarily implemented in end-systems: Most traffic is carried by TCP, which employs a Slow Start algorithm [21] to try to avoid congestion, uses the rate of acknowledgement return to estimate the permissible transmission rate, and interprets packet loss as indicating congestion that requires that the source throttle its transmissions. The only network support is some Random Early Discard devices that reinforce TCP’s behavior by signaling the onset of congestion by discarding packets. However, congestion control is not amenable to end-to-end implementation for the following reasons: First, like routing, congestion is a phenomenon of the network, and since multiple endpoints share the network, it is the network that is responsible for isolating endpoints that offer excessive traffic so that they do not interfere with the ability of the network to provide its service to other endpoints. Second, it is naive in today’s commercial Internet to expect endpoints to act altruistically, sacrificing the performance that they receive from the network in order to help the network limit congestion. The end-to-end arguments that enable the success of peer-to-peer applications also allow the rapid proliferation of applications that do not behave in a “TCP friendly” manner. It is cavalier to allow the commercially valuable Internet to be susceptible to such risks. The requirement that the transport layer implement congestion control also prevents the use of active networking to make transport layers configurable [22]. Summarizing these first two reasons: even though the network is responsible for controlling congestion, it has no reason to trust that endpoints will cooperate in controlling congestion.
A third argument against endpoint implementation of congestion control is that it is inappropriate for certain networks, leading to an unnecessary performance penalty. For example, Slow Start unnecessarily impedes sources that are transmitting on optical circuits (which don’t congest), Media Access Control protocols already provide congestion control for traffic that is local to a LAN, and the assumption that packet loss indicates congestion is invalid for wireless networks in which appreciable loss may also occur due to noise. Fourth, the transport layer lacks the innate ability to detect that congestion is imminent; it can only detect the possible presence of congestion, e.g. through observing packet loss. Schemes such as RED may signal imminent congestion, but they do so by unnecessarily discarding traffic for which the network has already spent resources partially delivering. Fifth, endpoints that implement congestion control separately must independently re-learn the network state, leading to excessively cautious behavior. Finally, while the endpoint may know how it would like to adapt to congestion, it is the network that knows when and where adaptation is needed [23], and should be responsible for ensuring that adaptation occurs. Thus, congestion control is one function that is not well suited to end-to-end implementation.
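The endpoint mechanism Moors summarizes (Slow Start, additive increase, and a halved window when loss is read as congestion) can be made concrete with a toy per-round-trip model. This is a minimal sketch of loss-driven window adjustment, not the behavior of any real TCP stack; the threshold, the loss timing, and the per-RTT granularity are all illustrative assumptions.

```python
# Toy model of the endpoint behavior described above: exponential
# growth in Slow Start, additive growth in congestion avoidance,
# multiplicative decrease when a lost packet is interpreted as a
# congestion signal. Values are illustrative only.

SSTHRESH = 16  # slow-start threshold, in segments (illustrative)

def next_cwnd(cwnd: float, ssthresh: float, loss: bool) -> tuple[float, float]:
    """Return (cwnd, ssthresh) after one round trip."""
    if loss:
        # Loss is read as congestion: halve the window.
        ssthresh = max(cwnd / 2, 2)
        return ssthresh, ssthresh
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh   # Slow Start: exponential growth
    return cwnd + 1, ssthresh       # congestion avoidance: additive growth

cwnd, ssthresh = 1.0, float(SSTHRESH)
for rtt in range(12):
    loss = rtt == 8  # pretend a drop (e.g. from a RED queue) happens here
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
    print(f"rtt {rtt:2d}: cwnd={cwnd:5.1f} ssthresh={ssthresh:4.1f}")
```

The sketch also shows why the wireless objection bites: a loss caused by noise at round trip 8 would halve the window just the same, even though no queue anywhere was full.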
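The “only network support” the quote allows for, Random Early Discard, can be sketched just as briefly. This is a minimal sketch of the classic scheme: drop probability rises with a smoothed average of queue length between two thresholds. The thresholds, drop ceiling, and averaging weight below are illustrative values, not taken from any particular router implementation.

```python
import random

class RedQueue:
    """Toy Random Early Discard drop decision for arriving packets."""

    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, weight=0.2):
        self.min_th, self.max_th = min_th, max_th  # queue-length thresholds
        self.max_p = max_p                          # drop-probability ceiling
        self.weight = weight                        # EWMA smoothing weight
        self.avg = 0.0                              # smoothed queue length

    def drop(self, queue_len: int) -> bool:
        # Track a smoothed queue length so short bursts don't trigger drops.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False    # queue short: no sign of congestion
        if self.avg >= self.max_th:
            return True     # persistent congestion: discard
        # Between thresholds, drop probability rises linearly with avg,
        # signaling the onset of congestion to loss-sensitive senders.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

The drop in the middle band is precisely what Moors’ fourth objection targets: the network signals imminent congestion by destroying a packet it has already spent resources partially delivering. (Explicit Congestion Notification later sidestepped this by marking packets instead of discarding them.)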
Like many things designed 40 years ago, the Internet carries design mistakes. These can be corrected, and we’ll all be better off if they are.