Via Scott Cleland I see that Adam O’Donnell has written a nice piece on the side-effects of net neutrality regulation, “Why I am against pure net neutrality”:
While it may sound like treating all ISP traffic equally is a good idea, mandating strict net neutrality hurts computer security for all of us.
Adam was in the audience at last week’s MAAWG panel on net neutrality and raised an interesting question about Random Early Discard. The moderator cut us off before we were able to address his point (he was anxious to catch a plane), but the question deserves a response.
RED is a packet-discard method intended to avoid the problems inherent in simple tail drop, where a router discards arriving packets only when its buffer is already full. Tail drop tends to cause cycles in packet delivery rates (a cycle the toy simulation below the list illustrates):
1. A buffer overflows, and a whole set of transmitters throttles back.
2. Link utilization drops to 50%.
3. The transmitters as a group increase rate together, until buffer overflow occurs again.
4. Repeat.
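To make the cycle concrete, here’s a toy simulation (a minimal sketch with made-up parameters, not a model of any real router) of several senders doing TCP-style additive-increase/multiplicative-decrease behind a single tail-drop buffer. Because the overflow is seen by every sender at the same instant, they all halve together:

```python
# Toy model of the tail-drop cycle: N lockstep AIMD senders, one bottleneck.
N = 10          # competing senders (illustrative)
LINK = 100      # link capacity per tick
BUFFER = 20     # tail-drop buffer, in packets

rates = [LINK / N] * N   # start exactly at link capacity
queue = 0.0

for tick in range(40):
    offered = sum(rates)
    served = min(queue + offered, LINK)   # link drains at most LINK per tick
    queue = queue + offered - served
    if queue > BUFFER:                    # overflow: tail drop hits everyone
        queue = BUFFER
        rates = [r / 2 for r in rates]    # synchronized multiplicative decrease
    else:
        rates = [r + 1 for r in rates]    # additive increase, in lockstep
    print(f"tick {tick:2d}  offered={offered:6.1f}  queue={queue:5.1f}  "
          f"utilization={served / LINK:4.0%}")
```

Run it and you see the sawtooth: offered load climbs past the link rate, the buffer overflows, every sender halves at once, and utilization dips until the lockstep ramp-up fills the pipe again.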
The net result of this cycling behavior is that congested links have their effective capacity reduced to about 70% of link speed. RED is an attempt to reduce transmission rates more selectively, in order to push the link toward the upper limit of its capacity. RED algorithms have been under study since the late ’80s, and none is completely satisfactory. The IETF’s response was to draft an Internet Standard for ECN (Explicit Congestion Notification), which enables the network to signal to end systems that congestion is building, but ECN remains undeployed due to Microsoft’s concerns about home router compatibility. The follow-on to ECN is Bob Briscoe’s Re-ECN, which I’ve written about on these pages and in The Register.
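For a flavor of what RED-style algorithms actually do, here’s a simplified sketch (it omits the count-based correction and other refinements in the published RED work, and the thresholds are made-up values, not recommendations). The router tracks a moving average of queue depth and, as that average climbs between two thresholds, drops a growing fraction of arrivals; for ECN-capable traffic it marks packets instead of dropping them:

```python
import random

MIN_TH, MAX_TH = 5, 15   # thresholds on the *average* queue depth (illustrative)
MAX_P = 0.1              # drop/mark probability as the average reaches MAX_TH
WEIGHT = 0.002           # EWMA weight for the average queue depth
CAPACITY = 50            # hard limit of the physical buffer

queue = []               # the actual packet buffer: (ce_marked, packet) pairs
avg = 0.0                # moving average of queue depth

def on_arrival(packet, ecn_capable):
    """Enqueue, ECN-mark, or drop one arriving packet; returns the decision."""
    global avg
    avg = (1 - WEIGHT) * avg + WEIGHT * len(queue)

    if len(queue) >= CAPACITY:        # physical overflow: forced drop
        return "drop"
    if avg < MIN_TH:                  # lightly loaded: always enqueue
        queue.append((False, packet))
        return "enqueue"

    if avg >= MAX_TH:
        congested = True              # heavily loaded: always signal
    else:
        # In between, signal with a probability that grows linearly
        # as the average queue depth approaches MAX_TH.
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
        congested = random.random() < p

    if congested:
        if ecn_capable:               # ECN: mark Congestion Experienced
            queue.append((True, packet))
            return "mark"
        return "drop"                 # no ECN: fall back to dropping
    queue.append((False, packet))
    return "enqueue"
```

Because the drops and marks are spread probabilistically across flows, senders back off at different times, which breaks up the synchronized halving shown in the toy model above.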
The bottom line is that Internet congestion control is an area that needs a great deal of additional work, work that the proposed net neutrality laws would hamper or prevent.
Van Jacobson recognizes this, per the remarks he makes in an interview in ACM Queue magazine this month:
Also, we use buffer memory in such a way that it’s valuable only if it’s empty, because otherwise it doesn’t serve as a buffer. What we do is try to forget what we learned as soon as we possibly can; we have to do that to make our buffer memory empty.
For the Olympics (not the most recent, but the previous one), we got some data from the ISP downstream of NBC. That router was completely congested; it was falling over, dropping packets like crazy. If you looked inside its buffers, it had 4,000 copies of exactly the same data, but you couldn’t tell that it was the same because it was 4,000 different connections. It was a horrible waste of memory, because the conversations were all different but what they were about was the same. You should be able to use that memory so you don’t forget until you absolutely have to—that is, go to an LRU (least recently used) rather than MRU (most recently used) replacement policy. It’s the same memory; you just change the way you replace things in it, and then you’re able to use the content.
It wouldn’t be necessary for carriers to put disks in routers. They could just start using the existing buffer memory in a more efficient way, and any time the data was requested more than once, they would see a bandwidth reduction.
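To make Jacobson’s suggestion a bit more concrete, here’s a minimal sketch of the idea as I read it (the class, sizes, and hashing are my own illustrative assumptions, not his design): index the buffer memory by what the packets contain, which means looking at payloads, and evict the least recently used content instead of forgetting everything immediately.

```python
import hashlib
from collections import OrderedDict

class ContentBuffer:
    """Buffer memory indexed by payload hash, with LRU replacement."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()    # payload hash -> payload, in LRU order

    def put(self, payload: bytes) -> str:
        """Cache a payload once, however many connections carry it."""
        key = hashlib.sha256(payload).hexdigest()
        if key in self.store:
            self.store.move_to_end(key)              # seen again: most recent
            return key
        while self.store and self.used + len(payload) > self.capacity:
            _, evicted = self.store.popitem(last=False)   # evict LRU content
            self.used -= len(evicted)
        self.store[key] = payload
        self.used += len(payload)
        return key

    def get(self, key: str):
        """Serve a repeated request from memory instead of refetching upstream."""
        payload = self.store.get(key)
        if payload is not None:
            self.store.move_to_end(key)
        return payload

# The 4,000 identical streams in Jacobson's example would collapse onto one
# hash key here, occupying one slot of memory rather than 4,000.
buf = ContentBuffer(capacity_bytes=10_000_000)
first = buf.put(b"one segment of the video stream")
assert buf.put(b"one segment of the video stream") == first
```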
Strict neutralism would prevent this system from being implemented: it involves Deep Packet Inspection, and the fanatics have warned us that DPI is a great evil. So we’re faced with a choice: networks that are cheap and efficient, or networks that are bloated with silly ideology. Take your pick; you only get one.