The Nemertes Study

Nemertes Research speculates that investment in residential networks isn’t keeping pace with user demand for bandwidth, hence a bandwidth crunch will come about in 2010 or so. Their method is to assume that bandwidth appetite grows along a Moore’s Law curve while investment grows linearly, so the two lines have to cross.

They may very well cross, but their math is wrong. One of the corollaries of Moore’s Law is that circuits grow cheaper as you pack more of them on a die, hence a linear investment in technology should result in a pool of bandwidth that accommodates Moore’s Law increases in demand. Moore’s Law applies at both sides of the network interface, in other words.
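
To put numbers on that corollary (illustrative figures only, not Nemertes’ data), here’s a quick sketch showing that a flat annual budget keeps pace with Moore’s Law demand as long as the cost per bit falls on the same curve:

```python
# Illustrative sketch: a flat annual investment buys exponentially more
# bandwidth when cost-per-megabit falls on a Moore's Law curve.
# All figures are made up for illustration; they are not Nemertes' numbers.

budget_per_year = 100.0   # constant yearly investment, arbitrary units
cost_per_mbps = 10.0      # starting cost of one Mb/s of capacity
demand_mbps = 10.0        # starting demand

for year in range(2007, 2013):
    capacity_bought = budget_per_year / cost_per_mbps
    print(f"{year}: demand {demand_mbps:6.1f} Mb/s, "
          f"flat budget buys {capacity_bought:6.1f} Mb/s")
    cost_per_mbps /= 2    # Moore's Law: cost per bit halves (yearly here, for brevity)
    demand_mbps *= 2      # demand doubling on the same curve
```

The two columns double in lockstep, which is the point: a linear investment in exponentially cheaper silicon does not buy a linear amount of bandwidth.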

There is a caveat, however: communication networks are hybrid systems, part analog and part digital, and only the digital part obeys Moore’s Law. The way around this is to engineer them to minimize the role of analog, which is what we did when we moved Ethernet from shared coaxial cable to point-to-point twisted pair on a silicon hub. It costs more to upgrade bandwidth on shared-cable systems like DOCSIS than on dedicated cable systems like FTTH. So the real issue is getting a cable plant in place that facilitates Moore’s Law economics.

Predictably, the regulation fanatics fail to engage with any of the substantive issues the study raises, and simply throw poo at the walls of their cages. See: Save the Internet, and DSL Reports. An ad hominem is not an argument, and Nemertes smartly refuted Save the Internet in the comments.

Nobody knows, of course, how fast user demand for bandwidth will grow in the next few years, but it’s inextricably bound up with how fast carriers deploy fatter pipes. At some point, we will get our TV shows in HD over an IP network or something similar, and even that (100 Mb/s or so) won’t be the final upper limit.

Verizon’s Open Cell Network

This is impressive:

Verizon Wireless today announced that it will provide customers the option to use, on its nationwide wireless network, wireless devices, software and applications not offered by the company. Verizon Wireless plans to have this new choice available to customers throughout the country by the end of 2008.

Does it signal VZ’s intention to bid on the 700 MHz C block? Fleishman thinks so.

One thing it does show is that markets are more efficient than regulators, which is why the regulation-happy crowd is silent on it*. Tim Wu in particular should have something to say as he’s the one who petitioned the FCC for Carterfone-like regulations on cellular networks.

Let’s see.

*UPDATE: Harold Feld takes respite from his painful loss at the FCC today to take credit for Verizon’s move. I don’t think so, and here’s my alternate theory: Verizon has figured out that winning in the marketplace requires superior technology. Don’t tell Harold, he’ll be sad.

DOCSIS vs. BitTorrent

A couple of weeks ago, I mentioned an academic paper on the interaction between cable modems (DOCSIS) and TCP which highlighted a couple of problems. The authors maintain that web browsing doesn’t interact efficiently with DOCSIS, and that DOCSIS is vulnerable to a DoS attack based on packet rate rather than data volume. Because the attack depends on packet rate, not volume, the DOCSIS mechanisms that cap downstream traffic don’t protect against it. I said:

In effect, several BT streams in the DOCSIS return path mimic a DoS attack for non-BT users. That’s not cool.

It’s not clear to all of my network-analyzing colleagues that I was correct in drawing a parallel between BitTorrent and the DoS attack, so here’s a little context from the original paper:

Denial of Service Study
The previous analysis showed that downstream TCP transfers are impacted by the DOCSIS MAC layer’s upstream best effort transmission service. In this section we show that it is possible for a hacker to take advantage of this inefficiency by initiating a denial of service attack on CMs that can cause high levels of upstream collisions resulting in serious performance degradation. To accomplish the denial of service attack, a host located outside the network must learn the IP address of a number of CMs that share the same downstream and upstream channels. The attacker simply needs to ping or send a TCP SYN packet to the CMs at a frequency that is on the order of the MAP_TIME setting. The actual frequency, which might range from once per MAP_TIME to once every 5 MAP_TIMEs, is a parameter of the attack.

A couple of things will help clarify. The researchers say it’s only necessary to send TCP SYNs at a frequency on the order of the network’s scheduling period (MAP_TIME). A TCP SYN is a connection request, the thing that the infamous TCP Reset (RST) cancels. It’s part of the fabulous three-way handshake that starts a TCP connection (SYN -> SYN/ACK -> ACK) and is a very frequent part of BitTorrent interactions during seeding, as leeches connect to seeders and see what sort of rate they can get. The significance is that these are short packets which, at high frequency, create a large demand for upstream transmit opportunities, a scarce commodity in DOCSIS.
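
To see how cheap the attack is in data terms, here’s a back-of-the-envelope calculation (the MAP_TIME, packet sizes, and modem count are my own illustrative assumptions, not figures from the paper):

```python
# Back-of-the-envelope sketch of why short packets are expensive on the DOCSIS
# upstream. Figures are illustrative assumptions, not measurements from the paper.

MAP_TIME = 0.002          # assumed scheduling interval: 2 ms
SYN_ACK_BYTES = 60        # a TCP SYN/ACK with headers is roughly 60 bytes
seeding_modems = 10       # modems answering inbound connection requests
syns_per_map = 1          # one inbound SYN per modem per MAP_TIME

packets_per_sec = seeding_modems * syns_per_map / MAP_TIME
upstream_mbps = packets_per_sec * SYN_ACK_BYTES * 8 / 1e6

print(f"Upstream transmit requests/sec: {packets_per_sec:,.0f}")
print(f"Upstream payload: {upstream_mbps:.2f} Mb/s")
# ~5,000 requests/sec for under 3 Mb/s of traffic: each tiny SYN/ACK needs its
# own transmit opportunity, so contention slots run out long before any
# byte-count cap is reached.
```

A few thousand transmit requests per second carrying only a couple of megabits of traffic is exactly the kind of load a byte-count cap never notices.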

So a relatively small number of BitTorrent seeds can place a high load on the upstream path with very little data, and can’t be controlled by bandwidth caps. DOCSIS allows piggybacking of bandwidth requests, which alleviates the problem of contention slot exhaustion for steady streams, but it’s only effective when a lot of data is queued. If several modems are dealing with a large number of responses to connect requests, other modems that are simply supporting web surfing will starve because they too will have to compete for limited contention slots to ACK the data they’re getting.

This is a very different scenario from the Internet congestion case, which is addressed by dropping packets so that TCP senders back off. The response rate to connection requests is governed only by the rate at which the connection requests arrive, and dropping packets on established connections doesn’t affect it. And there’s the further complication that this is a first-hop congestion scenario, while Internet congestion is an intermediate-hop scenario. The rule of congestion is to drop before the congested link, and if that happens to be the first link, the dropping agent is the customer’s computer or the BitTorrent leech who’s trying to connect to it.

So this can only be addressed by limiting connection requests, which can be done in real time by routers that inspect every incoming TCP packet for the SYN bit and keep track of total connections. The Comcast alternative is to monitor traffic asynchronously and destroy connections after the fact. It’s not as efficient as stateful packet inspection, but the gear to do it is a lot cheaper. Given their Terms of Service, which ban servers on their network, it’s a sensible choice.
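
For the curious, here’s roughly what the real-time version could look like: a toy sketch of SYN-bit inspection with a per-subscriber limit on new connections. The threshold and names are hypothetical, not anything Comcast or any router vendor actually ships.

```python
# Toy sketch of real-time connection-request limiting: inspect each inbound
# TCP packet for the SYN bit and rate-limit new connections per subscriber.
# The threshold is an assumed policy value, purely for illustration.
import time
from collections import defaultdict

SYN_FLAG = 0x02
MAX_NEW_CONNECTIONS_PER_SEC = 20   # hypothetical policy threshold

window_start = defaultdict(float)  # start of the current one-second window
syn_count = defaultdict(int)       # SYNs seen in the current window

def admit(subscriber_ip: str, tcp_flags: int) -> bool:
    """Return True if the packet should be forwarded, False if dropped."""
    if not (tcp_flags & SYN_FLAG):
        return True                        # established traffic passes untouched
    now = time.monotonic()
    if now - window_start[subscriber_ip] >= 1.0:
        window_start[subscriber_ip] = now  # open a new one-second window
        syn_count[subscriber_ip] = 0
    syn_count[subscriber_ip] += 1
    return syn_count[subscriber_ip] <= MAX_NEW_CONNECTIONS_PER_SEC
```

The catch is that this has to run against every packet in the forwarding path, which is why the asynchronous, after-the-fact RST approach runs on much cheaper gear.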

So the debate comes back to the question of the legality of Comcast’s TOS. The FCC says ISPs can’t limit the applications that customers can run, and BitTorrent is certainly an application. It strikes me as unreasonable to demand that every ISP satisfy every application requirement, and it’s a certain path to the destruction of VoIP if they must. These asymmetrical residential networks aren’t going to do well with lots of VoIP and lots of Torrents, so something has to give if the law is going to insist on this Utopian goal.

I hope that clears things up.

Intelligent Design on Trial

Set your TiVo to record Nova on Nov. 13th. The program is Judgment Day: Intelligent Design on Trial, the story of the Dover, PA trial on teaching religious doctrines in science classes. The trial, popularly known as Scopes II, established that Intelligent Design is no different from classical creationism and is therefore out of place in the public schools’ science classrooms. I grew up an hour away from the venue of the Scopes Trial, and went to public schools where and when it was illegal for my teachers to give a truthful account of the evolution of life on this planet, so this sort of thing makes me very, very happy.

The Comcast Net Neutrality Controversy: A Discussion

The Heritage Foundation has graciously transcribed the discussion we had a couple of weeks ago on Comcast, featuring Adam Thierer, Jerry Brito, Ed Felten, James L. Gattuso and yours truly.

The Comcast Net Neutrality Controversy: A Discussion

On October 25, the Technology Liberation Front, a technology policy weblog, hosted an online discussion concerning recent actions by Comcast Corporation to limit certain types of traffic on its network in order to manage demand. This is an edited transcript of that discussion.

The actual podcast is here.

The End of the Stupid Network

Writing in EE Times, Mobidia CEO Fay Arjomandi offers suggestions for managing IP traffic on wireless networks with QoS:

Achieving the best results requires the mobile device to participate with the carrier network as an active and intelligent element so that it can share the responsibilities of network traffic delivery management in a wireless-friendly manner. A service element within the device cooperating with the carrier node would enable the device to handle the following tasks:

* Determine the type of service and identify its nature (e.g., interactive, background or real-time).

* Apply the proper network policy to each different type of service on the device.

* Determine the available network type, its characteristics, the available channel, status and allocation.

* Schedule and prioritize traffic based on the service type.

* Use a network-aware and network-friendly transport layer protocol for data communication over the wireless network link.

In an IP Multimedia Subsystem (IMS) architecture this service element resides underneath the IMS client or can be seen as part of the IMS client framework.

This should all sound familiar to anyone who’s read the recent posts on this blog, of course. Mark Goldberg suggests the “stupid network” model is on its last legs because it doesn’t allow this sort of interaction. While it served us well when networks and the traffic they handled were uniform, as we move to more diverse data streams over less reliable media (wireless), we have to move toward a more cooperative approach. Technology is often unkind to our idealistic illusions.
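
For a sense of what a “service element within the device” would actually do, here’s a toy sketch of the first two items on Arjomandi’s list: classify a flow by service type and map it to a policy. The classes, ports, and policy numbers are invented for illustration; they aren’t Mobidia’s or IMS’s actual interfaces.

```python
# Toy illustration of a device-side "service element": classify a flow by
# service type and map it to a hypothetical sending policy. Classes, ports,
# and policy values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Policy:
    priority: int    # higher number is scheduled first
    max_kbps: int    # per-flow rate ceiling on the wireless link

POLICIES = {
    "real-time":   Policy(priority=3, max_kbps=256),   # e.g. VoIP
    "interactive": Policy(priority=2, max_kbps=1024),  # e.g. web browsing
    "background":  Policy(priority=1, max_kbps=128),   # e.g. sync, P2P
}

def classify(dst_port: int) -> str:
    if dst_port in (5060, 5061):   # SIP signalling, treat as real-time
        return "real-time"
    if dst_port in (80, 443):      # web traffic, interactive
        return "interactive"
    return "background"            # everything else defers to the above

flow_class = classify(443)
print(flow_class, POLICIES[flow_class])  # interactive Policy(priority=2, max_kbps=1024)
```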

Tit-for-Tat on BitTorrent over Comcast

Hiawatha Bray writes a nice, balanced column on legislative efforts to revive net neutrality over the Comcast thing. Highlight:

Comcast senior vice president Mitch Bowling said that BitTorrent users absorb a disproportionate amount of network capacity. That makes it necessary to throttle back BitTorrent transfers so that e-mails and other Internet traffic can get through. Bowling also said that BitTorrent slowdowns are based only on the quantity of traffic, not its content, which is not monitored by Comcast.

Snowe said that there may be a legitimate need for Comcast to limit some traffic in order to manage its network, and that the Verizon and AT&T incidents may have been isolated blunders. But she wants hearings to determine whether legislation may be needed to establish guidelines for network operators. “I think we need to glean what the issues are and get more transparency,” Snowe said.

Jim Harper, director of information policy studies at the libertarian Cato Institute, agreed that network operators have made some poor decisions recently. But Harper said Internet performance could suffer if network managers were barred from setting limits on their traffic. “How do you write a law about this?” Harper said. “This is technically challenging and unsettled.”

This is the most moderate net neutrality statement Sen. Snowe has yet made in print, so maybe she’s learning something from the debate.

Telecom Regulators Eye the Internet

(Note: A slightly revised version of this post is in The Register, titled Harold and Kumar Go to Comcastle.)

As expected, the coalition of the frustrated who comprise Save the Internet! has filed a multi-trillion-dollar complaint with the FCC regarding Comcast’s blatant exercise of, um, reasonable network management. The key fact seems to be this: if you live in a world of secret manipulative forces, evil wizards, fire-breathing dragons, scary ghosts and freaky monsters, the actions of ordinary businesses seem peculiar. Appropriately, it was filed on Halloween Eve.

The complaint is a combination of science fiction and group therapy, with generous doses of pure paranoia thrown in. The highlight:

The FCC should act immediately to enjoin Comcast’s secret discrimination and, even before deciding the merits, issue a temporary injunction requiring Comcast to stop degrading any applications.

Secret discrimination? The same complaint describes Comcast’s network management practices as blatantly violating the FCC’s Internet Policy Statement. So which is it, secret or blatant?

And it gets better. The basis of the complaint is the claim that Comcast discriminates against BitTorrent in particular and Peer-to-Peer file swapping applications in general. The FCC’s principles say that customers can run any application they want, and BitTorrent is an application. Hence, limiting the bandwidth P2P can use is breaking the law.

There are only two problems with this line of reasoning:

1) P2P applications aren’t the only ones that Comcast throttles. The complaint itself admits that classic FTP has problems when trying to act as a server on the Comcast network, and further charges that Comcast’s management has much wider effects:

While only Comcast knows the algorithm they use to decide when to forge RST packets, it is unlikely that they ever tested the plethora of applications that are potentially broken by that algorithm.

2) BitTorrent isn’t disabled on the Comcast network, not even the seeding mode where it acts as a file server. I’m a Comcast customer, and as I write this I’m seeding several video files from the current season of a certain murder mystery series set in the most dangerous county in England. The key fact about BitTorrent that the reporters have missed is that it typically takes hours for a large file or set of files to transfer, so a five to ten minute test doesn’t say anything.

It’s a nonsense complaint.

Should the FCC be issuing injunctions before deciding on the merits of completely meritless complaints? In a way I wish they would, so that Comcast’s customers could see what their network would look like without any traffic management. I’ll predict an outcome: people swapping illegal videos would be thrilled, everybody else would be super-pissed-off. And if that harms Comcast’s business, then it would certainly be reasonable for the complainants to be required to compensate them.

And finally, how is Comcast to manage the situations, arising millions of times a day in the ordinary course of operating a network, in which the traffic bound for a certain path exceeds that path’s capacity, without degrading any applications? Surely some applications are going to be degraded, assuming the network’s capacity is, you know, finite.

And this brings us to the funniest part of the complaint, the helpful suggestions about how to manage the Comcast network:

…if Comcast is concerned that the collective set of users running P2P applications are affecting quality of service for other users on a cable loop, they could readily set dynamic quotas for each user on the loop, so as to ensure that there is always bandwidth available for users who are not running P2P applications – and they could do so without interfering in protocol choice

As far as we know, this is exactly what Comcast is doing, setting dynamic upload quotas and enforcing them by preventing excessive numbers of upload streams from starting inside their private network.

The angels apparently believe there’s a magic “quota” knob inside each cable modem owned or rented by each Comcast subscriber, but that’s not the case. These modems can take a hard cap at boot time, but after that they have no mechanism to keep themselves from issuing excessive numbers of upstream transfer requests. That’s undoubtedly a flaw in the definition of the DOCSIS protocol, but it’s one that isn’t going away simply because we may earnestly wish it would.
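
For what it’s worth, the “dynamic quota” the complaint asks for is trivial to sketch in the abstract; the problem is that there’s nowhere in a fielded DOCSIS modem to run it after boot. Here’s a minimal token-bucket version with made-up numbers, just to show what the angels imagine is behind that knob:

```python
# Sketch of a per-subscriber "dynamic quota" as a token bucket throttling
# upload bandwidth. Simple in the abstract; the catch described above is that
# deployed DOCSIS modems expose no run-time knob to enforce it. Values are
# illustrative only.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False   # over quota: the packet waits or is dropped

# e.g. quota = TokenBucket(rate_bytes_per_sec=48_000, burst_bytes=16_000)  # ~384 kb/s upstream
```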

So what we have here is a demand that the FCC and Comcast repeal the laws of physics and provide unlimited non-delayed uploads to everybody everywhere on the Comcast network all the time. That’s not going to happen no matter who’s in charge of the agency.

I would like to see the FCC clarify the nature of reasonable network management so forcefully that the coalition is motivated to take its regulatory zeal elsewhere. In particular, that would mean saying that the following are presumed reasonable unless proven otherwise:

* Any network management practice whose effect is to limit traffic load to the capacity of the network.

* Any network management practice whose effect is to prevent a small number of users from dominating limited network resources and increasing delay for others.

* Any network management practice necessary to keep the network operating within its stability parameters, which depend on the technology itself. E.g., Wi-Fi, DOCSIS, and Ethernet are all stable in different ranges of load.

* All that virus and security stuff is also clearly fine.

I believe that we need to ignore the methods and concentrate on the effects, because network equipment doesn’t always offer a choice of traffic-management methods. The regulation advocates suggest, for example, that Comcast use “dynamic caps” to manage BitTorrent traffic rather than busy signals (TCP Reset is a digital busy signal). Dynamic caps aren’t available in all DOCSIS modems, but the busy signal at the network portal clearly is.

Comcast manages their network according to what works, in other words, not what some weird coalition of angry lawyers and sweaty advocates think will work.

And I’d seriously like the FCC to consider referring the complaint to the Justice Department for consideration of perjury charges. Some of the signatories filled out “penalty of perjury” declarations after making irresponsible charges and claiming to be subject-matter experts. I know this is pro forma, but some of their charges are so extreme that it would be wise to make examples of them. I don’t expect that to happen, of course, because there’s wide latitude for lying in advocacy, but it would thrill me if it did.

The bottom line here is that the Internet, which has succeeded largely because of its lack of regulation, has become the lust-object of a whole slew of regulators. Its existence and success are an affront to the livelihood and reputation of regulators everywhere, and they can’t allow such a rebuke to stand. And sooner or later, the 535 regulators who make up the Congress will side with them. The Internet is too juicy a plum for them not to get their grubby little paws on it.

Net neutrality’s father, Tim Wu, committed a gaffe recently and admitted that his movement is a power struggle for control of the Internet. Indeed it is. We can either allow the Internet to continue operating as it always has, under the control of engineers and users, or we can transfer control to lawyers, bureaucrats, and appliance testers. There are disturbing signs that the bureaucratization of the Internet is well underway. ICANN, the group that oversees domain name registrars and other sundry details, recently elected a lawyer to lead it in place of a former network engineer.

So enjoy it while it lasts.