Net Neutrality Backlog

I need to find some time to respond to the Electronic Frontier Foundation’s defense of packet-drop in some detail, but for now it’s sufficient to note that they refuse to admit that Layer Two networks have traffic problems.

The last Register piece has some people at the Net Neutrality Squad excited, most notably Bob Frankston, who says: “a great engineer questions the constraints and gets fired because the constraints serve a policy need higher than mere science or reality.” Far be it from me to let “mere reality” bias my perspective.

The folks at Public Knowledge and Open Left (Matt Stoller) are up in arms about the success of Connect Kentucky, a perfectly benign public-interest effort bringing broadband to the rural residents of its namesake state. The fuss is explained by Dr. Robert Atkinson. It illustrates the principle that no good deed goes unpunished.

Scott Cleland has a fascinating piece on the parallels between process scheduling in operating systems and packet scheduling in cloud computing. Scott’s not a software engineer, but he thinks like one of us.

Andrew Odlyzko’s latest paper on net neutrality is severely flawed, with significant errors of fact. It deserves a serious examination; notably, nobody on either side of the debate is happy with it.

Tim Swanson has an interesting piece on the economics of networking on the Ludwig von Mises Institute web site that’s fine reading. Market solutions to these problems are always preferable.

Ok, back to work.

Who Manages the First Mile?

A discussion at CES concerning the load that pirated movies place on carrier networks has generated a bit of controversy, beginning at the NY Times:

For the last 15 years, Internet service providers have acted – to use an old cliche – as wide-open information super-highways, letting data flow uninterrupted and unimpeded between users and the Internet.

But I.S.P.’s may be about to embrace a new metaphor: traffic cop.

At a small panel discussion about digital piracy at NBC’s booth on the Consumer Electronics Show floor, representatives from NBC, Microsoft, several digital filtering companies and the telecom giant AT&T discussed whether the time was right to start filtering for copyrighted content at the network level.

Of course, most of us know that the Internet and the privately-owned carrier networks it rides on have never really been as wild and free as our network-romantic set would have us believe. But yes, carriers are dealing with extraordinary loads in the upstream direction today, and as most of that “content” is illegal, there is a convergence of interests between carriers and copyright holders.

As far as I gather, this was a hypothetical discussion, but that doesn’t stop the prophets of doom from printing currency against it. The most interesting discussion turned up in the comments at David Weinberger’s blog, in a conversation between Seth Finkelstein, David Isenberg, and Harold Feld. The conclusion that the Davids and Harold reached is that end users should administer the Layer Two network:

So rather than turn traffic shaping and QoS over to the carriers, or to third parties whose choices will distort the market away from true user preferences, why not turn QoS decisions over to the users themselves?

We have already seen the most primitive forms of this idea in the development of edge-based QoS solutions and metered pricing. Things like caching technology (move the content closer), distributed computing (distribute the work among many more computers), and virtual private networks (control of security and privacy by the communicating machines at the edges) are all ways in which end users of various kinds achieve the quality of service they want. Certainly these are not perfect solutions, and network operators can replicate them. But, rather like the magicians of Pharaoh replicating the trick of Moses and Aaron of turning a staff into a snake, the fact that network operators can replicate these technologies is not the point. The point is that these primitive first steps at end-user managed QoS rather than network provided QoS are a sign that the folks on the edge do not need to remain in bondage to the telcos and cable cos in order to enjoy QoS. Let end users go and they will provide for themselves.

I don’t see that as practical, but there is a way to deal with this that’s not completely retarded. More on that later.
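For the record, the most primitive end-user QoS knob already exists: any host can set the DSCP bits on its own outbound packets and ask the network for special handling. Here’s a minimal sketch (Python, Linux-flavored; whether any first-mile gear actually honors the marks is precisely what’s in dispute):

```python
import socket

# End-user QoS at its most primitive: mark outbound packets with a DSCP
# value (46 is EF, "expedited forwarding," the usual voice marking) and
# hope the first-mile network honors it. Carriers are free to ignore or
# re-mark these bits, which is exactly the policy question at issue.
DSCP_EF = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"voice frame", ("192.0.2.10", 5004))  # illustrative address
```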

Net Neutrality Loses New Hampshire

Matt Stoller, the most intense of the pro-regulation net neutrality advocates, crowed after the undemocratic Iowa caucus. Net Neutrality Wins Iowa:

Right now the telecom lobbyists that control the Republican Party and the Clinton wing of the Democratic Party should be extremely worried. On Thursday, they were soundly thumped in the most important caucus of the year, in both parties.

If that were true, then surely the New Hampshire victories for Clinton and McCain must be the death-knell of the regulation he desires. CNET’s Anne Broache nails it in a fine piece on the distinct lack of passion New Hampshirites have for obscure technical regulation, New Hampshire voters: Net neutrality? Huh?

At a booth across the chrome-accented restaurant, Kelly Parsons, 32, cradled her infant son, Christian, and admitted she’d never heard of Net neutrality either. Parsons professed to be reasonably tech-savvy but said technology policy issues had nothing to do with her decision to support Mitt Romney. Illegal immigration and terrorism were among her top concerns for the next president to confront.

Stoller famously ranked net neutrality as the number one issue for the Democrats in Congress following their takeover of the House, ahead of wages, health care, the environment, or Iraq:

On the one hand, we have no legislative agenda except for net neutrality. Since we locked that down as an important issue before the election, our chances are pretty good (though it’s not by any means a slam dunk). This means that we are free to pick our fights, flexible, and not bogged down by a long list of people to satisfy. We can ride public opinion to get what we want, with agility and intelligence.

New Hampshire voters clearly don’t share his agenda, nor do Democrats generally.

How silly is this?

An Op-Ed about net neutrality in last week’s Seattle Times by Avis Yates Rivers makes all the obvious points: solution in search of a problem, treatment worse than the disease, etc., including this one:

Because a network’s bandwidth is a finite resource, the management tools function like traffic lights and yield signs. They seek an orderly way to allow heavy P2P-like traffic to flow without interfering with other users. At peak times, these tools send a signal to a high-bandwidth user that they will find the requested content when a lane opens on the information highway.

But wonders never cease, and David Isenberg found it wrong and offensive:

So mostly Yates Rivers is wrong when she says that bandwidth is finite. Where it is finite, the blame lies at the feet of the telcos . . . well, not really, they wouldn’t be so stupid as to build such abundance that they have nothing to sell anymore. The blame lies with our limited vision — we have affordable, mature technology that would make bandwidth scarcity as obsolete as horsepower from horses.

Can Isenberg really be this stupid? He worked for Bell Labs for 12 years, presumably doing something more technical than sweeping floors, but he still makes bonehead statements like this. I can only conclude that he’s lying deliberately.

Yes, Virginia, bandwidth is finite and it always will be. Even when we have gigabit access connections, we’re still counting on everybody not using theirs full-tilt at the same time. For every consumer of data there’s a producer, and for every pair of consumer/producers there’s a carrier, and every link has its limit. Beef up the core, and the access network becomes a bottleneck. Beef up the access network and the core becomes a bottleneck. That’s life.
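To put rough numbers on that, here’s a back-of-the-envelope sketch; the subscriber count, access rate, and uplink capacity are illustrative assumptions, not any carrier’s real figures:

```python
# Statistical multiplexing in one calculation (illustrative numbers).
# Access networks are provisioned for average demand, not peak demand.

subscribers = 500               # homes sharing one aggregation link (assumed)
access_rate_mbps = 1_000        # a "gigabit" connection per home
uplink_mbps = 10_000            # 10 Gb/s out of the neighborhood (assumed)

peak_demand_mbps = subscribers * access_rate_mbps   # everyone full-tilt
ratio = peak_demand_mbps / uplink_mbps

print(f"{peak_demand_mbps / 1000:.0f} Gb/s of potential demand on a "
      f"{uplink_mbps / 1000:.0f} Gb/s uplink: {ratio:.0f}:1 oversubscription")
# 500 Gb/s against 10 Gb/s. The link works only because everyone
# doesn't transmit full-tilt at the same time.
```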

Internet Fairness, or Not

My latest piece for The Register is up: Dismantling a Religion: The EFF’s Faith-Based Internet. In it, I explore the difference between the way the EFF wants to manage the Internet and the new approach the IETF folks are discussing.

Bottom line: the Internet has never had a user-based fairness system, and it needs one. All networks need one, actually.

On that note, the TCP-Friendly folks remind us:

The network will soon begin to require applications to perform congestion control, and those applications which do not perform congestion control will be harshly penalized by the network (probably in the form of preferentially dropping their packets during times of congestion).
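For reference, “TCP-friendly” has a precise meaning. Here’s a sketch of the standard steady-state throughput bound from Mathis et al. (1997); the segment size, RTT, and loss figures are illustrative:

```python
import math

def tcp_friendly_rate_bps(mss_bytes=1460, rtt_ms=100, loss_rate=0.01):
    """Mathis et al. steady-state bound: rate ~ (MSS/RTT) * 1.22/sqrt(p).
    A flow that keeps sending faster than this as loss (p) climbs is,
    by definition, not TCP-friendly."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# At 1% loss and a 100 ms round trip, a friendly flow tops out near 1.4 Mb/s:
print(f"{tcp_friendly_rate_bps() / 1e6:.2f} Mb/s")
```

An application that holds its sending rate flat while loss climbs is taking more than its share, and that’s what the network will start penalizing.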

An actively-managed Internet is a functional Internet.

Faster, More Symmetric Networking

Would you like to have a fat Internet connection to your home? If we can agree that 100 Mb/s in both directions would qualify as “fat,” you should be able to have your way in a year or two, three at the most. Here’s a quick survey of the alternatives.

First, we have a clue as to why Comcast still uses relatively pokey DOCSIS 1.1: it’s skipping the faster and more symmetric DOCSIS 2.0 and going straight to the nirvana of even faster connections with DOCSIS 3.0:

SANTA CLARA, Calif. — Comcast Corp. (Nasdaq: CMCSA, CMCSK) plans to have a Docsis 3.0 infrastructure in place in about 20 percent of its footprint by the end of 2008, teeing up cable modem services capable of delivering shared Internet speeds in excess of 100 Mbit/s.

The nation’s largest MSO will be 3.0-capable in one-in-five homes by the end of next year, according to Comcast Chief Technology Officer Tony Werner, the keynoter here Wednesday morning at the first CableNEXT conference.

(H/T Engadget)

This should make them competitive with FTTH for a good while, but not forever:

While we’ve seen all sorts of blazing feats over fiber here lately, it’s not often that such wide open bandwidth gets piped directly to a home, but a 75-year-old Swede recently changed all that when she had a 40Gbps connection installed in her domicile.

She can download a DVD in two seconds.

Closer to home, Verizon is going faster and more symmetric with FiOS:

With the help of the symmetrical services, users can benefit from equally fast downstream and upstream connections of up to 15 megabits per second (Mbps) or up to 20 Mbps, based on the state where the service is sold.

DSL over copper isn’t sitting still either:

University of Melbourne research fellow Dr John Papandriopoulos is in the throes of moving to Silicon Valley after developing an algorithm to reduce the electromagnetic interference that slows down ADSL connections.

Most ADSL services around the world are effectively limited to speeds between 1 and 20Mbps, but if Dr Papandriopoulos’s technology is successfully commercialised that speed ceiling would be closer to 100Mbps.

Papandriopoulos is joining ASSIA, a company founded by DSL inventor John Cioffi (and named after his wife and EVP). ASSIA currently develops tools, but I suspect that will change. (Assia, on the other hand, is an aesthete.)

And wireless is on the move as well. Corporate products conforming to the new ~100 Mb/s (real speed) 802.11n standard are starting to roll out in trials, 4G cellular network deployments are starting, and UWB for the home is available at multi-hundred megabit/sec rates.

One has the feeling that the residential network regulators are already fighting yesterday’s war, and that the network pessimists have no basis for their fears.

The Nemertes Study

Nemertes Research speculates that investment in residential networks isn’t keeping pace with user demand for bandwidth, hence a bandwidth crunch will come about in 2010 or so. Their method is to assume that bandwidth appetite follows Moore’s Law while investment is linear; therefore the lines have to cross.

They may very well cross, but their math is wrong. One of the corollaries of Moore’s Law is that circuits grow cheaper as you pack more of them on a die, hence a linear investment in technology should result in a pool of bandwidth that accommodates Moore’s Law increases in demand. Moore’s Law applies at both sides of the network interface, in other words.
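Here’s a sketch of that corollary; the dollar figures and the two-year halving period are illustrative assumptions:

```python
# Constant (linear) investment buys exponentially more capacity when
# cost-per-bit halves on a Moore's Law cadence (assumed: every 2 years).

investment_per_year = 100.0      # constant capex, arbitrary units
initial_cost_per_mbps = 10.0     # year-zero cost, same units

total_mbps = 0.0
for year in range(1, 11):
    cost = initial_cost_per_mbps / 2 ** (year / 2)   # falling price curve
    total_mbps += investment_per_year / cost         # capacity bought
    print(f"year {year:2d}: cumulative capacity {total_mbps:9.0f} Mb/s")

# Capacity added each year grows as 2**(year/2): flat spending keeps
# pace with exponential demand, so the two lines need not cross.
```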

There is a caveat, however: communication networks are hybrid systems, part analog and part digital, and only the digital part obeys Moore’s Law. The way around this is to engineer them to minimize the role of analog, which is what we did when we moved Ethernet from shared coaxial cable to point-to-point twisted pair on a silicon hub. It costs more to upgrade bandwidth on shared-cable systems like DOCSIS than on dedicated cable systems like FTTH. So the real issue is getting a cable plant in place that facilitates Moore’s Law economics.

Predictably, the regulation fanatics fail to deal with any substantial issues in relation to this study, and simply throw poo at the walls of their cages. See: Save the Internet, and DSL Reports. An ad hominem is not an argument, and Nemertes refuted Save the Internet smartly in the comments.

Nobody knows, of course, how fast user demand for bandwidth will grow in the next few years, but it’s inextricably bound with how fast carriers deploy fatter pipes. At some point, we will get our TV shows in HD over an IP network or something similar, and even that (100 Mb/s or so) won’t be the final upper limit.

Verizon’s Open Cell Network

This is impressive:

Verizon Wireless today announced that it will provide customers the option to use, on its nationwide wireless network, wireless devices, software and applications not offered by the company. Verizon Wireless plans to have this new choice available to customers throughout the country by the end of 2008.

Does it signal VZ’s intention to bid on the 700 MHz C block? Fleishman thinks so.

One thing it does show is that markets are more efficient than regulators, which is why the regulation-happy crowd is silent on it*. Tim Wu in particular should have something to say as he’s the one who petitioned the FCC for Carterfone-like regulations on cellular networks.

Let’s see.

*UPDATE: Harold Feld takes respite from his painful loss at the FCC today to take credit for Verizon’s move. I don’t think so, and here’s my alternate theory: Verizon has figured out that winning in the marketplace requires superior technology. Don’t tell Harold, he’ll be sad.

DOCSIS vs. BitTorrent

A couple of weeks ago, I mentioned an academic paper on the interaction between DOCSIS cable modems and TCP which highlighted a couple of problems. The authors maintain that web browsing doesn’t interact efficiently with DOCSIS, and that DOCSIS is vulnerable to a DoS attack based on packet rate rather than data volume; the DOCSIS mechanisms that cap downstream traffic don’t protect against it. I said:

In effect, several BT streams in the DOCSIS return path mimic a DoS attack to non-BT users. That’s not cool.

It’s not clear to all of my network-analyzing colleagues that I was correct in drawing a parallel between BitTorrent and the DoS attack, so here’s a little context from the original paper:

Denial of Service Study
The previous analysis showed that downstream TCP transfers are impacted by the DOCSIS MAC layer’s upstream best effort transmission service. In this section we show that it is possible for a hacker to take advantage of this inefficiency by initiating a denial of service attack on CMs that can cause high levels of upstream collisions resulting in serious performance degradation. To accomplish the denial of service attack, a host located outside the network must learn the IP address of a number of CMs that share the same downstream and upstream channels. The attacker simply needs to ping or send a TCP SYN packet to the CMs at a frequency that is on the order of the MAP_TIME setting. The actual frequency, which might range from once per MAP_TIME to once every 5 MAP_TIMEs, is a parameter of the attack.

A couple of things will help clarify. The researchers say it’s only necessary to send TCP SYNs at a frequency that resembles a multiple of the network’s scheduling period. A TCP SYN is a connection request, the thing that the infamous TCP Reset (RST) cancels. It’s part of the fabulous three-way handshake that starts a TCP connection (SYN -> SYN/ACK -> ACK) and is a very frequent part of BitTorrent interactions during seeding, as leeches connect to seeders and see what sort of rate they can get. The significance is that these are short packets which, arriving at high frequency, create a large demand for upstream transmit opportunities, a scarce commodity in DOCSIS.

So a relatively small number of BitTorrent seeds can place a high load on the upstream path with very little data, and can’t be controlled by bandwidth caps. DOCSIS allows piggybacking of bandwidth requests, which alleviates the problem of contention slot exhaustion for steady streams, but it’s only effective when a lot of data is queued. If several modems are dealing with a large number of responses to connect requests, other modems that are simply supporting web surfing will starve because they too will have to compete for limited contention slots to ACK the data they’re getting.
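Some rough arithmetic shows why the packet rate, not the byte count, is what hurts. The MAP interval, slot count, and connection rates below are assumed for illustration, not taken from any real cable plant:

```python
# Contention-slot exhaustion in DOCSIS upstream (illustrative values).

map_time_ms = 2.0                 # scheduling interval (assumed)
contention_slots_per_map = 6      # request opportunities per MAP (assumed)

slots_per_sec = (1000 / map_time_ms) * contention_slots_per_map   # 3000/s

# Each arriving SYN elicits a small upstream reply, and with nothing
# queued to piggyback on, each reply needs its own contention request.
seeding_modems = 10
replies_per_modem_per_sec = 200   # SYN/ACKs to connecting leeches (assumed)

demand = seeding_modems * replies_per_modem_per_sec
print(f"{demand} request attempts/sec against {slots_per_sec:.0f} slots/sec: "
      f"{100 * demand / slots_per_sec:.0f}% of all contention capacity")
# 2000 of 3000 slots per second consumed by a trivial volume of data;
# everyone else's TCP ACKs now face heavy collision rates.
```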

This is a very different scenario than the Internet congestion case that’s addressed by dropping packets and backing off on TCP pipelining. The response rate to connection requests is governed only by the rate at which the connection requests arrive, and dropping packets on established connections doesn’t affect it. And there’s the further complication that this is a first-hop congestion scenario, while Internet congestion is an intermediate-hop scenario. The rule of congestion is to drop before the congested link, and if that happens to be the first link, the dropping agent is the customer’s computer or the BitTorrent leech who’s trying to connect to it.

So this can only be addressed by limiting connection requests, which can be done in real time by routers that inspect every incoming TCP packet for the SYN bit and keep track of total connections. The Comcast alternative is to asynchronously monitor traffic and destroy connections after the fact. It’s not as efficient as stateful packet inspection, but the gear to do it is a lot cheaper. Given their Terms of Service, which ban servers on their network, it’s sensible.
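Here’s a minimal sketch of that real-time approach, a per-subscriber token bucket over inbound SYNs. It’s illustrative only, not Comcast’s actual mechanism, and the rates are assumptions; a real implementation would live in the router’s forwarding path, not in Python:

```python
import time

class SynRateLimiter:
    """Token bucket over inbound TCP SYNs, keyed by subscriber IP."""

    def __init__(self, rate_per_sec=50.0, burst=100.0):
        self.rate = rate_per_sec      # sustained SYNs/sec allowed (assumed)
        self.burst = burst            # short-term allowance (assumed)
        self.buckets = {}             # ip -> (tokens, last_update)

    def allow_syn(self, ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[ip] = (tokens - 1.0, now)
            return True               # forward the connection request
        self.buckets[ip] = (tokens, now)
        return False                  # over budget: drop before the first hop

limiter = SynRateLimiter()
# Called for each packet with SYN set and ACK clear, before the cable plant:
if not limiter.allow_syn("203.0.113.7"):
    pass  # drop; established connections are untouched
```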

So the debate comes back to the question of the legality of Comcast’s TOS. The FCC says ISPs can’t limit the applications that customers can run, and BitTorrent is certainly an application. It strikes me as unreasonable to demand that every ISP satisfy every application requirement, and it’s a certain path to the destruction of VoIP if they must. These asymmetrical residential networks aren’t going to do well with lots of VoIP and lots of Torrents, so something has to give if the law is going to insist on this Utopian goal.

I hope that clears things up.