FCC Comments on Comcast

Some 28,000 public comments have already been posted to the FCC on the Comcast petition, and Save The Internet! is urging members to file more.

This is taking on the character of a DoS attack, as most of the comments are junk. Telling the FCC that you love the Internet is nice, but it’s not exactly relevant to this inquiry. How will the non-junk comments get to the attention of the examiners?

But that’s the whole point.

Scientology v. Internet

I have to congratulate Gawker Media honcho Nick Denton for the courageous stand he’s taken against Scientology. Nick is standing tall and refusing to take down the video of a rambling and incoherent Tom Cruise doing $cieno-babble, while the rest of the Internet has been cowed by threats. The Scientology empire will sue, so their threats aren’t idle.

Scientology is a dangerous cult, and we need more Nick Dentons fighting the good fight.

Net Neutrality 2008 Presentation

Here’s my slide deck from the Net Neutrality 2008 Symposium at the U. of San Francisco Law School this weekend.

The section on network-based solutions to piracy seems to be particularly engaging. Nick Weaver has been working out a similar solution. I felt compelled to sketch this out because of the breathless reaction from some of our privacy buffs to the CES discussion about piracy.

There are plenty of legitimate interests in the net neutrality debate, but morphing privacy rights into piracy rights isn’t one of them.

End-to-End Dogma

One piece of required reading on Internet architecture is the literature on end-to-end arguments. The canonical paper was End-to-End Arguments in System Design by Saltzer, Reed, and Clark, which Blumenthal and Clark substantially clarified in a follow-up, Rethinking the design of the Internet: The end to end arguments vs. the brave new world. Tim Moors offered a thoughtful critique, A Critical Review of “End-to-end arguments in system design,” which makes this argument about congestion control:

In today’s Internet, congestion control is primarily implemented in end-systems: Most traffic is carried by TCP, which employs a Slow Start algorithm [21] to try to avoid congestion, uses the rate of acknowledgement return to estimate the permissible transmission rate, and interprets packet loss as indicating congestion that requires that the source throttle its transmissions. The only network support is some Random Early Discard devices that reinforce TCP’s behavior by signaling the onset of congestion by discarding packets. However, congestion control is not amenable to end-to-end implementation for the following reasons: First, like routing, congestion is a phenomenon of the network, and since multiple endpoints share the network, it is the network that is responsible for isolating endpoints that offer excessive traffic so that they do not interfere with the ability of the network to provide its service to other endpoints. Second, it is naive in today’s commercial Internet to expect endpoints to act altruistically, sacrificing the performance that they receive from the network in order to help the network limit congestion. The end-to-end arguments that enable the success of peer-to-peer applications also allow the rapid proliferation of applications that do not behave in a “TCP friendly” manner. It is cavalier to allow the commercially valuable Internet to be susceptible to such risks. The requirement that the transport layer implement congestion control also prevents the use of active networking to make transport layers configurable [22]. Summarizing these first two reasons: even though the network is responsible for controlling congestion, it has no reason to trust that endpoints will cooperate in controlling congestion.

A third argument against endpoint implementation of congestion control is that it is inappropriate for certain networks, leading to an unnecessary performance penalty. For example, Slow Start unnecessarily impedes sources that are transmitting on optical circuits (which don’t congest), Media Access Control protocols already provide congestion control for traffic that is local to a LAN, and the assumption that packet loss indicates congestion is invalid for wireless networks in which appreciable loss may also occur due to noise. Fourth, the transport layer lacks the innate ability to detect that congestion is imminent; it can only detect the possible presence of congestion, e.g. through observing packet loss. Schemes such as RED may signal imminent congestion, but they do so by unnecessarily discarding traffic for which the network has already spent resources partially delivering. Fifth, endpoints that implement congestion control separately must independently re-learn the network state, leading to excessively cautious behavior. Finally, while the endpoint may know how it would like to adapt to congestion, it is the network that knows when and where adaptation is needed [23], and should be responsible for ensuring that adaptation occurs. Thus, congestion control is one function that is not well suited to end-to-end implementation.

Like many things designed 40 years ago, the Internet carries design mistakes. These can be corrected, and we’ll all be better off if they are.
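
To make the endpoint-only model concrete, here’s a minimal sketch of the slow-start/loss-backoff loop Moors is describing. It’s illustrative only; the class name and constants are mine, not lifted from any TCP implementation.

```python
# Minimal sketch of endpoint-only congestion control: slow start, then additive
# increase, with packet loss treated as the only congestion signal. Names and
# constants are illustrative, not from any RFC or real stack.

class EndpointCongestionControl:
    def __init__(self, ssthresh=64):
        self.cwnd = 1.0          # congestion window, in segments
        self.ssthresh = ssthresh # slow-start threshold

    def on_ack(self):
        """An ACK came back: the sender infers spare capacity and grows its window."""
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow start: roughly doubles every RTT
        else:
            self.cwnd += 1.0 / self.cwnd  # congestion avoidance: ~1 segment per RTT

    def on_loss(self):
        """A lost packet is read as congestion, whether or not that's true
        (Moors' wireless-noise objection), so the sender backs way off."""
        self.ssthresh = max(self.cwnd / 2, 2)
        self.cwnd = 1.0


if __name__ == "__main__":
    cc = EndpointCongestionControl()
    for i in range(200):
        cc.on_loss() if i == 120 else cc.on_ack()
    print(f"congestion window after a simulated run: {cc.cwnd:.1f} segments")
```

Notice there’s nothing in that loop the network can verify or enforce, which is exactly Moors’ point.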

Frisco Weekly into the Fray

A generally gushy article on the EFF and Comcast appears today in The Frisco Weekly, making one telling point:

A 2007 study by professors at Clemson University offered solid proof that as few as 15 BitTorrent users on a Comcast-like network could degrade downloads and uploads for everyone else, making streaming videos stutter, or causing other delays. The popularity of BitTorrent, combined with video-streaming sites like YouTube, now clogs up the Internet, Comcast says. That’s why the company says it performs “traffic management” to keep the lanes open for everyone.

Comcast has repeatedly denied that it can “block” BitTorrent traffic. Instead, a spokesman says all ISPs “manage” Net traffic to ensure all customers can receive e-mail and surf the Web. Peer-to-peer users of BitTorrent are a bandwidth-hungry minority, Comcast contends.

[BitTorrent creator Bram] Cohen agrees. In fact, it’s something he predicted when he first thought up BitTorrent. “My whole idea was, ‘Let’s use up a lot of bandwidth,’” he laughs. “I had a friend who said, ‘Well, ISPs won’t like that.’ And I said, ‘Why should I care?’”

Why indeed, as long as somebody else pays the bill?
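
For a rough sense of why a handful of seeders can do the kind of damage the Clemson study found, remember that TCP’s fairness is per flow, not per user, so a seeder running many connections takes a proportionally bigger slice of the shared upstream. The numbers below are my own illustration, not the study’s:

```python
# Rough illustration of why a handful of seeders can dominate a shared upstream:
# TCP fairness is per flow, not per user, so a user running many flows gets a
# proportionally larger share. All numbers are assumptions, not the Clemson study's.

upstream_mbps = 9.0               # assumed shared DOCSIS upstream for one cable node
seeders, flows_per_seeder = 15, 10
web_users, flows_per_web_user = 85, 1

seed_flows = seeders * flows_per_seeder
web_flows = web_users * flows_per_web_user
per_flow_mbps = upstream_mbps / (seed_flows + web_flows)

print(f"each seeder:   {per_flow_mbps * flows_per_seeder:.2f} Mbps")
print(f"each web user: {per_flow_mbps * flows_per_web_user:.3f} Mbps")
print(f"seeders' share of the upstream: {seed_flows / (seed_flows + web_flows):.0%}")
```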

Hogging the Trough: The EFF Strikes Back

My latest salvo in the ongoing debate with the EFF over Comcast, Hogging the Trough: The EFF Strikes Back, is up at The Register today.

BitTorrent’s behavior on the Comcast network is like a glutton at an all-you-can-eat buffet who insists on planting himself at the trough, preventing others from getting to the food. This causes multiple problems for DOCSIS cable networks, which led Comcast’s network managers to throttle uploads under high-load conditions (but not prohibit them outright) using a technique called Reset Spoofing.
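
For the curious, here’s roughly what that management policy looks like as logic. The thresholds are made up and the helper is stubbed out; this is a sketch of the behavior described above, not Comcast’s or Sandvine’s actual code:

```python
# Sketch of the management policy described above: when a cable node's upstream
# is heavily loaded, interrupt seeding flows beyond a per-modem cap with forged
# TCP resets instead of banning them outright. The thresholds and the
# send_spoofed_rst() helper are hypothetical.

UPSTREAM_LOAD_THRESHOLD = 0.80   # fraction of node upstream capacity (assumed)
SEED_SESSION_CAP = 8             # concurrent seeding flows tolerated per modem (assumed)

def send_spoofed_rst(flow):
    # Stand-in for the actual reset injection; here it just records the action.
    print(f"RST spoofed for seeding flow {flow}")

def manage_node(upstream_load, seeding_flows_by_modem):
    """seeding_flows_by_modem: dict of modem id -> list of flow identifiers."""
    if upstream_load < UPSTREAM_LOAD_THRESHOLD:
        return                                   # no congestion, no interference
    for modem, flows in seeding_flows_by_modem.items():
        for flow in flows[SEED_SESSION_CAP:]:    # only the flows beyond the cap
            send_spoofed_rst(flow)               # throttle, don't prohibit

# Hypothetical example: one modem with a dozen simultaneous seeding flows.
manage_node(0.92, {"modem-17": [f"flow-{i}" for i in range(12)]})
```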

The EFF has a preferred way of dealing with this, random packet drop. For EFF this is the One True Method of traffic management. But as I’ve explained to Eckersley both in print and over the phone, the problems BitTorrent causes can’t be healed by random packet drop.

Packet drop would work with the regular diner who takes a plateful and moves on, but not with this super-hungry dude.
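
For contrast, here’s a minimal RED-style sketch of the random-drop approach the EFF prefers; the thresholds are illustrative, since the EFF’s filings don’t specify any:

```python
# Minimal sketch of the random-drop alternative: a RED-style queue that discards
# arriving packets with a probability that rises as the queue fills, nudging TCP
# senders to slow down. Thresholds are illustrative, not from any EFF filing.

import random
from collections import deque

class RandomDropQueue:
    def __init__(self, min_th=20, max_th=60, max_p=0.1, capacity=100):
        self.q = deque()
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.capacity = capacity

    def enqueue(self, pkt):
        depth = len(self.q)
        if depth >= self.capacity:
            return False                              # queue full: tail drop
        if depth > self.min_th:
            # drop probability climbs linearly between min_th and max_th
            p = min(1.0, (depth - self.min_th) / (self.max_th - self.min_th)) * self.max_p
            if random.random() < p:
                return False                          # random early drop
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

Notice that the drop decision looks only at queue depth, never at which flows filled the queue, which is precisely why it doesn’t touch the glutton.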

This discussion will be continued Saturday at the Toll Roads symposium in Frisco, which you can catch via webcast on UStream.tv.

What’s the real cost of internet video?

Check this article from The Register on the cost of Internet video:

Users want it, but today, the business models give operators the incentive to throttle, rather than encourage, high-bandwidth uses of the internet. MIT calls this the ‘Broadband Incentive Problem’.

Last July, my company IP Development published research into the cost of 1080p HDTV [PDF, 128k] delivered over a UK LLU network and came to a figure of £2.10 per two hour film. This research was of interest to a wide community, from ISPs who bear this cost to internet evangelists who believed that we were somehow in the pocket of the big telcos in the Net Neutrality debate.

(We were not paid by anyone for that research – but the conclusions then and now clearly support the view that Net Neutrality is likely to neuter the internet.)

The point is that such figures are not economically viable, and if this is the best the net can do, then so long and thanks for all the fish…

And note the utilization graph, with spikes at odd times of the day.
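
A quick back-of-envelope puts the £2.10 figure in perspective. Assuming an 8 Mbps stream for 1080p (my assumption, not necessarily the study’s), a two-hour film is about 7.2 GB, which implies roughly 29p per gigabyte delivered:

```python
# Back-of-envelope on the quoted £2.10-per-film figure. The 8 Mbps stream rate is
# an assumption for illustration; the IP Development study's own inputs are in the
# linked PDF and may differ.

stream_mbps = 8                 # assumed 1080p stream rate
duration_s = 2 * 60 * 60        # two-hour film
cost_per_film_gbp = 2.10        # figure quoted in the article

data_gb = stream_mbps * duration_s / 8 / 1000   # megabits -> gigabytes (decimal)
print(f"data volume: {data_gb:.1f} GB per film")
print(f"implied delivery cost: £{cost_per_film_gbp / data_gb:.2f} per GB")
```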

Tim Wu’s Bafflement

Writing on the AT&T discussion of pirated material, Professor Tim Wu professes amazement:

No one knows exactly what AT&T is proposing to build. But if the company means what it says, we’re looking at the beginnings of a private police state. That may sound like hyperbole, but what else do you call a system designed to monitor millions of people’s Internet consumption? That’s not just Orwellian; that’s Orwell.

We’d like to help. Wu assumes, incorrectly I think, that AT&T would implement this plan by snooping traffic and separating legal from illegal packets on the network. This accounts for the fears he expresses about police states and Orwellian jackboots and that sort of thing.

But let’s suppose AT&T went about finding the pirated movies the same way downloaders do, at the source. They would check Mininova and The Pirate Bay every day for top downloads, and the associated trackers would gladly give them a list of all the uploaders, because that’s what trackers do. From that information, all AT&T would have to do is match IP addresses with their customers and take appropriate action.

I’m not saying that AT&T should do this, simply that if they were to target pirated content they wouldn’t do it the way Wu imagines; snooping packets won’t work in the general case because they can be encrypted. But the whole purpose of a BitTorrent tracker is to point you to the source of the pirated stuff, so the information is just sitting there, ripe for the picking.
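
To be concrete about how mechanical that last step is, here’s a sketch of the IP-matching part using nothing but the Python standard library. The peer list and address blocks are invented for illustration, and nothing here talks to a real tracker:

```python
# Sketch of the matching step described above: given peer IPs that a tracker
# reports for a popular torrent, find which ones fall inside an ISP's own
# address blocks. Addresses and blocks are made up for illustration.

import ipaddress

customer_blocks = [ipaddress.ip_network(b) for b in ("203.0.113.0/24", "198.51.100.0/24")]

tracker_reported_peers = ["203.0.113.42", "192.0.2.9", "198.51.100.77"]

def own_customers(peers, blocks):
    """Return the peer IPs that belong to one of the ISP's address blocks."""
    return [ip for ip in peers
            if any(ipaddress.ip_address(ip) in net for net in blocks)]

print(own_customers(tracker_reported_peers, customer_blocks))
```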

Toll Roads

I’ll be speaking at the Toll Roads Symposium on the legal and political debate over network neutrality in Frisco on the 26th. My panel is:

Panel 3: Social Control: Issues of Privacy & Autonomy

• Scott Cleland, Founder of Precursor LLC and NetCompetition.org
• Kevin Bankston, Staff Attorney, Electronic Frontier Foundation
• Bob Frankston, Net Neutrality Squad, Project of People for Internet Responsibility
• Colette Vogele, Vogele and Associates
• Richard Bennett

Moderator: Susan Freiwald, Professor, University of San Francisco School of Law

Cleland is a solid guy, Frankston is a character, and I don’t know the others.

Net Neutrality Backlog

I need to find some time to respond to the Electronic Frontier Foundation’s defense of packet-drop in some detail, but for now it’s sufficient to note that they refuse to admit that Layer Two networks have traffic problems.

The last Register piece has some people at the Net Neutrality Squad excited, most notably Bob Frankston, who says: “a great engineer questions the constraints and gets fired because the constraints serve a policy need higher than mere science or reality.” Far be it from me to let “mere reality” bias my perspective.

The folks at Public Knowledge and Open Left (Matt Stoller) are up in arms about the success of Connect Kentucky, a perfectly benign public-interest effort bringing broadband to the rural residents of its namesake state. The fuss is explained by Dr. Robert Atkinson. It illustrates the principle that no good deed should go unpunished.

Scott Cleland has a fascinating piece on the parallels between process scheduling in operating systems and packet scheduling in cloud computing. Scott’s not a software engineer, but he thinks like one of us.
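
To see why the parallel rings true to a software engineer, note that the same weighted round-robin loop works as a CPU scheduler or a link scheduler, depending only on what you feed it. A toy sketch, with made-up names and weights:

```python
# The same weighted round-robin loop works as a CPU scheduler (items are processes)
# or a link scheduler (items are packets waiting in per-class queues). Queue names
# and weights are made up for illustration.

from collections import deque

def weighted_round_robin(queues, weights, rounds=3):
    """queues: dict name -> deque of items; weights: dict name -> slots per round."""
    for _ in range(rounds):
        for name, q in queues.items():
            for _ in range(weights[name]):
                if q:
                    print(f"dispatch {q.popleft()} from {name}")

# Read 'web'/'p2p' as traffic classes on a link, or rename them 'interactive'/'batch'
# and read them as process run queues; the scheduling logic doesn't change.
weighted_round_robin(
    {"web": deque(f"pkt{i}" for i in range(5)), "p2p": deque(f"pkt{i}" for i in range(5))},
    {"web": 3, "p2p": 1},
)
```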

Andrew Odlyzko’s latest paper on net neutrality is severely flawed, with significant errors of fact. It needs a serious examination; as it stands, nobody on either side of the debate is happy with it.

Tim Swanson has an interesting piece on the economics of networking at the Ludwig von Mises Institute website that makes for fine reading. Market solutions to these problems are always preferable.

Ok, back to work.