My official comments on the FCC Comcast petitions are right here
Category: Networks
Demand for Video Reshaping Internet
Peter Svensson, the AP reporter last seen carrying water for the EFF, has a new piece out on the bandwidth shortage:
Internet service providers and consumer advocates agree that some form of network management, also called “traffic shaping,” can be good for everybody. Not all Internet traffic has the same level of urgency. It makes sense for the service providers to give priority to a voice call, which needs a steady stream of quickly delivered data, over a movie download.
This is unusual territory for telecommunications providers: in the old telephone network, one phone call isn’t generally prioritized over another. Prioritization makes the Internet more like the postal system, where you pay for delivery speed and quality of service.
Indeed, one of the major problems with Internet regulation is the tendency of the consumer lobby to insist that the Internet act like the old telephone network. If the only tool you have is a telecom regulation, every computer looks like a phone. But on the Internet and its access networks, every user competes with every other user for bandwidth, and that’s why we need traffic shaping and other forms of Quality of Service (QoS).
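To make the shaping idea concrete, here’s a toy strict-priority scheduler in Python. It’s a sketch of the concept only, not anything a vendor ships: real gear uses weighted fair queuing, token buckets, and per-flow state, and all the names here are mine.

```python
# Toy strict-priority packet scheduler: voice before web before bulk.
# Illustrative sketch only; real routers are far more sophisticated.
from collections import deque

PRIORITY = {"voice": 0, "web": 1, "bulk": 2}  # lower number = served first

class PriorityShaper:
    def __init__(self):
        self.queues = {p: deque() for p in PRIORITY.values()}

    def enqueue(self, packet, traffic_class):
        self.queues[PRIORITY[traffic_class]].append(packet)

    def dequeue(self):
        # Always drain the highest-priority non-empty queue first, so a
        # voice call's packets never wait behind a movie download.
        for p in sorted(self.queues):
            if self.queues[p]:
                return self.queues[p].popleft()
        return None

shaper = PriorityShaper()
shaper.enqueue("movie-chunk-1", "bulk")
shaper.enqueue("voip-frame-1", "voice")
print(shaper.dequeue())  # -> voip-frame-1, despite arriving second
```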
Comcast upgrades begin
This news item was a pleasant surprise:
Comcast said today it is offering a new tier of service called Blast that tops out at 16 megabits per second for downloads, twice the speed of its Performance Plus, while retaining the same price. Comcast users will still be able to purchase the basic Performance service, which offers 10 Mbps. Upload speeds also are getting a boost from 768 kilobits per second to 2 Mbps…
To create the upgrade in speed, Comcast has been building more capacity into its network of nodes, which serve hundreds of homes each. The upgrade is part of Comcast’s $663 million investment in the Bay Area since it acquired the former AT&T cable system five years ago.
This reaction to AT&T’s U-verse upgrade shows that competition works, even for people like me who can’t get U-verse.
Burning ears
Check out the latest Tech Policy Weekly podcast, Network Management Redux:
Two networking / IT experts join us for the podcast this week to discuss the ramifications of potential government regulation of broadband network engineering issues. The experts are Matt Sherman, a San Francisco Bay Area web developer and a technology policy blogger who blogs at RichVsReach.com, and George Ou who is the Technical Director of ZDNet, and is a former IT consultant specializing in Internet engineering and IT infrastructure and architecture issues. Also on the show are Adam Thierer of the Progress and Freedom Foundation, Jerry Brito of the Mercatus Center at George Mason University, and Tim Lee of the Cato Institute.
It’s a frank and lively discussion of broadband policy from several points of view.
Google takes the wrath of Fake Steve Jobs
Brilliance on Google’s crocodile tears over Microsoft’s takeover bid for Yahoo!, from The Secret Diary of Steve Jobs:
This is Google’s favorite song. They sing it whenever they want to get their way. Whether the issue is net neutrality, or the bid for wireless spectrum, or now this — these guys do their best Eddie Haskell impersonation and look everyone straight in the eye and say it’s not that they’re concerned about their stock price or their revenue growth, it’s that they’re concerned about users, because gee whiz these evil guys who are opposing our efforts could really mess things up for everyone. The Google guys are the good guys, see? They’re the ones who put the needs of Internet users first. The amazing thing to me is that so many people believe this bullshit.
Ahem and indeed.
FCC Comments on Comcast
Some 28,000 public comments have already been posted to the FCC on the Comcast petition, and Save The Internet! is urging members to file more.
This is taking on the character of a DoS attack, as most of the comments are junk. Telling the FCC that you love the Internet is nice, but it’s not exactly relevant to this inquiry. How will the non-junk comments come to the attention of the examiners?
But that’s the whole point.
Net Neutrality 2008 Presentation
Here’s my slide deck from the Net Neutrality 2008 Symposium at the U. of San Francisco Law School this weekend.
The section on network-based solutions to piracy seems to be particularly engaging. Nick Weaver has been working out a similar solution. I felt compelled to sketch this out because of the breathless reaction from some of our privacy buffs to the CES discussion about piracy.
There are plenty of legitimate interests in the net neutrality debate, but morphing privacy rights into piracy rights isn’t one of them.
End-to-End Dogma
One piece of required reading on Internet architecture is the literature on end-to-end arguments. The canonical paper is End-to-End Arguments in System Design by Saltzer, Reed, and Clark, which was substantially clarified by Clark’s follow-up, Rethinking the design of the Internet: The end to end arguments vs. the brave new world. Tim Moors offered a thoughtful critique, A Critical Review of “End-to-end arguments in system design”, which makes this argument about congestion control:
In today’s Internet, congestion control is primarily implemented in end-systems: Most traffic is carried by TCP, which employs a Slow Start algorithm [21] to try to avoid congestion, uses the rate of acknowledgement return to estimate the permissible transmission rate, and interprets packet loss as indicating congestion that requires that the source throttle its transmissions. The only network support is some Random Early Discard devices that reinforce TCP’s behavior by signaling the onset of congestion by discarding packets. However, congestion control is not amenable to end-to-end implementation for the following reasons: First, like routing, congestion is a phenomenon of the network, and since multiple endpoints share the network, it is the network that is responsible for isolating endpoints that offer excessive traffic so that they do not interfere with the ability of the network to provide its service to other endpoints. Second, it is naive in today’s commercial Internet to expect endpoints to act altruistically, sacrificing the performance that they receive from the network in order to help the network limit congestion. The end-to-end arguments that enable the success of peer-to-peer applications also allow the rapid proliferation of applications that do not behave in a “TCP friendly” manner. It is cavalier to allow the commercially valuable Internet to be susceptible to such risks. The requirement that the transport layer implement congestion control also prevents the use of active networking to make transport layers configurable [22]. Summarizing these first two reasons: even though the network is responsible for controlling congestion, it has no reason to trust that endpoints will cooperate in controlling congestion.
A third argument against endpoint implementation of congestion control is that it is inappropriate for certain networks, leading to an unnecessary performance penalty. For example, Slow Start unnecessarily impedes sources that are transmitting on optical circuits (which don’t congest), Media Access Control protocols already provide congestion control for traffic that is local to a LAN, and the assumption that packet loss indicates congestion is invalid for wireless networks in which appreciable loss may also occur due to noise. Fourth, the transport layer lacks the innate ability to detect that congestion is imminent; it can only detect the possible presence of congestion, e.g. through observing packet loss. Schemes such as RED may signal imminent congestion, but they do so by unnecessarily discarding traffic for which the network has already spent resources partially delivering. Fifth, endpoints that implement congestion control separately must independently re-learn the network state, leading to excessively cautious behavior. Finally, while the endpoint may know how it would like to adapt to congestion, it is the network that knows when and where adaptation is needed [23], and should be responsible for ensuring that adaptation occurs. Thus, congestion control is one function that is not well suited to end-to-end implementation.
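To see what Moors means by endpoint congestion control, here’s a Reno-flavored caricature in Python: the sender voluntarily halves its window when it infers a loss. Nothing in the network compels this behavior; a selfish sender can simply skip the cut, which is exactly the trust problem Moors identifies. The numbers and function name are illustrative, not from any real stack.

```python
# Caricature of TCP-style endpoint congestion control: slow start,
# additive increase, multiplicative decrease on loss.
def tcp_like_sender(loss_events, ssthresh=32.0):
    cwnd = 1.0  # congestion window, in segments
    trace = []
    for lost in loss_events:
        if lost:
            # Interpret loss as congestion: voluntarily halve the window.
            # A greedy sender could just omit this branch.
            ssthresh = max(cwnd / 2, 1.0)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2          # slow start: exponential growth
        else:
            cwnd += 1          # congestion avoidance: additive increase
        trace.append(round(cwnd, 1))
    return trace

# Six clean round trips, one loss, then recovery:
print(tcp_like_sender([False] * 6 + [True] + [False] * 4))
```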
Like many things designed 40 years ago, the Internet carries design mistakes. These can be corrected, and we’ll all be better off if they are.
Frisco Weekly into the Fray
A generally gushy article on the EFF and Comcast appears today in The Frisco Weekly, making one telling point:
A 2007 study by professors at Clemson University offered solid proof that as few as 15 BitTorrent users on a Comcast-like network could degrade downloads and uploads for everyone else, making streaming videos stutter, or causing other delays. The popularity of BitTorrent, combined with video-streaming sites like YouTube, now clogs up the Internet, Comcast says. That’s why the company says it performs “traffic management” to keep the lanes open for everyone.
Comcast has repeatedly denied that it can “block” BitTorrent traffic. Instead, a spokesman says all ISPs “manage” Net traffic to ensure all customers can receive e-mail and surf the Web. Peer-to-peer users of BitTorrent are a bandwidth-hungry minority, Comcast contends.
[BitTorrent creator Bram] Cohen agrees. In fact, it’s something he predicted when he first thought up BitTorrent. “My whole idea was, ‘Let’s use up a lot of bandwidth,'” he laughs. “I had a friend who said, ‘Well, ISPs won’t like that.’ And I said, ‘Why should I care?'”
Why indeed, as long as somebody else pays the bill?
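The Clemson figure passes a quick sanity check, too. Here’s a back-of-envelope sketch; the per-node upstream capacity and per-seeder upload rate are my assumptions, not numbers from the study:

```python
# Back-of-envelope check on the "15 BitTorrent users" claim, assuming a
# DOCSIS 1.x upstream of roughly 9 Mbps usable per node, shared by every
# modem on that node. Both figures below are illustrative assumptions.
upstream_capacity_kbps = 9_000   # ~9 Mbps usable per node (assumed)
seeder_upload_kbps = 600         # one seeder pushing ~600 kbps (assumed)
seeders = 15

offered_load = seeders * seeder_upload_kbps
print(f"Offered load: {offered_load} kbps on a {upstream_capacity_kbps} kbps channel")
print(f"Utilization: {offered_load / upstream_capacity_kbps:.0%}")
# -> 100% utilization: the upstream saturates, and since everyone else's
# download ACKs travel upstream too, their downloads stall as well.
```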
Hogging the Trough: The EFF Strikes Back
My latest salvo in the ongoing debate with the EFF over Comcast is up at the Register today, Hogging the Trough: The EFF Strikes Back
BitTorrent’s behavior on the Comcast network is like a glutton at an all-you-can-eat buffet who insists on planting himself at the trough, preventing others from getting to the food. This causes multiple problems on DOCSIS cable networks, which is why Comcast’s network managers throttle uploads under high-load conditions (but don’t prohibit them outright) using a technique called Reset Spoofing.
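For readers wondering what Reset Spoofing means at the wire level: a middlebox fabricates a TCP segment with the RST flag set, addressed as if it came from the remote peer, and both ends tear the connection down. The sketch below only builds the 20-byte header for inspection; the ports and sequence number are made up.

```python
# Build (but never send) a TCP header with the RST flag set, to show
# what a spoofed reset amounts to. Field values are illustrative.
import struct

def forged_rst_header(src_port, dst_port, seq):
    flags = 0x04                       # RST bit
    offset_flags = (5 << 12) | flags   # data offset 5 words, RST set
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq,    # must fall within the victim's window
                       0,      # ack number (unused for a bare RST)
                       offset_flags,
                       0,      # window size
                       0,      # checksum (left zero in this sketch)
                       0)      # urgent pointer

hdr = forged_rst_header(6881, 51413, 123456789)
print(hdr.hex())
```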
The EFF has a preferred way of dealing with this: random packet drop. For the EFF this is the One True Method of traffic management. But as I’ve explained to Eckersley both in print and over the phone, the problems BitTorrent causes can’t be cured by random packet drop.
Packet drop would work with the regular diner who takes a plateful and moves on, but not with this super-hungry dude.
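A quick sketch shows why. Random drop, and the per-flow fairness it approximates, tends toward equal shares per TCP connection, not per subscriber, so a client that opens many connections keeps most of the link no matter how many of its packets get dropped. The household mix below is hypothetical.

```python
# Why per-flow random drop doesn't restrain a multi-flow user: each TCP
# flow converges toward roughly an equal share of the bottleneck link.
def per_flow_shares(flows_per_user):
    total_flows = sum(flows_per_user.values())
    return {user: n / total_flows for user, n in flows_per_user.items()}

# One seeder with 30 upload connections vs. four single-connection users:
shares = per_flow_shares({"seeder": 30, "web1": 1, "web2": 1,
                          "web3": 1, "web4": 1})
for user, share in shares.items():
    print(f"{user}: {share:.0%} of the upstream")
# The seeder still gets ~88% of the link: random drop slows each of his
# flows a little but leaves his aggregate share nearly intact.
```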
This discussion will be continued Saturday at the Toll Roads symposium in Frisco, which you can catch via webcast on UStream.tv.