Great debate coverage

eWeek covers the debate between Dave Farber and one of his former students:

Professor Farber, on the other hand, said that he worried about too much Congressional meddling, if only because it might prevent the next major innovations from coming to the Internet.

He noted that he agreed with Cerf that there are plenty of mechanisms in place now to protect against abuse by broadband providers.

He did note, however, that he doesn’t believe that the FCC has an unblemished record in such protection. He also noted that the FCC can have its decisions tied up in courts for a very long time, delaying enforcement.

But he also noted that the FCC can act quickly, such as when it acted to require phone companies that provided Internet service to also allow VOIP (voice over IP) calls.

What Farber is most worried about, he said, is poorly drafted legislation that would leave regulation of the Internet open to broad interpretations that could lead to unintended restrictions on the use of the Internet.

He said that regulators, in an attempt to somehow make the Internet more fair, could actually end up restricting access. “The net work never has been a fair place,” he said.

(I’ve always spelled it “network”, but that’s just me apparently.)

I’d like to hear a podcast.

An excellent dodge

Networks need Quality of Service mechanisms if they are required to carry low-jitter traffic such as voice or video-conferencing alongside a large volume of other traffic, such as large pre-recorded HDTV files. It doesn’t matter how much bandwidth the network has at maximum. File transfers on packet networks are designed to use all available bandwidth, so you’re always going to have situations where bursts of file data cause jitter to voice.
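The interaction described above can be sketched with a toy strict-priority queue. The class and packet names here are illustrative assumptions, not any real router’s API; the point is only that voice drains before bulk data, so file bursts can’t add jitter to the call.

```python
import collections

# A minimal sketch of strict priority queuing: voice packets always drain
# before bulk file-transfer packets, so a burst of file data can't delay
# (jitter) the voice stream. Toy model, not a real scheduler.
class PriorityScheduler:
    def __init__(self):
        self.voice = collections.deque()   # low-jitter queue
        self.bulk = collections.deque()    # best-effort queue

    def enqueue(self, packet, is_voice=False):
        (self.voice if is_voice else self.bulk).append(packet)

    def dequeue(self):
        # Strict priority: bulk traffic moves only when no voice is waiting.
        if self.voice:
            return self.voice.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("file-chunk-1")
sched.enqueue("voice-frame-1", is_voice=True)
sched.enqueue("file-chunk-2")
sched.enqueue("voice-frame-2", is_voice=True)

order = [sched.dequeue() for _ in range(4)]
# Voice frames drain first even though file chunks arrived earlier.
print(order)  # ['voice-frame-1', 'voice-frame-2', 'file-chunk-1', 'file-chunk-2']
```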

This is an inconvenient truth for the “fat pipe, always free” folks, so they generally try to define it out of existence. Here’s a good example from the U of Oregon’s head network admin, Dr. Joe St. Sauver:

Artificial/Unrealistic Demands

• At the same time I oppose metering, you should also know that I oppose artificial/unrealistic “tests” or “challenges” of converged networks.

• For example, a classic example of an unrealistic network demand for a converged network is uncompressed high definition video over IP – that can run 1.2-1.5 gigabit per second. At that rate, dedicated video networks make sense.

• There’s no problem handling MPEG1 video (at 1.5Mbps) however, or even reasonable amounts of MPEG2 video at 1.5 to 20Mbps (on a fast ethernet connection going into a gig core).

Are you getting that? Dr. Joe’s non-priority scheme only works if each user has 100 Mbps straight to a 1 gig “core” and never uses it to transfer large HDTV files, compressed or otherwise.

Gee, that’s nice. But here on Planet Earth, we’re trying to figure out how to move multiple HDTV streams over connections much less broad than that, because, you know, bandwidth isn’t free off the University of Oregon’s campus.

And even in this scenario, what happens if 10 people are using their 100 Mbps Ethernet connections to the Gig core to transfer big video files? The core is maxed out. And then somebody comes along and tries to use VoIP while all this traffic is flowing. Boom.

That’s what QoS is for. No matter how fast a link is, it can always be overloaded because a billion people use the Internet, and not always at the ideal time.
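The arithmetic of that overload scenario is worth spelling out. This quick sketch just uses the numbers from the example above; the VoIP figure is a rough assumption on my part:

```python
# Back-of-envelope sketch: ten hosts with 100 Mbps ports offering bulk
# traffic into a 1 Gbps core.
port_mbps = 100
hosts = 10
core_mbps = 1000

offered = port_mbps * hosts          # 1000 Mbps of file traffic
headroom = core_mbps - offered       # nothing left over
print(offered, headroom)             # 1000 0

# A VoIP call needs little bandwidth (roughly 100 kbps, codec-dependent)
# but low jitter; with the core already saturated, its packets queue
# behind file bursts unless some QoS mechanism gives them priority.
```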


H/T Frank Paynter.

How ‘Saving The Net’ may kill it

There’s an interesting interview up at The Register:

So on January 1, 1983 when TCP/IP was deployed, it all worked fine. Primarily the net was used for email. Then there were more FTP sessions, and it began to melt down.

So people were writing a lot of papers in mid-1984 about what was then called “congestion collapse.” Some of the design features of TCP windowing actually made congestion worse, so protocol engineers went to work. They made enhancements to TCP such as Exponential Backoff (another thing stolen directly from old Ethernet) and Slow Start, where the initial window size is small. They re-engineered TCP to solve IP’s congestion problem.

Today, the internet is only stable to the extent people are using TCP over it. People also tend to miss that you can defeat TCP’s attempt to limit traffic over something less than congestion of the backbone if you simply have multiple instances of TCP.

Some guy with strong opinions.
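The two mechanisms in that excerpt, Slow Start and TCP’s per-flow sharing, can be sketched in a few lines. These functions are toy models of the behavior, not real TCP; the numbers are illustrative:

```python
# Toy models of the two points above. First, Slow Start: the congestion
# window starts small and doubles each round trip until it reaches the
# slow-start threshold. Second, per-flow sharing: TCP divides a bottleneck
# per *flow*, so an application that opens many flows takes a larger slice,
# defeating TCP's attempt at fairness.

def slow_start_window(rtts, initial=1, ssthresh=64):
    """Congestion window (in segments) after a number of round trips."""
    cwnd = initial
    for _ in range(rtts):
        cwnd = min(cwnd * 2, ssthresh)  # exponential growth, capped
    return cwnd

def flow_share(my_flows, other_flows, bottleneck_mbps):
    """Bandwidth an app gets if TCP divides the link evenly per flow."""
    total = my_flows + other_flows
    return bottleneck_mbps * my_flows / total

print(slow_start_window(3))   # 8 segments after 3 round trips
# One app with 9 flows against one app with 1 flow on a 100 Mbps link:
print(flow_share(9, 1, 100))  # 90.0 Mbps for the multi-flow app
```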

The Great Debate

This could be mildly amusing:

The Center for American Progress is pleased to present Dr. Vinton Cerf and Professor David Farber in “The Great Debate: What is Net Neutrality?” The event is currently sold out for attendees on-site, but you may access the reservation page to be put on a waiting list here.

The event will take place from 10:30-12:00 on Monday, July 17 at the Center’s event space, which is located at 1333 H Street, NW, 10th Floor, Washington, D.C. Audio streaming from the event will be available at the following URL:

* (Streaming will activate one hour before event.)

Our streaming servers have capacity for the first 200 listeners. The stream will be audio only. (If anybody would like to provide additional streaming capacity, please feel free to contact Carl at:

* mailto:[email protected]?subject=neutrality

A Jabber conference room will be available for people who wish to contribute running commentary:

* xmpp:[email protected]
* ALTERNATIVE ADDRESS: xmpp:[email protected]

Audience members will be able to submit questions using Jabber-compliant software such as Google Talk or iChat. You may send your questions here:

* xmpp:[email protected]
* ALTERNATIVE ADDRESS: xmpp:[email protected]

(If your instant messaging client doesn’t support XMPP URLs, you may go to the conference room using “go to group chat” or “join conference” or a similar command in your client. Likewise, to send a question, simply send a message to [email protected])

Public Knowledge blows it

A grad student in media studies named Bill Herman makes an earnest attempt to rationalize Snowe-Dorgan for Public Knowledge and fails miserably:

For instance, what in the Snowe-Dorgan proposal, S 2917, mandates a specific internet architecture? The text is remarkably free of techno jargon. It forbids the blocking or degrading of legal net traffic, but it specifically authorizes companies to prioritize packets. If VoIP and streaming video need a smarter network, companies can build that smarter network. They just cannot charge extra for delivery of those specific services.

Technical people schooled in network protocols in general and priority-based QoS in particular see the hole in his argument instantly: priority-based QoS isn’t something you can give to everybody. There is a very limited set of time slots available on any network segment for low-latency delivery, and the only way we have to guarantee QoS is to limit the number of QoS users at each segment along the routes we find for QoS flows. And that implies some sort of queue policing, which in general is triggered by a service contract.
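A toy admission controller makes the point concrete. The class, method names, and numbers here are illustrative assumptions, not any standard’s API:

```python
# A minimal sketch of why priority QoS can't be given to everyone: each
# link has a fixed budget of low-latency capacity, and an admission
# controller (the "queue policing" above) must refuse new QoS flows once
# the budget is spent.

class Link:
    def __init__(self, qos_budget_kbps):
        self.qos_budget_kbps = qos_budget_kbps
        self.admitted_kbps = 0

    def admit_qos_flow(self, kbps):
        """Admit a low-latency flow only if budget remains."""
        if self.admitted_kbps + kbps <= self.qos_budget_kbps:
            self.admitted_kbps += kbps
            return True
        return False  # flow falls back to best-effort service

link = Link(qos_budget_kbps=300)   # say, room for three 100 kbps calls
results = [link.admit_qos_flow(100) for _ in range(4)]
print(results)  # [True, True, True, False] -- the fourth flow is refused
```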

So Snowe-Dorgan does mandate an architecture for datalinks and network segments, and it just so happens that the architecture it mandates is out of step with all new networks engineered in the past 10 years: WMM for WiFi, MBOA UWB, IEEE 802.15.3a UWB, WiMax, and even DOCSIS. Network engineers know this stuff, but media critics don’t.

As far as the “strike now while the iron is hot” argument goes, the case for taking rash action because the issue will soon fade from public interest is the best possible argument for doing nothing. If the predictions of abuse the pro-regulation neutralists have made come true, the issue will certainly not fade from the public’s attention; it only fades if the predictions of abuse don’t materialize.

The neutralists have put themselves between a rock and a hard place with these hysterical claims, by the way. If nothing happens on the regulation front this year and the dire predictions fail to materialize, their credibility will certainly be damaged, perhaps permanently.

See the Ed Felten paper for the background on Herman’s complaints. The paper has a number of smallish technical errors, but reaches the right conclusion anyway.

Battle lines drawn over net neutrality

Computerworld has a pretty good summary of the Net Neutrality issues today:

In a May 17 letter to congressional leaders, 35 manufacturers — including Alcatel, Cisco, Corning, and Qualcomm — said there’s no evidence that broadband providers now block or impair competing content. The Internet doesn’t need the burden of new regulations, the letter said, adding that passing a bill risks “hobbling the rapidly developing new technologies and business models of the Internet with rigid, potentially stultifying rules.”

The network equipment vendors are the closest thing we have to an informed and neutral party in this debate, so their opinion should carry a lot of weight.

Myth Number One

Protocol Wars have raged in the network design community since the 1970s, or as long as we’ve had packets to switch. In many ways, the current debate over Internet regulation (AKA “net neutrality”) recaps those debates, albeit in a much more foolish way, as the loudest voices are the most ignorant. Dana Blankenhorn spills the beans on the article of faith that motivates the regulators:

The idea of absolute top-down control in any area — entertainment, business, technology, politics — is simply non-optimal. Technology has enabled us all to be participants, in any way we wish to be, as well as audience.

That’s heavy, isn’t it? And true! Or not.

“Absolute top-down control” sounds like a really horrible thing, but so does “absolute bottom-up chaos”: the hysteria lives in the first and last words. Strip them off and you simply have “top-down”, a pretty respectable concept in many settings. Functional families are managed top-down, and so are businesses, nations, all social groups of animals, etc. So what’s the problem?

Top-down systems are quite often the optimal solution to engineering problems, as they eliminate from the get-go the fundamental problem with edge-managed or bottom-up systems: flapping. If you have four selfish people in a taxi, each of whom wants to get to a different place as quickly as possible, and they can all bark orders at the driver, you may never get anywhere; you’ll simply go around in circles until you run out of gas. All complex systems revolve around compromise, and a top-down control system is in a good position to assess the general benefit and ensure that it’s allocated appropriately. It doesn’t scale well, and that’s where modularity and segmentation kick in.

The classic example in networking is the Ethernet. As originally conceived, it was a completely decentralized system, where all decisions about who got to transmit when were made by contention. Each system listened to the wire until it was silent, and then started transmitting. If two systems transmitted at once, they both noticed the collision of their packets and backed off. After a semi-random delay, they tried again. Eventually everybody got to transmit, but the delays could be substantial. So along comes the Ethernet re-design, where the decision about who gets to transmit is moved up a level, to the switch. Everybody can transmit anytime the switch says they can, and the switch allows several systems to submit packets at the same time, which it sorts out as bandwidth becomes available. Switched Ethernet is centralized, and capable of using 100% of network bandwidth without losing a millisecond to collisions, backoffs, and dead time. It’s the optimal system, and that’s why you use it today.
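The contention scheme of original Ethernet can be sketched as truncated binary exponential backoff. This is a toy model of the standard’s timing rules, not an implementation; it just shows why the delays could be substantial:

```python
import random

# A sketch of classic (shared-wire) Ethernet's truncated binary exponential
# backoff: after the n-th consecutive collision, a station waits a random
# number of slot times in [0, 2**n - 1]. The substantial delays the text
# mentions come from these waits plus the collisions themselves.

def backoff_slots(collisions, rng):
    """Slots to wait after the given number of consecutive collisions."""
    n = min(collisions, 10)            # the standard caps the exponent at 10
    return rng.randrange(2 ** n)

rng = random.Random(42)                # seeded so the sketch is repeatable
waits = [backoff_slots(c, rng) for c in range(1, 6)]
print(waits)   # each wait is drawn from an exponentially growing range
```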

In this as in many instances, absolute top-down control is more optimal than absolute bottom-up chaos. And better than both is moderate top-down management, driven by a policy of the greatest good for the greatest number, or some similar rule.

As an example of engineering fact, it could well be that the best design for a network that permits all people to be performers is for the network’s tubes to be controlled top-down in the interest of fair access. Some at the edges are spammers, virus-makers, and other low-lifes. Can we always count on them to police their own behavior? I don’t think so, and neither do you.

So why do so many grown men still repeat the most idiotic sayings of the scruffy hippies of the 60s?

It’s a great mystery to me.

H/T Doc Searls.

Ten years of political blogging

Ten years ago today, I started a web site for the Coalition of Parent Support’s Silicon Valley Chapter, called Silicon Valley News. I updated the site several times a week, putting the latest entry on top and bumping the others down. It was a proto-blog, although nobody knew it at the time, and perhaps the oldest one in existence. You can see what it looked like in 1996 through the Wayback Machine, if it’s working.

COPS is still in existence, fighting for justice for fathers in Family Court. Things are no better than they were in 1996, but they aren’t any worse, so perhaps that’s a victory of a small sort.

The New Paranoid style in American politics

The always provocative Andrew Orlowski finds a classic theme in the blogosphere’s reaction to the “net neutrality” debate:

For a moment, let’s dispose of the telco lobby’s argument that the phantom of a “free market” means any new regulation is unnecessary. With spectacularly poor timing, AT&T launched its IPTV service this week, pricing it exactly in line with the cable operators it’s competing with. And you thought competition is supposed to lower prices? The cable companies and IP giants are a duopoly – and they don’t like competition. Verizon’s patent infringement lawsuit against Vonage reminds us of that.

For the Neutralists, any suggestion of packet prioritization was interpreted as a speech issue – as censorship. But packet prioritization is very useful to the health of a data network. Neutralists assumed that because some services may have a higher priority, and travel faster – as video packets must – their favorite services would automatically travel slower. This is in spite of the recent doubling – at no extra cost – of internet bandwidth to Verizon fiber customers, who are the only US consumers to enjoy European-style broadband speeds today.

As for business – which you’ll note conspicuously failed to join the campaign – the various attempts at drafting ‘neutrality’ legislation would have rendered today’s Service Level Guarantees – the SLAs, or SLGs, which businesses demand – illegal. IP expert Richard Bennett has offered a sensible technical antidote to both the free market utopians who came out to support the big telcos, and the hysterical Neutralists. Bennett argues that the net needs new policies because VoIP and Bittorrent simply saturate it further. You may disagree, but at least it’s a rational argument.

That last line needs a little clarifying as we failed to make telephone contact today: VoIP won’t generate much traffic until a lot more people use it, but it’s the canary in the coal mine that will die as BitTorrent overload sets in.

Technology is all about change, so any movement allegedly grounded in tech that’s resistant to change on principle has some problems.