Smackdown!

I’m listening to the House debate on the Markey Amendment with the fraudulent “net neutrality” regulations. Listening to these guys describe the Internet is one of the most hilarious things I’ve ever experienced, like the blind men and the elephant.

Some yahoo from W. Va. is talking about a “two-lane Internet” now. Like a one-lane road is better? His poor momma.

Guy from Texas says “if it ain’t broke don’t fix it.” Amen.

Inslee from WA says there’s a non-discrimination principle in the DNA of the Internet, that all bits are equal. He’d better not learn about the TOS field in the IP header, or check the ping times between Yahoo and everybody else.

Lady from Tennessee, Blackburn, says Markey bites because she Googled net neutrality and nobody can agree what it means. She must have read hit #1, Wikipedia. Check it out.

Anna Eshoo, one of my former representatives, is drooling about equal access and “profound change to the Internet”. She doesn’t understand the difference between speed and QoS. It’s about The Future, dude. Google’s bitch is calling Republicans crooks. Takes one to know one.

Charlie Gonzales is talking to bloggers: “this is not about you”. And he’s right, it’s about Google and Yahoo. Markey takes sides, choosing Google over the Internet. He also says it’s driven by hostility to the phone company, there’s no doubt about that.

Dingell lies about the Markey Amendment, saying it preserves the status quo. Sorry, dude, but there is no law today, nor has there been one in the past, forbidding QoS tiering. This is the fictitious history that Google’s coalition has written for the Internet.

Ferguson says Markey’s Amendment is a solution in search of a problem, and they don’t know what the neutrality word means. He’s also against common carriage price controls, but that’s sort of tangential because Markey goes way overboard. He talks about Network Neutering. He’s a hero.

Some other dude points out that Markey’s approach regulates the Internet. They’re running out of speakers in support of the Amendment.

Democrat Gene Green says the Four Freedoms are in the bill, and says Markey means higher prices for consumers, Google gets a free ride. HDTV takes bandwidth. Right on.

Markey is down to his last speaker, himself. He says the debate is a travesty. His amendment is the travesty. Now he’s drooling about car dealerships, Ferraris, and toll booths. He misrepresents his amendment as “preserving the status quo.” That’s more like horses than cars, dude. Fundamental change is happening, and we don’t want that, do we? Oh, and our choices? Forget it, you need to pay for access to the Internet. Preserve the status quo! Moron.

Barton is doing the close. He points out that the term “net neutrality” didn’t exist nine months ago, and nobody knows what it means. We all want an open Internet, and we all want broadband. So how do we get there, by shackling the phone companies with a flat fee structure? Markey says a Ferrari has to sell for the same price as a Taurus.

Let’s get the US in the broadband game, dude. That’s real Net Neutrality. Great close.

Markey Amendment fails on a voice vote.
Excellent! Conyers the clown wants a recorded vote, a fundraising ploy.

That was the best 10-minute debate I ever heard. The votes and all that will be updated shortly.

Lessig’s Latest Work of Fiction

The Washington Post gives Larry Lessig a soap box from which to spew his drivel today, just in time for the floor vote on the COPE Act. It’s the same story he’s been telling for five years, all about toll booths, gatekeepers, and extortion, complete with a fictitious history of the Internet. Fear-and-smear is what he does for a living, and this Op-Ed is a classic example.

Most of it is blatantly counter-factual, and had he posted it on his blog, it would have been ripped apart like his latest posting on the insane regulatory scheme he’s proposing. Just go look.

Lessig doesn’t understand how the Internet works today and how it needs to work tomorrow in order to keep up with user demands, so he’s blind to the damage that his unprecedented regulations will cause to the nation if they’re enacted.

For the record, here’s the relevant bill text:

If a broadband network provider prioritizes or offers enhanced quality of service to data of a particular type, it must prioritize or offer enhanced quality of service to all data of that type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or enhanced quality of service.

Lessig says this is the same ordinary common carrier law that’s governed the Internet from the beginning, but can’t account for the fact that FedEx and DHL are able to offer priority service for a fee under common carrier regulations today.

These apocalyptic fantasies are appealing to conspiracy theorists, but there’s no connection between any real threat facing the Internet and the regulations his irresponsible crusaders are proposing. The “net neutrality” amendment his people want would ban Quality of Service discrimination that’s vital to voice and video over wireless networks, sound engineering practice that’s widely used today with good results. This would turn the clock back on the development of broadband in America by at least ten years and prevent future innovation.

Nobody should fall for this crap.

Accidental Emperor sticks foot in mouth, twice

So this Sergey Brin character flies his private jet to Washington and cowboys into the Senate Office Building in his jeans and sneakers to set Sen. McCain straight on the net neutrality delusion. According to the Reuters report, he makes an incredibly retarded statement about packet priorities:

“The only way you can have a fast lane that is useful — that people will pay a premium for — is if there are slow lanes,” Brin told reporters after meeting with Republican John McCain, a member of the Senate committee that oversees telecommunications issues.

Wrong, moron. People will pay extra for a special Quality of Service “lane” that provides them consistent service, and it has value even if it’s not faster than the normal “lane”.

The reason is that normal packet traffic is bursty, so its “lane” is sometimes fast and sometimes slow, probably about the same on average as the QoS lane. The traffic in the QoS lane can’t tolerate variations in delivery time because it’s a phone call and not a stinking web page.
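Here’s a toy simulation of the point — the latency numbers are invented for illustration, but the shape is right. Two “lanes” with the same average delay, one bursty and one steady; a phone call can only live in the steady one:

```python
import random
import statistics

random.seed(42)

# Toy model: both lanes average ~50 ms per packet, but the best-effort
# lane is bursty (mostly fast, occasionally very slow).
best_effort = [random.choice([10, 10, 10, 170]) for _ in range(1000)]
qos_lane = [50 for _ in range(1000)]  # steady 50 ms, every packet

print(f"best-effort: mean {statistics.mean(best_effort):.0f} ms, "
      f"stdev {statistics.stdev(best_effort):.0f} ms")
print(f"QoS lane:    mean {statistics.mean(qos_lane):.0f} ms, "
      f"stdev {statistics.stdev(qos_lane):.0f} ms")

# A VoIP de-jitter buffer has to be sized for the worst case, so the
# bursty lane forces ~170 ms of buffering even though its *average*
# is no worse than the QoS lane's.
```

Same average, wildly different jitter — which is exactly why the “lane” has value even when it isn’t faster.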

Do I have to draw sweetums a little picture?

The title of Robert X. Cringely’s tome Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition and Still Can’t Get a Date, was inspired by people like Brin.

After showing he doesn’t have a clue about service tiering, the main issue his goons are trying to criminalize, he then proceeds to tell a bald-faced lie about his company’s collusion with the tyrants who run China:

“We are not actually censoring in China,” he added.

This is based on his claim that 99% of Chinese use the uncensored google.com instead of the censored google.cn. Associated Press delivers the slap-down:

The Paris-based group Reporters Without Borders said Tuesday that Google’s main Web site, http://www.google.com, was no longer accessible in most Chinese provinces due to censorship efforts, and that it was completely inaccessible throughout China on May 31.

Who you gonna believe, the King of American Internet Regulation or the virtuous French reporters? That’s not even a close call.

Go see what google.cn has to say about Tiananmen, and check the pretty pictures, not one of which shows a pro-democracy dissident being flattened by a tank.

Isn’t there some way we can revoke this clown’s visa and send him back to Russia?

UPDATE: The WaPo account of the hep cat’s trip to the Hill is pretty funny.

California’s COPE on track

All the hoopla in Washington about fictitious network neutrality hasn’t had much effect on the progressive California legislature, where a state-wide version of COPE passed the lower house without a single “no” vote:

By a final 77-0 vote, the state Assembly passed AB 2987 this past Wednesday and the Senate is expected to take up the bill later this summer. If it clears both houses, the bill would provide for a blanket agreement allowing telephone companies to offer cable TV service in California, eliminating the need for phone companies, such as AT&T Inc. in the Sacramento region, to negotiate separate deals with every municipality.

The Kool-Aid hasn’t reached Sacramento. That doesn’t mean it won’t, but few bills that clear the Assembly this easily run into trouble in the Senate.

An article of faith

The Net Neutrality crowd is motivated by a desire to promote innovation. They firmly believe that such things as blogs, blog aggregators, on-line auctions and dating services like MySpace are innovative. While it’s not fruitful to argue with this point of view, its background is questionable.

The early Internet, you see, was justified as a playpen for experimentation with packet switching. Kahn and Cerf argued that keeping traffic control out of the network interior would enable researchers to experiment with protocols that could be easily implemented outside the network. They were motivated by the belief that Ethernet made traffic management moot, so the only field where they could hope to distinguish themselves was in the protocols and applications that used the Ethernet. They wanted the Internet to be Ethernet on a larger scale.

This dubious assumption gave rise to a religious dogma: “The ‘smarter’ the network, the less innovation on top of it.” I recently encountered this on the Lessig Blog, a veritable wonderland of Kool-Aid consumption, and you can find similar sentiments on David Isenberg’s blog, where he promotes his idea of “Stupid Networks” as great stimulators of innovation. Isenberg once worked for a phone company, but not in a technical capacity.

This article of religious faith isn’t provable. Let me show you why with an example.

The old Ethernet was a simple, dumb network. It provided a single speed and a single priority, leaving access control to the end stations through a totally distributed CSMA/CD (carrier sense, collision detection) scheme. As I said, it was the architectural model that Kahn and Cerf borrowed for TCP/IP.

But the marketplace was offered a choice with the New Ethernet, the one that used active switches, twisted-pair and fiber optic cable, and multiple speeds. It centralized access to the network inside network switches instead of in end-user nodes. It offered VLAN overlays. The New Ethernet killed the Old Ethernet, completely and utterly.
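For the curious, here’s a little slotted-time simulation of why the shared medium lost. The parameters are invented for illustration, but the math is the classic random-access result:

```python
import random

random.seed(1)

def shared_medium_throughput(stations, p_attempt, slots=10_000):
    """Slotted toy model of the old shared-bus Ethernet: a slot carries
    a frame only if exactly one station transmits; two or more talkers
    collide and the slot is wasted."""
    ok = 0
    for _ in range(slots):
        talkers = sum(random.random() < p_attempt for _ in range(stations))
        ok += (talkers == 1)
    return ok / slots

# Lightly loaded bus: collisions are rare, the shared medium does fine.
print("light load:", shared_medium_throughput(stations=4, p_attempt=0.05))

# Even at the best possible attempt rate (p = 1/N), a random-access
# shared medium tops out near 1/e, about 37% utilization; the rest is
# lost to collisions and idle slots. A switch queues frames per port,
# runs full duplex, and never collides, so every slot can carry a frame.
print("best case:", shared_medium_throughput(stations=20, p_attempt=1/20))
```

That ceiling, not aesthetics, is what killed the Old Ethernet.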

Along comes WiFi, offering still more intelligent network services than even the New Ethernet. It does things that make the Internet shudder, such as mobility, and uses obscure features of the IP suite to prioritize traffic.

Does WiFi nurture innovation? Clearly it does, as it makes the entire realm of mobility-enabled applications possible and does cool things for voice and video.

So a careful look at the historical record says, no, dumb networks don’t promote innovation, they circumscribe it to the class of applications they can support. Like anything else in engineering, we should place network controls where they can do the most good, not where they can do the least harm.

Religion is not a good guide to engineering, folks, logic and evidence work much better.

Architecture of the Internet

Fans of network regulation wax teary-eyed over the perfection of Internet architecture as they understand it, but the inventors themselves are much less sentimental. Here’s RFC1958, Architectural Principles of the Internet, an argument for constant change:

In searching for Internet architectural principles, we must remember that technical change is continuous in the information technology industry. The Internet reflects this. Over the 25 years since the ARPANET started, various measures of the size of the Internet have increased by factors between 1000 (backbone speed) and 1000000 (number of hosts). In this environment, some architectural principles inevitably change. Principles that seemed inviolable a few years ago are deprecated today. Principles that seem sacred today will be deprecated tomorrow. The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely.

The purpose of this document is not, therefore, to lay down dogma about how Internet protocols should be designed, or even about how they should fit together. Rather, it is to convey various guidelines that have been found useful in the past, and that may be useful to those designing new protocols or evaluating such designs.

A good analogy for the development of the Internet is that of constantly renewing the individual streets and buildings of a city, rather than razing the city and rebuilding it. The architectural principles therefore aim to provide a framework for creating cooperation and standards, as a small “spanning set” of rules that generates a large, varied and evolving space of technology.

Some current technical triggers for change include the limits to the scaling of IPv4, the fact that gigabit/second networks and multimedia present fundamentally new challenges, and the need for quality of service and security guarantees in the commercial Internet.

As Lord Kelvin stated in 1895, “Heavier-than-air flying machines are impossible.” We would be foolish to imagine that the principles listed below are more than a snapshot of our current understanding.

Damn that must make Alyssa Milano’s head hurt.

Yoo vs. Crawford on Regulation

PBS’s smarmy NOW program has a mini-debate on its web site between law professors Christopher Yoo and Susan Crawford, in which they were asked five questions about Internet regulation. It’s pretty interesting to note that Crawford’s answers are highly emotional while Yoo’s are factual and well-reasoned. Some might say that’s the feminine style vs. the masculine style, but I think it’s a question of competence. Crawford is bluffing because she doesn’t understand the technical issues, while Yoo is telling it like it is.

Rethinking the design of the Internet

Rethinking the design of the Internet: The end to end arguments vs. the brave new world by David D. Clark and Marjory S. Blumenthal deals with the end-to-end concept of network architecture and its relevance to future networks.

The simple service model of the Internet (called “best effort delivery”) makes no guarantee about the throughput that any particular application will achieve at any moment. Applications such as file transfer, Web access, or e-mail are tolerant of fluctuations in rate—while a user may be frustrated by a slow delivery, the application still “works.” Today, a new set of applications is emerging, typified by streaming audio and video, that appear to demand a more sophisticated Internet service that can assure each data stream a specified throughput, an assurance that the best effort service cannot provide. Different approaches are possible, beginning with (re)design of applications to operate using only the current best effort service, perhaps by dynamically adjusting the fidelity of the transmitted information as the network throughput varies. At least some application designers reject this limitation on what they could design. Another approach would be to add new data transport services in the core of the network that provide predictable throughput and bounded delays, and there have been proposals along these lines. However, the Internet Service Providers (see below) have not so far been willing to provide these new services. As a result, application builders have adopted the strategy of installing intermediate storage sites that position the streaming content close to the recipient, to increase the chance of successful delivery. Thus, unlike a simple end to end structure, the design of these new applications depends on a two-stage delivery via these intermediate servers.

Yeah, those pesky real-time applications “appear” to have special needs, and the lazy old ISPs haven’t been willing to meet them.

Ahem.
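For what it’s worth, the fidelity-adjustment strategy Clark and Blumenthal describe is simple enough to sketch. The bitrate ladder and headroom factor below are hypothetical:

```python
# Toy adaptive-fidelity picker: choose the highest encoding that fits
# under the currently measured throughput. Bitrates in kbit/s are
# invented for illustration.
LADDER = [64, 128, 256, 512, 1024]  # available encodings, low to high fidelity

def pick_bitrate(measured_kbps, headroom=0.8):
    """Return the highest rung that fits within `headroom` of measured
    throughput; fall back to the lowest rung when nothing fits."""
    budget = measured_kbps * headroom
    fitting = [r for r in LADDER if r <= budget]
    return fitting[-1] if fitting else LADDER[0]

for measured in (100, 400, 2000, 40):
    print(measured, "kbps ->", pick_bitrate(measured), "kbps encoding")
```

Workable for stored streams, but it degrades quality instead of guaranteeing it — which is the whole argument for QoS services in the core.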

Open Letter to the Neutrality Regulators

Dear Neuts,

Telco abuse of Internet users is hypothetical, not real. There has only been one documented instance of arbitrary service blockage in the USA, the Madison River case which the FCC promptly stopped. In response to this case, they devised the “four freedoms”.

These principles are part of the COPE Act, which also gives the FCC the authority to levy fines up to $500,000 per infraction. The Stevens Senate bill directs the FCC to do a study of provider abuses and report back.

The sensible approach to regulation, in this sphere or in others, is to draft broad principles, set up an enforcement regime, and let case law evolve.

Once we see some actual abuse that isn’t covered by these provisions, Congress will still be in the business of drafting laws and we’ll be able to go ask for whatever approach is necessary to solve the real problems. What you people are proposing is pre-emptive legislation that will most likely do to the Internet what Bush’s pre-emptive war did to Iraq.

I appreciate your sentiments, and firmly believe that you have good intentions. But I’ve worked with legislative bodies before and have seen the unintended consequences that can flow from legislation that’s guided by too much emotion and not enough information.

There’s no immediate crisis here, so the best course of action is simply to gather information. We all want a “neutral” network that enables innovation to flourish, and the fact that we’ve never really had one shouldn’t discourage us.

Networks are technical artifacts that improve with time, and even the Internet is not so perfect that we should freeze it.

In fact, the architects of the Internet made several design mistakes because of the model they chose to imitate, the early Ethernet. That system tried to do bandwidth management in a fully distributed manner with a clunky scheme of medium sensing, collision detection, and backoff. The Internet’s analogues are TCP’s slow start, congestion detection, and backoff.

The early Ethernet model doesn’t work well under high load, and was abandoned in the late 80s after my colleagues and I on the IEEE 802.3 standards committee devised a scheme that ran Ethernet over twisted pair wiring into a hub or switch. It turns out that you can manage bandwidth better from a central point that knows who wants to do what when than you can in a totally random, distributed system. The system we devised is the Ethernet that we all use today.
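For the record, here’s the flavor of scheme we abandoned — a sketch of truncated binary exponential backoff, with illustrative slot counts:

```python
import random

random.seed(7)

def backoff_slots(collisions, max_exp=10):
    """Truncated binary exponential backoff, as in classic CSMA/CD
    Ethernet: after the n-th collision, wait a random number of slot
    times drawn uniformly from [0, 2**min(n, max_exp) - 1]."""
    return random.randrange(2 ** min(collisions, max_exp))

# Each collision doubles the average wait, so under heavy load every
# station spends more and more time backing off instead of sending --
# the same shape as TCP's congestion backoff.
for n in range(1, 6):
    print("after collision", n, "wait", backoff_slots(n), "slots")
```

Elegant in theory, random in practice — a central point of control doesn’t have to guess.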

When we re-designed the Ethernet, we faced the same aesthetic criticisms that the neutrality people are heaping on the phone companies today: our system wasn’t democratic, it wasn’t reliable, it couldn’t ensure fairness, and it wasn’t going to be cool with the FCC. But all those criticisms turned out to be groundless, and we now have 10 Gigabit Ethernet running on switch-based systems.

We fought the same battle when we designed the WiFi system. One faction wanted an Access Point-based system and another wanted an Aloha system that was fully distributed and all that. Once again, the network engineering work showed that an Access Point provided better performance to everyone than a distributed system that was constantly flailing for access to the network.

It’s about time that the architecture of the Internet was updated to reflect modern practices in network design where traffic is classified and moderated in points of intelligence that are distributed around the network. This sort of re-design, which was started by the MPLS and DiffServ people, will ultimately produce a network that can do more things better for more applications than the dated Vint Cerf design where the guy with the fattest pipe controls the network.
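Classification isn’t exotic, either; any application can ask for it. Here’s a minimal sketch of DiffServ marking from ordinary socket code (Linux-flavored — `IP_TOS` availability is platform-dependent, and routers are free to ignore the marking):

```python
import socket

# DiffServ marking sketch: set the DSCP code point in the IP header's
# old TOS byte so DiffServ-aware routers can classify this traffic.
EF = 46             # "Expedited Forwarding" per RFC 3246, typical for VoIP
tos_byte = EF << 2  # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
print("TOS byte set to:", sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

The hard part was never the marking; it’s getting the network interior to honor it, which is exactly what MPLS and DiffServ deployments do.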

The original Internet was a fine piece of work given the limited knowledge of packet-switched networks in 1980, but we’ve learned a lot since then, and consumers will benefit if some of this learning can be passed on in the form of new networks with better performance and richer choices.

I think my networks are better than the old collision-based Ethernet that was the model for the TCP Internet, and a new Internet based on new LANs and WLANs would be better than the old one.

I’m biased, but so are that Cerf character and his fellow-traveler Sir Berners-Lee.

Whether you agree with me or not, I think the moral and decent thing to do is to offer consumers the opportunity to see which one they like better. That’s what we did with Ethernet, and the market responded strongly. There’s no shame in designing a system that works pretty well in its day but is ultimately replaced by a better one.

That’s the whole story of technology, so let’s not pre-judge the outcome.

Best wishes,

The Network Bunny

PS: Go read the Heritage Foundation’s paper on this question. It’s very edifying.