Yoo vs. Crawford on Regulation

PBS’s smarmy NOW program has a mini-debate on its web site between law professors Christopher Yoo and Susan Crawford, in which they were asked five questions about Internet regulation. It’s pretty interesting to note that Crawford’s answers are highly emotional while Yoo’s are factual and well-reasoned. Some might say that’s the feminine style vs. the masculine style, but I think it’s a question of competence. Crawford is bluffing because she doesn’t understand the technical issues, while Yoo is telling it like it is.

Rethinking the design of the Internet

Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world, by David D. Clark and Marjory S. Blumenthal, deals with the end-to-end concept of network architecture and its relevance to future networks.

The simple service model of the Internet (called “best effort delivery”) makes no guarantee about the throughput that any particular application will achieve at any moment. Applications such as file transfer, Web access, or e-mail are tolerant of fluctuations in rate—while a user may be frustrated by a slow delivery, the application still “works.” Today, a new set of applications is emerging, typified by streaming audio and video, that appear to demand a more sophisticated Internet service that can assure each data stream a specified throughput, an assurance that the best effort service cannot provide. Different approaches are possible, beginning with (re)design of applications to operate using only the current best effort service, perhaps by dynamically adjusting the fidelity of the transmitted information as the network throughput varies. At least some application designers reject this limitation on what they could design. Another approach would be to add new data transport services in the core of the network that provide predictable throughput and bounded delays, and there have been proposals along these lines. However, the Internet Service Providers (see below) have not so far been willing to provide these new services. As a result, application builders have adopted the strategy of installing intermediate storage sites that position the streaming content close to the recipient, to increase the chance of successful delivery. Thus, unlike a simple end to end structure, the design of these new applications depends on a two-stage delivery via these intermediate servers.

Yeah, those pesky real-time applications “appear” to have special needs, and the lazy old ISPs haven’t been willing to meet them.

Ahem.
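Snark aside, the “adjust the fidelity” workaround Clark and Blumenthal mention is real engineering, and there’s no magic in it. Here’s a minimal sketch of the idea; the rate ladder and the thresholds are my own inventions, not anything from the paper:

```python
# Toy adaptive-fidelity sender: step the encoding rate up or down as
# measured throughput varies, which is how streaming applications cope
# with a best-effort network. The ladder and thresholds are made up.

RATE_LADDER_KBPS = [200, 500, 1000, 2500]       # hypothetical encodings

def pick_rate(measured_kbps, index):
    """Return a new ladder index given recently measured throughput."""
    if measured_kbps < RATE_LADDER_KBPS[index] * 1.1 and index > 0:
        return index - 1                        # congestion: drop fidelity
    if (index + 1 < len(RATE_LADDER_KBPS)
            and measured_kbps > RATE_LADDER_KBPS[index + 1] * 1.5):
        return index + 1                        # headroom: raise fidelity
    return index

if __name__ == "__main__":
    idx = 0
    for sample in (300, 900, 4000, 4000, 600, 150):   # fake measurements
        idx = pick_rate(sample, idx)
        print(f"throughput {sample:>4} kbps -> encode at {RATE_LADDER_KBPS[idx]} kbps")
```

It works, after a fashion, but notice what it concedes: the application chases the network instead of the network serving the application.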

Open Letter to the Neutrality Regulators

Dear Neuts,

Telco abuse of Internet users is hypothetical, not real. There has only been one documented instance of arbitrary service blockage in the USA, the Madison River case, which the FCC promptly stopped. In response to this case, the FCC devised the “four freedoms”.

These principles are part of the COPE Act, which also gives the FCC the authority to levy fines of up to $500,000 per infraction. The Stevens Senate bill directs the FCC to study provider abuses and report back.

The sensible approach to regulation, in this sphere or in others, is to draft broad principles, set up an enforcement regime, and let case law evolve.

Once we see some actual abuse that isn’t covered by these provisions, Congress will still be in the business of drafting laws and we’ll be able to go ask for whatever approach is necessary to solve the real problems. What you people are proposing is pre-emptive legislation that will most likely do to the Internet what Bush’s pre-emptive war did to Iraq.

I appreciate your sentiments, and firmly believe that you have good intentions. But I’ve worked with legislative bodies before and have seen the unintended consequences that can flow from legislation that’s guided by too much emotion and not enough information.

There’s no immediate crisis here, so the best course of action is to simply gather information. We all want a “neutral” network that enables innovation to flourish, and the fact that we’ve never really had one shouldn’t discourage us.

Networks are technical artifacts that improve with time, and even the Internet is not so perfect that we should freeze it.

In fact, the architects of the Internet made several design mistakes because of the model they chose to imitate, the early Ethernet. That system tried to do bandwidth management in a fully distributed manner with a clunky scheme of medium sensing, collision detection, and backoff. The analogues in the Internet are TCP’s slow start, congestion detection, and backoff.

The early Ethernet model doesn’t work well under high load, and was abandoned in the late 80s after my colleagues and I on the IEEE 802.3 standards committee devised a scheme that ran Ethernet over twisted pair wiring into a hub or switch. It turns out that you can manage bandwidth better from a central point that knows who wants to do what when than you can in a totally random, distributed system. The system we devised is the Ethernet that we all use today.
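To put a number on why the central point wins, here’s a toy simulation of my own, a slotted-medium abstraction rather than anything from the 802.3 spec. Even an ideally tuned random-access scheme wastes slots on collisions and silences; the arbiter never does:

```python
import random

# Toy slotted-medium simulation, mine and not the 802.3 spec: N
# always-busy stations contend for one wire. Distributed mode models an
# ideally tuned random-backoff scheme (each station transmits with
# probability 1/N; a slot succeeds only if exactly one talks). Central
# mode models a switch granting the wire round-robin: no wasted slots.

def distributed_utilization(stations, slots=10000):
    wins = 0
    for _ in range(slots):
        talkers = sum(random.random() < 1 / stations for _ in range(stations))
        wins += (talkers == 1)          # exactly one talker = a good slot
    return wins / slots

def central_utilization(stations):
    return 1.0                          # every slot carries exactly one frame

if __name__ == "__main__":
    random.seed(1)
    for n in (2, 10, 50):
        print(f"{n:>2} stations: distributed {distributed_utilization(n):.0%}, "
              f"central {central_utilization(n):.0%}")
```

In this model the perfectly tuned distributed scheme tops out near 1/e, about 37 percent utilization, while the arbiter delivers a frame every slot. Real CSMA/CD did better than these toy numbers thanks to carrier sensing, but the shape of the problem is the same.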

When we re-designed the Ethernet, we faced the same aesthetic criticisms that the neutrality people are heaping on the phone companies today: our system wasn’t democratic, it wasn’t reliable, it couldn’t ensure fairness, and it wasn’t going to be cool with the FCC. But all those criticisms turned out to be groundless, and we now have 10 Gigabit Ethernet running on switch-based systems.

We fought the same battle when we designed the WiFi system. One faction wanted an Access Point-based system and another wanted an Aloha system that was fully distributed and all that. Once again, the network engineering work showed that an Access Point provided better performance to everyone than a distributed system that was constantly flailing for access to the network.

It’s about time that the architecture of the Internet was updated to reflect modern practices in network design where traffic is classified and moderated in points of intelligence that are distributed around the network. This sort of re-design, which was started by the MPLS and DiffServ people, will ultimately produce a network that can do more things better for more applications than the dated Vint Cerf design where the guy with the fattest pipe controls the network.
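For the record, “classification” is mundane stuff at the code level. DiffServ just marks six bits in the IP header and lets routers queue accordingly. A minimal sketch using the standard Expedited Forwarding code point; the address and port are placeholders, it assumes Linux socket behavior, and whether any given network honors the mark is a policy question, not a code one:

```python
import socket

# Mark a UDP socket's traffic with the DiffServ EF (Expedited
# Forwarding) code point, the standard low-latency class (RFC 3246).
# Works on Linux; IP_TOS handling varies by platform.

DSCP_EF = 46
TOS_EF = DSCP_EF << 2          # DSCP sits in the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Datagrams sent on this socket now carry the EF mark; routers that
# implement DiffServ can queue them ahead of best-effort traffic.
sock.sendto(b"voice frame", ("192.0.2.1", 5004))    # placeholder peer
```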

The original Internet was a fine piece of work given the limited knowledge of packet-switched networks in 1980, but we’ve learned a lot since then, and consumers will benefit if some of this learning can be passed on in the form of new networks with better performance and richer choices.

I think my networks are better than the old collision-based Ethernet that was the model for the TCP Internet, and a new Internet based on new LANs and WLANs would be better than the old one.

I’m biased, but so are that Cerf character and his fellow-traveler Sir Berners-Lee.

Whether you agree with me or not, I think the moral and decent thing to do is to offer consumers the opportunity to see which one they like better. That’s what we did with Ethernet, and the market responded strongly. There’s no shame in designing a system that works pretty well in its day but is ultimately replaced by a better one.

That’s the whole story of technology, so let’s not pre-judge the outcome.

Best wishes,

The Network Bunny

PS: Go read the Heritage Foundation’s paper on this question. It’s very edifying.

This is what started it all

FYI, here’s the article that put a bee in the bonnet of the special interests who’re trying to shackle the Internet with their so-called net-neutrality regulations:

William L. Smith, chief technology officer for Atlanta-based BellSouth Corp., told reporters and analysts that an Internet service provider such as his firm should be able, for example, to charge Yahoo Inc. for the opportunity to have its search site load faster than that of Google Inc.

Or, Smith said, his company should be allowed to charge a rival voice-over-Internet firm so that its service can operate with the same quality as BellSouth’s offering.

Network operators can identify the digital “packets” of content moving through their wires from sites and services and can block some or put others at the head of the stream.

But Smith was quick to say that Internet service providers should not be able to block or discriminate against Web content or services by degrading their performance.

The complaint from regulation-happy special interests is that the telcos want to make the Internet like cable TV, to which I will simply say that the Internet has already internalized that model, and quite successfully. The dominant web services companies make their money the same way TV does, from a combination of ads, subscriptions, and pay-per-view. Google sells ads just like ABC does, iTunes is pay-per-view, and the New York Times sells subscriptions to Tom Friedman and Maureen Dowd. So where does all this high-and-mighty “we’re so much better than TV” crap come from, some delusion that Internet access is a gift from the Easter Bunny?

There’s a real public benefit to an accelerated service offering inasmuch as it’s a cheap way to level the playing field between rich incumbents like Google, with fat pipes and server farms all over the net, and startups with lean budgets. If people like Gigi Sohn were really the friend of the entrepreneur, and had even a basic knowledge of how the Internet actually works rather than just a sentimental fantasy about a democratic net, they would applaud BellSouth’s ideas.

The 21st Century Network

Adam Thierer has a first-person account of Verizon’s new network, the one the hippies are screaming about:

Folks, this is serious broadband competition. For those critics who say that the rivalry between two competitors will not be intense, I say come visit my neighborhood. You’ll see Verizon showering people with free gifts (flowers for women, baseballs for kids, and even free gas at local gas stations!) to try to build name recognition and win new subscribers. And you’ll see Cox responding with flyers and e-mails about new services that are coming in an effort to head off this threat. And then you’ll see Verizon flyers and ads responding to those Cox flyers and ads. And you’ll see both of them cutting prices left and right to get customers or win them back.

(Meanwhile, wireless lurks as an alternative that could decimate both cable and telco wireline providers if they can just get the broadband part of the puzzle solved. Rupert and the boys over at DirecTV are in the process of rolling out an ambitious HDTV plan. Can more robust, reliable satellite broadband services be far behind? If DirecTV ever merges with EchoStar and combines all that satellite capacity, look out. That’s when things will get really exciting.)

The real question now is not whether broadband competition works, it is whether or not it is sustainable among more than two players per region. I am one of just 3 or 4 people in my neighborhood who have signed up for FIOS so far. Verizon is going to need to get A LOT more subscribers AND SOON. If they get caught up in a price war with cable in the short term they could be in serious trouble because the fixed deployment and installation costs associated with FIOS are killing them. They need customers and they need them now. At a minimum, Congress needs to enact local franchising relief and make sure that burdensome state or local regulation does not stand in the way of Verizon and other telcos rolling out these exciting new services. The market challenge they face is stiff enough that they don’t need such artificial regulatory impediments to success standing in their way.

It sounds like Verizon doesn’t have their act together on customer service, with all those union workers, but the service is something I’d buy if I had the chance. Unfortunately, fascist forces want to deprive me of the opportunity.

Neutral Net? Who Are You Kidding?

Wired has a half-good article on Internet regulation today, Neutral Net? Who Are You Kidding?

“Net neutrality” has many meanings, but in the broadest sense refers to a cooperative principle whereby everyone on the net is supposed to make the same effort to help deliver everyone else’s traffic. In fact, pushing bits through the network-of-networks that makes up the internet is an anarchic business and frequently an ugly one. ISPs must often fight to get their data carried on neighboring networks, and those who are willing to pay extra reap immediate benefits in the form of faster and better service. Vast amounts of traffic are rerouted and blocked every day. The system, while successful overall, seems to ride on the very edge of chaos, insiders say.

They ultimately get around to discussing the priority thing, but they more or less mangle it, so here we go again:

A truly neutral network provides each application with the type of service it requires: low latency for voice, low jitter for video, and high throughput for downloads, file transfers, and web browsing. The Internet can only accomplish this by assigning priorities to these different traffic types. These priority assignments don’t have the effect of making some web sites faster than others, as all web traffic has the same priority.

This does not have anything to do with blocking or degrading specific web sites, really it don’t, and it’s quite consistent with the meaning of “net neutrality” originally used by Prof. Tim Wu in the paper cited below.

Priority traffic is not the end of the world; in fact, it’s a step toward a more application-neutral Internet.
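For the skeptics, here’s what class-based priority amounts to inside a scheduler. This is a toy sketch with made-up class names, not anybody’s production code; notice that every web flow lands in the same class, which is exactly why priority doesn’t pick winners among web sites:

```python
from collections import deque

# Toy strict-priority scheduler: three traffic classes, drained in
# priority order. All web traffic shares one class, so prioritizing
# voice over bulk doesn't make one site faster than another.

CLASSES = ["voice", "video", "bulk"]          # highest priority first
queues = {c: deque() for c in CLASSES}

def enqueue(traffic_class, packet):
    queues[traffic_class].append(packet)

def dequeue():
    """Send from the highest-priority queue that has anything waiting."""
    for c in CLASSES:
        if queues[c]:
            return queues[c].popleft()
    return None

if __name__ == "__main__":
    enqueue("bulk", "web page A")
    enqueue("bulk", "web page B")
    enqueue("voice", "VoIP frame")
    while (pkt := dequeue()) is not None:
        print("sending:", pkt)    # voice first, then the pages in arrival order
```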

Saving the Internet from Web tedium

What’s really going on with “net neutrality” regulations is that the Big Content companies are afraid the Internet will become more a medium of communication than a means of delivering canned content, and that will eat away at their profits. So they’ve concocted this whole FUD campaign that dishonestly makes Quality of Service enhancements for communications applications a threat to web access.

The Internet is more than the Web, and beefing up the communications side makes the content less compelling. Instead of whining about the Telcos, Big Content should try to be more interesting.

John Carroll is not a moron

He’s pretty clear on why “net neutrality” as imagined by leftwing blogs and pandering politicians is a big, steaming load of crap:

That’s a bit of an exaggeration. Broadband providers, most notably AT&T, aren’t suggesting that they will “control what we access over the Internet.” Rather, they are saying that some content may be given “fast track” access into the home, access to which is contingent on a fee paid either by the provider of content or the consumer.

To consider why this might be useful, imagine 5-10 years down the road when the average size of the datapipe into the home is in the 25-50 megabit range (placing us still behind the South Koreans, but oh well). VoIP service might start to get enhanced by video that is not just the simple 320 by 240 image we are used to in the IM world, but in the Standard Definition, or even the High Definition range.

Given the time criticality of such data, not to mention the fact that such usage will vastly outweigh, from a network load standpoint, the simple text and image traffic that constitutes a large percentage of the current Internet, a fast track might not be just useful, but fair to boot. We charge ten ton trucks more for access to a toll road than standard-sized automobiles because of the extra wear and tear they cause on these roads. Why shouldn’t we make “bandwidth hogs” pay more?

Net neutrality = shifting communications costs from content retailers to consumers. Brilliant.
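Carroll’s truck analogy survives a back-of-the-envelope check, too. The bitrates below are ballpark figures of my own, not his, but they make the load disparity obvious:

```python
# Rough shares of a hypothetical 25 Mbit/s home pipe. All of the
# application bitrates are ballpark guesses for illustration.

PIPE_MBPS = 25.0

APPS_MBPS = {
    "web browsing (average)": 0.5,
    "VoIP call": 0.1,
    "SD video call": 1.5,
    "HD video call": 8.0,
}

for app, rate in APPS_MBPS.items():
    print(f"{app:<24} {rate:>4.1f} Mbps = {rate / PIPE_MBPS:.0%} of the pipe")
```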

Adam Cohen drinks the Kool Aid

The New York Times isn’t what it used to be. Rocked by scandal over the made-up reporting of Jayson Blair, torn apart by the dramatic ouster of Howell Raines, and shaken-up by Judith Miller’s megaphoning the Bush Administration’s fantasies about Iraq’s nuclear program, it increasingly relies on sensationalized, drama-queen reporting and opinion to hold on to a piece of market share. The most recent example of the Times’ descent into rank hysteria is a column today by Adam Cohen on the pending destruction of the World Wide Web:

Sir Tim Berners-Lee, the British computer scientist who invented the Web in 1989, envisioned a platform on which everyone in the world could communicate on an equal basis. But his vision is being threatened by telecommunications and cable companies, and other Internet service providers, that want to impose a new system of fees that could create a hierarchy of Web sites. Major corporate sites would be able to pay the new fees, while little-guy sites could be shut out.

This is bullshit, as anyone who’s read this blog before surely knows. There is no plan on anybody’s part to create a “hierarchy of web sites”; there is simply a plan to make the Internet, not the Web, better able to meet the needs of real-time applications such as telephony and live streaming of TV programs. This plan actually makes the Internet more “neutral” than it was at the time of its creation as an e-mail carrier for the nation’s weapons researchers, and it does nothing at all to web sites other than potentially make more bandwidth available to them under the existing structure for pricing and selling Internet hook-ups.

Cohen is a member of the Times’ editorial board, and that fact should tell you a lot about that paper’s slide into mediocrity.

Tiered peering and business models

This post by Dave Siegel on tiered peering addresses some of the most high-anguish issues in a truly application-neutral Internet:

The issue of peering is this. If ATT demands that content providers pay them in order to put them into the premium class (similar to how our IP-VPN customers pay us to put traffic in the premium queue, but on a whole different level), then that content provider has to connect directly to the ATT core (i.e. become a direct customer of theirs). That is, unless QoS/CoS is enabled across the existing peering connection between ATT and Global Crossing, in which case a content provider could connect to Global Crossing and we could transport the packets between the content provider and ATT at a premium level and hand them to ATT at a premium level.

The kicker is that today peering amongst the top providers (known as the Tier 1s, not to be confused with the other use of “tier” in this post) is settlement free. So here is where the business issue enters the equation. If the Best Effort Internet traffic remains settlement free, what is the charge for the premium traffic, and how will it be billed? By the bit? In both directions? Do we bill ATT for the premium traffic that we receive from them and vice-versa? But what about VoIP, where the traffic is roughly equal bi-directionally? How does that help ATT if the billing is a wash? Believe it or not, these are the same discussions that have raged in the Internet peering debate since the Internet went fully commercial in 1995. Ultimately, we all decided that peering should remain settlement free, although some providers do require settlements if the ratio of in to out traffic is too high, because of the unfavorable economics.

What if ATT billed the content provider directly and Global Crossing was just a pass-through, with a contract with ATT to ensure that we didn’t give them any premium traffic that wasn’t being billed? Then there are the billing mechanics to contend with…ATT has to know which content provider to bill for what amount of traffic, since they will probably get multiple content providers sending them premium traffic over the same link.

There is also the technical issue of ensuring that the content provider that pays for the priority ride on ATT’s network gets it in both directions. In order for that to happen, ATT has to identify the content originating at each broadband connection and mark it as premium traffic. This is not so easy in current edge equipment…it is not only processor intensive but it requires a lot of administrative overhead. The lists of IP addresses that identify content providers and their applications will need to be updated across all customer connections every time a provider connects or disconnects from this premium service.

Okay, for now let’s assume that we can get that all sorted out so that ATT can collect additional revenue, either by billing us and Global Crossing passing that cost on to the content provider, or by collecting it directly from the content provider.

How is the content provider, who is either still trying to turn a profit or turning a profit (but not an unreasonable one), going to pay this added business expense? They probably can’t absorb it in their current business model, and even if they can, to maintain margins they will have to raise prices. Since most content revenues are derived from advertising (see Andrew Odlyzko’s paper Content is Not King), they will have to raise their ad rates. I’m not sure advertisers will pay more; my understanding is that it’s a pretty competitive business as it is.

While other technologies like ye old Infranet initiative or IMS may have something to say about improving the billing mechanics around this, the business implications are still something to be concerned about.
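The administrative overhead Siegel describes is easy to picture: every edge box needs a current table mapping paying content providers’ addresses to the premium class, and that table changes every time a provider signs up or quits. Here’s a toy sketch of the lookup, with invented prefixes, just to make the mechanism concrete:

```python
import ipaddress

# Toy edge classifier: traffic from a paying content provider's prefix
# rides premium; everything else is best effort. The operational pain
# is keeping this table current on every edge port.

PREMIUM_PREFIXES = [                            # invented examples
    ipaddress.ip_network("198.51.100.0/24"),    # "content provider A"
    ipaddress.ip_network("203.0.113.0/25"),     # "content provider B"
]

def classify(src_ip: str) -> str:
    addr = ipaddress.ip_address(src_ip)
    return "premium" if any(addr in net for net in PREMIUM_PREFIXES) else "best-effort"

if __name__ == "__main__":
    for ip in ("198.51.100.7", "203.0.113.200", "192.0.2.9"):
        print(ip, "->", classify(ip))           # only the first is premium
```

The lookup itself is cheap; the provisioning churn across thousands of edge ports is where the cost lives.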

If you don’t understand it, you probably aren’t entitled to an opinion on the various attempts at regulating Internet service levels we’re contending with these days.

H/T Lippard Blog