Net Neutrality: Back to the Drawing Board

Public Knowledge has been one of the knuckle-draggers puffing the net neutrality issue out of all sane proportion, so it was surprising to read a rational article on the subject on their blog. Written by guest blogger Phil Weiser, it valiantly attempts to separate fact from fiction. See Net Neutrality: Back to the Drawing Board:

The real concern of net neutrality advocates should not be insisting on a single-tiered Internet that—as those who have heard of Akamai already know—is a fantasy. (For those who are unfamiliar with Akamai, its caching services provide enhanced performance to the big companies who can afford to pay for them.) The real concern is that with only two broadband providers offering service, there remains a risk of anticompetitive conduct that understandably makes policymakers nervous. The question is thus, in an environment where there are real benefits from improving network performance and possible efficiencies from alternative pricing strategies, what should policymakers do? For Rob Atkinson and me, the answer is to embrace a third way—i.e., a course different from the “do little” approach of the Stevens bill and the “no tiering” approach of the Wyden bill. In particular, we advocate transparency requirements (to enable consumers to be their own protectors); a competition-policy-based regime that would provide after-the-fact oversight of discriminatory conduct (evaluating whether a legitimate business reason existed for it); and a requirement that the best-efforts Internet continue to evolve (in bandwidth) so those unable to pay for prioritization are still able to develop and deploy new services. For the details on our proposal, see here.

Comments are predictably bone-headed, of course, except for mine.

Tech companies to Senate: No Neut Regs

This letter from 100+ tech companies to the Senate on the neut regs is great.

First, the Internet has benefited greatly from the relative absence of regulatory restrictions, which has allowed content businesses to grow and prosper. Congress has wisely refrained from burdening this still-evolving medium with regulations, except in those cases where the need for policy action has been clear, and it can be narrowly tailored. This is not the time to deviate from this posture.

Second, it is too soon to enact network neutrality legislation. The problem that the proponents of network neutrality seek to address has not manifested itself in a way that enables us to understand it clearly. Legislation aimed at correcting a nebulous concern may have severe unintended consequences and hobble the rapidly developing new technologies and business models of the Internet.

Third, enacting network neutrality “placeholder” laws could have the unintended effect of dissuading companies from investing in broadband networks.

If our industry leaders are heard, the crazy regulatory regime will not be enacted. Said leaders consist of Cisco, 3M, Nortel, Motorola, and a hundred others.

AeA report: False and Misleading

The crack research team at the American Electronics Association has issued a report on net neutrality that sets a new bar for rank absurdity in public policy discourse. The report simply parrots the most outrageous, counterfactual, and ahistorical claims made by the professional protest groups that have assembled to promote this dubious cause, and then jumps through hoops to argue for a new and unprecedented network regulation regime. It’s amazing that the primary trade group for the electronics industry would employ a “research” staff that’s clearly out of touch with reality and the concerns of its membership, and I have to believe that heads are going to roll over this abomination. Here’s most of the report with my comments interspersed and some redundancies removed. It makes for a good laugh.

The AeA research team produces regular reports on the most timely and relevant issues to the high-tech industry and to U.S. competitiveness in a global economy. We combine rigorous data with careful analysis to provide industry leaders and policymakers the information they need to assess the issue.

While this is certainly a timely issue, the report actually fails to provide any “rigorous data” or “careful analysis.” It makes a number of unsourced and unsupportable claims, states them hysterically, and leaps to an unwarranted conclusion. Read on and you’ll see.

Network neutrality is a wide ranging concept with many facets and many different groups trying to define what it means. Unfortunately, much of the current debate is being driven by network operators, resulting in a one-sided view, full of misleading information.

It seems to me that the pro-regulation side has done plenty of “driving” of this issue, from its original manufacture to supplying the broad set of false and misleading claims that we’re going to see here. Certainly, network operators and equipment manufacturers should have a voice in any new attempt to regulate their industry. This is a democracy and all voices should be heard, especially those of the experts.

This paper focuses on addressing these misperceptions and on the most contentious part of the debate, the discrimination of Internet traffic on the basis of source or ownership of content.

As exciting as this subject matter may be, it’s off to the side of the network neutrality debate as it’s been framed in the bills proposed by Snowe, Dorgan, Markey, and the rest of the pro-regulation coalition. Their bills ban the sale of enhanced services, such as the low-delay Quality of Service needed by telephony and live TV broadcasts, to residential Internet access customers. These are services that business and education can freely buy today, but which aren’t generally available to consumers. So right off the bat we can see that the AeA’s crack research team means to misframe the issue and deal with a strawman.

When the Internet was first built it was designed to be content neutral; its purpose was to move data from one place to another in a nondiscriminatory fashion regardless of who provided the original content.

When the Internet was first built, it was designed to be a playground for research on network protocols, not to be the final word on public networking. We’ve learned a lot from that research, mainly that the Internet lacked adequate mechanisms for fair access, congestion control, security, authentication, and Quality of Service. But this assertion is at best a red herring; whatever the principles were that guided the Internet at its inception, now that it’s a general purpose network used by a billion people outside the network research community, it should be guided by the needs of its current users, not the relative ignorance of its first wave of designers. And in any event, the Internet’s architecture has always recognized that all packets don’t have equal needs, which is why each Internet packet carries tags reporting its desired Class of Service.
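For the curious, that Class of Service tag is a real, programmable field in every IP header. Here is a minimal Python sketch, assuming a Linux host, that marks a UDP socket's traffic with the DSCP value commonly used for voice; the destination address and port are made up for illustration, and whether any network along the path honors the marking is up to the operators.

```python
# Minimal sketch: marking packets with a Class of Service (DSCP) value.
# Assumes a Linux host; the destination address and port are hypothetical.
import socket

DSCP_EF = 46                 # "Expedited Forwarding", commonly used for voice
tos_byte = DSCP_EF << 2      # DSCP occupies the upper six bits of the old ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Every datagram sent on this socket now carries the EF marking in its IP header.
sock.sendto(b"low-latency payload", ("192.0.2.10", 5004))
```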

Initially, the Federal Communications Commission (FCC) enforced this principle by requiring nondiscriminatory treatment by the telecom carriers, where content was delivered on a “best effort” basis, i.e., by treating all “packets” as relatively equal.

However, this changed in August 2005 when the FCC effectively removed the legal protection of content neutrality for all broadband Internet access providers.

This is total gibberish. “Best effort” delivery simply means that the network does not attempt to re-transmit lost or corrupted packets. The term comes from the design of the now-obsolete coax-cable Ethernets that were built at Xerox PARC. And it certainly has nothing to do with any notion of treating all packets as equal regardless of their requested Class of Service. And for as long as the commercial Internet has existed, packets have been routed differentially depending on source, destination, and paid service agreements between ISPs and NSPs. All routes are not equal, and they’re chosen based on who’s communicating with whom.

The FCC has never regulated the behavior of packet-switching networks. What it has done is regulate wires owned by monopoly telephone companies with respect to the source and destination end-points, which is a very different thing. The former FCC rules on DSL, for example, provided that independent ISPs could rent lines from the phone company at discount prices and connect them to their own equipment. These regulations – called “unbundling” – did not dictate how packets should be handled. And during the time that the DSL regulations were in place, similar services provided by Cable TV were not subject to “unbundling” rules. We found that Cable Internet was faster and cheaper than DSL, so the experiment with different regulations was terminated and DSL was re-regulated under Cable rules. This has nothing to do with preferred content.

Some broadband providers want to be able to offer priority service to those content providers who agree to pay an additional fee beyond what they already pay to access the Internet. Those who can afford to pay the fee would have their content moved to the front of the line.

These carriers claim that the next generation of Internet content (such as videos, voice over IP, real-time gaming, and distance learning) requires higher levels of speed and quality than other content, and as a result, must be prioritized ahead of other Internet traffic. To pay for this increased capacity, the network operators argue that they need additional revenue.

Notice the use of the term “content” here to describe things that are clearly “communication”. This is the essence of the confusion in net neutrality regulation. The old Internet was indeed a system for moving stored “content” from one site to another, whether in the form of e-mail or computer files. But the New Internet is emerging as a system where content has to share wires with real-time communication that can’t sit on the shelf for hours or minutes or even seconds before its delivery. Real-time gaming has a completely different set of communications requirements than BitTorrent downloads, and the network neutrality debate revolves around the question of providing each with the service it requires at a price that’s fair to all. This isn’t an empty carrier claim, it’s technical reality.

Countering this, Internet content providers and consumer groups state that they already pay billions of dollars to access the Internet. They are also concerned that telecom and cable operators, which dominate broadband Internet access with over 92 percent market share, will leverage their potential monopoly power to pick winners and losers in the highly competitive Internet content market. This environment has historically been quite egalitarian.

Yes, Virginia, we all know that Google pays to connect to the Internet, and their carrier agreements probably specify a peak level of bandwidth and little else. Does this mean that they’re automatically entitled to access higher levels of service beyond what they pay for? Perhaps, but that’s certainly not an obvious conclusion. The AeA is trotting out a big strawman here.

And the claim that the Internet is egalitarian is patently false. The more you pay, the more you get and there’s nothing equal about it.

There seems to be the perception that Internet companies (also called Internet content providers) and, to a lesser extent, Internet consumers are not paying their fair share to access the Internet. This perception is just wrong.

Actually, it’s plain right. A small fraction of Internet consumers – like 5% – use most of the bandwidth. As your granny pays the same monthly bill as these boys, there is in fact considerable inequity in the present billing system. Now one way to remedy this problem is to give low-volume users priority in access to network pipes and to give lower priority to heavy-volume users who pay the same price. This sensible approach is forbidden by heavy-handed network neutrality regulations.
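To make that remedy concrete, here is a minimal Python sketch of what such a scheme might look like. The two-queue design and the usage threshold are my own illustrative assumptions, not anything any ISP actually runs.

```python
# Minimal sketch (not any ISP's actual system): prioritize light users over heavy ones.
# The 1 GB threshold and the two-queue design are illustrative assumptions.
from collections import deque

HEAVY_BYTES = 1_000_000_000           # assumed threshold for a "heavy" user this period
usage = {}                            # user id -> bytes sent this billing period
light_queue, heavy_queue = deque(), deque()

def enqueue(user, packet):
    # Track per-user volume and classify the packet accordingly.
    usage[user] = usage.get(user, 0) + len(packet)
    (heavy_queue if usage[user] > HEAVY_BYTES else light_queue).append(packet)

def dequeue():
    # Strict priority: light users' packets always go first; heavy users
    # still get served whenever the light queue is empty.
    if light_queue:
        return light_queue.popleft()
    return heavy_queue.popleft() if heavy_queue else None
```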

By tiering the Internet based on who pays the most to prioritize their content, the telecom industry is creating a system of haves and have-nots: those that can afford the premium for preferred treatment and those that cannot.
A tiered system for broadband services is already in place, but it is based on the bandwidth purchased by the consumer and content provider, who both are already paying for Internet access. This current system allows consumers equal access to any legal content they choose and gives even the smallest content provider the chance to compete in a robust marketplace.

This system treats all packets equally.

Broadband providers certainly do want to create service tiers, because this will allow them to pay for their investment in fiber-optic networks to the home the way that all infrastructure projects are paid for in America: by selling services. In particular, the carriers want to sell cable TV and voice services, just as Cable TV companies already do. We don’t seem to have any problem with the technical steps the Cable companies have made to sell “triple-play services” over their lines, so why do we have a problem with fiber-optic network providers doing what amounts to the same thing?

The controversial part of the plan is whether they should be allowed to give some actual web sites better service than others, thereby nullifying the capital advantage that companies such as Google, with its 450,000 servers, have over the next pair of guys in a dorm room. Depending on several factors, nullifying Google’s advantage could be a good thing or a bad thing, so I’d rather have this sort of arrangement reviewed by a regulatory agency than committed to statute. The FCC says they already have this authority, and they’ve used it in the past. No new law is needed here.

These types of tiered services already exist in other countries, without resorting to additional fees on content providers. Internet subscribers in Japan can receive 100-megabit service for $25 a month. Sweden is planning for a 1-gigabit (1,000 megabit) service for about $120 a month — this is over 150 times faster than the fastest typical DSL service available in the United States, which currently tops out at around 6 megabits.

This is just plain false. Korea has fiber-to-the-home, and they pay for it by blocking VoIP and selling voice services exclusively. And similar arrangements exist in the UK and other countries. The analysts are either intentionally lying or they’re woefully uninformed.

OK, that’s enough for today, I’ll get to the rest of it as I have time. Suffice it to say, the study’s authors, Matthew Kazmierczak and Josh James, should be fired.

Free Software Communists

I used to spend a lot of time in the Indian state of Kerala, so this article in Salon by Andrew Leonard caught my eye:

Richard Stallman must be sleeping well this week. Eight years ago, I accompanied the free software pioneer on a visit to the Bill Gates-funded computer science building on the Stanford campus. To get in we had to pass through an entrance that sported the Microsoft founder’s name engraved on high. Stallman gave Bill the finger, and then tried to convince some passersby that they should likewise flip Bill off. They looked at him like he was crazy.

Crazy like a fox. This week, the New York Times reported that the Communist state government of Kerala, India, “is campaigning to eliminate Microsoft from use in public institutions.” The government wants state-funded entities, such as public schools, to switch to free software, such as Linux-based operating systems. And guess what? Richard Stallman was very much involved.

Stallman has been pushing free software in India for years. In 2001 he chose Kerala as the headquarters for the Indian affiliate of the Free Software Foundation, the nonprofit he founded to promote software that users can freely copy and modify. Not long after the socialist Left Democratic Front won control of Kerala’s state assembly in May 2006, he was back, lobbying the government with his trademark indefatigability. A few weeks ago, the government banned Coca-Cola and Pepsi, on the grounds that an environmental watchdog had found high levels of pesticides in the products. Now it’s tackling Microsoft.

Speaking of Cults…

The reaction of the Apple faithful to the disclosure of a security hole in the design of Apple OSX was amazing. A couple of guys figured out that you could trick OSX into executing some foreign code with root privilege by sending a malformed packet to a third-party wireless LAN card. The guys – David Maynor and Johnny Ellch – have been viciously attacked by the Kool-Aid drinking Apple faithful:

I was absolutely shocked when I ran across these stories on Digg. I had personally video interviewed Maynor and his partner Jon “Johnny Cache” Ellch and these two gentlemen were very honest and straightforward. But as soon as I read the stories, the stench began to rise. Maynor and SecureWorks had been telling the truth the entire time and they had falsified nothing. The only falsification going on was the stories themselves! Not only did Dalrymple and Chartier and others like them not follow the most basic of journalism principles to at least check with the source, they apparently didn’t even bother looking at the original video of David Maynor released by SecureWorks.

The Faithful claim Maynor and Ellch alleged something they didn’t allege, and are therefore out to get Apple.

The saga continues on George Ou’s ZDNet blog today. It seems to me that the flaw the dudes found depends on bad behavior from both the driver and the OS, and if it exists on one vendor’s product, it certainly can exist on others as well. So Apple and its faithful should simply fix the problem and stop smearing people.

Is that too much to ask?

Welcome to the neutral net

We pointed out the other day that net neutrality fiends want public ownership of the Internet access network. Here’s a report from Broadband News on what that looks like:

Culver City, California was the first Los Angeles municipality to offer the public a free all-access Wi-Fi network. They’re also the first to ban all porn and p2p from that network, according to an announcement made yesterday. The city says they’ve added Audible Magic’s CopySense Network Appliance to filter illegal and “problematic content” from their network.

Be careful what you ask for, kids, you just might get it.

H/T Techdirt.

How much bandwidth is enough?

Reader Dave Johnson argues that QoS isn’t necessary on big packet networks because the carrier can simply provision the network adequately to carry all possible traffic at once:

If an ISP has ample internal and outgoing bandwidth, as in more than enough to provide for the sum total of their customers allocations, then where does that leave the latency issue? The only way packets are delayed or refused is because there is not enough capacity on the destination wire. If there is enough capacity then *where* is the problem? Customers are by definition limited in the amount of data (under any protocol) that they can send, so all quantities are known.

As idiotic as this sounds, it’s a common Urban Myth, originally expressed in the works of David Isenberg, if not elsewhere. Bandwidth is free, you see, so just hook me up and don’t worry about it.

OK, let’s play along. How much bandwidth does an ISP have to have on its internal network to allow all of its customers to use all the bandwidth in their hookups all the time? Verizon’s FIOS customers have 100 megabit/sec connections, and there are 375,000 of them. So all Verizon needs for “ample” bandwidth inside its own network is a 37.5 terabit/sec switch (a terabit being a million million bits), and a similar-sized connection to the public Internet.

Of course, that kind of bandwidth doesn’t exist.

Will it in the future? Maybe, but by then instead of worrying about 375,000 customers on the Verizon network, we’ll be worrying about 200 million Americans with 100 megabit/sec each. That adds up to 20,000 terabits/sec. I don’t see any switches capable of handling that load on the horizon, of course. This is a ridiculous exercise, and I only do it because the argument from the hyper-regulation side is so lame.

Now let’s assume that ISPs can cap bandwidth for each user to some level of transport per day, week, or month. Does that alter the arithmetic above? Actually, no, because you still have to design for peak load. If everybody wants to download American Idol at the same time, you need to accommodate that, so that’s where we are.

The fastest datalinks we have today are 40 gigabits/sec. So let’s take a bunch of them and bond them together to get a 20,000 terabit/sec pipe. We only need 500,000 of them. Supposing we can build a switch that handles 20 such pipes (not really practical today, because of limitations on bus speeds, but let’s be generous), you need 25,000 of them. But how do we interconnect these switches to each other? We could wire them into a big mesh, but then we’re playing with probabilities again, betting that no combination of users will over-use the path between one switch and another. So we’ll have to add another level of switches to enable each end-user to reach each end-user through any intermediate switch, and there will be a lot of these. Somebody has to pay for all these switches, because even if they were cheap (and they aren’t), they’re not free.
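The arithmetic is easy to check. Here is the whole back-of-the-envelope calculation in a few lines of Python, using only the figures from the text; the 20-port switch is the same generous assumption made above.

```python
# Back-of-the-envelope arithmetic from the provisioning argument above.
# All figures come from the text; "no oversubscription" is the (unrealistic) premise.
subscribers_now   = 375_000            # Verizon FIOS customers
subscribers_later = 200_000_000        # a future all-American deployment
per_user_bps      = 100e6              # 100 megabits/second each
link_bps          = 40e9               # fastest single pipe: 40 gigabits/second
ports_per_switch  = 20                 # generous assumption from the text

peak_now   = subscribers_now   * per_user_bps    # 3.75e13 = 37.5 terabits/sec
peak_later = subscribers_later * per_user_bps    # 2e16    = 20,000 terabits/sec

links    = peak_later / link_bps                 # 500,000 forty-gigabit pipes
switches = links / ports_per_switch              # 25,000 switches, before any extra
                                                 # tiers needed to mesh them together

print(f"{peak_now/1e12:.1f} Tb/s now, {peak_later/1e12:,.0f} Tb/s later")
print(f"{links:,.0f} links, {switches:,.0f} first-tier switches")
```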

This is why QoS is needed: “more bandwidth” only works up to the economic and physical constraints on bandwidth, both of which are real.

So here’s the lab problem in summary: the fastest pipes we have are 40 gigabits/second. How many of them, and in what topology, do you need in order for 100 million users to transmit 100 megabits/second of traffic with no delay?

The Internet’s Oedipal Drama

The Internet regulation debate has unfortunately descended into an Oedipal drama in which the bad old phone companies are cast in the part of King Laius and the hero is played by brash young startups such as Google and Moveon.org. Our heroes are blind to the larger context that surrounds their quest for power.

The Internet’s in trouble alright, but not from phone companies seeking to censor the blogs. As blogs are mostly unread, they don’t generate enough traffic to register a blip on the Internet’s traffic meters, so it’s simply not worth anyone’s time to shut them down. The carriers would much rather we chatter away than download movies or e-mail TV shows to our friends. Their concern is simply managing a mix of traffic well enough that we’ll pay our bills, however big or small they are.

Fundamental changes have already taken place in the Internet’s traffic load. In the good old days, when the Internet was a private club for elite universities and defense contractors, traffic was light even for the primitive pipes of the day. When congestion collapse appeared it was viable, just barely, to manage it with an end-to-end system that relied on good behavior on the part of the community, because there was a community. The overloaded Internet of the mid-1980s got new life from exponential backoff and slow start in TCP, because the most aggressive consumer of bandwidth was ftp, the files it transferred were small, and users were patient. They didn’t have spam, viruses, worms, or phishing either.

Now that the Internet has to contend with a billion users and multi-gigabyte file transfers with BitTorrent, the honor box model no longer works at all. When BitTorrent is slowed down by backoff, it simply opens more connections to more peers, creating more and more congestion. In another year, the Internet is going to be just as unstable as it was in 1985.

This being the case, the carriers have to implement traffic limits inside the network, building on mechanisms such as RED and its progeny, which date back to the early 1990s. This is the only way to control BitTorrent. There is no community and we’re not patient people.
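For readers who haven't run into RED: the idea is to drop (or mark) a small, growing fraction of arriving packets as the average queue builds, so senders back off before the queue overflows. Here is a stripped-down Python sketch of that core idea; the thresholds and weight are illustrative, and the real algorithm has refinements this omits.

```python
# Minimal sketch of the RED idea: as the average queue grows, drop a rising
# fraction of arriving packets so senders back off before the queue overflows.
# Thresholds (in packets) and the averaging weight are illustrative values.
import random

MIN_TH, MAX_TH, MAX_P, WEIGHT = 5, 15, 0.1, 0.002
avg_queue = 0.0

def should_drop(current_queue_len: int) -> bool:
    global avg_queue
    # Exponentially weighted moving average of the instantaneous queue length.
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False                  # queue is short: accept everything
    if avg_queue >= MAX_TH:
        return True                   # queue is long: drop every new arrival
    # In between, drop probability rises linearly from 0 to MAX_P.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```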

And while they’re doing that, it makes perfect economic and technical sense to implement voice- and video-oriented QoS. Even Berners-Lee acknowledges this; he’s just on the neutrality bandwagon because he’s exercised about third-party billing for web content, a very obscure concern. So whether the phone company manages its links or not, whether it offers third-party billing for QoS or not, and whether it competes with Akamai by offering content caching or not, the Internet will either change or collapse.

The network neutrality regulations proposed by political bloggers, PACs, Big Content companies and their Congressional collaborators pose a serious dilemma for Internet management. Ultimately, they’ll do nothing but make it unattractive for investors to connect routers to all that dark fiber that’s supposed to be out there waiting for us to use.

The loud voices who’ve taken up the “neutrality” cause out of misguided concern for the Internet’s First Amendment standing are blind to the real issue.

Congress needs to hear from sane people that these new Internet regulations are poorly-crafted and premature. The issues of free expression and third-party billing need to be discussed, but not in the heat of a fever-pitched battle that’s been cranked way out of proportion by an angry mob of ignorant citizen engineers with axes to grind.

It might just be best to take the telecom bill off the Senate calendar so we can discuss it after the morons cool down. Their attention span is short, and a year from now they’ll be obsessed with different issues.

UPDATE: The Senate CST committee is debating the neutrality add-ons at this very moment. Internet congestion and server load are so high I can’t get a feed, however. What does that tell you?

The Real Meaning of Net Neutrality

Consumer pays. All the smoke about “discrimination” is intended to cover it up. If the consumer foots the whole bill for his connection, there is no incentive for the ISP to massage content.

Somehow the additional requirement that no service plans can be written around Quality of Service gets a free ride under this concept. The two requirements don’t have to travel together, but as the bills are currently written, that’s the situation.