The crack research team at the American Electronics Association has issued a report on net neutrality that sets a new bar for rank absurdity in public policy discourse. The report simply parrots the most outrageous, counterfactual and ahistorical claims made by the professional protest groups that have assembled to promote this dubious cause, and then jumps through hoops to argue for a new and unprecedented network regulation regime. It’s amazing that the primary trade group for the electronics industry would employ a “research” staff that’s clearly out of touch with reality and the concerns of its membership, and I have to believe that heads are going to roll over this abomination. Here’s most of the report with my comments interspersed and some redundancies removed. It makes for a good laugh.
The AeA research team produces regular reports on the most timely and relevant issues to the high-tech industry and to U.S. competitiveness in a global economy. We combine rigorous data with careful analysis to provide industry leaders and policymakers the information they need to assess the issue.
While this is certainly a timely issue, the report actually fails to provide any “rigorous data” or “careful analysis.” It makes a number of unsourced and unsupportable claims, states them hysterically, and leaps to an unwarranted conclusion. Read on and you’ll see.
Network neutrality is a wide ranging concept with many facets and many different groups trying to define what it means. Unfortunately, much of the current debate is being driven by network operators, resulting in a one-sided view, full of misleading information.
It seems to me that the pro-regulation side has done plenty of “driving” of this issue, from its original manufacture to supplying the broad set of false and misleading claims that we’re going to see here. Certainly, network operators and equipment manufacturers should have a voice in any new attempt to regulate their industry. This is a democracy and all voices should be heard, especially those of the experts.
This paper focuses on addressing these misperceptions and on the most contentious part of the debate, the discrimination of Internet traffic on the basis of source or ownership of content.
As exciting as this subject matter may be, it’s off to the side of the network neutrality debate as it’s been framed in the bills proposed by Snowe, Dorgan, Markey, and the rest of the pro-regulation coalition. Their bills ban the sale of enhanced services, such as the low-delay Quality of Service needed by telephony and live TV broadcasts, to residential Internet access customers. These are services that business and education can freely buy today, but which aren’t generally available to consumers. So right off the bat we can see that the AeA’s crack research team means to misframe the issue and deal with a strawman.
When the Internet was first built it was designed to be content neutral; its purpose was to move data from one place to another in a nondiscriminatory fashion regardless of who provided the original content.
When the Internet was first built, it was designed to be a playground for research on network protocols, not to be the final word on public networking. We’ve learned a lot from that research, mainly that the Internet lacked adequate mechanisms for fair access, congestion control, security, authentication, and Quality of Service. But this assertion is at best a red herring; whatever the principles were that guided the Internet at its inception, now that it’s a general purpose network used by a billion people outside the network research community, it should be guided by the needs of its current users, not the relative ignorance of its first wave of designers. And in any event, the Internet’s architecture has always recognized that all packets don’t have equal needs, which is why each Internet packet carries tags reporting its desired Class of Service.
Initially, the Federal Communications Commission (FCC) enforced this principle by requiring nondiscriminatory treatment by the telecom carriers, where content was delivered on a “best effort” basis, i.e., by treating all “packets” as relatively equal.
However, this changed in August 2005 when the FCC effectively removed the legal protection of content neutrality for all broadband Internet access providers.
This is total gibberish. “Best effort” delivery simply means that the network does not attempt to re-transmit lost or corrupted packets. The term comes from the design of the now-obsolete coax-cable Ethernets that were built at Xerox PARC. And it certainly has nothing to do with any notion of treating all packets as equal regardless of their requested Class of Service. And for as long as the commercial Internet has existed, packets have been routed differentially depending on source, destination, and paid service agreements between ISPs and NSPs. All routes are not equal, and they’re chosen based on who’s communicating with whom.
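To make the Class of Service point concrete: those tags live right in the IP header, and any ordinary application can ask the operating system to stamp them on its packets. Here’s a minimal Python sketch (assuming Linux and the standard `IP_TOS` socket option; the Expedited Forwarding code point is just one illustrative choice):

```python
import socket

# DSCP "Expedited Forwarding" (EF, decimal 46) occupies the upper six
# bits of the IP TOS/Traffic Class byte, so shift left by two.
DSCP_EF = 46
tos_byte = DSCP_EF << 2  # 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to mark outgoing packets with this TOS value; routers
# along the path may (or may not) honor the request.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Read the option back to confirm it took effect.
reported = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(reported)
sock.close()
```

Whether any router in the path actually honors the marking is a matter of policy, which is exactly what the neutrality fight is about; but the mechanism has been in the packet format from the beginning.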
The FCC has never regulated the behavior of packet-switching networks. What it has done is regulate wires owned by monopoly telephone companies with respect to the source and destination end-points, which is a very different thing. The former FCC rules on DSL, for example, provided that independent ISPs could rent lines from the phone company at discount prices and connect them to their own equipment. These regulations – called “unbundling” – did not dictate how packets should be handled. And during the time that the DSL regulations were in place, similar services provided by Cable TV were not subject to “unbundling” rules. We found that Cable Internet was faster and cheaper than DSL, so the experiment with different regulations was terminated and DSL was re-regulated under Cable rules. This has nothing to do with preferred content.
Some broadband providers want to be able to offer priority service to those content providers who agree to pay an additional fee beyond what they already pay to access the Internet. Those who can afford to pay the fee would have their content moved to the front of the line.
These carriers claim that the next generation of Internet content (such as videos, voice over IP, real-time gaming, and distance learning) requires higher levels of speed and quality than other content, and as a result, must be prioritized ahead of other Internet traffic. To pay for this increased capacity, the network operators argue that they need additional revenue.
Notice the use of the term “content” here to describe things that are clearly “communication”. This is the essence of the confusion in net neutrality regulation. The old Internet was indeed a system for moving stored “content” from one site to another, whether in the form of e-mail or computer files. But the New Internet is emerging as a system where content has to share wires with real-time communication that can’t sit on the shelf for hours or minutes or even seconds before its delivery. Real-time gaming has a completely different set of communications requirements than BitTorrent downloads, and the network neutrality debate revolves around the question of providing each with the service it requires at a price that’s fair to all. This isn’t an empty carrier claim, it’s technical reality.
Countering this, Internet content providers and consumer groups state that they already pay billions of dollars to access the Internet. They are also concerned that telecom and cable operators, which dominate broadband Internet access with over 92 percent market share, will leverage their potential monopoly power to pick winners and losers in the highly competitive Internet content market. This environment has historically been quite egalitarian.
Yes, Virginia, we all know that Google pays to connect to the Internet, and their carrier agreements probably specify a peak level of bandwidth and little else. Does this mean that they’re automatically entitled to access higher levels of service beyond what they pay for? Perhaps, but that’s certainly not an obvious conclusion. The AeA is trotting out a big strawman here.
And the claim that the Internet is egalitarian is patently false. The more you pay, the more you get and there’s nothing equal about it.
There seems to be the perception that Internet companies (also called Internet content providers) and, to a lesser extent, Internet consumers are not paying their fair share to access the Internet. This perception is just wrong.
Actually, it’s plain right. A small fraction of Internet consumers – like 5% – use most of the bandwidth. Since your granny pays the same monthly bill as these boys, there is in fact considerable inequity in the present billing system. Now one way to remedy this problem is to give low-volume users priority in access to network pipes and to give lower priority to heavy-volume users who pay the same price. This sensible approach is forbidden by heavy-handed network neutrality regulations.
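The remedy I’m describing is just a scheduling policy: at the congested pipe, packets from light users jump ahead of packets from heavy users who pay the same price. Here’s a toy sketch of the idea (the subscriber names and usage figures are invented for illustration, not from any real ISP’s gear):

```python
import heapq

# Hypothetical monthly usage, in gigabytes, per subscriber.
usage = {"granny": 0.4, "heavy_gamer": 210.0, "office": 3.2}

def priority(user):
    # Lower number = served first; heavy users sort behind light ones.
    return usage[user]

class UsageScheduler:
    """Toy scheduler: dequeues packets from light users first."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps arrival order among equals
    def enqueue(self, user, packet):
        heapq.heappush(self._heap, (priority(user), self._seq, user, packet))
        self._seq += 1
    def dequeue(self):
        _, _, user, packet = heapq.heappop(self._heap)
        return user, packet

sched = UsageScheduler()
sched.enqueue("heavy_gamer", "p2p chunk")
sched.enqueue("granny", "email")
sched.enqueue("office", "web page")

# Granny's email goes out first even though it arrived second.
print(sched.dequeue()[0])  # expect "granny"
```

A real router would use weighted fair queuing rather than a strict priority heap, but the point stands: differential treatment can serve equity rather than undermine it.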
By tiering the Internet based on who pays the most to prioritize their content, the telecom industry is creating a system of haves and have-nots: those that can afford the premium for preferred treatment and those that cannot.
A tiered system for broadband services is already in place, but it is based on the bandwidth purchased by the consumer and content provider, who both are already paying for Internet access. This current system allows consumers equal access to any legal content they choose and gives even the smallest content provider the chance to compete in a robust marketplace. This system treats all packets equally.
Broadband providers certainly do want to create service tiers, because this will allow them to pay for their investment in fiber-optic networks to the home the way that all infrastructure projects are paid for in America: by selling services. In particular, the carriers want to sell cable TV and voice services, just as Cable TV companies already do. We don’t seem to have any problem with the technical steps the Cable companies have made to sell “triple-play services” over their lines, so why do we have a problem with fiber-optic network providers doing what amounts to the same thing?
The controversial part of the plan is whether they should be allowed to give some actual web sites better service than others, thereby nullifying the capital advantage that companies such as Google, with its 450,000 servers, have over the next pair of guys in a dorm room. Depending on several factors, nullifying Google’s advantage could be a good thing or a bad thing, so I’d rather have this sort of arrangement reviewed by a regulatory agency than committed to statute. The FCC says they already have this authority, and they’ve used it in the past. No new law is needed here.
These types of tiered services already exist in other countries, without resorting to additional fees on content providers. Internet subscribers in Japan can receive 100-megabit service for $25 a month. Sweden is planning for a 1-gigabit (1,000 megabit) service for about $120 a month — this is over 150 times faster than the fastest typical DSL service available in the United States, which currently tops out at around 6 megabits.
This is just plain false. Korea has fiber-to-the-home, and they pay for it by blocking VoIP and selling voice services exclusively. And similar arrangements exist in the UK and other countries. The analysts are either intentionally lying or they’re woefully uninformed.
OK, that’s enough for today, I’ll get to the rest of it as I have time. Suffice it to say, the study’s authors, Matthew Kazmierczak and Josh James, should be fired.
Sweden is planning to provide FTTP, which is much faster than 6 Mbps DSL, for $120 a month.
Is that supposed to be an argument? Is that supposed to even make sense?
This is a somewhat misleading statement. Actual end users don’t get a gigabit connection; that’s the bandwidth at the head end. Plus, think about how unrealistic this is:
Sweden only has a few OC-192s’ worth of connectivity to the “rest” of the Internet; even with DWDM, they’re presumably planning on hosting more than a handful of subscribers. :)
Also note the fact that they are *planning* to roll this out… BFD, so are Verizon, Sprint, et al. You don’t compare vaporware networks with existing ones unless you’re a slimy slideware sales droid.
And again, if Americans all lived in high rises, we’d have 100 Mbps Ethernet. But we don’t.
Richard,
Overall, nice response. While I think you overstate your criticisms at times (e.g. some scholars seriously doubt the FCC’s authority and willingness to act if discrimination surfaces), the AEA’s report has some serious factual problems.
One question: You mentioned live TV as needing QoS, and I think you mean live broadcast (multicast) TV programming. Are you talking about IPTV or a temporarily dedicated frequency or something else? George Ou doesn’t include that as one of the time-sensitive apps needing QoS in some of his posts. I’ve re-read some of your earlier posts on the issue, but I am having a problem reconciling the possible disagreement between you two. (It very well might be my lack of understanding about the technology.) Could a TV provider use caching stations to avoid congestion in the backbone and could users use buffering software so that QoS would not be needed for TV?
Thanks.
ChadB,
I think the need for QoS for TV might depend on the programming. A live-action sporting event has different demands than, say, Prime Time programming. Specifically, you could distribute a Prime Time program out to the caching stations ahead of time. However, you cannot do the same with a sporting event.
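The distinction can be sketched in a few lines. This is only a toy model (the program names and cache layout are invented for the example): a prerecorded show can be pushed to an edge cache hours before air time, while a live event doesn’t exist until it happens, so it must stream through the backbone in real time and is the case that needs QoS.

```python
# Toy model of a regional caching station.
cache = {}

def prepopulate(program_id, data):
    """Push a prerecorded program to the edge cache ahead of broadcast."""
    cache[program_id] = data

def serve(program_id, live_fetch):
    # Cache hit: no backbone traffic at show time, so no QoS needed.
    if program_id in cache:
        return cache[program_id], "served from cache"
    # Cache miss (live event): pull through the backbone in real time.
    return live_fetch(), "streamed over backbone"

prepopulate("sitcom_ep_12", b"...encoded video...")

print(serve("sitcom_ep_12", lambda: b"")[1])
print(serve("cup_final_live", lambda: b"live feed")[1])
```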
Actually, the technical issues of digital convergence are really quite interesting.
Richard, “pro-regulation?” You gotta be kidding. Here’s how much the regulators-favoring-large-businesses have screwed Americans:
-Hong Kong has far higher broadband penetration than the US.
-As a function of penetration vs. population density we’re at or near the bottom.
-The United States is the most expensive country in the world for broadband.
Read that whole report. It’s shocking and should outrage anyone.
The fact is, the free market itself has failed in BB deployment, and until that problem is addressed, the QoS problem is merely a sideshow; besides, it’s being addressed in countries where there is an activist government favoring BB deployment to everyone.
That report was written by Robert McChesney’s Free Press, and it’s not worth the paper it isn’t written on. A few other countries are ahead of the US in broadband because they privatized their phone companies at the right time for broadband, not because they over-regulate them.
And the foreign scenario isn’t actually as “neutral” as McChesney’s mouthpieces would have us believe; foreign ISPs routinely block VoIP so they can have the whole revenue stream for themselves.
This is the old “everything’s so much better in Sweden” trick that American pseudo-sophisticates have been playing on the yokels for 40 years. Things aren’t nearly as idyllic in Hong Kong as we might like to think, they’re just a lot more compact.
“Over-regulation” isn’t the issue, Richard, it’s the fact that the regulations are in one area made to benefit a few who happen to run companies versus regulations that are made to benefit all.
The fact is, Japan, Korea, and the other countries had national regulatory bodies that saw to it that there were single standards for things, not the hodge-podge nonsense we have in the US.
And by the way, my wife has no problem calling China via VoIP, and I hadn’t heard of anyone having problems calling Europe via Skype.
Net neutrality is a matter of regulating the behavior of Internet routers such that each packet has the opportunity to take any route, regardless of payment, subscription, or desired Class of Service. That’s pretty draconian, in my humble opinion.
No foreign government that I’m aware of has gone that far, but if you have information to the contrary, by all means share it.
And BTW, the story of Korea Telecom blocking VoIP got a lot of play here, because American servicemen were among the affected parties.
Robert McChesney appears to misuse statistics to get around uncomfortable facts. Specifically, population density and urbanization are tied to the speed and cost of broadband networks, despite his wish to the contrary.
Low density and low urbanization affect both DSL and FTTP:
1) DSL: The distance from the central office determines whether DSL can be offered and the speed at which it can be offered. In low density, suburban-rural areas, homes tend to be further from the central office, limiting the speed of DSL (or even precluding the availability of DSL).
2) FTTP: The cost of laying fiber to a single-family home versus an apartment building is basically the same. However, the potential revenue from an apartment building is much, much higher than from a single-family home.
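Point 1 can be made concrete with a rough rate-vs-distance sketch. The breakpoints below are ballpark ADSL figures chosen for illustration only, not from any carrier’s spec sheet:

```python
# Illustrative ADSL rate-vs-distance curve: attainable downstream rate
# falls as the copper loop from the central office gets longer.
# (feet limit, approximate Mbps) -- ballpark numbers for the example.
LOOP_PROFILE = [
    (6_000, 8.0),
    (12_000, 3.0),
    (18_000, 1.5),
]

def attainable_mbps(loop_feet):
    """Estimate downstream rate; 0 means DSL can't be offered at all."""
    for limit, mbps in LOOP_PROFILE:
        if loop_feet <= limit:
            return mbps
    return 0.0  # beyond the longest loop, the line won't sync

for feet in (4_000, 10_000, 16_000, 22_000):
    print(f"{feet:>6} ft -> {attainable_mbps(feet)} Mbps")
```

The shape of the curve, not the exact numbers, is the point: suburban and rural loops are long, so average attainable speeds drop regardless of regulatory regime.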
Max: I have 100 Mbps Ethernet, have for years.
Now, my connection to the outside world is a paltry 512k DSL line (because I’m too cheap and lazy to pay for a bigger pipe). So I guess this means I’m analogous to Sweden!