3000 Percent Growth in Wireless Broadband in a Year

The latest FCC report on broadband penetration is out and the growth in wireless broadband is amazing:

One interesting detail of the new statistics is the rise of new platforms for delivering broadband. Cable and DSL still dominate the market, with 28.5 and 22.6 million lines, respectively. Mobile wireless, however, went from only about 379,000 subscribers in June 2005 to more than 11 million in June 2006.

Empirical research finds, without exception to my knowledge, that cross-platform and other facilities-based competition is a key driver of investment and innovation. The arrival of wireless broadband is a very good sign for market competitiveness.

The next person who says “broadband duopoly” gets slugged. It’s at least a triopoly and maybe a quadropoly. This is clearly bad news for Google’s “grassroots” campaign to save the Internet.

Lovable scamp Colbert nails Wikipedia again

Stephen Colbert takes on Wikipedia again with this great clip on Wiki-lobbying. He exhorts fans to edit the Wikipedia entry on “Reality” to say “Reality has become a commodity” and sure enough they comply. Ironically, they’re fought off by Wikipedia Admin Raul654, one of the Google fan-boys who mangled the Network Neutrality entry to reflect a completely warped point of view.

Linklove to ValleyWag.

A tale of two visionaries

For today’s “compare and contrast” exercise look at the concise and clear Peter Huber explaining why net neutrality is a boon to lawyers:

The new Congress is determined to enact a “net neutrality” bill. Nobody yet knows what those two words mean. The new law won’t provide any intelligible answer, either. It will, however, put a real drag on new capital investment in faster digital pipes by making it illegal for many big companies to help pay for them, while leaving everyone guessing about the details for years. That last bit is great news for all the telecom lawyers (like me) who get paid far too much to make sense out of idiotic new laws like this one.

…with VisiCalc co-creator Bob Frankston’s vision of citizen net admins:

The Internet connection just carries raw bits. We get to decide what the bits mean. The carriers’ attempt to provide us with services over-defined and thus limited the solution. The results may seem counter-intuitive – by narrowly defining the allowable solutions the carriers’ effort failed. Instead those Internet packets gave us the opportunity to choose among any solutions including our own.

Today the gas and electric utilities deliver information via their web sites without having to make any special deals with the carriers. I use the term “carrier” but these companies are really in the business of providing services and deploy CFR [copper, fiber, radio] only as a means of providing the services. The failure of the residential gateway highlights the sharp difference between the service culture and the Internet culture. Yet we still confuse the two.

Do you stop BitTorrent before picking up the VoIP phone? Then shame on you for discriminating, you control freak!

Google’s plans for world domination

Check out Cringely on Google’s plans for world domination:

Google controls more network fiber than any other organization. This is not to say that Google OWNS all that fiber, just that they control it through agreements with network operators. I find two very interesting aspects to this story: 1) that Google has acquired — or even needs to acquire — so much bandwidth, and 2) that they don’t own it, since probably the cheapest way to pick up that volume of fiber would be to simply buy out any number of backbone providers like Level 3 Communications.

The “do no evil” guys are a lot scarier than the telcos or Microsoft, as they have absolutely no conscience. You’ve been warned.

Of course, it’s likely that a company so full of itself and stocked with overly narrow employees will blow up in a fairly spectacular way someplace between here and world domination. On the other hand, it may soon be too late to stop global warming or global Googling.

The worst academic paper ever written

Public Knowledge intern Bill Herman has written a marvelous piece of propaganda for net neutrality regulations and managed to have it published in a law journal.

Herman argues, in essence, that the traditional Internet represents the last word on network architecture, and any attempt to improve it will necessarily result in the Lord smiting the American Economy with boils, plagues of locusts, and hordes of hungry millionaires unable to take their rightful place among the billionaires. It’s actually a fine example of the errors that will be made when questions of technical network architecture are decided by legal academics, regulators, interns, and other wannabes.

My short rebuttal: the traditional Internet doesn’t represent the last word in network architecture; it’s actually the first of a series of experiments that will need to be conducted to find the best way to design networks for multiple services. For essentially its entire life, the Internet has been concerned with providing service to only one type of application, the “careful file transfer,” hence it hasn’t needed to deal with the problems it faces today.

The Internet of the future will absorb the functions that have traditionally been provided by the telephone and cable TV networks, as well as functions that aren’t performed at all today (such as rapid service for massively multi-player on-line games). In the course of this future development, network engineers need the same freedom to experiment that the original researchers had in the ARPANET and early Internet days.

It’s way too early in the game for government to begin mandating solutions to technical problems that are just now beginning to be researched.

If this paper gets any traction, I’ll do a more detailed critique of the mistaken assertions, shoddy reasoning, and unfortunate smears it contains.

Jeff Chester changes the subject

One of the lamest forms of discourse is sliming your opponent as a stooge of capitalistic or other assumed-to-be-evil interests. Professors Farber, Katz, Yoo, and Faulhaber have been attacked as shills by Jeff Chester on account of their publishing a well-reasoned Op-Ed in the Washington Post opposing new Internet regulations:

Super cable monopoly Comcast hired UC Berkeley’s Katz in 2003 to produce research which placed the industry in a favorable light. Comcast, of course, opposes network neutrality [I cover the role of Katz and other communications-academics-for-industry-hire in my new book, btw]. Professor Yoo worked for the cable lobby NCTA last year to write a net neutrality study as well. Even David Farber should have disclosed he has spoken under the banner of the Verizon Foundation at Carnegie Mellon.

Note that Chester’s smear of Farber consists solely of the professor’s giving one of a series of lectures at his university sponsored by Verizon (UPDATE: Farber was not compensated for this lecture in any way. He frequently gives talks for no honorarium, even to Microsoft after testifying against them. Chester’s smear is asinine.)

Chester’s trick often works in a capitalist society because we’re awash in money and somebody’s always paying somebody else. Chester himself makes his living writing books and giving talks on the evils of capitalism, and apparently does pretty well at it. Ralph Nader and Noam Chomsky are both multi-millionaires from that very same pursuit.

In the network neutrality debate, one could point out that Google employees give thousands of dollars to Moveon.org, a primary supporter of Save the Internet, or that the other primary supporter, Free Press, is financially supported by the Schuman Foundation headed by Bill Moyers, who’s paid $200,000 a year for this virtuous task. Moyers aired a program on PBS that was a naked advertisement for Free Press, and spoke at Free Press’ National Conference on Media Reform funded by multi-billionaire currency trader George Soros’ Open Society Institute. One might argue that Soros wants to weaken investigative journalism so he can engage in legally questionable currency raiding without restrictions, so he supports a cause that would erode the financial basis of real investigative journalism.

According to Jeff Chester’s method of analysis, that’s the only way to understand net neutrality: I’m just following the money (and Speaking Truth to Power, dude!)

I don’t buy that, so I’ll have to follow the arguments and judge them on their merits.

The anguish of regulation

Note: This post isn’t clear. I’m trying to say that the notion of “layering” in network protocol design doesn’t mean there’s some kind of firewall of ignorance between layers. In layered architectures, protocol layers advertise services to their higher-layer consumers, and notions of regulation built on the notion of layering have to take that fact into account. Crawford misunderstands protocol layering and attempts to build a regulatory framework on the back of her mistaken idea.

Some of the fans of network neutrality regulations are sincere but misguided, such as law professor Susan Crawford. She’s in a lot of anguish about how to sell the regulators’ agenda*:

If the only economic and cultural justifications you have for the need for a layered approach to internet regulation (an approach that treats transport differently from applications) are (1) the explosive innovation that competition among applications would produce and (2) the appropriate mapping between the “actual” architecture of the internet and the regulatory approach to be taken to it, you’ll lose.

But she never questions whether the “layered approach to regulation” is a good thing or even a well-understood thing. I see this a lot among the legal academics, who seem to base most of their regulatory model on a defective model of protocol layering. Lessig is the prototype for this misunderstanding, as he wants to extract architectural features from the Internet of the Past and use them to constrain the development of the Internet of the Future.

I work with layered protocols, and have for more years than I can remember, so please allow me to explain what layering means in real network systems. We divide network functions between abstract layers (physical, link, network, transport, application) so we can mix and match combinations for real systems. So the IP network layer can run on the Ethernet link layer or the WiFi link layer, and work pretty much the same. And we can run Ethernet over a fiber-optic physical layer or a copper pair physical layer, and have it work pretty much the same.

The key here is understanding what “pretty much the same” means. Each protocol at each layer has its own constraints, and higher layers have to be able to accommodate them. For example, Ethernet packets can’t be more than 1500 bytes long, but WiFi packets are bigger and ATM packets (cells) are smaller. So IP needs to know what the size constraints of the link layer are so it can adjust to them and operate efficiently.

The way this is done is through a service interface between the network layer and the link layer that allows the higher layer protocol to discover the capabilities of the lower layer protocol and behave accordingly. So while these two layers are defined and built separately, they’re intimately connected through a shared interface that allows them to operate together smoothly.
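
As a minimal Python sketch of that idea (the class and method names here are hypothetical illustrations, not any real stack’s API): the network layer discovers the link layer’s MTU through the service interface and fragments to fit it, while knowing nothing about how the link is built.

```python
# Minimal sketch of a layered service interface; illustrative names only.

class LinkLayer:
    """Abstract service interface the link layer advertises upward."""
    def mtu(self) -> int:
        raise NotImplementedError
    def send(self, frame: bytes) -> None:
        raise NotImplementedError

class Ethernet(LinkLayer):
    def mtu(self) -> int:
        return 1500                     # classic Ethernet payload limit
    def send(self, frame: bytes) -> None:
        print(f"ethernet: {len(frame)}-byte frame")

class NetworkLayer:
    """IP-like layer that adapts to whatever link it runs over."""
    def __init__(self, link: LinkLayer):
        self.link = link
    def send(self, packet: bytes) -> None:
        mtu = self.link.mtu()           # discover the lower layer's limit
        for i in range(0, len(packet), mtu):
            self.link.send(packet[i:i + mtu])

NetworkLayer(Ethernet()).send(b"x" * 4000)   # emits three frames
```

Swap Ethernet for a WiFi class with a different mtu() and the network layer works unchanged; that is the whole point of the shared interface.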

At the link layer, many protocols have the ability to offer different services, each appropriate to a different set of applications. WiFi, for example, has a voice service that handles short packets needing transmission at regular intervals differently from long packets that are less sensitive to delay but more sensitive to corruption and loss. The network lingo for this selection of services is Quality of Service, or QoS. Note that it’s not really correct to say that Voice QoS is “better” than the bulk-data QoS called “Best Effort”; it’s simply different. It would not be in your interest to use Voice-grade QoS for downloading files from Netflix, even if those files contained movies, because it actually constrains total bandwidth: you essentially trade off moving a lot of data for moving a little very quickly.
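
A toy Python sketch of that trade-off, with made-up numbers (the 160-byte, 20 ms voice cadence is illustrative, not any standard’s actual parameters): the voice class delivers small frames on a fixed schedule and caps out near 64 kbit/s, while best effort moves maximum-size frames with no timing promise.

```python
# Toy illustration of the voice vs. best-effort trade-off; numbers invented.

from enum import Enum

class QoS(Enum):
    VOICE = "voice"              # short frames, fixed cadence, low latency
    BEST_EFFORT = "bulk"         # big frames, no timing guarantee

def schedule(qos: QoS, payload: bytes):
    """Return (send_time, frame) pairs for the payload under a QoS class."""
    if qos is QoS.VOICE:
        # One 160-byte frame every 20 ms: each frame is on time, but the
        # class tops out around 64 kbit/s no matter how much data waits.
        return [(t * 0.020, payload[i:i + 160])
                for t, i in enumerate(range(0, len(payload), 160))]
    # Bulk: maximum-size frames back to back, whenever the medium is free.
    return [(None, payload[i:i + 1500])
            for i in range(0, len(payload), 1500)]

print(len(schedule(QoS.VOICE, b"x" * 16000)))        # 100 paced frames
print(len(schedule(QoS.BEST_EFFORT, b"x" * 16000)))  # 11 big frames
```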

The tragedy of the Internet is that the IP layer doesn’t have good facilities for selecting QoS options from the layers below it, and this makes it difficult for applications to get the service they need from the network top-to-bottom and end-to-end. So we bypass IP in real systems through something called a “Control Plane” and tell the Link Layer how to fit QoS around the data that need it.
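
The post doesn’t name a specific control-plane protocol, so here is only the shape of the idea in a hedged Python sketch (ControlPlane, reserve, and the flow IDs are hypothetical names, not a real API): the QoS arrangement is made out of band, once, before data flows, while the IP data path stays a simple packet mover.

```python
# Hypothetical sketch of the control-plane idea; not RSVP or any real
# protocol. QoS is arranged out of band, before the data flows, rather
# than being requested packet by packet through IP itself.

class ControlPlane:
    def __init__(self):
        self.reservations = {}        # flow id -> QoS class

    def reserve(self, flow_id: str, qos: str) -> None:
        # Tell the link layer(s) along the path to treat this flow specially.
        self.reservations[flow_id] = qos

    def classify(self, flow_id: str) -> str:
        # The data path consults the table; unreserved flows get best effort.
        return self.reservations.get(flow_id, "best-effort")

cp = ControlPlane()
cp.reserve("sip-call-42", "voice")        # done once, before the call starts
print(cp.classify("sip-call-42"))         # -> voice
print(cp.classify("bittorrent-7"))        # -> best-effort
```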

But the main point is that the segregation of functions into protocol layers doesn’t mean that each layer doesn’t know what the other layers are doing. In fact, the layers must know what their options are and how to use them, even though they don’t need to know how the other layers make these options available. So the layered approach to protocol design doesn’t preclude diversity of services, it in fact facilitates it by sharing the important information and hiding the unimportant details.

In the real world, a layered approach to regulation would begin by identifying service options and the means for requesting them. The neuts don’t get this and begin by banning service level communication between layers. That’s what “just move the bits, stupid” means. It’s bad network design and it’s bad regulation.

*Crawford blocks referrals from this blog. She’s at: http://scrawford.blogware.com/blog.

Gang of Four on Net Neutrality

Dave Farber, Michael Katz, Chris Yoo, and Gerald Faulhaber have a very concise and well-constructed Op-Ed in today’s WaPo on the downsides of net neutrality regulations:

The legislative proposals debated in the 109th Congress take a very different approach. They would impose far-reaching prohibitions affecting all broadband providers, regardless of whether they wielded monopoly power and without any analysis of whether the challenged practice actually harmed competition. If enacted, these proposals would threaten to restrict a wide range of innovative services without providing any compensating customer benefits.

Most of the people who’ve made substantial contributions to the Internet in the past are converging on this point of view: it’s harmful to make too many restrictions on the services the network provides to users and applications. Contrast this view, which is empirically provable, with the assertion on the other side that there are mystical and unprovable reasons to favor a dearth of network services. The evidence is all on the side of deregulation.

H/T to Verizon’s Policy Blog

Chewed it up and spat it out

Andrew Orlowski has some fun with the AT&T-Bell South merger:

In a decade, American consumers have seen the number of Regional Bell Operating Companies (RBOCs), or Baby Bells, coalesce from six down to just three: the odd man out, Qwest, the RBOC which covers the sparsely populated Mountain states, is surely next on AT&T’s menu.

The extraordinary thing is that all this took place in the wake of the fallout from the telecoms bubble. The Bells enjoyed little affection from the public in any case, long before Global Crossing and MCI. And with an unpopular Republican President, Democrats could have been expected to push a few populist buttons, and hear some bells.

What actually happened is that Whitacre got everything he wanted, but only thanks to the aid of The Democratic Party, most of whom aren’t aware how thoroughly they’ve been outwitted.

Now that’s style.

It’s certainly an interesting perspective. I’ve observed that AT&T played a game of misdirection last year in a more concrete way. While activists focused on video franchising and net neutrality in Washington, the phone company quietly got statewide franchising from the big state legislatures in California, New Jersey, and Michigan without much fanfare. That’s playing the system.

The fact that states can’t impose net neutrality obligations even if they want to is no small thing, either.

Google’s QoS Patent

Google’s one patent on network Quality of Service, US Patent #7,142,536, isn’t very neutral:

The present invention provides efficient and effective quality of service for information that is time sensitive (e.g., real time data)…In one embodiment of the present invention time sensitive information is cut through routed on a virtual channel and pre-empts non time sensitive information. In one embodiment a communication path probe is cut through routed via intermediate network devices to establish a communication path before other information is communicated from an originating source to a final destination…

It’s also pretty incoherent, as it doesn’t appear to work in any of the presented embodiments; dropping TCP packets because they’re too old accomplishes nothing but a retransmission.
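
To see why, a toy Python model (purely illustrative; real TCP retransmission timers are adaptive and a network is not a single queue): a dropped “stale” segment isn’t removed from the stream, it comes back after a timeout, and everything behind it in the in-order stream waits.

```python
# Toy model: dropping an "old" TCP segment just triggers a retransmission,
# so it adds a timeout delay instead of saving time. Numbers are invented.

def deliver(segments, drop_if_older_than):
    """segments: (seq, age) pairs; returns (seq, arrival_time) pairs."""
    timeline, clock = [], 0.0
    for seq, age in segments:
        clock += 0.01                    # serialization time per segment
        if age > drop_if_older_than:     # router drops the "stale" segment...
            clock += 1.0                 # ...sender resends it after its RTO
        timeline.append((seq, clock))
    return timeline

# The aged segment arrives about a second late, and in this single-queue
# model every segment behind it is held up along with it.
print(deliver([(1, 0.0), (2, 0.5), (3, 0.0)], drop_if_older_than=0.2))
# -> [(1, 0.01), (2, 1.02), (3, 1.03)]
```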