The Trouble with White Spaces

Like several other engineers, I’m disturbed by the white spaces debate. The White Space Coalition, and its para-technical boosters, argue something like this: “The NAB is a tiger, therefore the White Spaces must be unlicensed.” And they go on to offer the comparison with Wi-Fi and Bluetooth, arguing as Tom Evslin does on CircleID today that “If we got a lot of innovation from just a little unlicensed spectrum, it’s reasonable to assume that we’ll get a lot more innovation if there’s a lot more [unlicensed] spectrum available.”

According to this argument, Wi-Fi has been an unqualified success in every dimension. People who make this argument haven’t worked with Wi-Fi or Bluetooth systems in a serious way, or they would be aware that there are in fact problems, serious problems, with Wi-Fi deployments.

For one thing, Wi-Fi systems are affected by sources of interference they can’t detect directly, such as FM Baby Monitors, cordless phones, and wireless security cameras. Running Wi-Fi on the same channel as one of these devices causes extremely high error rates. If 2.4 and 5.x GHz devices were required to emit a universally detectable frame preamble much of this nonsense could be avoided.

And for another, we have the problem of newer Wi-Fi devices producing frames that aren’t detectable by older gear (especially original 802.11 and 802.11b radios) unless they’re preceded by a protection exchange that reduces throughput substantially. If we could declare anything older than 802.11a and .11g illegal, we could use the spectrum we have much more efficiently.
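To put a rough number on that protection penalty, here’s a back-of-the-envelope sketch in Python. The frame size and protection-exchange time are illustrative assumptions of mine, not measurements:

```python
# Hypothetical sketch of the "protection" penalty described above: when
# legacy 802.11/802.11b stations are present, a faster station must precede
# each frame with a slow-rate protection exchange they can hear.

FRAME_BITS = 12_000          # one 1500-byte data frame
G_RATE = 54e6                # 802.11g data rate, bits/second
PROTECT_OVERHEAD_S = 300e-6  # assumed CTS-to-self + legacy preamble time

bare = FRAME_BITS / G_RATE          # airtime without protection
protected = bare + PROTECT_OVERHEAD_S

# Fraction of throughput kept once protection is required:
print(round(bare / protected, 2))
```

With these assumed numbers, more than half the airtime goes to overhead, which is the point: legacy compatibility is expensive.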

For another, we don’t have enough adjacent channel spectrum to use the newest version of Wi-Fi, 40 MHz 802.11n, effectively in the 2.4 GHz band. Speed inevitably depends on channel width, and the white spaces offer little dribs and drabs of spectrum all over the place, much of it in non-adjacent frequencies.

But most importantly, Wi-Fi is the victim of its own success. As more people use Wi-Fi, we have to share the limited number of channels across more Access Points, and they are not required to share channel space with each other in a particularly efficient way. We can certainly expect a lot of collisions, and therefore packet loss, from any uncoordinated channel access scheme such as Wi-Fi’s once it operates on a large geographic scale. This is the old “tragedy of the commons” scenario.
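The tragedy-of-the-commons math is easy to sketch. Assuming a simple slotted model where each station transmits independently with some probability (a crude stand-in for real CSMA/CA, but good enough to show the trend), the chance of a collision in any slot climbs quickly with the number of stations:

```python
# Toy model: probability that an uncoordinated channel sees a collision
# in a given slot. Assumes each station transmits independently with
# probability p per slot -- an illustrative simplification of Wi-Fi.

def collision_probability(n_stations: int, p: float) -> float:
    """P(two or more stations transmit in the same slot)."""
    idle = (1 - p) ** n_stations                        # nobody transmits
    one = n_stations * p * (1 - p) ** (n_stations - 1)  # exactly one does
    return 1 - idle - one

for n in (2, 5, 10, 25, 50):
    print(n, round(collision_probability(n, 0.1), 3))
```

Two stations rarely collide; fifty stations collide most of the time, with no central authority able to do anything about it.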

The problem of deploying wireless broadband is mainly a tradeoff of propagation, population, and bandwidth. The larger the population your signal covers, the greater the bandwidth needs to be in order to provide good performance. The nice thing about Wi-Fi is its limited propagation, because it permits extensive channel re-use without collisions. If the Wi-Fi signal in your neighbor’s house propagated twice as far, it would have four times as many chances to collide with other users. So high power and great propagation aren’t an unmitigated good.
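The four-times figure is just geometry: the area over which a signal can contend with other users grows with the square of its propagation radius. A trivial sketch:

```python
import math

def contention_area(radius_m: float) -> float:
    """Area (square meters) over which a transmitter can collide with
    other users, modeled as a simple circle around the antenna."""
    return math.pi * radius_m ** 2

# Doubling the propagation radius quadruples the contention area:
print(contention_area(60.0) / contention_area(30.0))  # -> 4.0
```

Real propagation is messier than a circle, but the square-law point stands: range is a cost as well as a benefit.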

The advantage of licensing is that the license holder can apply authoritarian rules that ensure the spectrum is used efficiently. The disadvantage is that the license holder can over-charge for the use of such tightly-managed spectrum, and needs to in order to pay off the cost of his license.

The FCC needs to move into the 21st century and develop some digital rules for the use of unlicensed or lightly-licensed spectrum. The experiment I want to see concerns the development of these modern rules. We don’t need another Wi-Fi; we know how that worked out.

So let’s not squander the White Spaces opportunity with another knee-jerk response to the spectre of capitalism. I fully believe that people like Evslin, the White Space Coalition, and Susan Crawford are sincere in their belief that unlicensed White Spaces would be a boon to democracy; it’s just that their technical grasp of the subject matter is insufficient for their beliefs to amount to serious policy.

Google open-sources Android

I lost my Blackberry Curve somewhere in England last week, so I ordered an HTC G1 from T-Mobile as a replacement. The Curve doesn’t do 3G, so it’s an obsolete product at this point. And as I’m already a T-Mobile customer (I chose them for the Wi-Fi capability of their Curves), the path of least resistance to 3G goes through the G1. Just yesterday I was explaining to somebody that Android wasn’t really open source, but Google was apparently listening and decided to make a liar of me by open-sourcing Android:

With the availability of Android to the open-source community, consumers will soon start to see more applications like location-based travel tools, games and social networking offerings being made available to them directly; cheaper and faster phones at lower costs; and a better mobile web experience through 3G networks with richer screens. The easy access to the mobile platform will not only allow handset makers to download the code, but to build devices around it. Those not looking to build a device from scratch will be able to take the code and modify it to give their devices more of a unique flavor.

“Now OEMs and ODMs who are interested in building Android-based handsets can do so without our involvement,” Rich Miner, Google’s group manager for mobile platforms, told us earlier today. Some of these equipment makers are going to expand the role of Android beyond handsets.

This is good news, of course. I haven’t enjoyed the fact that T-Mobile sat between me and RIM for Blackberry software upgrades. The first add-on app that I’d like to see for the G1 is something to allow tethering a laptop to 3G via Bluetooth. I could tether the Curve, but as it only supports EDGE it wasn’t incredibly useful.

In a more perfect world, I’d prefer the Treo Pro over the G1, but it doesn’t work on T-Mobile’s crazy array of AWS and normal frequencies, and is also not subsidized, so the G1 is a better deal. The Blackberry Storm is probably a better overall device than the G1, but it’s exclusive to Verizon so I would have had to pay a $200 early termination fee to get it. These phones are mainly for fun, so paying a fee to leave a carrier I basically like makes it all too serious.

Obama’s CTO short list

According to Business Week, Obama’s CTO will be one of these guys:

Among the candidates who would be considered for the job, say Washington insiders, are Vint Cerf, Google’s (GOOG) “chief internet evangelist,” who is often cited as one of the fathers of the Internet; Microsoft (MSFT) chief executive officer Steve Ballmer; Amazon (AMZN) CEO Jeffrey Bezos; and Ed Felten, a prominent professor of computer science and public affairs at Princeton University.

I can’t see Ballmer taking this job when he’s having so much fun, but I imagine any of the others would bite. Trouble is, they’re mostly business guys rather than tech guys, so it’s not an elite group. I’d have to go with Felten, for the fact that he has actual technical knowledge as well as a blog. I’ve debated him about net neutrality, of course.

Europe’s Choice

Andrew Orlowski explains the state of Internet regulation in both the US and Europe in The Register:

For almost twenty years, internet engineers have persuaded regulators not to intervene in this network of networks, and phenomenal growth has been the result. Because data revenues boomed, telecoms companies which had initially regarded packet data networking with hostility, preferred to sit back and enjoy the returns.

But that’s changing fast. Two months ago the US regulator, which scrupulously monitors public radio for profanity, and which spent months investigating a glimpse of Janet Jackson’s nipples, decided it needed to start writing technical mandates. And so off it went.

Unnoticed by almost everyone, so did the EU.

“It’s the revenge of the unemployed Telecomms Regulator”, one seasoned observer in Brussels told us this week. “The internet really put them out of business. Now they’re back.”

The Internet is indeed the most lightly-regulated network going, and it’s the only one in a constant state of improvement. Inappropriate regulation – treating the Internet like a telecom network – is the only way to put an end to that cycle.

A Turgid Tale of Net Neutrality

An article by Glenn Derene on net neutrality in Popular Mechanics is getting a lot of attention this week. It attempts to define net neutrality – always a perilous task – and to contrast the positions of our two presidential candidates on it:

…there’s no accepted definition of network neutrality itself. It is, in fact, more of a networking philosophy than a defined political position. A pure “neutral” network is one that would treat all content that traveled across it equally. No one data packet would be prioritized above another. Image files, audio files, a request from a consumer for a web page—all would be blindly routed from one location to another, and the network would neither know nor care what kind of data was encompassed in each packet. For most but not all kinds of files, that’s how it works now.

When they were created, TCP/IP protocols were not intended to discriminate routinely between packets of data. The idea was to maintain a “best effort” network, one that moved packets from place to place in an effort to maximize overall throughput. But the protocols did allow for discrimination when it was needed. “Even the very first design for IP, back in 1980, had a “type of service” field, intended to provide different levels of traffic priority in a military setting,” says John Wroclawski, the director of the computer networks division at the University of Southern California’s revered Information Sciences Institute.

“The big question is not ‘can you do this technically,'” Wroclawski says. “It’s ‘how do you decide who to favor?'” In today’s multimedia-saturated Internet, streams of time-sensitive voice and video data are routinely prioritized over nonsequential data transfers such as Web pages. If one bit doesn’t follow another in a videoconference, for instance, the stream falls apart. For the most part, even proponents of net neutrality are okay with that level of discrimination.

This passage illustrates the problem with the kind of hardcore neutrality that was bandied about prior to the introduction of bills in the Congress to mandate fair treatment of network traffic, and it misses the point of a non-discriminatory network. There’s nothing wrong with prioritizing packets according to application requirements, and it would be silly not to do so. That’s one of the reasons that the IP header has a TOS field, as the quote indicates. The problem of who sets the TOS (actually DSCP in the current iteration of IP) is also not at all troubling – the application does it. So a proper definition of net neutrality is to treat all packets with the same requirements the same way, regardless of their origin, destination, or the application that generated them. And in fact that’s what the bills required: they didn’t ban QoS, they banned fees for QoS, embracing a flat-rate billing model.
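For the curious, setting the TOS/DSCP byte really is done by the application, and it’s a one-liner. This sketch marks a UDP socket with the Expedited Forwarding code point, the standard DiffServ value for VoIP-class traffic; whether any network along the path honors the marking is, of course, the whole policy question:

```python
import socket

# DSCP "Expedited Forwarding" (EF, decimal 46) is the standard code point
# requested for latency-sensitive traffic such as VoIP. The DSCP occupies
# the upper six bits of the old IPv4 TOS byte, hence the shift by two.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Every datagram sent on this socket now carries the EF marking in its
# IP header; routers are free to honor or ignore it.
```

That’s the whole mechanism: the sender asks, the network decides.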

And that’s a problem, of course. If we’re going to allow carriers to work with users to prioritize packets, which we should, we should also allow them to create service plans for this kind of treatment, and it should be legal for the carriers to sell QoS services to third parties (think VoIP providers) that would take effect when the consumer hasn’t purchased any QoS services. The problem of applications that set all their packets to highest priority is controlled by establishing account quotas for volume-per-minute (or less) for each priority. If you use up your quota for high-priority traffic with BitTorrent, your Skype is going to suck. And you have to deal with that. If your applications don’t signal their priority requirements to the network – and most don’t – you can allow your ISP to classify them for you, as they’ll be happy to do.
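Here’s a minimal sketch of that quota idea. The class names and byte budgets are hypothetical numbers of my own invention, but the mechanism is the one described above: spend your high-priority budget and further traffic gets demoted, not dropped:

```python
# Hypothetical sketch of a per-account priority quota: each account gets
# a byte budget per interval for high-priority traffic; once it's spent,
# subsequent packets are demoted to best-effort service.

class PriorityQuota:
    def __init__(self, high_priority_bytes_per_minute: int):
        self.budget = high_priority_bytes_per_minute

    def classify(self, requested_priority: str, size: int) -> str:
        if requested_priority == "high" and self.budget >= size:
            self.budget -= size
            return "high"
        return "best-effort"   # quota exhausted, or never requested

acct = PriorityQuota(high_priority_bytes_per_minute=10_000)
print(acct.classify("high", 8_000))   # -> high
print(acct.classify("high", 8_000))   # -> best-effort (budget spent)
```

Run BitTorrent at high priority until the budget is gone, and your Skype call gets best-effort treatment: exactly the incentive structure described above.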

The flat-rate billing model that’s insensitive to load is a primary reason for the American controversy over net neutrality. Countries like Australia that have volume-metered pricing simply don’t have this issue, as their ISP networks aren’t magnets for P2P file distribution. Net Neutrality is indeed an American problem. And moreover, there’s no particular need to cap data volume as long as the carrier is free to deprioritize bulk data. The postal service does this to very good effect, after all.

The fundamental dilemma behind the net neutrality controversy is the desire of activists to have it both ways: they want a QoS guarantee on the one hand, but no prioritization on the other. We can certainly do that in network engineering, but not without substantial changes in the network protocols and routers in use today. What we can do quite practically is provide high-confidence QoS for small amounts of data, sufficient for a single VoIP or gaming session over the typical DSL or wireless broadband link, and that should be sufficient for the time being.

If we can’t prioritize, then it follows that the only way to control network congestion is with crude caps and user-based QoS schemes that have unfortunate side-effects. And nobody really wants that, once they understand what it means.

Both candidates are clueless on the issue, so I don’t see it as determinative of which to vote for.

Ultra-cool Computers

My next personal computer is going to be an ultra-portable tablet. I’ve never bought a laptop of my own, since my employers tend to shower me with them, and they’ve had so many drawbacks I couldn’t see any point in shelling out for one of my own. But recent research shows that we’re officially in the Dynabook Era with great gear like the Dell Latitude XT Tablet, the Lenovo X200 Tablet, the Asus R1E, Fujitsu LifeBook T5010, and the recently-announced HP Elitebook 2730p.

What these babies have in common is light weight, sharp but small screens, long battery life, a wealth of connectivity features, and other goodies like web cams and mikes, GPS locators, touch-sensitive displays, and handwriting recognition. They’re more like Smartphones than traditional PCs, but without all the annoying limitations that make Blackberries better in the demo than in real life. Unlike pure slate computers that lack keyboards, they have swivel-mounted screens that can be twisted and folded to cover the laptop’s clamshell base, so you have a touch-sensitive display for when you need to jot notes or draw, and a regular keyboard for high-volume typing.

Each excels in some areas. The Dell seems to have the clearest screen and the best handwriting recognition, since it uses a capacitive touchscreen. It draws a bit more power, since capacitive touch keeps an electric field active across the screen, where the more common inductive digitizer relies on a magnetic stylus to alert the touch sensor that something’s happening. The stylus-activated system rules out using your finger as a pointing device, which is also unfortunate, and puts a thicker overlay on the screen than the Dell’s. The iPhone uses a capacitive touch system.

Dell also has a nice graphics chip with some dedicated memory, which significantly outperforms the shared-memory systems that are commonplace. But Dell’s CPU options top out at the low end of the scale: the 1.2 GHz Intel U7600, an ultra-low-voltage 65nm dual-core part, is as good as it gets. This is apparently a soldered-in part that can’t be upgraded. Dell is also super-expensive.

The Lenovo is too new for much in the way of evaluation, but it has very nice specs and a great pedigree. While the XT Tablet is Dell’s first convertible, the X200 is Lenovo’s third or so, and the details show. If they would only stop white-listing their own wireless cards in the BIOS they’d be at the top of my list. The X200 Tablet uses a more substantial and higher-power Intel CPU, around 1.8 GHz, which makes it considerably faster than* the Dell. They also use Intel’s Centrino graphics, and suffer a bit for it, but that’s a classic engineering tradeoff. Lenovo has an amazing array of connectivity choices, including the UWB system AKA Wireless USB. With an internal Wireless WAN card with GPS, internal Wi-Fi (including 3×3 11n), Bluetooth, and Wireless USB, this system has five kinds of wireless without a visible antenna, which is awfully sharp.

The Fujitsu and Asus convertibles have larger screens – 13.3 in. vs. 12.1 for the Dell and the Lenovo – and add a pound or so of weight. Asus is concentrating on their netbooks these days, and doesn’t seem to be serious about keeping up to date, while the Fujitsu makes some strange choices with noisy fans and heat.

To be avoided are the older HPs using the AMD chipset. AMD can’t keep up with Intel on power efficiency, so convertible systems that use their parts are only portable between one wall socket and another.

None of these little Dynabooks has made me swipe a card yet, but the collections of technology they represent say a lot about the future of networking. With all that wireless, the obligatory Gigabit Ethernet looks like an afterthought.

Which brings me to my point, gentle readers. What’s your experience with Wireless WANs in terms of service: between AT&T, Sprint, and Verizon, who’s got it going on? I get my cell phone service from friendly old T-Mobile, but they’re not a player in the 3G world. I like Verizon’s tiered pricing, as I doubt I’ll use 5 GB/mo of random wireless, as close as I tend to be to Wi-Fi hotspots, but it seems like a much nicer fall-back than using my Blackberry Curve as a modem.

For a nice demonstration of the XT’s capacitive touch screen in comparison to the more primitive Lenovo, see Gotta Be Mobile.

*Edited. The X200 non-tablet has a faster processor than the X200 Tablet. The tablet sucks power out of the system, and Lenovo had to de-tune the CPU to provide it.

Skype defense not persuasive

Now that the whole world knows that Skype’s Chinese partner, TOM, has been censoring IMs and building a database of forbidden speakers for the government of China, Skype President Josh Silverman had to respond:

In April 2006, Skype publicly disclosed that TOM operated a text filter that blocked certain words in chat messages, and it also said that if the message is found unsuitable for displaying, it is simply discarded and not displayed or transmitted anywhere. It was our understanding that it was not TOM’s protocol to upload and store chat messages with certain keywords, and we are now inquiring with TOM to find out why the protocol changed.

We also learned yesterday about the existence of a security breach that made it possible for people to gain access to those stored messages on TOM’s servers. We were very concerned to learn about both issues and after we urgently addressed this situation with TOM, they fixed the security breach. In addition, we are currently addressing the wider issue of the uploading and storage of certain messages with TOM.

I don’t know what’s more disturbing, the fact that one of the most vocal net neutrality advocates is colluding with the government of China to finger dissidents, or the fact that they didn’t know they were collaborating. Frankly, this corporate defense raises more questions than it answers.

There are always going to be countries where the local laws are antithetical to post-enlightenment values. I think the correct response to such situations is to just say “no” and go somewhere else. For particularly compelling services, such as Google and Skype, the fact that the foreign service provider can’t do business in the fascist state then becomes a pressure point for change. The companies that collaborate with China are selling out their futures to fund the current quarter. How much money does Skype need to make, anyhow?

FCC fills empty job

Kevin Martin’s FCC has hired a new chief technologist, Jon Peha:

Federal Communications Commission chairman Kevin Martin named John Peha chief technologist, the senior adviser post at the commission on technology issues, based out of the Office of Strategic Planning and Policy Analysis.

I’m a bit disappointed. Peha is the guy who delivered strong testimony denouncing the Comcast management of BitTorrent without bothering to study BitTorrent’s use of TCP connections. His testimony was substantially wrong on a factual basis. Perhaps Peha can persuade me that he means well, but his performance so far has not been encouraging.

UPDATE: What am I talking about? Well take a look at the comments Peha filed in the Comcast matter, which are on-line at the FCC’s web site. He understands what’s at stake:

In the debate over network neutrality, both sides can make points that deserve serious consideration from policymakers. Such consideration requires clear and accurate statements of the facts, to say nothing of the broader issues at stake. Unfortunately, the public debate has often been filled with hyperbole and spin from advocates on both sides. Such rhetoric, combined with issues of technical complexity and subtlety, has made it unnecessarily difficult for policymakers to make informed decisions.

So what did he do? He misrepresented the facts and engaged in advocacy spin, to wit:

Comcast sends Device A a reset packet, with parameters set such that Device A will believe the reset is coming from Device B. Device A is therefore led to believe (incorrectly) that Device B is unwilling or unable to continue the session. The same may be occurring at Device B. Thus, the devices determine that the session must be ended, and no further packets can be sent.

It is factually incorrect to say that the process described above merely delays P2P traffic.

Bzzzttt, wrong answer. BitTorrent “sessions” consist of multiple TCP connections, so terminating one, or two, or any number less than the total number of TCP connections a given instance of BitTorrent can use at any particular time is in fact “delaying” instead of “blocking.” Peha makes the assumption that BitTorrent “sessions” are the same as TCP “sessions” and they clearly aren’t. Most of what makes BitTorrent troublesome, in fact, is the large number of TCP “sessions” it uses. It’s particularly outrageous that Peha charges Comcast with misrepresentation and then goes on to misrepresent in his own right.
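A toy model makes the delaying-versus-blocking distinction concrete. The connection counts and per-connection rates below are illustrative assumptions, not measurements, but the shape of the result isn’t: lose some TCP connections and the transfer finishes later, not never:

```python
# Toy model of the point above: a BitTorrent "session" is many TCP
# connections to different peers, so resetting some of them slows the
# transfer (delay) rather than stopping it (blocking).
# All rates here are illustrative, not measured.

def transfer_time_s(file_mb: float, conns: int, mb_per_s_per_conn: float) -> float:
    """Seconds to fetch a file over `conns` parallel connections."""
    return file_mb / (conns * mb_per_s_per_conn)

full = transfer_time_s(700, conns=20, mb_per_s_per_conn=0.05)       # 20 peers
throttled = transfer_time_s(700, conns=12, mb_per_s_per_conn=0.05)  # 8 reset

print(full, throttled)  # the transfer still finishes, just later
```

The throttled transfer is slower by the ratio of surviving connections, which is precisely "delaying" rather than "blocking."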

He then goes on to contradict himself and admit that it’s really “delaying” after all:

After the flow of P2P from a given sender and recipient is blocked or terminated, the recipient is likely to seek some other source for the content. If the content is extremely popular, there are many options available. Consequently, this leads to a small delay, somewhat decreasing the rate at which this recipient can gather content.

So which is it, Dr. Peha, “blocking” or “delaying?” He can’t even make up his own mind. He then goes on to whack Comcast for targeting P2P:

Comcast has elected to employ mechanisms that degrade service for a particular application, i.e. P2P, instead of relying only on congestion control mechanisms that deal with traffic of all application types. Central to their justification of this approach has been the assertion that it is specifically P2P that has an adverse impact on other traffic. This assertion is untrue.

…and he goes on to talk about blue cars and red cars, a lot of nonsensical fluff. The fact remains that P2P is the only application with such a great ability to consume bandwidth on a non-stop basis as to degrade the Internet experience of web browsing, and that’s what Comcast was trying to protect.

And more significantly, Peha fails to grasp the fact that applications are not created equal in terms of their tolerance for delay. P2P has no particular time constraints when running as a seeder (serving files to the rest of the Internet) but interactive applications like Web browsing and VoIP have very little tolerance for delay. And now we have a standard in place that requires ISPs to ignore these technical distinctions, thanks largely to the inept analysis of people like Peha.

In additional remarks he confesses his ignorance of network management techniques generally, and compares the Comcast method to a “man in the middle attack.” If that’s what he thinks, really and truly, he’s seriously under-informed. A “man in the middle attack” is a means of breaking into a system by stealing passwords. What system did Comcast break into, and what password did they use to do so?

In Kevin Martin’s FCC this outlandish foolishness is a job interview. Peha is smarter than Sarah Palin, but he’s no Dave Farber. Surely the FCC can do better than to employ an advocate in the position that requires depth of technical knowledge and a commitment to impartiality. Kevin Martin has failed the American people again.

A more suitable candidate exists: Just a Girl in Short Shorts Talking about Whatever:

Comcast was regulating the download speeds of peer to peer networks, such as BitTorrent. I like to pirate movies as much as next cheapskate, but I do not think it is necessary that it be given equal priority with VoIP (voice over Internet).

That’s the level of insight we need in a Chief Technologist.

Comcast was right, FCC was wrong

A fellow named Paul Korzeniowski has written a very good, concise piece on the Comcast action at the FCC for Forbes, Feds And Internet Service Providers Don’t Mix. He manages to describe the controversy in clear and unemotional language, which contrasts sharply with the neutralists who constantly use emotionally-charged terms such as “blocking,” “Deep Packet Inspection,” “forgery,” and “monopoly” to describe their discomfort.

What Comcast actually did, and still does today, is simply limit the amount of free upstream bandwidth P2P servers can use to 50% of capacity. This isn’t “blocking” or “censorship,” it’s rational network management:

Cable giant Comcast is at the center of a very important controversy for small businesses. In the summer of 2007, it became clear that the carrier was putting restrictions on how much information selected customers could transmit. BitTorrent, a P2P application-sharing company, had been using lots of bandwidth, so the ISP throttled back some its transmissions.

“Throttled back some of its transmissions” is correct. Comcast doesn’t throttle back P2P downloads, which you can prove to yourself if you happen to have a Comcast account: download a large file using P2P and notice that it moves faster than it possibly can on any flavor of DSL. My recent tests with Linux have files downloading at 16 Mb/s, the advertised maximum for my account.

Korzeniowski then explains the facts of life:

The reality is that all ISPs are overbooked–they have sold more bandwidth than they can support.

This overbooking has been an issue since the old Public Switched Telephone Network (PSTN) days. In that situation, individuals would receive a busy signal when the network was overloaded. Because the Internet has an antithetical design, ISPs don’t have a busy signal option.

ISPs actually do have a “busy signal option”: it’s the Reset packet that Comcast uses to limit active upstream sessions. But neutrality regulationists call it “forgery” and abhor it.

“Overbooking” bandwidth isn’t a bad thing, and in fact it’s central to the economics of packet-switching. The PSTN forces each caller into a bandwidth ghetto where he is allocated a small chunk of bandwidth, 4 kHz, regardless of how much he currently requires. If you’re on the phone and have to set it down to check on your chili, you have 4 kHz. If you’re blasting files over a modem connection, you have 4 kHz. It doesn’t matter how many other callers are on-line and what they’re doing: you each get 4 kHz. That’s the law.

But packet switching, of which the Internet is an example, allows your bandwidth allocation to float depending on what you need to do and what other people are doing. You share network facilities with your neighbors (and this is true whether you use DSL or cable, you just share at different points on these technologies), so you can get a larger chunk of bandwidth when they’re idle than when they’re banging the net hard.

Overbooking allows you to use very large amounts of bandwidth for short periods of time, which is ideal for web surfing: you click on a link, you get a ton of graphics sent to your computer. While you’re reading, your neighbors get to use the bandwidth that would be wasted if you had PSTN connections. It works for everybody, most of the time. It works so well, in fact, that ISPs haven’t bothered to meter actual bandwidth use: the resource is so abundant, and the demands so few (especially in the upstream direction, where your clicks move) that there’s never been a need to control or meter it.
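You can see why overbooking works with a quick simulation. The subscriber count, duty cycle, and link size below are made-up numbers chosen for illustration, but the statistical-multiplexing effect they demonstrate is real:

```python
import random

# Rough illustration of overbooking: 50 subscribers, each "active"
# (fetching a page) only 5% of the time at their 16 Mb/s peak rate.
# The shared link is provisioned at a quarter of the 800 Mb/s sum of
# peaks, yet it is almost never saturated.

random.seed(1)
SUBSCRIBERS, PEAK_MBPS, DUTY_CYCLE = 50, 16, 0.05
LINK_MBPS = 200

overloads = 0
TRIALS = 10_000
for _ in range(TRIALS):  # independent snapshots of the neighborhood
    demand = sum(PEAK_MBPS for _ in range(SUBSCRIBERS)
                 if random.random() < DUTY_CYCLE)
    if demand > LINK_MBPS:
        overloads += 1

print(overloads / TRIALS)  # fraction of snapshots where the link saturates
```

With bursty, interactive traffic the link almost never fills. Replace the 5% duty cycle with a P2P seeder’s 100% and the arithmetic collapses, which is the whole story of the next paragraph.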

Enter P2P, a method of moving large files across networks that relies on free upstream bandwidth. Now the abundant broadband upstream is constantly occupied: instead of an interactive application that sends a click now, a click five seconds from now, and a click a minute from now, you’ve got applications that constantly stream traffic up the wire, to the detriment of the others in the neighborhood. Something has to give.

One approach is to cap upstream traffic:

However, the “all you can eat” model may no longer be viable–a change the government seems to be ignoring. ISPs could use the open salad bar model when users were mainly transmitting small textual data. But with video becoming more common, users increasingly transmit very large high-definition files.

In response, Comcast plans to cap customer usage at 250 GB of data each month. That translates to about 50 million e-mails, 62,500 songs, 125 standard-definition movies, or 25,000 high-resolution digital photos. That amount would seem to meet the needs of most customers, including small and midsize businesses. The only folks affected would be companies such as BitTorrent, that have based their business on the “all you can eat” model, and hackers, who routinely spew out tons of unwanted solicitations and malware.

Capping has its critics, mostly the same people who object to traffic management as well:

For whatever reason, some believe ISPs should not be able to put any restrictions on the volume of information that any user transmits. That’s absurd. Per-bit and per-byte pricing models have long been used for data transmissions. In trying to build and sustain their businesses, carriers constantly balance their attractiveness and viability versus unlimited usage pricing models. By government decree, they no longer have that option. In effect, the FCC has decided to tell ISPs how to run their networks.

Capping frees up bandwidth for sharing by taking free bandwidth off the table for P2P. But it’s not a technically elegant approach. Humans respond to caps month-by-month, but networks experience congestion and overload millisecond-by-millisecond. So the sensible engineering approach is to manage traffic in pretty much the way that Comcast does it today: identify the bandwidth requirements of applications, and allocate bandwidth to those that need it the most, as we would with any scarce resource: grant transmission opportunities (that’s a technical term we use in network architecture) to highly interactive applications such as VoIP ahead of non-interactive applications such as HDTV file transfers. This is sound practice, but the FCC has now said it’s illegal. The FCC is anti-consumer.
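The grant-transmission-opportunities-by-class idea reduces, in its simplest form, to a strict-priority queue. A sketch, with made-up traffic classes and packet names:

```python
import heapq

# Sketch of "grant transmission opportunities to interactive traffic
# first": a strict-priority scheduler where lower class numbers transmit
# sooner, and arrival order breaks ties within a class.
PRIORITY = {"voip": 0, "web": 1, "bulk": 2}   # illustrative classes

queue = []
arrivals = [("bulk", "chunk1"), ("voip", "rtp1"),
            ("web", "page1"), ("voip", "rtp2")]
for seq, (app, pkt) in enumerate(arrivals):
    heapq.heappush(queue, (PRIORITY[app], seq, app, pkt))

# Drain the queue: VoIP packets go out first, bulk transfers last.
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # -> ['voip', 'voip', 'web', 'bulk']
```

A production scheduler would use weighted fairness and per-class quotas rather than starving the bulk class outright, but the principle that VoIP transmits ahead of a file transfer is exactly this.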

Net neutrality supporters have pressured the FCC because they believe cable companies are unfairly monopolizing the Internet access marketplace. This conveniently ignores a couple of factors. First, there is no Internet access monopoly. A small or midsize business can get access from cable companies, telcos or wireless suppliers. True, there are not 50 choices, as you might have when buying a new pair of pants, but there is a reason why so few companies compete in the Internet access arena–it’s not a great business.

In fact, net neutrality advocates have turned a blind eye to the history of the dot-com bubble. Internet access start-ups burned through more cash with fewer positive results than any market sector in memory–and perhaps ever. Providing Internet access requires a lot of capital for the network and support infrastructure, and there’s not a lot of money to be made when customers pay about $20 a month for unlimited access.

The alternative to application-sensitive traffic management is a crude user-based system that treats all of each user’s traffic the same. This means, for example, that your VoIP streams get the same service from your ISP as your web clicks and your file transfers. This is insane.

Each Internet user should be able to multitask. We should be allowed to share files with P2P or any other non-standard protocol of our choice at the same time that we’re video-chatting or surfing the web. The heavy-handed FCC ruling that all packets must be treated the same undermines the economics of packet switching and delays the day when the Internet will make the PSTN and the cable TV systems obsolete.

Comcast was right to take the ruling to the courts to get it overturned. ISPs should be allowed to deploy a traffic system that combines elements of the protocol-aware system currently in use at Comcast with the new “protocol-agnostic” system that’s under test, such that each customer has a quota for each class of traffic. This is sound network engineering, but the current state of law makes it illegal.

This is not good.

Cross-posted to CircleID.

UPDATE: See Adam Thierer’s comments on this article at Tech Lib.
