Obama’s CTO short list

According to Business Week, Obama’s CTO will be one of these guys:

Among the candidates who would be considered for the job, say Washington insiders, are Vint Cerf, Google’s (GOOG) “chief internet evangelist,” who is often cited as one of the fathers of the Internet; Microsoft (MSFT) chief executive officer Steve Ballmer; Amazon (AMZN) CEO Jeffrey Bezos; and Ed Felten, a prominent professor of computer science and public affairs at Princeton University.

I can’t see Ballmer taking this job when he’s having so much fun, but I imagine any of the others would bite. Trouble is, they’re mostly business guys rather than tech guys, so it’s not an elite group. I’d have to go with Felten, because he has actual technical knowledge as well as a blog. I’ve debated him about net neutrality, of course.


Europe’s Choice

Andrew Orlowski explains the state of Internet regulation in both the US and Europe in The Register:

For almost twenty years, internet engineers have persuaded regulators not to intervene in this network of networks, and phenomenal growth has been the result. Because data revenues boomed, telecoms companies which had initially regarded packet data networking with hostility, preferred to sit back and enjoy the returns.

But that’s changing fast. Two months ago the US regulator, which scrupulously monitors public radio for profanity, and which spent months investigating a glimpse of Janet Jackson’s nipples, decided it needed to start writing technical mandates. And so off it went.

Unnoticed by almost everyone, so did the EU.

“It’s the revenge of the unemployed Telecomms Regulator”, one seasoned observer in Brussels told us this week. “The internet really put them out of business. Now they’re back.”

The Internet is indeed the most lightly-regulated network going, and it’s the only one in a constant state of improvement. Inappropriate regulation – treating the Internet like a telecom network – is the only way to put an end to that cycle.

A Turgid Tale of Net Neutrality

An article by Glenn Derene on net neutrality in Popular Mechanics is getting a lot of attention this week. It attempts to define net neutrality – always a perilous task – and to contrast the positions of our two presidential candidates on it:

…there’s no accepted definition of network neutrality itself. It is, in fact, more of a networking philosophy than a defined political position. A pure “neutral” network is one that would treat all content that traveled across it equally. No one data packet would be prioritized above another. Image files, audio files, a request from a consumer for a web page—all would be blindly routed from one location to another, and the network would neither know nor care what kind of data was encompassed in each packet. For most but not all kinds of files, that’s how it works now.

When they were created, TCP/IP protocols were not intended to discriminate routinely between packets of data. The idea was to maintain a “best effort” network, one that moved packets from place to place in an effort to maximize overall throughput. But the protocols did allow for discrimination when it was needed. “Even the very first design for IP, back in 1980, had a “type of service” field, intended to provide different levels of traffic priority in a military setting,” says John Wroclawski, the director of the computer networks division at the University of Southern California’s revered Information Sciences Institute.

“The big question is not ‘can you do this technically,'” Wroclawski says. “It’s ‘how do you decide who to favor?'” In today’s multimedia-saturated Internet, streams of time-sensitive voice and video data are routinely prioritized over nonsequential data transfers such as Web pages. If one bit doesn’t follow another in a videoconference, for instance, the stream falls apart. For the most part, even proponents of net neutrality are okay with that level of discrimination.

This passage illustrates the problem with the kind of hardcore neutrality that was bandied about before bills mandating fair treatment of network traffic were introduced in Congress, and it misses the point of a non-discriminatory network. There’s nothing wrong with prioritizing packets according to application requirements, and it would be silly not to do so. That’s one of the reasons the IP header has a TOS field, as the quote indicates. The question of who sets the TOS (actually the DSCP in the current iteration of IP) isn’t troubling either: the application does it. So a proper definition of net neutrality is to treat all packets with the same requirements the same way, regardless of their origin, destination, or the application that generated them. And in fact that’s what the bills required: they didn’t ban QoS, they banned fees for QoS, embracing a flat-rate billing model.
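To make the “application does it” point concrete, here’s a minimal sketch, assuming a Linux host where Python exposes the IP_TOS socket option, of a hypothetical VoIP sender marking its own packets with the Expedited Forwarding code point. The address and payload are made up for illustration.

```python
# Minimal sketch: the application, not the carrier, marks its own packets.
# Assumes Linux, where Python's socket module exposes IP_TOS.
import socket

EF_DSCP = 46               # Expedited Forwarding code point (RFC 3246)
TOS_BYTE = EF_DSCP << 2    # DSCP occupies the upper six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Every datagram sent on this socket now carries the EF marking. Routers that
# honor DSCP can queue it ahead of bulk traffic; routers that don't just ignore it.
sock.sendto(b"20 ms of audio", ("192.0.2.10", 5004))  # 192.0.2.10 is a documentation address
```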

And that’s a problem, of course. If we’re going to allow carriers to work with users to prioritize packets, which we should, we should also allow them to create service plans for this kind of treatment, and it should be legal for the carriers to sell QoS services to third parties (think VoIP providers) that would take effect when the consumer hasn’t purchased any QoS services. The problem of applications that mark all their packets highest priority is controlled by establishing account quotas for volume per minute (or per shorter interval) at each priority level, as sketched below. If you use up your quota for high-priority traffic with BitTorrent, your Skype is going to suck, and you have to deal with that. If your applications don’t signal their priority requirements to the network, and most don’t, you can allow your ISP to classify them for you, as they’ll be happy to do.
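Here’s a rough sketch of that quota idea, with made-up class names and numbers rather than any ISP’s actual policy: each account gets a per-minute byte budget at each priority level, and traffic that exceeds its budget is demoted to best effort.

```python
# Hypothetical per-priority volume quotas: exceed your budget at a given
# priority and the excess traffic falls back to best effort.
import time
from collections import defaultdict

QUOTA_BYTES_PER_MINUTE = {"voice": 1_000_000, "interactive": 20_000_000}

class PriorityQuota:
    def __init__(self):
        self.window_start = time.monotonic()
        self.used = defaultdict(int)

    def classify(self, requested_class, nbytes):
        # Reset the accounting window once a minute.
        now = time.monotonic()
        if now - self.window_start >= 60:
            self.window_start, self.used = now, defaultdict(int)

        budget = QUOTA_BYTES_PER_MINUTE.get(requested_class)
        if budget is None or self.used[requested_class] + nbytes > budget:
            return "best_effort"   # quota spent: BitTorrent at top priority starves your Skype
        self.used[requested_class] += nbytes
        return requested_class

quota = PriorityQuota()
print(quota.classify("voice", 8_000))       # within budget -> "voice"
print(quota.classify("voice", 2_000_000))   # over budget   -> "best_effort"
```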

The flat-rate billing model that’s insensitive to load is a primary reason the net neutrality controversy is an American one. Countries like Australia that have volume-metered pricing simply don’t have this issue, as their ISP networks aren’t magnets for P2P file distribution. Net neutrality is indeed an American problem. Moreover, there’s no particular need to cap data volume as long as the carrier is free to deprioritize bulk data. The postal service does this to very good effect, after all.

The fundamental dilemma behind the net neutrality controversy is the desire of activists to have it both ways: they want a QoS guarantee on the one hand, but no prioritization on the other. We can certainly do that in network engineering, but not without substantial changes in the network protocols and routers in use today. What we can do quite practically is provide high-confidence QoS for small amounts of data, sufficient for a single VoIP or gaming session over the typical DSL or wireless broadband link, and that should be sufficient for the time being.

If we can’t prioritize, then it follows that the only way to control network congestion is with crude caps and user-based QoS schemes that have unfortunate side effects. And nobody really wants that, once they understand what it means.

Both candidates are clueless on the issue, so I don’t see it as determinative of which to vote for.


Skype defense not persuasive

Now that the whole world knows that Skype’s Chinese partner, TOM, has been censoring IMs and building a database of forbidden speakers for the government of China, Skype President Josh Silverman had to respond:

In April 2006, Skype publicly disclosed that TOM operated a text filter that blocked certain words in chat messages, and it also said that if the message is found unsuitable for displaying, it is simply discarded and not displayed or transmitted anywhere. It was our understanding that it was not TOM’s protocol to upload and store chat messages with certain keywords, and we are now inquiring with TOM to find out why the protocol changed.

We also learned yesterday about the existence of a security breach that made it possible for people to gain access to those stored messages on TOM’s servers. We were very concerned to learn about both issues and after we urgently addressed this situation with TOM, they fixed the security breach. In addition, we are currently addressing the wider issue of the uploading and storage of certain messages with TOM.

I don’t know which is more disturbing: the fact that one of the most vocal net neutrality advocates is colluding with the government of China to finger dissidents, or the fact that they didn’t know they were collaborating. Frankly, this corporate defense raises more questions than it answers.

There are always going to be countries where the local laws are antithetical to post-Enlightenment values. I think the correct response to such situations is to just say “no” and go somewhere else. For particularly compelling services, such as Google and Skype, the fact that the foreign service provider can’t do business in the fascist state then becomes a pressure point for change. The companies that collaborate with China are selling out their futures to fund the current quarter. How much money does Skype need to make, anyhow?


FCC fills empty job

Kevin Martin’s FCC has hired a new chief technologist, Jon Peha:

Federal Communications Commission chairman Kevin Martin named John Peha chief technologist, the senior adviser post at the commission on technology issues, based out of the Office of Strategic Planning and Policy Analysis.

I’m a bit disappointed. Peha is the guy who delivered strong testimony denouncing Comcast’s management of BitTorrent without bothering to study BitTorrent’s use of TCP connections. His testimony was substantially wrong on the facts. Perhaps Peha can persuade me that he means well, but his performance so far has not been encouraging.

UPDATE: What am I talking about? Well, take a look at the comments Peha filed in the Comcast matter, which are on-line at the FCC’s web site. He understands what’s at stake:

In the debate over network neutrality, both sides can make points that deserve serious consideration from policymakers. Such consideration requires clear and accurate statements of the facts, to say nothing of the broader issues at stake. Unfortunately, the public debate has often been filled with hyperbole and spin from advocates on both sides. Such rhetoric, combined with issues of technical complexity and subtlety, has made it unnecessarily difficult for policymakers to make informed decisions.

So what did he do? He misrepresented the facts and engaged in advocacy spin, to wit:

Comcast sends Device A a reset packet, with parameters set such that Device A will believe the reset is coming from Device B. Device A is therefore led to believe (incorrectly) that Device B is unwilling or unable to continue the session. The same may be occurring at Device B. Thus, the devices determine that the session must be ended, and no further packets can be sent.

It is factually incorrect to say that the process described above merely delays P2P traffic.

Bzzzttt, wrong answer. BitTorrent “sessions” consist of multiple TCP connections, so terminating one, or two, or any number short of the total that a given instance of BitTorrent is using at any particular time is in fact “delaying” rather than “blocking,” as the toy example below illustrates. Peha assumes that BitTorrent “sessions” are the same as TCP “sessions,” and they clearly aren’t. Most of what makes BitTorrent troublesome, in fact, is the large number of TCP “sessions” it uses. It’s particularly outrageous that Peha charges Comcast with misrepresentation and then goes on to misrepresent in his own right.
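A toy illustration of the point, which is emphatically not real BitTorrent code: a swarm download spread over several peer connections survives the loss of any one of them, because the affected pieces simply go back into the work queue for the remaining peers.

```python
# Toy model of a swarm download: one TCP connection per peer, a shared queue
# of pieces. Resetting a single connection re-queues its pieces; the transfer
# is delayed, not blocked, as long as any peer connection survives.
from collections import deque

class SwarmDownload:
    def __init__(self, peers, pieces):
        self.connections = set(peers)   # one TCP connection per peer
        self.todo = deque(pieces)
        self.done = []

    def peer_reset(self, peer, inflight):
        """A reset (forged or not) kills one connection; its pieces are re-queued."""
        self.connections.discard(peer)
        self.todo.extendleft(inflight)

    def run(self):
        while self.todo and self.connections:
            self.done.append(self.todo.popleft())  # fetched from whichever peers remain
        return not self.todo                       # completes as long as any connection is left

swarm = SwarmDownload(peers={"A", "B", "C"}, pieces=range(100))
inflight_on_B = [swarm.todo.popleft(), swarm.todo.popleft()]  # pieces being fetched from peer B
swarm.peer_reset("B", inflight_on_B)    # an RST tears down B's connection mid-transfer
print(swarm.run())                      # True: the download still finishes, just a bit later
```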

He then goes on to contradict himself and admit that it’s really “delaying” after all:

After the flow of P2P from a given sender and recipient is blocked or terminated, the recipient is likely to seek some other source for the content. If the content is extremely popular, there are many options available. Consequently, this leads to a small delay, somewhat decreasing the rate at which this recipient can gather content.

So which is it, Dr. Peha, “blocking” or “delaying?” He can’t even make up his own mind. He then goes on to whack Comcast for targeting P2P:

Comcast has elected to employ mechanisms that degrade service for a particular application, i.e. P2P, instead of relying only on congestion control mechanisms that deal with traffic of all application types. Central to their justification of this approach has been the assertion that it is specifically P2P that has an adverse impact on other traffic. This assertion is untrue.

…and he goes on to talk about blue cars and red cars, a lot of nonsensical fluff. The fact remains that P2P is the only application with such a great ability to consume bandwidth on a non-stop basis as to degrade the Internet experience of web browsing, and that’s what Comcast was trying to protect.

And more significantly, Peha fails to grasp the fact that applications are not created equal in terms of their tolerance for delay. P2P has no particular time constraints when running as a seeder (serving files to the rest of the Internet) but interactive applications like Web browsing and VoIP have very little tolerance for delay. And now we have a standard in place that requires ISPs to ignore these technical distinctions, thanks largely to the inept analysis of people like Peha.

In additional remarks he confesses his ignorance of network management techniques generally, and compares the Comcast method to a “man in the middle attack.” If that’s what he thinks, really and truly, he’s seriously under-informed. A “man in the middle attack” is a means of breaking into a system by stealing passwords. What system did Comcast break into, and what password did they use to do so?

In Kevin Martin’s FCC this outlandish foolishness is a job interview. Peha is smarter than Sarah Palin, but he’s no Dave Farber. Surely the FCC can do better than to employ an advocate in a position that requires depth of technical knowledge and a commitment to impartiality. Kevin Martin has failed the American people again.

A more suitable candidate exists: Just a Girl in Short Shorts Talking about Whatever:

Comcast was regulating the download speeds of peer to peer networks, such as BitTorrent. I like to pirate movies as much as next cheapskate, but I do not think it is necessary that it be given equal priority with VoIP (voice over Internet).

That’s the level of insight we need in a Chief Technologist.


Comcast was right, FCC was wrong

A fellow named Paul Korzeniowski has written a very good, concise piece on the Comcast action at the FCC for Forbes, Feds And Internet Service Providers Don’t Mix. He manages to describe the controversy in clear and unemotional language, which contrasts sharply with the neutralists who constantly use emotionally-charged terms such as “blocking,” “Deep Packet Inspection,” “forgery,” and “monopoly” to describe their discomfort.

What Comcast actually did, and still does today, is simply limit the amount of free upstream bandwidth P2P servers can use to 50% of capacity. This isn’t “blocking” or “censorship,” it’s rational network management:

Cable giant Comcast is at the center of a very important controversy for small businesses. In the summer of 2007, it became clear that the carrier was putting restrictions on how much information selected customers could transmit. BitTorrent, a P2P application-sharing company, had been using lots of bandwidth, so the ISP throttled back some its transmissions.

“Throttled back some of its transmissions” is correct. Comcast doesn’t throttle back P2P downloads, which you can prove to yourself if you happen to have a Comcast account: download a large file using P2P and notice that it moves faster than it possibly can on any flavor of DSL. My recent tests with Linux have files downloading at 16 Mb/s, the advertised maximum for my account.

Korzeniowski then explains the facts of life:

The reality is that all ISPs are overbooked–they have sold more bandwidth than they can support.

This overbooking has been an issue since the old Public Switched Telephone Network (PSTN) days. In that situation, individuals would receive a busy signal when the network was overloaded. Because the Internet has an antithetical design, ISPs don’t have a busy signal option.

ISPs actually do have a “busy signal option”: it’s the Reset packet that Comcast uses to limit active upstream sessions. But neutrality regulationists call it “forgery” and abhor it.

“Overbooking” bandwidth isn’t a bad thing, and in fact it’s central to the economics of packet switching. The PSTN forces each caller into a bandwidth ghetto where he is allocated a small chunk of bandwidth, 4 kHz, regardless of how much he currently requires. If you’re on the phone and have to set it down to check on your chili, you have 4 kHz. If you’re blasting files over a modem connection, you have 4 kHz. It doesn’t matter how many other callers are on-line and what they’re doing: you each get 4 kHz. That’s the law.

But packet switching, of which the Internet is an example, allows your bandwidth allocation to float depending on what you need to do and what other people are doing. You share network facilities with your neighbors (and this is true whether you use DSL or cable, you just share at different points on these technologies), so you can get a larger chunk of bandwidth when they’re idle than when they’re banging the net hard.

Overbooking allows you to use very large amounts of bandwidth for short periods of time, which is ideal for web surfing: you click on a link, and a ton of graphics is sent to your computer. While you’re reading, your neighbors get to use the bandwidth that would be wasted if you had PSTN connections. It works for everybody, most of the time. It works so well, in fact, that ISPs haven’t bothered to meter actual bandwidth use: the resource is so abundant, and the demands on it so few (especially in the upstream direction, where your clicks move), that there’s never been a need to control or meter it.
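Some back-of-the-envelope arithmetic, with made-up numbers, shows why overbooking works for bursty web traffic and breaks down for always-on uploads:

```python
# Hypothetical numbers for a shared neighborhood uplink; none of these are
# real Comcast figures.
uplink_mbps = 1000        # capacity of the shared node
peak_rate_mbps = 16       # advertised per-subscriber rate
web_duty_cycle = 0.03     # fraction of time a typical web user is actually transferring

web_subscribers = int(uplink_mbps / (peak_rate_mbps * web_duty_cycle))
print(web_subscribers)    # ~2083 subscribers fit comfortably when traffic is bursty

# A P2P seeder runs at a duty cycle near 1.0, so the same link supports only
# uplink_mbps / peak_rate_mbps = 62 of them before everyone's service degrades.
print(int(uplink_mbps / peak_rate_mbps))
```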

Enter P2P, a method of moving large files across networks that relies on free upstream bandwidth. Now the abundant broadband upstream is constantly occupied: instead of an interactive application that sends a click now, a click five seconds from now, and a click a minute from now, you’ve got applications that constantly stream traffic up the wire, to the detriment of the others in the neighborhood. Something has to give.

One approach is to cap upstream traffic:

However, the “all you can eat” model may no longer be viable–a change the government seems to be ignoring. ISPs could use the open salad bar model when users were mainly transmitting small textual data. But with video becoming more common, users increasingly transmit very large high-definition files.

In response, Comcast plans to cap customer usage at 250 GB of data each month. That translates to about 50 million e-mails, 62,500 songs, 125 standard-definition movies, or 25,000 high-resolution digital photos. That amount would seem to meet the needs of most customers, including small and midsize businesses. The only folks affected would be companies such as BitTorrent, that have based their business on the “all you can eat” model, and hackers, who routinely spew out tons of unwanted solicitations and malware.

Capping has its critics, mostly the same people who object to traffic management as well:

For whatever reason, some believe ISPs should not be able to put any restrictions on the volume of information that any user transmits. That’s absurd. Per-bit and per-byte pricing models have long been used for data transmissions. In trying to build and sustain their businesses, carriers constantly balance their attractiveness and viability versus unlimited usage pricing models. By government decree, they no longer have that option. In effect, the FCC has decided to tell ISPs how to run their networks.

Capping frees up bandwidth for sharing by taking free bandwidth off the table for P2P. But it’s not a technically elegant approach. Humans respond to caps month by month, but networks experience congestion and overload millisecond by millisecond. So the sensible engineering approach is to manage traffic in pretty much the way Comcast does it today: identify the bandwidth requirements of applications and allocate bandwidth to those that need it most, as we would with any scarce resource, granting transmission opportunities (that’s a technical term we use in network architecture) to highly interactive applications such as VoIP ahead of non-interactive applications such as HDTV file transfers, along the lines sketched below. This is sound practice, but the FCC has now said it’s illegal. The FCC is anti-consumer.
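Here’s a minimal sketch of what “granting transmission opportunities” to interactive traffic first looks like, assuming just two made-up traffic classes; real equipment uses weighted variants so bulk traffic is never starved outright.

```python
# Strict-priority scheduler sketch: each transmission opportunity goes to
# interactive traffic if any is waiting, otherwise to bulk traffic.
from collections import deque

class PriorityScheduler:
    def __init__(self):
        self.queues = {"interactive": deque(), "bulk": deque()}

    def enqueue(self, klass, packet):
        self.queues[klass].append(packet)

    def next_packet(self):
        for klass in ("interactive", "bulk"):
            if self.queues[klass]:
                return self.queues[klass].popleft()
        return None   # nothing waiting

sched = PriorityScheduler()
sched.enqueue("bulk", "HDTV file chunk")
sched.enqueue("interactive", "VoIP frame")
print(sched.next_packet())   # "VoIP frame" transmits first even though it arrived second
```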

Net neutrality supporters have pressured the FCC because they believe cable companies are unfairly monopolizing the Internet access marketplace. This conveniently ignores a couple of factors. First, there is no Internet access monopoly. A small or midsize business can get access from cable companies, telcos or wireless suppliers. True, there are not 50 choices, as you might have when buying a new pair of pants, but there is a reason why so few companies compete in the Internet access arena–it’s not a great business.

In fact, net neutrality advocates have turned a blind eye to the history of the dot-com bubble. Internet access start-ups burned through more cash with fewer positive results than any market sector in memory–and perhaps ever. Providing Internet access requires a lot of capital for the network and support infrastructure, and there’s not a lot of money to be made when customers pay about $20 a month for unlimited access.

The alternative to application-sensitive traffic management is a crude user-based system that treats all of each user’s traffic the same. This means, for example, that your VoIP streams get the same service from your ISP as your web clicks and your file transfers. This is insane.

Each Internet user should be able to multitask. We should be allowed to share files with P2P or any other non-standard protocol of our choice at the same time that we’re video-chatting or surfing the web. The heavy-handed FCC ruling that all packets must be treated the same undermines the economics of packet switching and delays the day when the Internet will make the PSTN and the cable TV systems obsolete.

Comcast was right to take the ruling to the courts to get it overturned. ISPs should be allowed to deploy a traffic-management system that combines elements of the protocol-aware system currently in use at Comcast with the new “protocol-agnostic” system that’s under test, such that each customer has a quota for each class of traffic. This is sound network engineering, but the current state of the law makes it illegal.

This is not good.

Cross-posted to CircleID.

UPDATE: See Adam Thierer’s comments on this article at Tech Lib.



Google’s Telephony Patent Application not Novel

Google has apparently filed an application for a system that allows bandwidth providers to bid on phone calls:

Google’s patent is called “Flexible Communication Systems and Methods” and the abstract says:

“A method of initiating a telecommunication session for a communication device include submitting to one or more telecommunication carriers a proposal for a telecommunication session, receiving from at least one of the one or more of telecommunication carriers a bid to carry the telecommunications session, and automatically selecting one of the telecommunications carriers from the carriers submitting a bid, and initiating the telecommunication session through the selected telecommunication carrier.”

Read the full patent here

The thing I find interesting about this is that I invented a similar technique in 1997, motivated by the desire to get bandwidth-on-demand for video conferences. If this is granted, it certainly won’t survive a court challenge.

I’ll post some details on my invention, which was never patented, shortly.


Why I don’t like One Web Day

Today is OneWebDay, the annual exercise in promoting the World Wide Web and touting its many benefits. Each year the event has a theme, and this year’s has something to do with the American election, which is a fine, if somewhat parochial, issue for a global event.

OWD is the brainchild of law professor Susan Crawford, one of the more passionate advocates of a stupid Internet (their expression) in which ISPs and Internet wholesalers have to treat all packets the same way. While Crawford is sincere, I think the exercise is misguided.

There is more to the Internet than the Web: the Internet is a general-purpose network that needs to carry real-time communications such as VoIP and video chat alongside Web traffic, P2P, and other kinds of large file transfer systems.

The call for a monolithic traffic handling and regulatory system comes from the misperception that all forms of traffic look and act like web traffic. This is clearly not the case, as we’ve argued until we’re blue in the face on this blog and in print.

One Web Day privileges web use over these other equally important uses of the Internet, and reinforces the myth that a dumb Internet is essential to the economy, politics, freedom, and the like. In fact, a functional network forms the basis of all human uses, for good and for ill.

Next year I’d like to see a “One Internet Day” that touts the projects that aim to improve the Internet. I’d make a sign and go to a rally for that. But “One Web Day” doesn’t do it for me.


Secret laws are not law

While looking for the essence of Lessig’s “code is law” formulation, I happened on this little gem:

If there is one thing clear about the value we demand of East Coast Code, it is transparency. Secret laws are not law. And if there is one thing clear about the recent panic about privacy, it is that much of the anxiety was about the secrets hidden within closed code. Closed code hides its systems of control; open code can’t. Any encryption or identification system built into open code is transparent to those who can read the code, just as laws are transparent to those who can read Congress’ code – lawyers.

(“East Coast Code” means laws and government regulations.) Kinda makes you wonder why Lessig wasn’t critical of the rabbit-out-of-the-hat regulations the FCC imposed on Comcast.

Oh well.
