Obama Desperate

In a desperate attempt to find an issue that might excite Democratic primary voters enough to overlook his youth and inexperience, Obama pledges Net Neutrality, Ewok Safety (The Register)

Asked whether he’d “re-instate Net Neutrality” as “the Law of the Land”, trailing Presidential Candidate Barack Obama told an audience in Cedar Rapids, Iowa that yes, he would.

He also said he’d protect Ewok villages everywhere, and hoped that Tony Soprano had survived the non-existent bloodbath at the conclusion of The Sopranos.

(So we made the last two up – but they wouldn’t have been any more silly than what the Presidential Candidate really said.)

Andrew’s in fine form, underscoring the fundamental deception in the network neutrality debate: there’s never been a network neutrality law, so there’s no question of “restoring” it.

The Internet’s Big Idea

Because of current events, it strikes me that we need to review the fundamental idea that the Internet was based on, packet switching. This goes back to the dawn of time in the 1960s, before any corporations were involved, and is the pure essence of the thing. Only by understanding the basic idea can we see who’s true to it today and who isn’t.

Packet switching is not a hard notion to grasp, as it involves the spirit of cooperation, a commons, and mutual benefit. Recall that communications networks of the earlier sort allocated bandwidth strictly. On the telephone network you always got the same slice of bandwidth, neither more nor less. On some rare occasions like Mother’s Day you couldn’t make a call right away, but for all practical purposes it was always there and always the same.

This isn’t a very efficient way to allocate bandwidth, however, because much of it goes to waste. When you’re on a call, you consume just as much bandwidth when you’re speaking as when you’re not, and a great deal of bandwidth is idle for most of the day because it’s simply a reserve for peak calling times. So the designers of the early Internet – it was called ARPANET back then – wondered what would happen if they built a network where bandwidth was a common pool that each user would draw from when he needed it, as he needed it, instead of being strictly divided in the old-fashioned way. In this scheme, during periods of low usage, each user would get tons of bandwidth so the network would appear to be really, really fast, and during periods of high demand it would partition up fairly just like the phone network, or so it seemed. So they launched this great experiment to see what had to be done to make a network that would scale up in performance under light load and scale down to fairness under heavy load. The method is called “packet switching” to differentiate it from the “circuit switching” technology in the phone network, and the ARPANET became the Internet in its second incarnation of protocols.
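
For the code-inclined, the payoff of the Big Idea is easy to see in a toy simulation. This little Python sketch (all the numbers are invented for illustration) compares a circuit-switched link, where each of ten users owns a fixed slice whether he’s using it or not, with a packet-switched one, where the active users divide the whole pipe:

```python
import random

LINK = 100.0   # link capacity, arbitrary units; all numbers here are invented
USERS = 10
ROUNDS = 10_000

circuit_total = shared_total = shared_peak = 0.0
for _ in range(ROUNDS):
    # Each user has traffic to send about 20% of the time.
    n_active = sum(random.random() < 0.2 for _ in range(USERS))
    if n_active == 0:
        continue
    # Circuit switching: every user owns a fixed 1/USERS slice, used or not.
    circuit_total += n_active * (LINK / USERS)
    # Packet switching: the active users split the whole pipe between them.
    shared_total += LINK
    shared_peak = max(shared_peak, LINK / n_active)

print(f"circuit-switched: {circuit_total / ROUNDS:.0f} units/round moved")
print(f"packet-switched:  {shared_total / ROUNDS:.0f} units/round moved")
print(f"best per-user burst under sharing: {shared_peak:.0f} units")
```

Under light load the packet-switched user gets nearly the whole pipe to himself; under saturation he degrades to the same 1/N share the phone network would have given him, which is exactly the scale-up, scale-down bargain described above.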

Packet switching is the single most important idea in the design of the Internet, even more than universal connectivity; after all, the phone network gave us the ability to reach out and annoy anyone on the planet long ago. Packet switching as a way to manage bandwidth is the Big Idea.

It always strikes me as odd that there’s so little understanding of the Big Idea at the base of the Internet’s design pyramid among our would-be Internet regulators and ISP critics. They’re always complaining about the deceptiveness of “unlimited access” rates and “all you can eat” deals that don’t guarantee any constant or minimum rate. (Duh, we tried that already.) This is an experiment in another direction, where the deal is that it’s going to be faster at some times than at others, but overall it’s going to be much better and much cheaper than guaranteed bandwidth. And sure enough, it works: you can even make phone calls of exceptional quality over the Internet, anywhere in the world, for peanuts. It’s marvelous.

Well, mostly marvelous. Throughout the Internet’s history, even when it was a closed garden for the research world and long before the great unwashed were allowed on it, the “fairness” problem has proved very difficult to resolve, because each user and each application has a different appetite for bandwidth and a different demand for response time. In the early days, the interactive terminal protocol “telnet” was often stymied by the bulk data transfer protocol “ftp”, and today Skype has to work around BitTorrent.

In theory, it shouldn’t be hard fit the needs of programs that communicate small chunks of data on a tight time line around programs that move massive amounts of data with no particular time requirement around any one chunk. In theory, we should be able to design networks that do that, either by booking reservations for the call or by giving Skype priority over BitTorrent. And in fact we have a number of experimental protocols that will do just that, especially within the confines of a private network in a business, an organization, or a home. And they all depend on a prioritizing or scheduling function having a clear idea of which packets belong to which program, and of the programs being willing to settle for less than what they want for various periods of time. And that’s the way things were on the Internet before it went commercial.
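
Those prioritizing functions aren’t exotic; at heart they’re a priority queue. Here’s a minimal sketch of the strict-priority scheduling those experimental protocols depend on (the class names and priority numbers are mine, purely for illustration):

```python
import heapq
import itertools

# Lower number = higher priority. The classification itself (knowing which
# packet belongs to which program) has to happen before packets get here.
PRIORITY = {"voip": 0, "interactive": 1, "bulk": 2}
_tiebreak = itertools.count()   # preserves FIFO order within a class

queue = []

def enqueue(packet, traffic_class):
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_tiebreak), packet))

def dequeue():
    # Always transmit the highest-priority packet waiting; bulk traffic
    # (think BitTorrent) only moves when nothing time-sensitive is queued.
    return heapq.heappop(queue)[2] if queue else None

enqueue("torrent-chunk", "bulk")
enqueue("skype-frame", "voip")
print(dequeue())   # skype-frame goes first even though it arrived second
```

Strict priority only works if the time-sensitive classes don’t saturate the link, which is where the settling-for-less comes in: somebody has to keep the high-priority classes honest.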

In the mid-80s, we saw Congestion Collapse (“Internet meltdown”) during periods of heavy ftp usage. The quick fix that was cobbled together required TCP to voluntarily throttle back the amount of data it transmitted when messages weren’t delivered. This “overloaded” dropped packets, giving them two meanings: either a packet was hit by noise and corrupted, or a network queue was full and the packet was discarded because there was no more room in the line for it. Error rates were low (there was no WiFi back then), so it was fine to react to every loss as if the network was overloaded. And we could count on everybody being polite and graciously accepting slow response time until the overall load went down.
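
That voluntary throttle-back survives today as TCP’s additive-increase/multiplicative-decrease rule. In caricature (window sizes in segments, the loss pattern invented):

```python
def update_cwnd(cwnd, packet_lost):
    """Van Jacobson-style congestion avoidance, in caricature:
    creep the window up while deliveries succeed, halve it on loss."""
    if packet_lost:
        return max(1.0, cwnd / 2.0)   # multiplicative decrease: back off hard
    return cwnd + 1.0                 # additive increase: probe gently

cwnd = 1.0
for lost in [False, False, False, False, True, False, False]:
    cwnd = update_cwnd(cwnd, lost)
    print(f"loss={lost!s:5}  cwnd={cwnd:.1f}")
```

Every well-behaved TCP on the planet still runs some refinement of this loop.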

That quick fix could have been a fairly crappy solution, as it didn’t distinguish the requirements of our interactive application from those of our bulk data application, but implementation did what design failed to do: in practice, telnet data came in much shorter packets than ftp data, and since the packet droppers were looking for space in network queues, they got more space out of dropping long packets than short ones. So voila, in one step you’ve got priority enforcement and congestion control.
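
You can watch that accidental priority scheme fall out of a few lines of simulation. This sketch follows the description above, evicting the longest queued packet whenever space runs out; the packet sizes are typical-looking guesses, not measurements:

```python
import random

QUEUE_BYTES = 4096              # deliberately tiny so overflow happens often
queue, used = [], 0
dropped = {"telnet": 0, "ftp": 0}
arrived = {"telnet": 0, "ftp": 0}

for _ in range(20_000):
    app = random.choice(["telnet", "ftp"])
    size = 64 if app == "telnet" else 1024   # keystrokes vs. bulk transfer
    arrived[app] += 1
    # Make room the way the early droppers did: evict whatever frees the
    # most space, which is always the longest packet in the queue.
    while used + size > QUEUE_BYTES and queue:
        victim = max(range(len(queue)), key=lambda i: queue[i][1])
        vapp, vsize = queue.pop(victim)
        used -= vsize
        dropped[vapp] += 1
    queue.append((app, size))
    used += size
    if queue and random.random() < 0.5:      # drain slower than arrivals
        _, s = queue.pop(0)
        used -= s

for app in ("telnet", "ftp"):
    print(f"{app}: dropped {dropped[app]} of {arrived[app]} packets")
```

Run it and ftp eats nearly all the drops while telnet sails through: priority enforcement nobody designed.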

And it’s all going to be fine until the next generation of protocols comes around and our assumptions once again have to be revised. I’ll go into that tomorrow.

(Updated by removing the discussion of the blog post that inspired this piece; that material now has its own post, right below.)

Try this article for a little insight into changes afoot inside the Internet.

A Rather Shoddy Analysis

Former head of AT&T’s dial-up Internet access service Tom Evslin has jumped into the Comcast pile-on, in a particularly disappointing manner. I’m not surprised when non-technical people like media law professor Susan Crawford, media reform advocate Craig Aaron of Free Press/Save the Internet, or lawyer Harold Feld make a hash of the fundamentals of the Internet, but I expect better from a fellow who was once in the business of packaging and selling access to it. Evslin simply repeats the same tired claims Crawford makes about impersonation that isn’t personal, blocking that doesn’t block, and conflicts of interest that don’t exist in the Netflix era, without any bothersome analysis of how BitTorrent behaves relative to other Internet applications or why an ISP might legitimately want to protect interactive response time from background noise.

He goes off on one especially odd riff about a fragmented Internet:

Each of us “sees” the same Internet. Communication becomes much more constrained if each of us sees a different and perhaps incompatible Internet. You can see Google but I can only see Yahoo. I can upload photos to Flickr but you can only upload to dotPhoto. My email can’t get to you; you and I can’t share files (although we can both share with Ellen—today). Gee, almost sounds like mobile phone networks—or cable networks.

This observation, which has no material relevance to the actual case (Comcast lets you download anything from anywhere, and upload as you do it), is an imitation of one of Tim Berners-Lee’s complaints about tiered service from last year, and it’s apparently meant to lend gravity to the situation. I don’t know why he bothered to write this piece.

The underlying theme of all these criticisms seems to boil down to one thing: whatever the broadband Internet access providers do is wrong by definition, and whatever the freedom fighters who wrote BitTorrent do is right by the same a priori assumption. So there’s no use in Comcast trying to defend itself; it’s guilty of crimes against democracy simply because it bills consumers for service instead of being a good guy like Google and billing companies with something to sell.

It’s about time: Genarlow Wilson Freed

Cheers to the Georgia Supreme Court for finally getting around to freeing Genarlow Wilson:

ATLANTA, Oct. 26 — The Georgia Supreme Court today ended the 10-year prison sentence of a man who was convicted in 2003 of having consensual oral sex with another teenager. The court said the harsh sentence violated the Constitution’s protection against cruel and unusual punishment.

In a 4-to-3 ruling, the court’s majority said the sentence was “grossly disproportionate” to the crime, which the justices said “did not rise to the level of culpability of adults who prey on children.”

The dude was essentially molested by a drunk 15-year-old at a football party, and had been sentenced to ten years in stir for not saying no. It’s amazing that things like this (the prosecution, not the blow job) still take place in America.

Traffic Shaping and Net Neutrality: Good Versus Evil

Brian Boyko, editor of Network Performance Daily, has written one of the better and more thoughtful essays on net neutrality:

See, at the core of Network Neutrality issues are appliances or programs which conduct traffic shaping. In traffic shaping, some packets are prioritized, others are held back. This prioritization can be done on the basis of content (what type of data is being transferred), on the basis of application (what program is transferring the data), or on the basis of IP address (which computer is sending the packet, and which computer is receiving it).

Now, here’s the rub: Traffic shaping can help improve network performance, decrease latency, and increase bandwidth by delaying those packets deemed to be of a low priority. Sounds good, right?

Not so fast. Traffic shaping can degrade network performance, increase latency, and decrease bandwidth… by the same means.
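
His point that the same machinery cuts both ways is easy to make concrete. Here’s a minimal token-bucket shaper of my own sketching (not anything from Boyko’s essay): it holds “low priority” packets to a configured rate, and whether that’s a performance tool or a degradation tool depends entirely on who decides what lands in the low-priority class:

```python
import time

def transmit(packet):
    print(f"sent {len(packet)} bytes")   # stand-in for the real interface

class TokenBucketShaper:
    """Holds low-priority traffic to a configured rate by delaying it;
    everything not classified as low priority passes untouched."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.tokens = 0.0
        self.last = time.monotonic()

    def send_low_priority(self, packet: bytes):
        now = time.monotonic()
        # Accumulate transmission budget, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if len(packet) > self.tokens:
            # Not enough budget yet: this is the "delay," benign or not.
            time.sleep((len(packet) - self.tokens) / self.rate)
            self.tokens = 0.0
        else:
            self.tokens -= len(packet)
        transmit(packet)

shaper = TokenBucketShaper(rate_bytes_per_sec=10_000)   # ~10 kB/s for "bulk"
shaper.send_low_priority(b"x" * 1500)                   # one full-size packet
```

The classifier feeding this thing, not the bucket itself, is where the good-versus-evil question lives.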

The arguments for increased regulation of Internet access boil down to the claim that carriers are bad people who can’t be trusted with such an important job as managing Internet traffic; and we know they’re bad people because they routinely lie to us, suppress viewpoints, and cooperate with the government. Yet one could easily make the same charges against the very advocates of net neutrality regulations, and make a strong case that they too are bad people and bad corporate actors. Google and MoveOn.org have misbehaved around the Susan Collins ad, and advocates who equate TCP Resets with identity theft are being less than honest.

So you can’t ultimately resolve this issue by identifying the good people and standing with them. There are good and bad people on both sides, so it’s the specific behavior that matters, and how much blind faith you have in telecom regulators to envision the potential of the Internet. And that’s where I have to part company with the pro-regulation crowd, because I’ve never met a regulator who was a man (or woman) of vision; typically, they’re small-minded and vindictive.

Somebody always ends up managing the traffic on a network. If the carriers are forbidden from doing it, the job will ultimately end up in the hands of the largest users of bandwidth, the Googles and Yahoos of the world. And when you put them in control of a cable plant they didn’t have to pay for, do you really suppose they’re going to run it in anybody’s interest but their own?

Congressman to Conspiracy Nut: Let the Market Work!

Chris Soghoian is claiming that Congressman Rick Boucher wants to slap some regulations on Comcast for engaging in Admission Control. Not so:

[Soghoian] asked Boucher what he would do if Comcast stuck to its guns and kept discriminating against BitTorrent. In particular, [Soghoian] asked him if he would propose legislation compelling the company to treat all traffic fairly.

Unfortunately for fans of Net neutrality, the congressman said he was not ready to go down this path and instead stressed market-based methods of fixing the problems. Instead of tinkering with packets, the congressman said that in the short term, Comcast should “simply tier their offerings and engage in a pricing structure that allocates more bandwidth to those who pay more, and less to those who pay less.”

However, he said “the long-term answer is to deploy more capacity. That is what municipal broadband and other telecom companies are doing. Ultimately, the cable companies will have to deploy fiber to the house.”

I don’t think it’s appropriate for the Congressman to dictate wiring specifications, but he’s entitled to his opinion as long as he doesn’t try to legislate it.

TLF Podcast on the Comcast Kerfuffle

Stop everything and cruise on down to Technology Liberation Front so you can listen to this stellar podcast on the Comcast Kerfuffle:

Two networking experts join us in the podcast this week to discuss exactly what Comcast is doing and its implications for public policy. The experts are Ed Felten, professor of computer science and public affairs at Princeton University, and Richard Bennett, a network engineer and frequent commenter to the TLF. Also on the show are Adam Thierer of the Progress and Freedom Foundation, James Gattuso of the Heritage Foundation, and Jerry Brito of the Mercatus Center at George Mason University.

And try to ignore the heavy breathing, the NSA was obviously listening in.

You can also download the podcast here, or on BitTorrent here.

Correcting the Record on Comcast

Cynthia Brumfield tries to set the record straight on the wild claims about Comcast, noting some back-pedaling by the AP reporter who carried the net neutrality water on this story:

Svensson’s follow-up article, however, retreats from the notion that Comcast “blocks” P2P uploading. Instead, Comcast “delays” P2P uploads, Svensson now writes, a nuance that the Comcast executives believe is critical. During heavy congestion, Comcast slows down P2P uploads by postponing the transaction. The system will, however, repeatedly keep trying to complete the upload until it has been completed.

Svensson does back-pedal a bit in his latest story, but not enough:

On Tuesday, Mitch Bowling, senior vice president of Comcast Online Services, added a nuance to that statement, saying that while Comcast may block initial connection attempts between two computers, it eventually lets the traffic through if the computers keep trying.

“During periods of heavy peer-to-peer congestion, which can degrade the experience for all customers, we use several network management technologies that, when necessary, enable us to delay — not block — some peer-to-peer traffic. However, the peer-to-peer transaction will eventually be completed as requested,” Bowling said.

The explanation is not inconsistent with the AP’s tests. In one case, a BitTorrent file transfer was squelched, apparently by messages generated by Comcast, only to start 10 minutes later. Other tests were called off after around 5 minutes, while the transfers were still stifled.

He doesn’t quote anybody who thinks Comcast’s actions are legit, only the snake-oil peddlers and alarmists who insist that Comcast engages in identity theft in order to prevent dorm-room innovators from cutting into Comcast’s cable TV revenues. (Frankly, I think they’re more worried about DirecTV’s 70 HD channels.)

What happened to good old-fashioned All-American balance in journalism? The one editorial I’ve seen on this story, in the LA Times, was nothing more than a plagiarized EFF press release, and surely they can do better than that.

Ed Felten’s Alternate Internet

Professor Ed Felten tells Comcast to stay after class and clean erasers:

There are well-established mechanisms for dealing with traffic congestion on the Internet. Networks are supposed to respond to congestion by dropping packets; endpoint computers notice that their packets are being dropped and respond by slowing their transmissions, thus relieving the congestion. The idea sounds simple, but getting the details right, so that the endpoints slow down just enough but not too much, and the network responds quickly to changes in traffic level but doesn’t overreact, required some very clever, subtle engineering.

Indeed, if everybody was nice, polite, and well-behaved, the Internet’s traffic management features would be enough for Comcast and everybody else. And we wouldn’t need jails, or police, or traffic signs because everybody would just be good. That’s the end-to-end world, and it exists nowhere in this universe.

What does exist is a program called BitTorrent that allows the user to set targets for bandwidth consumption in both the upstream and the downstream direction, and strives to reach those limits by any means necessary. If the link is slow, it opens additional connections. If TCP is slow, it uses UDP. If its connection requests are filtered, it encrypts them. If its port is blocked, it uses a different one. It worms through firewalls and works around NATs. Nothing in the conventional arsenal of TCP effectively limits BitTorrent’s appetite for bandwidth; it’s all up to the user. And if he’s a hog, it’s out of control.

The long-term solution to congestion is to increase bandwidth, and there is no cheaper way to do that than to expel bandwidth hogs. Comcast doesn’t always go that far, and for that they get blasted in the blogs. Life is not fair.
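
The arithmetic of why TCP can’t restrain it is worth spelling out: TCP’s fairness is per connection, not per user, so a program that opens dozens of flows collects dozens of shares. A back-of-the-envelope sketch, with invented flow counts:

```python
# A congested uplink shared by per-flow-fair TCP (all numbers invented).
LINK_KBPS = 1000
users = {"web browser": 2, "voip call": 1, "bittorrent": 40}   # flows each

total_flows = sum(users.values())
for app, flows in users.items():
    share = LINK_KBPS * flows / total_flows
    print(f"{app:12s} {flows:3d} flows -> ~{share:4.0f} kbps")
# BitTorrent collects roughly 93% of the link while every one of its
# connections behaves as a model TCP citizen.
```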

Fundamentally, the problem that Comcast addresses with its TCP RSTs isn’t an Internet problem, it’s an Intranet problem, as in the DOCSIS network inside Comcast doesn’t handle high loads of upstream traffic without going unstable. This isn’t a problem that the Internet can address, although TCP does provide Comcast with a knob to turn.
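
For anyone who wants to see the resets for themselves, they’re visible from an ordinary endpoint. Here’s a sketch in the spirit of the AP’s tests, using scapy (it needs root, and the BPF filter simply matches any TCP segment with the RST bit set):

```python
from scapy.all import IP, TCP, sniff

def log_reset(pkt):
    if IP not in pkt or TCP not in pkt:
        return
    # A reset arriving mid-transfer that neither endpoint generated is
    # the signature of in-path session management of the Comcast sort.
    print(f"RST {pkt[IP].src}:{pkt[TCP].sport} -> "
          f"{pkt[IP].dst}:{pkt[TCP].dport} seq={pkt[TCP].seq}")

# Byte 13 of the TCP header holds the flags; 0x04 is the RST bit.
sniff(filter="tcp[13] & 0x04 != 0", prn=log_reset, store=False)
```

Telling a forged reset from a genuine one takes captures at both ends of the connection, since the injected packets are built to look like they came from the peer.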

H/T Tech Lib.