Internet Myths

Among my missions in this life is the chore of explaining networking in general and the Internet in particular to policy makers and other citizens who don’t build network technology for a living. This is enjoyable because it combines so many of the things that make me feel good: gadgetry, technology, public policy, writing, talking, and education. It’s not easy, of course, because there are a lot of things to know and many ways to frame the issues. But it’s possible to simplify the subject matter in a way that doesn’t do too much violence to the truth.

As I see it, the Internet is different from the other networks that we’re accustomed to in a couple of important ways: for one, it allows a machine to connect simultaneously to a number of other machines. This is useful for web surfing, because it makes it possible to build a web page that draws information from other sources. So a blog can reference pictures, video streams, and even text from around the Internet and put it in one place where it can be updated in more-or-less real time. It enables aggregation, in other words. Another thing that’s unique about the Internet is that the underlying transport system can deliver information at very high speed for short periods of time. The connection between a machine and the Internet’s infrastructure is idle most of the time, but when it’s active it can get its information transferred very, very quickly. This is a big contrast to the telephone network, where information is constrained by call setup delays and a very narrow pipe.
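The aggregation pattern described above can be sketched in a few lines of Python. This is an illustration only: `fetch_fragment` is a hypothetical stand-in for a real HTTP request to each content source, and the source names are made up.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_fragment(source):
    # Hypothetical stand-in for an HTTP request to one content source.
    return f"<div>content from {source}</div>"

def assemble_page(sources):
    # One machine talks to many other machines at once;
    # the finished page is the aggregate of all the replies.
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        fragments = pool.map(fetch_fragment, sources)
    return "\n".join(fragments)

page = assemble_page(["photos.example", "video.example", "text.example"])
```

The point is structural: the blog doesn't hold the pictures and video, it holds references, and the reader's machine pulls from every source simultaneously.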
Continue reading “Internet Myths”

Damned if you do, screwed if you don’t

The FCC has finally noticed that reducing the Quality of Service of an Internet access service affects all the applications that use it, including VoIP. They’ve sent a harsh letter to Comcast seeking ammunition with which to pillory the cable giant, in one of Kevin Martin’s parting shots:

Does Comcast give its own Internet phone service special treatment compared to VoIP competitors who use the ISP’s network? That’s basically the question that the Federal Communications Commission posed in a letter sent to the cable giant on Sunday. The agency has asked Comcast to provide “a detailed justification for Comcast’s disparate treatment of its own VoIP service as compared to that offered by other VoIP providers on its network.” The latest knock on the door comes from FCC Wireline Bureau Chief Dana Shaffer and agency General Counsel Matthew Berry.

Readers of this blog will remember that I raised this issue with the “protocol-agnostic” management scheme Comcast adopted in order to comply with the FCC’s over-reaction to the former application-aware scheme, which prevented P2P from over-consuming bandwidth needed by more latency-sensitive applications. My argument is that network management needs to operate in two stages, one that allocates bandwidth fairly among users, and a second that allocates it sensibly among the applications in use by each user. The old Comcast scheme did one part of this, and the new scheme does the other part. I’d like to see both at the same time, but it’s not at all clear that the FCC will allow that. So we’re left with various forms of compromise.

The fundamental error the FCC is making in this instance is misidentifying the “service” it seeks to regulate, in line with a new attempt to regulate services (skip to 13:30) rather than technologies.

Comcast sells Internet service, telephone service, and TV service. It doesn’t sell “VoIP service,” so there’s no basis for this complaint. The Commission has made it very difficult for Comcast to even identify applications running over the Internet service, and the Net Neuts have typically insisted it refrain from even trying to do so; recall David Reed’s fanatical envelope-waving exercise at the Harvard hearing last year.

The telephone service that Comcast and the telephone companies sell uses dedicated bandwidth, while the over-the-top VoIP service that Vonage and Skype offer uses shared bandwidth. I certainly hope that native phone service outperforms ad hoc VoIP; I pay good money to ensure that it does.

This action says a lot about what’s wrong with the FCC. Regardless of the regulatory model it brings to broadband, it lacks the technical expertise to apply it correctly. The result is “damned if you do, damned if you don’t” enforcement actions.

This is just plain silly. The only party the FCC has any right to take to task in this matter is itself.

The pirates who congregate at DSL Reports are in a big tizzy over this, naturally.

Keeping the Issue Alive

Friends of Broadband should be pleased with President-elect Obama’s proposed broadband stimulus program, which includes $6 billion in grants for wireless and other forms of broadband infrastructure. Granted, the package isn’t as large as many had wished; Educause had asked for $32 billion, ITIF wanted $10 billion, and Free Press wanted $40 billion, but this is a good start. Harold Feld puts the size of the grant package in perspective and praises it on his Tales of the Sausage Factory blog.

But there’s no pleasing some people. Free Press has mounted an Action Alert, asking its friends to oppose the stimulus package as it currently stands. The Freeps, who run the “Save the Internet” campaign, want strings attached to the money, insisting it only be given to projects that meet their requirements:

1. Universal: focused on connecting the nearly half of the country stuck on the wrong side of the digital divide.
2. Open: committed to free speech and without corporate gatekeepers, filters or discrimination.
3. Affordable: providing faster speeds at lower prices.
4. Innovative: dedicated to new projects only and available to new competitors, including municipalities and nonprofits.
5. Accountable: open to public scrutiny so we can ensure that our money isn’t being spent to prop up stock prices and support market monopolies.

These goals are not even consistent with each other. Half of America uses broadband today, and half doesn’t. Most of the unconnected half have chosen not to subscribe to services that already reach their homes, opting to remain outside the broadband revolution for their own reasons, so we can’t very well pursue numbers 1 and 4 at the same time. Most of this money will be spent in rural areas that are currently served by Wireless ISPs like Lariat, and since the rural population isn’t as large as the urban population, going into unserved or underserved areas isn’t going to do much for the digital divide-by-choice that plagues America’s inner cities.

I suspect there’s some self-interest involved here, such that Free Press wants to keep the issue of America’s place in the global ranking of broadband penetration about where it is (between 7th and 15th, depending on whose numbers you like) in order to raise money, have a soapbox, and keep on complaining.

I don’t see any other way to explain this.

UPDATE: Freep has sent letters to the committee chairs with much less incendiary language, but arguing the same line: the Internet is a telecom network and has to be regulated the way that telecom networks have always been regulated. This angle is clearly good if you’re a career telecom regulator, but it’s blind to the technical realities of IP network management. Making an IP network fair and functional requires “discrimination”, and the Freep doesn’t get that. Not even a little bit.

This organization has established an amazing ability to confuse its self-interest with the public interest in the short time that it’s been around. Freep’s first issue, after all, was a series of regulations designed to prevent the rapacious newspaper industry from taking over the television industry. They still push for limits on TV and newspaper cross-ownership, and only got into the Internet-as-telephone fight to advance their initial cause. The number of people who think free societies need to be protected from “powerful newspapers” is vanishingly small, of course, around the same size as the flat-earther demographic.

UPDATE 2: It gets even stranger. Open access provisions are already in the bill, as Matthew Lasar points out on the Ars Technica blog:

As for the net neutrality and open access ideas; well, they’re already in the bill (PDF; see p. 53). NTIA, the executive branch agency tasked with disbursing the broadband money, is required to ensure that all grant recipients operate both wired and wireless services on an “open access basis,” though it’s left up to NTIA to define what this means and how it works.

In addition, anyone taking grant money must “adhere to the principles contained in the Federal Communications Commission’s broadband policy statement,” which lays out four basic neutrality provisions for Internet companies. In other words, although “network neutrality” isn’t mentioned, it’s already in the bill in a basic way. (Note that the FCC policy statement only protects “legal content,” however; it’s not a pure “end-to-end” packet delivery guarantee.)

Here’s a suggestion for the Freep: before issuing your next mouth-breathing Action Alert about a pending bill, read the damn thing. You won’t look like such a bunch of knee-jerk alarmists if you do.

Briscoe explains Re-ECN in plain English

See the current issue of IEEE Spectrum for a nice description of Bob Briscoe’s Re-ECN, A Fairer, Faster Internet Protocol:

Refeedback introduces a second type of packet marking—think of these as credits and the original [ECN] congestion markings as debits. The sender must add sufficient credits to packets entering the network to cover the debit marks that are introduced as packets squeeze through congested Internet pipes. If any subsequent network node detects insufficient credits relative to debits, it can discard packets from the offending stream.

To keep out of such trouble, every time the receiver gets a congestion (debit) mark, it returns feedback to the sender. Then the sender marks the next packet with a credit. This reinserted feedback, or refeedback, can then be used at the entrance to the Internet to limit congestion—you do have to reveal everything that may be used as evidence against you.

Refeedback sticks to the Internet principle that the computers on the edge of the network detect and manage congestion. But it enables the middle of the network to punish them for providing misinformation.

The limits and checks on congestion at the borders of the Internet are trivial for a network operator to add. Otherwise, the refeedback scheme does not require that any new code be added to the network’s equipment; all it needs is that standard congestion notification be turned on. But packets need somewhere to carry the second mark in the “IP” part of the TCP/IP formula. Fortuitously, this mark can be made, because there is one last unused bit in the header of every IP packet.

This is a plan that will allow interactive uses of the Internet to co-exist happily with bulk data transfer. It’s quite brilliant and I recommend it as an alternative to a lot of nonsense that’s been floated around this space.
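The credit/debit accounting Briscoe describes can be illustrated with a toy model. This is a sketch of the bookkeeping only, not the wire protocol; the class and method names are my own invention.

```python
class Flow:
    """Toy model of refeedback accounting for one traffic stream."""
    def __init__(self):
        self.credits = 0  # credit marks the sender adds on entry
        self.debits = 0   # congestion (debit) marks added in the network

    def sender_credit(self, n=1):
        # After receiver feedback, the sender marks the next packet(s).
        self.credits += n

    def congestion(self, n=1):
        # Congested routers mark packets as they squeeze through.
        self.debits += n

    def policer_allows(self):
        # A border node can discard packets from under-credited streams.
        return self.credits >= self.debits

honest, cheater = Flow(), Flow()
honest.sender_credit(2)
honest.congestion(2)   # honest sender covered its congestion in advance
cheater.congestion(2)  # cheater never reinserts the feedback it owes
```

The honest flow stays in balance and passes the policer; the flow that conceals its congestion falls into deficit and becomes eligible for discard, which is the enforcement mechanism the quoted passage describes.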

High Noon in North Texas

Now here’s a great story:

Verizon is setting up a Wild West-style telecom showdown by expanding its FiOS network further into territory traditionally held by rival AT&T, says a new report from Information Gatekeepers.

According to IGI, a telecom consulting firm, Verizon’s recent FiOS expansion into areas of northern Texas could mark the first time that one carrier has directly competed with another in its own franchised territory for residential wireline Internet services. Traditionally, Verizon and AT&T have competed with each other primarily for wireless voice and data services, as the companies’ landline businesses have been dependent on architecture that each company has purchased over the years from the original “Baby Bell” companies formed in the wake of AT&T’s breakup in 1984.

But with Verizon now offering video, voice and data services over its fiber-optic network in AT&T’s home state, IGI says that the telecom industry could be “drastically” changed. In particular, IGI says that Verizon’s decision to “overbuild” its facilities into AT&T’s franchise areas could spark AT&T to begin overbuilding as well, thus turning competition for building out services into a potential “nationwide phenomenon.”

More to come as I dig into the details, but this has the potential to be the story of the decade.

One of the great insults

Brzezinski says to Scarborough: “You have such a stunningly superficial knowledge of what went on that it’s almost embarrassing to listen to you.”

I know just how he feels. Larry Lessig’s opinion in Newsweek about a government ministry to make sure “innovation” happens is along the same lines. “Innovation,” which is neither good nor bad in itself, happens when it has to; by that logic, the surest way for a government to stimulate it would be to burden businesses with more red tape and mindless regulations, and that’s a bad idea for more reasons than I can count.

Re-naming the FCC isn’t likely to accomplish a great deal, and that’s about all that Lessig actually recommends.

The People’s Movement (for Google)

I did a podcast with the folks at The Technology Liberation Front on Google’s edge-caching system:

This week we saw a new kerfuffle of sorts develop over the revelation in a Monday front-page Wall Street Journal story that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content. What exactly is it that Google is proposing, and does it mean – as the Wall Street Journal and some others have suggested – that Google is somehow going back on their support for Net neutrality principles and regulation? More importantly, what does it all mean for the future of the Internet, network management, and consumers? That’s what we discussed on the TLF’s latest “Tech Policy Weekly” podcast.

Google’s genius at creating a citizens’ movement to boost its own bottom line deserves more recognition than it gets, but this is a start.

Virgin Media serves the people, not the pirates

The Register broke a story today about the plan by the UK’s cable company, Virgin Media, to dump neutrality and target BitTorrent users:

The UK’s second largest ISP, Virgin Media, will next year introduce network monitoring technology to specifically target and restrict BitTorrent traffic, its boss has told The Register.

The move will represent a major policy shift for the cable monopoly and is likely to anger advocates of “net neutrality”, who say all internet traffic should be treated equally. Virgin Media currently temporarily throttles the bandwidth of its heaviest downloaders across all applications at peak times, rather than targeting and “shaping” specific types of traffic.

Virgin Media’s CEO Neil Berkett has previously described net neutrality as “a load of bollocks*,” a sentiment that I can relate to if not specifically endorse.

UPDATE: Wired Blogs reports Virgin is denying the veracity of El Reg’s story, but read the world’s finest tech pub tomorrow for the real story. In the meantime, a quick perusal of Virgin’s traffic policy indicates that they already reserve extensive traffic shaping powers.

Blogger Tom Evslin has jumped on the story with some instant analysis. The problem this story causes for American Liberals is cognitive dissonance: Britain is a virtuous European nation with a National Health Service, a leftwing government, and a commitment to the Kyoto Protocol, yet they permit more traffic shaping than the FCC will allow Comcast; this sort of contradiction causes my friends on the left to drink heavily, or to blog obsessively.

*American translation: BS.

My Google piece in The Register

Thanks to the miracle of trans-Atlantic collaborative journalism, here’s my quick take on Google’s caching scheme:

Network Neutrality, the public policy unicorn that’s been the rallying cry for so many on the American left for the last three years, took a body blow on Sunday with the Wall Street Journal’s disclosure that the movement’s sugar-daddy has been playing both sides of the fence.

The Journal reports that Google “has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content.”

Google claims that it’s doing nothing wrong, and predictably accuses the Journal of writing a hyperbolic piece that has the facts all wrong. Google is essentially correct. It’s doing nothing that Akamai doesn’t already do, and nothing that the ISPs and carriers don’t plan to do to reduce the load that P2P puts on their transit connections.

A lot of questions remain about Google’s public policy flexibility and how wise their server farm strategy has been, and we’ll deal with them as Google answers our questions.

Google Gambles in Casablanca

I’m shocked.

Google has been caught red-handed negotiating deals with ISPs to host servers inside the building, just like Akamai does. The semi-technical press thinks this is some sort of a game-changing event:

The celebrated openness of the Internet — network providers are not supposed to give preferential treatment to any traffic — is quietly losing powerful defenders.

Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal. Google has traditionally been one of the loudest advocates of equal network access for all content providers.

At risk is a principal [sic] known as network neutrality: Cable and phone companies that operate the data pipelines are supposed to treat all traffic the same — nobody is supposed to jump the line.

Oh my goodness, where do I begin? Google already has a fast lane to most Internet users today thanks to their network of mega-data centers around the world, which I’ve written and spoken about at some length. These systems are wired directly to public Internet exchange points in high population areas and effectively move Google’s traffic to a higher priority than all but three competing routes: Akamai’s servers hosted inside ISP premises, Limelight’s private network wired directly to ISP networks, and the ISPs’ own content. Google’s desire to host servers (or routers, it could be either) inside ISP networks is a move calculated to improve the ROI of the existing network of server farms and to blunt the Akamai advantage. It makes more sense to wire directly to the ISPs through private arrangements than to stress the public Internet infrastructure any further.
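The latency arithmetic behind edge caching is simple enough to sketch. The numbers below are purely illustrative assumptions, not measurements of any real network:

```python
TRANSIT_MS = 80  # assumed round trip across public exchange points
EDGE_MS = 10     # assumed round trip to a server inside the ISP

cache = set()

def serve(obj_id):
    # A hit is served from inside the ISP; a miss crosses transit
    # once, then the object is cached for subsequent requests.
    if obj_id in cache:
        return EDGE_MS
    cache.add(obj_id)
    return TRANSIT_MS

first = serve("video-123")   # miss: full transit round trip
second = serve("video-123")  # hit: served from the edge
```

Whatever the actual numbers, the second request is faster than the first, which is the whole reason to put servers inside the ISP in the first place.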

One thing that this deal doesn’t do is change the Internet infrastructure. Arrangements like this already exist, predating the kerfuffle over fast lanes created out of thin air by public interest advocates three years ago.

The Internet is not a network, it’s a complex set of agreements to interconnect independently owned and operated networks in various ways. There is no standard agreement, and this story doesn’t report on a new one. What it simply shows is that money buys performance in the technology space, and that should come as no surprise to anyone. Google has to do something like this to avoid being clobbered by ISP-friendly P4P as well as by Akamai.

Yes, Virginia, network neutrality is a myth, and it always has been.

UPDATE: Google’s response to the WSJ piece does nothing but muddy the waters. Net Neutrality advocates have insisted on a wall of separation between content and infrastructure, and this deal, if it happens, brings down that wall. I’m happy with that, because I don’t see the prohibition on expedited delivery as a good thing. But Google should admit they’ve come around to my way of thinking about the Internet instead of insisting nothing has changed. See my write-up in The Register.

UPDATE 2: The spin that Google’s supporters are producing around this issue is a marvel for those of us who appreciate the major league curveball. This subtle piece of nuanced distinction by Dave Isenberg deserves some sort of prize:

The concern of Network Neutrality advocates is not with access but with delivery. The fear is that Internet connection providers would charge for expedited delivery of certain content to the end user, and in so doing would put themselves in the business of classifying which content gets enhanced delivery.

Wow. Caching speeds up delivery; otherwise there would be no reason to do it. Google has, in effect, paid for expedited delivery of its content, regardless of the spin. What counts is bits on the wire, and Google is out to ensure theirs are better situated than yours are.

Don’t be fooled by the spin; this is a distinction without a difference.
