Regulation and the Internet

Here’s a little speech I gave to members of the EU Parliament in Brussels on Oct. 14th. The cousins are contemplating a set of Internet access account regulations that would mandate a minimum QoS level and also ban most forms of stream discrimination. This explains why such rules are a bad (and utterly impractical) idea.

The Internet is a global network, and regulating it properly is a matter of global concern. I’d like to share a view of the technical underpinnings of the question, to better inform the legal and political discussion that follows and to point out some of the pitfalls that lie in wait.

Why manage network traffic?

Network management, or more properly network traffic management, is a central focus of the current controversy. The consumer-friendly statements of policy, such as the Four Freedoms crafted by former FCC Chairman Michael Powell (now Senator McCain’s technology adviser), represent lofty goals, but they’re constrained by the all-important exception for network management. In fact, you could easily simplify the Four Freedoms as “you can do anything you want except break the law or break the network.” Network management prevents you from breaking the network, which you principally do by using up network resources.

Every networking technology has to deal with the fact that the demand for resources often exceeds supply. On the circuit-switched PSTN, resources are allocated when a call is set up, and if they aren’t available your call doesn’t get connected. This is a very inefficient technology that allocates bandwidth in fixed amounts, regardless of the consumer’s need or his usage once the call is connected. A modem connected over the PSTN sends and receives at the same time, but people talking generally take turns. This network doesn’t allow you to save up bandwidth and use it later, for example. Telecom regulations are based on the PSTN and its unique properties. In network engineering, we call it an “isochronous network” to distinguish it from technologies like the old Ethernet, the model link layer technology when the DoD protocol suite was designed.
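To put a rough number on that inefficiency: a PSTN call reserves a standard 64 kb/s circuit in each direction for its entire duration, whether anyone is speaking or not. Here's a back-of-the-envelope sketch in Python, assuming (purely for illustration) that each party talks about 40 percent of the time:

    # Rough illustration of circuit-switched inefficiency. The 64 kb/s DS0
    # rate is standard; the 40% talk-time figure is an assumption chosen
    # only to make the arithmetic concrete.
    circuit_rate_kbps = 64          # reserved per direction for the whole call
    talk_fraction = 0.4             # assumed fraction of time a speaker is active
    useful_kbps = circuit_rate_kbps * talk_fraction
    idle_kbps = circuit_rate_kbps - useful_kbps
    print(f"useful: {useful_kbps} kb/s, reserved but idle: {idle_kbps} kb/s")
    # useful: 25.6 kb/s, reserved but idle: 38.4 kb/s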

The Internet uses packet switching technology, where users share communications facilities and bandwidth is allocated dynamically. Dynamic bandwidth allocation, wire-sharing, and asynchrony mean that congestion appears and disappears on random, sub-second intervals. Packets don’t always arrive at switching points at the most convenient times, just as cars don’t run on the same rigorous schedules as trains.
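A toy simulation makes the congestion point concrete. The numbers below are invented, but the shape of the result is what matters: with bursty arrivals sharing one output link, the queue at a switch swells and drains even though the average load is comfortably under capacity.

    import random

    # Toy model of one output port on a packet switch. All figures are
    # illustrative, not measurements of any real network.
    random.seed(1)
    capacity_per_tick = 10              # packets the link can send per tick
    queue = 0
    for tick in range(20):
        # Bursty arrivals: usually light, occasionally well over capacity.
        arrivals = random.choice([2, 3, 4, 25])
        queue = max(0, queue + arrivals - capacity_per_tick)
        print(f"tick {tick:2d}: arrivals={arrivals:2d}, queued={queue}")
    # Average offered load is below capacity, yet the queue (and therefore
    # delay) spikes whenever a burst lands; congestion comes and goes on
    # short timescales.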

Canadian regulators smarter than Americans

Canada’s Internet users have won a measure of victory over bandwidth hogs. In a ruling from the CRTC, Canada’s FCC, Bell Canada is permitted to continue managing network over-use:

Bell Canada today won a largely clear victory in an anti-throttling lawsuit filed with the Canadian Radio-television and Telecommunications Commission (CRTC). The government body has issued a ruling dismissing claims by Internet providers using part of Bell’s network that accused the carrier of unfairly throttling the connection speeds of their services while also constricting its own. These rivals, represented by the Canadian Association of Internet Providers (CAIP), had accused Bell of trying to hinder competition and violating the basic concepts of net neutrality by discouraging large transfers.

The CRTC’s dismissal is based on the observation that peer-to-peer usage does appear to have a detrimental impact on Bell’s network and so requires at least some level of control to keep service running properly for all users. It also rejects neutrality concerns by claiming that Bell’s throttling system, which uses deep packet inspection to investigate traffic, is adjusting speed and doesn’t restrict the content itself.

Bell hails its successful defense as proof that those running online networks are “in the best position” to judge how their networks are managed.
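The distinction the CRTC drew, slowing a class of traffic rather than blocking it, corresponds to a familiar engineering mechanism. Here's a minimal token-bucket shaper sketch (the class names and rates are mine, purely for illustration) of the sort of rate control a carrier might apply to a classified flow; packets are delayed, not dropped or rewritten:

    import time

    class TokenBucket:
        """Minimal token-bucket shaper: holds a flow to a target rate
        without inspecting or modifying payload. Illustrative only."""
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def send(self, nbytes):
            # Refill tokens for elapsed time, then wait until the packet
            # "fits"; the packet is delayed, never discarded.
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # Shape an imaginary bulk-transfer flow to ~30 kB/s with a 15 kB burst.
    shaper = TokenBucket(rate_bytes_per_sec=30_000, burst_bytes=15_000)
    for _ in range(5):
        shaper.send(10_000)     # each call stands in for forwarding a 10 kB chunk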

Canada’s Larry Lessig, a populist/demagogue law professor named Michael Geist, was heart-broken over the decision, and pro-piracy web site Ars Technica shed a few tears as well:

The proceeding was also notable for the frank admissions from other large ISPs like Rogers—they admitted that they throttle traffic on a discriminatory basis, too. It also produced wild allegations from companies like Cisco that “even if more bandwidth were added to the network, P2P file-sharing applications are designed to use up that bandwidth.” Such assertions allow the ISPs to claim that they must be able to throttle specific protocols simply to stay afloat—survival is at stake.

This is (to put it politely) highly debatable.

Actually it’s not debatable, not by sane people anyhow. Residential broadband is as cheap as it is only because ISPs can count on people sharing the wires in a civilized fashion. People who keep their broadband pipes constantly saturated take resources away from their neighbors. There are alternatives, of course. You can buy a T-1 line with a Service Level Agreement that you can saturate with all the traffic you want. In the US, count on paying $400/mo for 1.5 Mb/s upload and download. Want something cheaper? Learn to share.
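The economics of that sharing are easy to sketch. The numbers below are hypothetical, but the principle, oversubscription, is how every residential ISP prices its service: many subscribers share upstream capacity on the bet that they won't all saturate their connections at once.

    # Hypothetical oversubscription arithmetic; the figures are invented to
    # illustrate the principle, not drawn from any real ISP.
    subscribers = 500
    advertised_mbps = 6                 # per-subscriber "up to" rate
    shared_uplink_mbps = 600            # capacity actually provisioned upstream

    ratio = (subscribers * advertised_mbps) / shared_uplink_mbps
    worst_case_share = shared_uplink_mbps / subscribers

    print(f"oversubscription ratio: {ratio:.0f}:1")                         # 5:1
    print(f"per-user rate if everyone saturates: {worst_case_share} Mb/s")  # 1.2
    # A handful of always-saturated connections pushes everyone else toward
    # that worst-case number; a dedicated T-1 avoids the bet entirely, which
    # is why it costs an order of magnitude more.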

Canada is widely regarded as a more left wing, business-hostile country than the US. How to account for the fact that the CRTC got this issue right while Bush’s FCC got it wrong in the Comcast case?


Just Another Utility

Critics of Obama FCC transitioner Susan Crawford have correctly pointed out that she’s made some odd-sounding remarks to the effect that Internet access is just another utility, like water, power, or sewer service. If the intent of this remark is to suggest that everybody needs access to the Internet these days, just as much (or nearly as much) as we need electricity and running water, nobody has much of a problem with the observation beyond the standard objections to hyperbole.

But utilities in the USA tend to be provided by the government, so it’s reasonable to draw the implication from Crawford’s comparison that the government should be providing Internet access. This interpretation is underscored by her frequent complaints about the US’s ranking versus other countries in the broadband speed and price sweepstakes.

If you continually advocate more aggressive spending to win a supposed international broadband arms race, minimize the effectiveness of private investment, and tout the planet’s fastest and cheapest Internet service as a necessity of life, you’re going to be regarded as either a garden-variety socialist or an impractical dreamer.

Honorably Mentioned

The Sidecut Reports ranking of the Top 10 Net Neutrality Influencers includes some interesting honorable mentions:

Honorable Mention: Tim Wu, Columbia Law School; Kyle McSlarrow, NCTA; Eric Schmidt, Google; Chris Libertelli, eBay/Skype; Gigi Sohn, Public Knowledge; Jessica Rosenworcel, Senate Commerce Committee; Jonathan Adelstein, FCC; Phil Weiser, University of Colorado; Richard Bennett, blogger/independent network engineer and self-confessed geek.

Hmmm…I don’t know if this is entirely credible. But you never know.


Thirty Profiles

Dave Burstein of DSL Prime has posted profiles of 30 FCC candidates to his web site, including one transition team member:

Susan Crawford, now teaching at Michigan, also has enormous respect from her peers and would bring international perspective from her role at ICANN setting world Internet policy

The selection of Crawford to join Kevin Werbach on the FCC transition team has already gotten some of my colleagues on the deregulatory side pretty excited, as she has the image of being a fierce advocate of a highly-regulated Internet. And indeed, she has written some strong stuff in favor of the “stupid network” construct that demands all packets be treated as equals inside the network. The critics are missing something very important, however: both Werbach and Crawford are “Internet people” rather than “telecom people,” and that matters. While we may not like Crawford’s past willingness to embrace a neutral routing mandate, the more interesting question is how she comes down on a couple of issues that trump neutral routing: network management and multi-service routing.

We all know by now that the network management exception is more powerful than Powell’s “Four Freedoms” where the rubber meets the road, but we lack any clear guidance to ISPs as to how their management practices will be evaluated. Clarification of the rules is as much a benefit to carriers as it is to consumers. The one way to ensure that we all lose is to keep lumbering along in the murk of uncertain authority and secret rules. Internet people are going to ask their candidates the right questions, and anybody who can satisfy both Werbach and Crawford will have to be a good choice. Check Werbach’s web site for his papers. Unfortunately, the most interesting of them is not yet in print: “The Centripetal Network: How the Internet Holds Itself Together, and the Forces Tearing it Apart,” UC Davis Law Review, forthcoming 2008. Perhaps he’ll post a draft.

The question of multi-service routing is also very important. Crawford has written and testified to the effect that the Internet is the first global, digital, multi-service network, which is substantially correct. The Internet is not fully multi-service today, however, and can’t be unless it exposes multiple service levels at the end points for applications to use easily. The generic public Internet has a single transport service that has to meet the needs of diverse applications, and that’s not really an achievable goal in a peer-to-peer world.
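Exposing service levels at the end points already has a concrete, if under-used, expression in the ordinary socket API: an application can mark its packets with a Diffserv code point and let networks that honor Diffserv treat them accordingly. A minimal sketch (the choice of the Expedited Forwarding code point here is just an example, and whether any given network honors the marking is another question entirely):

    import socket

    # Sketch: mark a UDP socket's traffic with the Expedited Forwarding
    # DSCP code point (46). The IP_TOS option carries the DSCP value
    # shifted left two bits; networks are free to ignore or re-mark it.
    EF_DSCP = 46
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
    sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))
    sock.close()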

Missing the point of the Internet

Network neutrality advocates have been preening and cooing since the election as they expect the Obama FCC and the new Democratic Party-dominated Congress to enact new laws and regulations advancing their pet cause. They got some support today from an unexpected quarter when the Cato Institute published a paper by graduate student Tim Lee echoing and supporting their main argument:

An important reason for the Internet’s remarkable growth over the last quarter century is the “end-to-end” principle that networks should confine themselves to transmitting generic packets without worrying about their contents. Not only has this made deployment of internet infrastructure cheap and efficient, but it has created fertile ground for entrepreneurship. On a network that respects the end-to-end principle, prior approval from network owners is not needed to launch new applications, services, or content.

Tim Lee, bless his heart, is wrong about the importance of the Internet’s end-to-end architecture. While the Internet, along with all other computer-based networks, certainly does have such an architecture, it’s not the only architecture or even the most important one in the mix. The most important part of the Internet is its “network-to-network” architecture, because that’s the part that makes it what it is. The Internet is only an internet because network operators have agreed to exchange traffic with each other according to terms that they develop among themselves without government interference. This exchange of traffic is what makes it an interesting place.

Internetwork packet exchange is not as simple as network neutrality advocates make it out to be. Network operators do not simply forward packets first-come-first-served to anybody and everybody for the end-to-end layer to sort out; they discriminate in all sorts of ways to provide good service to as many people as possible at a reasonable price. Some network operators offer different tiers of service to different customers, and exchange traffic with other networks accordingly. This is good, but it’s not the “stupid network” that our regulators want to see.
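For a sense of what "not first-come-first-served" looks like in code, here's a stripped-down strict-priority scheduler, a toy stand-in for the class-based queuing real routers do. The classes and packets are invented; the point is only that a forwarding decision can favor one class over another without blocking anything:

    from collections import deque

    # Toy strict-priority scheduler: class 0 (say, voice) is always served
    # before class 1 (bulk). Illustrative only; real routers use subtler
    # schemes such as weighted fair queuing to avoid starving the low class.
    queues = {0: deque(), 1: deque()}

    def enqueue(priority, packet):
        queues[priority].append(packet)

    def dequeue():
        for prio in sorted(queues):         # lowest number wins
            if queues[prio]:
                return queues[prio].popleft()
        return None

    enqueue(1, "bulk-1")
    enqueue(0, "voice-1")
    enqueue(1, "bulk-2")
    enqueue(0, "voice-2")
    while (pkt := dequeue()) is not None:
        print("forwarding", pkt)            # voice-1, voice-2, bulk-1, bulk-2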

Network neutrality is an attempt to shackle the Internet with regulations that mirror a failed model of network architecture, to give a victory to a failed vision by government fiat that it could not achieve in the market. The government should not be picking winners and losers in the competition among network architectures.

Even if you don’t accept that argument, there’s another reason that the proposed regulations should be rejected: the Internet is a technology, and technologies can always be expected to improve over time as parts to build them become cheaper and faster. Net neutrality is a backward-looking agenda that seeks to freeze the Internet core at a particular level of technology. This can only have the effect of hastening its obsolescence, and make no mistake about it, it will be obsolete some day. Nostalgia has no place in technology regulation.

Indeed, Tim’s argument against net neutrality regulations is weak and non-specific. It’s a good reminder that advocates only make arguments about unintended consequences, slippery slopes, and camel’s noses when they’ve lost the argument.

Any attempt to add new regulations to the Internet should be examined from a bias against regulation. If a case can be made that new regulations will make things better, well and good. But arguments about restoring a once golden status quo should be rejected out of hand as incoherent and reactionary.


AT&T’s Dubious Behavior

You may not have noticed in the crush of events, but AT&T announced a new broadband service option last week, up to 18 Mb/s DSL:

AT&T Inc. (NYSE:T) today announced it will launch AT&T U-verse℠ High Speed Internet Max 18 on Nov. 9, offering speeds of up to 18 Mbps downstream. Exclusively available for AT&T U-verse TV customers, Max 18 is the fastest high speed Internet package available from the nation’s leading provider of broadband services.

Apparently this is simply a pricing option for existing U-verse TV customers that allows them to use more of their pipe for downloading when they aren’t using it for TV. The general data rate of the AT&T pipe is 25 Mb/s without pair bonding, of which 2–16 Mb/s is used for TV. Under the old plan, Internet downloads were capped at 12 Mb/s, which generally left enough for two HDTV streams, except when it didn’t, and under those circumstances AT&T borrowed from Internet capacity to keep the TV looking reasonably good. AT&T should be able to offer a 25 Mb/s download tier without changing any hardware, but they don’t.
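The arithmetic behind that juggling act is easy to sketch. Taking the figures above as given and assuming (my guess, not AT&T's specification) that a U-verse HD stream runs somewhere around 6 to 8 Mb/s, you can see why a 12 Mb/s Internet cap "generally" but not always left room for two HD streams:

    # Back-of-the-envelope using the figures above; the per-stream HD rate
    # is an assumption for illustration, not an AT&T specification.
    pipe_mbps = 25
    internet_cap_mbps = 12
    hd_stream_mbps = (6, 8)                   # assumed range per HD stream

    tv_budget = pipe_mbps - internet_cap_mbps             # 13 Mb/s left for TV
    two_hd_need = (2 * hd_stream_mbps[0], 2 * hd_stream_mbps[1])
    print(f"TV budget: {tv_budget} Mb/s; "
          f"two HD streams need {two_hd_need[0]} to {two_hd_need[1]} Mb/s")
    # When both streams land at the high end, the TV side comes up short,
    # hence the borrowing from Internet capacity described above.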

Generally speaking, we’re all in favor of faster downloads whenever possible, but this announcement is troubling for one very big reason: the only way you can get this service is to buy AT&T’s TV service. This bundling sets the giant of the telcos apart from competitors Verizon, Comcast, and Qwest and raises concerns that should have the consumer groups who’ve promoted the net neutrality agenda hopping mad.

The two aspects of network operation that deserve regulatory scrutiny are disclosure and anti-competitive practices, and this behavior falls squarely in the anti-competitive nexus. The other providers of triple- and quad-play services will gladly sell all tiers of Internet service to anyone in the service areas regardless of which other services they choose to buy. They typically discount Internet service for TV and phone customers, but it’s certainly available without purchasing the other services, and for less than it would cost to buy them as well.

This mandatory bundling is unfortunately consistent with AT&T’s role as the black sheep of net neutrality. It was their CEO’s remarks, after all, that set off the current controversy back in 2005: Ed Whitacre said Google and Vonage weren’t going to “use his pipes for free.” This got Google engaged in a regulatory program and unleashed a massive infusion of cash into the debate over the regulation of Internet access services, not to mention an army of Google-friendly advocates such as Larry Lessig and Tim Wu’s Free Press organization, the muscle behind the Save the Internet blog. And when the FCC overstepped its authority and slapped Comcast on the wrist, AT&T insisted the cable company should accept its fate silently and take one for the team instead of challenging the unlawful order in court. Their gall is breathtaking.

The consumer advocates have been strangely silent about this clearly anti-competitive bundling. Why should I have to buy AT&T’s TV service to get the top tier of their Internet access service? For years I bought Internet access from Comcast and TV from DirecTV, and was very pleased with the result. I would probably still do that if DirecTV had not ended their relationship with TiVo and tried to force their sub-standard DVR on me. And if I choose to do so today, I can buy the highest tier Comcast offers in my neighborhood without signing up for their TV service, and at a fairly reasonable price.

So why is AT&T trying to gouge the consumer, and why is the net neutrality movement silent about it? Consumers Union is all up in arms about cable companies converting analog customers to digital along with the rest of the country in February, a painfully silly campaign that argues for unfair regulation. Why not address a real issue instead?

No deal

Google has announced an end to its monopolistic advertising agreement with Yahoo!:

However, after four months of review, including discussions of various possible changes to the agreement, it’s clear that government regulators and some advertisers continue to have concerns about the agreement. Pressing ahead risked not only a protracted legal battle but also damage to relationships with valued partners. That wouldn’t have been in the long-term interests of Google or our users, so we have decided to end the agreement.

This is good. But Google didn’t strike out completely yesterday, as it successfully bent the ear of the FCC toward wasting the white spaces on their hare-brained “Wi-Fi without testosterone” scheme. You win some, you lose some.


The Trouble with White Spaces

Like several other engineers, I’m disturbed by the white spaces debate. The White Space Coalition, and its para-technical boosters, argue something like this: “The NAB is a tiger, therefore the White Spaces must be unlicensed.” And they go on to offer the comparison with Wi-Fi and Bluetooth, arguing as Tom Evslin does on CircleID today that “If we got a lot of innovation from just a little unlicensed spectrum, it’s reasonable to assume that we’ll get a lot more innovation if there’s a lot more [unlicensed] spectrum available.”

According to this argument, Wi-Fi has been an unqualified success in every dimension. People who make this argument haven’t worked with Wi-Fi or Bluetooth systems in a serious way, or they would be aware that there are in fact problems, serious problems, with Wi-Fi deployments.

For one thing, Wi-Fi systems are affected by sources of interference they can’t detect directly, such as FM baby monitors, cordless phones, and wireless security cameras. Running Wi-Fi on the same channel as one of these devices causes extremely high error rates. If 2.4 and 5.x GHz devices were required to emit a universally detectable frame preamble, much of this nonsense could be avoided.

And for another, we have the problem of newer Wi-Fi devices producing frames that aren’t detectable by older gear (especially original 802.11 and 802.11b equipment) unless they’re preceded by a protection frame that reduces throughput substantially. If we could declare anything older than 802.11a and .11g illegal, we could use the spectrum we have much more efficiently.
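The cost of that backward compatibility is easy to estimate in rough terms. The sketch below uses rounded, assumed timings rather than exact 802.11 figures, but it shows why sending a legacy-rate protection frame ahead of every data frame eats so much airtime at high data rates:

    # Illustrative airtime accounting; timings are rounded assumptions, not
    # values from the 802.11 standard, to show the shape of the overhead.
    data_bytes = 1500
    fast_rate_mbps = 54                       # an OFDM data rate (802.11g)
    data_airtime_us = data_bytes * 8 / fast_rate_mbps     # about 222 us

    protection_airtime_us = 300               # assumed cost of a legacy-rate
                                              # protection frame plus preamble
    efficiency = data_airtime_us / (data_airtime_us + protection_airtime_us)
    print(f"data ~{data_airtime_us:.0f} us, protection ~{protection_airtime_us} us, "
          f"airtime efficiency ~{efficiency:.0%}")
    # With protection on, less than half the airtime carries data, which is
    # why one piece of legacy gear in range drags the whole cell down.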

For another, we don’t have enough adjacent channel spectrum to use the newest version of Wi-Fi, 40 MHz 802.11n, effectively in the 2.4 GHz band. Speed inevitably depends on channel width, and the white spaces offer little dribs and drabs of spectrum all over the place, much of it in non-adjacent frequencies.
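The dependence of speed on channel width isn't mysterious; to first order it's the Shannon bound. A quick illustration with an assumed signal-to-noise ratio (the 25 dB figure is mine, chosen only to make the example concrete):

    import math

    # Shannon capacity C = B * log2(1 + SNR). The 25 dB SNR is an assumed,
    # illustrative figure; real link rates also depend on coding, MIMO, etc.
    snr = 10 ** (25 / 10)
    for bandwidth_mhz in (5, 10, 20, 40):
        capacity_mbps = bandwidth_mhz * math.log2(1 + snr)
        print(f"{bandwidth_mhz:2d} MHz -> upper bound ~{capacity_mbps:.0f} Mb/s")
    # Capacity scales linearly with channel width at a given SNR, which is
    # why fragmented dribs and drabs of spectrum are hard to turn into speed.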

But most importantly, Wi-Fi is the victim of its own success. As more people use Wi-Fi, we have to share the limited number of channels across more access points, and they are not required to share channel space with each other in a particularly efficient way. We can certainly expect a lot of collisions, and therefore packet loss, from any uncoordinated channel access scheme like Wi-Fi’s when it’s deployed on a large geographic scale. This is the old “tragedy of the commons” scenario.
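A crude model shows how quickly uncoordinated sharing degrades. Treat each station as transmitting in a given slot with some fixed probability; the numbers are invented, and ignoring carrier sensing overstates the effect, but the trend is the point:

    # Crude slotted model: each of N uncoordinated stations transmits in a
    # given slot with probability p, independently. No carrier sensing is
    # modeled, which overstates the damage, but the trend is what matters.
    p = 0.1
    for n in (2, 5, 10, 20, 50):
        collision_prob = 1 - (1 - p) ** (n - 1)   # someone else also transmits
        print(f"{n:2d} contending stations -> chance a transmission "
              f"collides ~{collision_prob:.0%}")
    # Roughly 10%, 34%, 61%, 86%, 99%: the shared channel saturates quickly.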

The problem of deploying wireless broadband is mainly a tradeoff of propagation, population, and bandwidth. The larger the population your signal covers, the greater the bandwidth needs to be in order to provide good performance. The nice thing about Wi-Fi is its limited propagation, because it permits extensive channel re-use without collisions. If the Wi-Fi signal in your neighbor’s house propagated twice as far, it would have four times as many chances to collide with other users. So high power and great propagation aren’t an unmitigated good.
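The "four times as many" figure is simple geometry: coverage area grows with the square of range, and so, for a roughly uniform density of users, does the number of stations contending with you. A quick check with a made-up user density:

    import math

    # Coverage area, and hence contending users at uniform density, grows as
    # the square of range. The density figure is made up for illustration.
    users_per_km2 = 2000
    for range_m in (30, 60, 120):
        area_km2 = math.pi * (range_m / 1000) ** 2
        users = users_per_km2 * area_km2
        print(f"range {range_m:3d} m -> ~{users:.0f} users in the footprint")
    # Doubling the range quadruples the area, and the contention with it.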

The advantage of licensing is that the license holder can apply authoritarian rules that ensure the spectrum is used efficiently. The disadvantage is that the license holder can over-charge for the use of such tightly-managed spectrum, and needs to in order to pay off the cost of his license.

The FCC needs to move into the 21st century and develop some digital rules for the use of unlicensed or lightly-licensed spectrum. The experiment I want to see concerns the development of these modern rules. We don’t need another Wi-Fi; we already know how that one worked out.

So let’s not squander the White Spaces opportunity with another knee-jerk response to the spectre of capitalism. I fully believe that people like Evslin, the White Space Coalition, and Susan Crawford are sincere in their belief that unlicensed White Spaces would be a boon to democracy; it’s just that their technical grasp of the subject matter is insufficient for their beliefs to amount to serious policy.