Court protects the right to bluff

In a rare move, the DC Circuit has upheld an FCC decision

The cable industry has won a big legal victory in the fiercely competitive phone services market. An appeals court has supported the Federal Communications Commission in its ruling that phone carriers—in this case Verizon—can’t try to lure back customers after they’ve initiated a service switch but before their number has been transferred.

The FCC rarely prevails in court, of course, so this may be a sign that we’re living in the End Times. But we can take some comfort from the fact that it wasn’t totally unpredictable, given that Kevin Martin was on the losing side.

The case involved Verizon’s efforts to win back customers after being notified by the new carrier that it had to release a customer’s phone number. Verizon took the notification as an occasion to offer sweeter deals, a practice the FCC ruled, and the court agreed, was an unlawful violation of the customer’s privacy, despite the fact that Google’s entire business is based on this kind of snooping.

It’s a win for consumers because it preserves the right to bluff. In today’s economy, we consumers can frequently get better deals on subscription services merely by threatening to cancel, whether we’re serious or not. As it happens, I got lower prices from Sports Illustrated and Illy Coffee by calling up to cancel my subscriptions, and in both cases the discounts were substantial. DirecTV refused to offer me a sweetener last year when I was tired of their crappy DVR, so they lost my TV business to Comcast. It’s not entirely clear to the business whether any of these threats are serious, of course, so it’s in their interest to err on the side of caution and offer the customer a better deal while they have the chance. Efforts to win back a customer who’s already made the switch are bound to be harder to pull off.

But the Verizon arrangement stacked the deck a little too far in the company’s favor, because it let Verizon play hardball until it was absolutely clear that the customer wasn’t bluffing: the number only gets switched over to a new phone carrier once the customer has made a deal and scheduled a hookup date.

No deal: we all have the right to bluff, and the company is going to have to guess, just like any other poker player. That’s a good deal for the consumer.

Damned if you do, screwed if you don’t

The FCC has finally noticed that reducing the Quality of Service of an Internet access service affects all the applications that use it, including VoIP. They’ve sent a harsh letter to Comcast seeking ammunition with which to pillory the cable giant, in one of Kevin Martin’s parting shots:

Does Comcast give its own Internet phone service special treatment compared to VoIP competitors who use the ISP’s network? That’s basically the question that the Federal Communications Commission posed in a letter sent to the cable giant on Sunday. The agency has asked Comcast to provide “a detailed justification for Comcast’s disparate treatment of its own VoIP service as compared to that offered by other VoIP providers on its network.” The latest knock on the door comes from FCC Wireline Bureau Chief Dana Shaffer and agency General Counsel Matthew Berry.

Readers of this blog will remember that I raised this issue with the “protocol-agnostic” management scheme Comcast adopted in order to comply with the FCC’s over-reaction to the former application-aware scheme, which prevented P2P from over-consuming bandwidth needed by more latency-sensitive applications. My argument is that network management needs to operate in two stages, one that allocates bandwidth fairly among users, and a second that allocates it sensibly among the applications in use by each user. The old Comcast scheme did one part of this, and the new scheme does the other part. I’d like to see both at the same time, but it’s not at all clear that the FCC will allow that. So we’re left with various forms of compromise.
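To make the two-stage idea concrete, here’s a minimal sketch in Python with invented numbers and names; it illustrates the concept, not anything Comcast actually runs. Stage one divides the upstream link evenly among active users; stage two serves each user’s latency-sensitive applications before bulk transfer within that user’s share.

```python
# Illustrative two-stage allocator -- invented numbers, not any ISP's real code.
UPSTREAM_CAPACITY_KBPS = 10_000   # hypothetical shared upstream link

def allocate(users):
    """Stage 1: split capacity fairly among active users.
       Stage 2: within each user's share, serve low-latency apps first."""
    fair_share = UPSTREAM_CAPACITY_KBPS / len(users)
    plan = {}
    for name, demands in users.items():
        remaining = fair_share
        per_app = {}
        # Lower priority number = more latency-sensitive = served first.
        for app in sorted(demands, key=lambda a: demands[a]["priority"]):
            grant = min(demands[app]["kbps"], remaining)
            per_app[app] = grant
            remaining -= grant
        plan[name] = per_app
    return plan

demo = {
    "alice": {"voip": {"kbps": 100, "priority": 0},
              "p2p":  {"kbps": 8000, "priority": 2}},
    "bob":   {"web":  {"kbps": 1500, "priority": 1}},
}
print(allocate(demo))
# {'alice': {'voip': 100, 'p2p': 4900.0}, 'bob': {'web': 1500}}
```

Note that the heavy P2P user still gets nearly half the link; he just can’t starve his neighbor or his own VoIP call.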

The fundamental error the FCC is making in this instance is misidentifying the “service” it seeks to regulate, in keeping with its new attempt to regulate services (skip to 13:30) rather than technologies.

Comcast sells Internet service, telephone service, and TV service. It doesn’t sell “VoIP service” so there’s no basis to this complaint. The Commission has made it very difficult for Comcast to even identify applications running over the Internet service, and the Net Neuts have typically insisted it refrain from even trying to do so; recall David Reed’s fanatical envelope-waving exercise at the Harvard hearing last year.

The telephone service that Comcast and the telephone companies sell uses dedicated bandwidth, while the over-the-top VoIP service that Vonage and Skype offer uses shared bandwidth. I certainly hope that native phone service outperforms ad hoc VoIP; I pay good money to ensure that it does.

This action says a lot about what’s wrong with the FCC. Regardless of the regulatory model it brings to broadband, it lacks the technical expertise to apply it correctly. The result is “damned if you do, damned if you don’t” enforcement actions.

This is just plain silly. The only party the FCC has any right to take to task in this matter is itself.

The pirates who congregate at DSL Reports are in a big tizzy over this, naturally.

Canadian regulators smarter than Americans

Canada’s Internet users have won a measure of victory over bandwidth hogs. In a ruling from the CRTC, Canada’s FCC, Bell Canada is permitted to continue managing network over-use:

Bell Canada today won a largely clear victory in an anti-throttling lawsuit filed with the Canadian Radio-television and Telecommunications Commission (CRTC). The government body has issued a ruling dismissing claims by Internet providers using part of Bell’s network that accused the carrier of unfairly throttling the connection speeds of their services while also constricting its own. These rivals, represented by the Canadian Association of Internet Providers (CAIP), had accused Bell of trying to hinder competition and violating the basic concepts of net neutrality by discouraging large transfers.

The CRTC’s dismissal is based on the observation that peer-to-peer usage does appear to have a detrimental impact on Bell’s network and so requires at least some level of control to keep service running properly for all users. It also rejects neutrality concerns on the claim that Bell’s throttling system, which uses deep packet inspection to investigate traffic, only adjusts speeds and doesn’t restrict the content itself.

Bell hails its successful defense as proof that those running online networks are “in the best position” to judge how their networks are managed.

Canada’s Larry Lessig, a populist/demagogue law professor named Michael Geist, was heart-broken over the decision, and pro-piracy web site Ars Technica shed a few tears as well:

The proceeding was also notable for the frank admissions from other large ISPs like Rogers—they admitted that they throttle traffic on a discriminatory basis, too. It also produced wild allegations from companies like Cisco that “even if more bandwidth were added to the network, P2P file-sharing applications are designed to use up that bandwidth.” Such assertions allow the ISPs to claim that they must be able to throttle specific protocols simply to stay afloat—survival is at stake.

This is (to put it politely) highly debatable.

Actually it’s not debatable, not by sane people anyhow. Residential broadband is as cheap as it is only because ISPs can count on people sharing the wires in a civilized fashion. People who keep their broadband pipes constantly saturated take resources away from their neighbors. There are alternatives, of course. You can buy a T-1 line with a Service Level Agreement that you can saturate with all the traffic you want. In the US, count on paying $400/mo for 1.5 Mb/s upload and download. Want something cheaper? Learn to share.
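To put rough numbers on it (the T-1 price is the one quoted above; the residential tier is a hypothetical $45 for 16 Mb/s, which is in the right ballpark):

```python
# Dedicated vs. shared bandwidth, in dollars per megabit (rough, illustrative).
t1_price, t1_mbps = 400.0, 1.5         # the T-1 figure quoted above, with an SLA
cable_price, cable_mbps = 45.0, 16.0   # hypothetical residential tier, no SLA

print(round(t1_price / t1_mbps))       # ~267 $/Mb/s, guaranteed and saturable
print(round(cable_price / cable_mbps)) # ~3 $/Mb/s, "up to" rate, shared
# The two-orders-of-magnitude gap exists only because residential users share.
```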

Canada is widely regarded as a more left wing, business-hostile country than the US. How to account for the fact that the CRTC got this issue right while Bush’s FCC got it wrong in the Comcast case?


Thirty Profiles

Dave Burstein of DSL Prime has posted profiles of 30 FCC candidates to his web site, including one transition team member:

Susan Crawford, now teaching at Michigan, also has enormous respect from her peers and would bring international perspective from her role at ICANN setting world Internet policy

The selection of Crawford to join Kevin Werbach on the FCC transition team has already gotten some of my colleagues on the deregulatory side pretty excited, as she has the image of being a fierce advocate of a highly-regulated Internet. And indeed, she has written some strong stuff in favor of the “stupid network” construct that demands all packets be treated as equals inside the network. The critics are missing something very important, however: both Werbach and Crawford are “Internet people” rather than “telecom people,” and that matters. While we may not like Crawford’s past willingness to embrace a neutral routing mandate, the more interesting question is how she comes down on a couple of issues that trump neutral routing: network management and multi-service routing.

We all know by now that the network management exception is more powerful than Powell’s “Four Freedoms” where the rubber meets the road, but we lack any clear guidance to ISPs as to how their management practices will be evaluated. Clarification of the rules is as much a benefit to carriers as it is to consumers. The one way to ensure that we all lose is to keep lumbering along in the murk of uncertain authority and secret rules. Internet people are going to ask the right questions of their candidates, and anybody who can satisfy both Werbach and Crawford will have to be a good choice. Check Werbach’s web site for his papers. Unfortunately, the most interesting of them is not yet in print: “The Centripetal Network: How the Internet Holds Itself Together, and the Forces Tearing it Apart”, UC Davis Law Review, forthcoming 2008. Perhaps he’ll post a draft.

The question of multi-service routing is also very important. Crawford has written and testified to the effect that the Internet is the first global, digital, multi-service network, which is substantially correct. The Internet is not fully multi-service today, however, and can’t be unless it exposes multiple service levels at the end points for applications to use easily. The generic public Internet offers a single transport service that has to meet the needs of diverse applications, which is not really an achievable goal in a peer-to-peer world.
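For what it’s worth, the closest thing an application has today is marking its own packets with a DiffServ code point and hoping the networks along the path honor it, which the public Internet generally doesn’t. A quick sketch of what that looks like, with a placeholder address and port:

```python
# Sketch: an endpoint asking for a better-than-best-effort service class by
# marking its packets with a DSCP value (Expedited Forwarding = 46).
# Whether any network along the path honors the mark is another matter; on
# today's public Internet it usually isn't honored, which is the point above.
import socket

EF_DSCP = 46               # Expedited Forwarding, per RFC 3246
TOS_VALUE = EF_DSCP << 2   # DSCP occupies the top six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# From here on, datagrams sent on this socket carry the EF code point.
sock.sendto(b"hypothetical VoIP payload", ("192.0.2.10", 5060))
```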

AT&T’s Dubious Behavior

You may not have noticed in the crush of events, but AT&T announced a new broadband service option last week, up to 18 Mb/s DSL:

AT&T Inc. (NYSE:T) today announced it will launch AT&T U-verse℠ High Speed Internet Max 18 on Nov. 9, offering speeds of up to 18 Mbps downstream. Exclusively available for AT&T U-verse TV customers, Max 18 is the fastest high speed Internet package available from the nation’s leading provider of broadband services.

Apparently this is simply a pricing option for existing U-verse TV customers that allows them to use more of their pipe for downloading when they aren’t using it for TV. The general data rate of the AT&T pipe is 25 Mb/s without pair bonding, of which 2–16 Mb/s is used for TV. Under the old plan, Internet downloads were capped at 12 Mb/s, which generally left enough for two HDTV streams, except when it didn’t; in those cases AT&T borrowed from Internet capacity to keep the TV looking fairly good. AT&T should be able to offer a 25 Mb/s download tier without changing any hardware, but they don’t.
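The arithmetic behind that, roughly; the 25 Mb/s line rate comes from the paragraph above, while the per-stream TV rates are my own ballpark assumptions (actual MPEG-4 rates vary):

```python
# Rough budget for the U-verse pipe described above (illustrative numbers).
LINE_RATE_MBPS = 25        # single pair, no bonding
HDTV_STREAM_MBPS = 8       # ballpark assumption; actual MPEG-4 rates vary
SDTV_STREAM_MBPS = 2

def internet_headroom(hd_streams, sd_streams):
    tv = hd_streams * HDTV_STREAM_MBPS + sd_streams * SDTV_STREAM_MBPS
    return LINE_RATE_MBPS - tv

print(internet_headroom(hd_streams=2, sd_streams=0))  # 9 -> less than the old 12 Mb/s cap
print(internet_headroom(hd_streams=1, sd_streams=1))  # 15
print(internet_headroom(hd_streams=0, sd_streams=1))  # 23
```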

Generally speaking, we’re all in favor of faster downloads whenever possible, but this announcement is troubling for one very big reason: the only way you can get this service is to buy AT&T’s TV service. This bundling sets the giant of the telcos apart from competitors Verizon, Comcast, and Qwest and raises concerns that should have the consumer groups who’ve promoted the net neutrality agenda hopping mad.

The two aspects of network operation that deserve regulatory scrutiny are disclosure and anti-competitive practices, and this behavior falls squarely in the anti-competitive nexus. The other providers of triple- and quad-play services will gladly sell all tiers of Internet service to anyone in the service areas regardless of which other services they choose to buy. They typically discount Internet service for TV and phone customers, but it’s certainly available without purchasing the other services, and for less than it would cost to buy them as well.

This mandatory bundling is unfortunately consistent with AT&T’s role as the black sheep of net neutrality. It was their CEO’s remarks, after all, that set off the current controversy back in 2005: Ed Whitacre said Google and Vonage weren’t going to “use his pipes for free.” This got Google engaged in a regulatory program and unleashed a massive infusion of cash into the debate over the regulation of Internet access services, not to mention an army of Google-friendly advocates such as Larry Lessig and Tim Wu’s Free Press organization, the muscle behind the Save the Internet blog. And when the FCC overstepped its authority and slapped Comcast on the wrist, AT&T insisted the cable company should accept its fate silently and take one for the team instead of challenging the unlawful order in court. Their gall is breathtaking.

The consumer advocates have been strangely silent about this clearly anti-competitive bundling. Why should I have to buy AT&T’s TV service to get the top tier of their Internet access service? For years I bought Internet access from Comcast and TV from DirecTV, and was very pleased with the result. I would probably still do that if DirecTV had not ended their relationship with TiVo and tried to force their sub-standard DVR on me. And if I choose to do so today, I can buy the highest tier Comcast offers in my neighborhood without signing up for their TV service, and at a fairly reasonable price.

So why is AT&T trying to gouge the consumer, and why is the net neutrality movement silent about it? Consumers Union is all up in arms about cable companies converting analog customers to digital along with the rest of the country in February, a painfully silly campaign that argues for unfair regulation. Why not address a real issue instead?

No deal

Google has announced an end to its monopolistic advertising agreement with Yahoo!:

However, after four months of review, including discussions of various possible changes to the agreement, it’s clear that government regulators and some advertisers continue to have concerns about the agreement. Pressing ahead risked not only a protracted legal battle but also damage to relationships with valued partners. That wouldn’t have been in the long-term interests of Google or our users, so we have decided to end the agreement.

This is good. But Google didn’t strike out completely yesterday, as it successfully bent the ear of the FCC toward wasting the whitespaces on their hare-brained “Wi-Fi without testosterone” scheme. You win some, you lose some.


The Trouble with White Spaces

Like several other engineers, I’m disturbed by the white spaces debate. The White Space Coalition, and its para-technical boosters, argue something like this: “The NAB is a tiger, therefore the White Spaces must be unlicensed.” And they go on to offer the comparison with Wi-Fi and Bluetooth, arguing as Tom Evslin does on CircleID today that “If we got a lot of innovation from just a little unlicensed spectrum, it’s reasonable to assume that we’ll get a lot more innovation if there’s a lot more [unlicensed] spectrum available.”

According to this argument, Wi-Fi has been an unqualified success in every dimension. People who make this argument haven’t worked with Wi-Fi or Bluetooth systems in a serious way, or they would be aware that there are in fact problems, serious problems, with Wi-Fi deployments.

For one thing, Wi-Fi systems are affected by sources of interference they can’t detect directly, such as FM baby monitors, cordless phones, and wireless security cameras. Running Wi-Fi on the same channel as one of these devices causes extremely high error rates. If 2.4 and 5.x GHz devices were required to emit a universally detectable frame preamble, much of this nonsense could be avoided.

And for another, newer Wi-Fi devices produce frames that older gear (especially original 802.11 and 802.11b radios) can’t detect unless they’re preceded by a protection frame that reduces throughput substantially. If we could declare anything older than 802.11a and .11g illegal, we could use the spectrum we have much more efficiently.

For another, we don’t have enough adjacent channel spectrum to use the newest version of Wi-Fi, 40 MHz 802.11n, effectively in the 2.4 GHz band. Speed inevitably depends on channel width, and the white spaces offer little dribs and drabs of spectrum all over the place, much of it in non-adjacent frequencies.

But most importantly, Wi-Fi is the victim of its own success. As more people use Wi-Fi, we have to share the limited number of channels across more Access Points, and they are not required to share channel space with each other in a particularly efficient way. We can certainly expect a lot of collisions, and therefore packet loss, from any uncoordinated channel access scheme like Wi-Fi’s when it’s deployed on a large geographic scale. This is the old “tragedy of the commons” scenario.
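A toy simulation makes the trend visible. This is a crude slotted model with arbitrary numbers, not real CSMA/CA, but the shape of the result is the point: as uncoordinated access points pile up, the fraction of transmissions that collide climbs steadily.

```python
# Toy illustration of the commons problem: uncoordinated APs on 3 channels.
# Crude slotted model, not real CSMA/CA; only the trend matters here.
import random

def collision_rate(num_aps, channels=3, slots=10, trials=2000):
    collided = attempts = 0
    for _ in range(trials):
        transmissions = {}
        for ap in range(num_aps):
            key = (random.randrange(channels), random.randrange(slots))
            transmissions.setdefault(key, []).append(ap)
        for aps in transmissions.values():
            attempts += len(aps)
            if len(aps) > 1:        # two or more APs picked the same channel and slot
                collided += len(aps)
    return collided / attempts

for n in (3, 10, 30, 100):
    print(n, round(collision_rate(n), 2))
```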

The problem of deploying wireless broadband is mainly a tradeoff of propagation, population, and bandwidth. The larger the population your signal covers, the greater the bandwidth needs to be in order to provide good performance. The nice thing about Wi-Fi is its limited propagation, because it permits extensive channel re-use without collisions. If the Wi-Fi signal in your neighbor’s house propagated twice as far, it would cover four times the area and have roughly four times as many chances to collide with other users. So high power and great propagation aren’t an unmitigated good.

The advantage of licensing is that the license holder can apply authoritarian rules that ensure the spectrum is used efficiently. The disadvantage is that the license holder can over-charge for the use of such tightly-managed spectrum, and needs to in order to pay off the cost of his license.

The FCC needs to move into the 21st century and develop some digital rules for the use of unlicensed or lightly-licensed spectrum. The experiment I want to see concerns the development of these modern rules. We don’t need another Wi-Fi; we know how that worked out.

So let’s not squander the White Spaces opportunity with another knee-jerk response to the spectre of capitalism. I fully believe that people like Evslin, the White Space Coalition, and Susan Crawford are sincere in their belief that unlicensed White Spaces would be a boon to democracy; it’s just that their technical grasp of the subject matter is insufficient for their beliefs to amount to serious policy.

FCC fills empty job

Kevin Martin’s FCC has hired a new chief technologist, Jon Peha:

Federal Communications Commission chairman Kevin Martin named John Peha chief technologist, the senior adviser post at the commission on technology issues, based out of the Office of Strategic Planning and Policy Analysis.

I’m a bit disappointed. Peha is the guy who delivered strong testimony denouncing the Comcast management of BitTorrent without bothering to study BitTorrent’s use of TCP connections. His testimony was substantially wrong on a factual basis. Perhaps Peha can persuade me that he means well, but his performance so far has not been encouraging.

UPDATE: What am I talking about? Well take a look at the comments Peha filed in the Comcast matter, which are on-line at the FCC’s web site. He understands what’s at stake:

In the debate over network neutrality, both sides can make points that deserve serious consideration from policymakers. Such consideration requires clear and accurate statements of the facts, to say nothing of the broader issues at stake. Unfortunately, the public debate has often been filled with hyperbole and spin from advocates on both sides. Such rhetoric, combined with issues of technical complexity and subtlety, has made it unnecessarily difficult for policymakers to make informed decisions.

So what did he do? He misrepresented the facts and engaged in advocacy spin, to wit:

Comcast sends Device A a reset packet, with parameters set such that Device A will believe the reset is coming from Device B. Device A is therefore led to believe (incorrectly) that Device B is unwilling or unable to continue the session. The same may be occurring at Device B. Thus, the devices determine that the session must be ended, and no further packets can be sent.

It is factually incorrect to say that the process described above merely delays P2P traffic.

Bzzzttt, wrong answer. A BitTorrent “session” consists of multiple TCP connections, so terminating one, or two, or any number short of the total that a given instance of BitTorrent is using at any particular time is in fact “delaying” rather than “blocking.” Peha assumes that BitTorrent “sessions” are the same as TCP “sessions,” and they clearly aren’t; most of what makes BitTorrent troublesome, in fact, is the large number of TCP connections it uses. It’s particularly outrageous that Peha charges Comcast with misrepresentation and then goes on to misrepresent in his own right.
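Here’s the point in schematic form: a BitTorrent-style client holds connections to many peers at once, and when one connection dies it simply fetches the same pieces from another. Everything below is invented (fake peer names, no real sockets); it just shows why a forged reset on one connection delays a transfer rather than blocking it.

```python
# Why killing one TCP connection doesn't kill the "session": a BitTorrent-like
# client talks to many peers at once and re-requests pieces elsewhere.
# Schematic only -- fake peers, fake pieces, no real networking.

class Swarm:
    def __init__(self, peers, pieces):
        self.peers = set(peers)      # open TCP connections, notionally
        self.pending = set(pieces)   # pieces still needed

    def reset_received(self, peer):
        """A forged RST tears down one connection..."""
        self.peers.discard(peer)

    def download(self):
        """...but each piece is simply fetched from another peer, later."""
        while self.pending and self.peers:
            piece = self.pending.pop()
            peer = next(iter(self.peers))
            print(f"got piece {piece} from {peer}")

swarm = Swarm(peers={"peer-A", "peer-B", "peer-C"}, pieces={1, 2, 3, 4})
swarm.reset_received("peer-B")   # the Comcast-style reset: one connection gone
swarm.download()                 # transfer finishes anyway -- delayed, not blocked
```

The transfer completes either way; the reset costs the client a connection and a little time, which is exactly the “small delay” Peha concedes two paragraphs later.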

He then goes on to contradict himself and admit that it’s really “delaying” after all:

After the flow of P2P from a given sender and recipient is blocked or terminated, the recipient is likely to seek some other source for the content. If the content is extremely popular, there are many options available. Consequently, this leads to a small delay, somewhat decreasing the rate at which this recipient can gather content.

So which is it, Dr. Peha, “blocking” or “delaying?” He can’t even make up his own mind. He then goes on to whack Comcast for targeting P2P:

Comcast has elected to employ mechanisms that degrade service for a particular application, i.e. P2P, instead of relying only on congestion control mechanisms that deal with traffic of all application types. Central to their justification of this approach has been the assertion that it is specifically P2P that has an adverse impact on other traffic. This assertion is untrue.

…and he goes on to talk about blue cars and red cars, a lot of nonsensical fluff. The fact remains that P2P is the only application with such a great ability to consume bandwidth on a non-stop basis that it degrades the Internet experience of web browsing, and that experience is what Comcast was trying to protect.

And more significantly, Peha fails to grasp the fact that applications are not created equal in terms of their tolerance for delay. P2P has no particular time constraints when running as a seeder (serving files to the rest of the Internet) but interactive applications like Web browsing and VoIP have very little tolerance for delay. And now we have a standard in place that requires ISPs to ignore these technical distinctions, thanks largely to the inept analysis of people like Peha.
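In scheduling terms, the difference looks like this; it’s a back-of-the-envelope sketch with invented numbers, but it shows why a FIFO queue punishes VoIP while a strict-priority queue barely delays it at all:

```python
# Delay tolerance in one picture: strict-priority service keeps a VoIP packet
# waiting behind at most one in-flight bulk packet, while FIFO makes it wait
# behind the whole upload queue. All numbers are illustrative.
from collections import deque

LINK_KBPS = 1000                          # hypothetical upstream rate

def queueing_delay_ms(bytes_ahead):
    return bytes_ahead * 8 / LINK_KBPS    # bits / kbps == milliseconds

upload_queue = deque([1500] * 20)         # 20 full-size P2P packets already queued

# FIFO: the VoIP packet goes to the back and waits for all of them.
print(round(queueing_delay_ms(sum(upload_queue))))   # ~240 ms -- audible damage

# Strict priority: it waits for at most the packet currently on the wire.
print(round(queueing_delay_ms(1500)))                # ~12 ms -- harmless
```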

In additional remarks he confesses his ignorance of network management techniques generally, and compares the Comcast method to a “man in the middle attack.” If that’s what he thinks, really and truly, he’s seriously under-informed. A “man in the middle attack” is a means of breaking into a system by stealing passwords. What system did Comcast break into, and what password did they use to do so?

In Kevin Martin’s FCC this outlandish foolishness is a job interview. Peha is smarter than Sarah Palin, but he’s no Dave Farber. Surely the FCC can do better than to employ an advocate in a position that requires depth of technical knowledge and a commitment to impartiality. Kevin Martin has failed the American people again.

A more suitable candidate exists: Just a Girl in Short Shorts Talking about Whatever:

Comcast was regulating the download speeds of peer to peer networks, such as BitTorrent. I like to pirate movies as much as next cheapskate, but I do not think it is necessary that it be given equal priority with VoIP (voice over Internet).

That’s the level of insight we need in a Chief Technologist.


Secret laws are not law

While looking for the essence of Lessig’s “code is law” formulation, I happened on this little gem:

If there is one thing clear about the value we demand of East Coast Code, it is transparency. Secret laws are not law. And if there is one thing clear about the recent panic about privacy, it is that much of the anxiety was about the secrets hidden within closed code. Closed code hides its systems of control; open code can’t. Any encryption or identification system built into open code is transparent to those who can read the code, just as laws are transparent to those who can read Congress’ code – lawyers.

(“East Coast Code” means laws and government regulations.) Kinda makes you wonder why Lessig wasn’t critical of the rabbit-out-of-the-hat regulations the FCC imposed on Comcast.

Oh well.


Comcast files their compliance plan

Today was the deadline for Comcast to tell the FCC how its existing congestion management system works, as well as how its “protocol agnostic” replacement is going to work. To the dismay of some critics, they’ve done just that in a filing that was hand-delivered as well as electronically filed today. It will be posted to the Comcast web site shortly.

The filing corrects some of the false allegations made by critics with respect to privacy, making it very clear that the existing system simply inspects protocol headers (“envelopes”) and not personal data. David Reed in particular got himself worked into a tizzy over the idea that Comcast was deciding which streams to delay based on content, but this is clearly not the case. Inside the IP envelope sits a TCP envelope, and inside that sits a BitTorrent envelope. User data is inside the BitTorrent (or equivalent) envelope, and Comcast doesn’t look at it.
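For the curious, here’s roughly what reading the envelopes but not the letter means in code. This is my own illustrative parser, not Comcast’s or Sandvine’s, and the sample packet is hand-built: the classifier reads the IP and TCP headers for addresses, protocol, and ports, and deliberately stops before the payload.

```python
# "Envelopes, not letters": parse only the IP and TCP headers of a packet and
# never touch the payload. Illustrative; not any vendor's actual code.
import struct

def classify(packet: bytes):
    ihl = (packet[0] & 0x0F) * 4                 # IP header length in bytes
    proto = packet[9]
    src_ip = ".".join(str(b) for b in packet[12:16])
    dst_ip = ".".join(str(b) for b in packet[16:20])
    if proto != 6:                               # not TCP
        return (src_ip, dst_ip, proto, None, None)
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    # The payload starts after the TCP header; we deliberately never look at it.
    return (src_ip, dst_ip, proto, src_port, dst_port)

# Build a minimal fake IPv4 + TCP header pair just to exercise the classifier.
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                 bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
tcp = struct.pack("!HHIIBBHHH", 51413, 6881, 0, 0, 0x50, 0x02, 8192, 0, 0)
print(classify(ip + tcp + b"this payload is never examined"))
```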

The current system sets a bandwidth quota for P2P (about 50% of total upstream bandwidth) and prevents P2P as a group from crossing that threshold with new uni-directional upload (i.e., file-server-like) streams, tearing down the newly requested streams with the TCP Reset bit. The system is a bit heavy-handed, but reserving 50% of the network for one class of application seems pretty reasonable, given that no more than 20% of customers use P2P at all.
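In pseudocode terms, the described policy amounts to something like the following. It’s a sketch built from the description above, not actual vendor code:

```python
# Sketch of the old policy as described: new uni-directional P2P upload streams
# are refused (with a TCP RST) while P2P holds more than half of the upstream.
# Purely illustrative numbers and logic.

UPSTREAM_KBPS = 10_000
P2P_QUOTA_KBPS = UPSTREAM_KBPS * 0.50

def admit_new_p2p_upload(current_p2p_kbps, new_stream_kbps):
    if current_p2p_kbps + new_stream_kbps > P2P_QUOTA_KBPS:
        return "send TCP RST"     # tear down the requested stream
    return "admit"

print(admit_new_p2p_upload(current_p2p_kbps=4800, new_stream_kbps=100))  # admit
print(admit_new_p2p_upload(current_p2p_kbps=5100, new_stream_kbps=100))  # send TCP RST
```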

Nonetheless, the new system will not look at any headers, and will simply be triggered by the volume of traffic each user puts on the network and the overall congestion state of the network segment. If the segment goes over 70% utilization in the upload direction for a fifteen-minute sample period, congestion management will take effect.

In the management state, traffic volume measurement will determine which users are causing the near-congestion, and only those using high amounts of bandwidth will be managed. The way they’re going to be managed is going to raise some eyebrows, but it’s perfectly consistent with the FCC’s order.

High-traffic users – those who’ve used over 70% of their account’s limit for the last fifteen minutes – will have all of their traffic de-prioritized for the next fifteen minutes. While de-prioritized, they still have access to the network, but only after the conforming users have transmitted their packets. So instead of bidding on the first 70% of network bandwidth, they’ll essentially bid on the 30% that remains. This will be a bummer for people who are banging out files as fast as they can only to have a Skype call come in. Even if they stop BitTorrent, the first fifteen minutes of Skyping are going to be rough. A more pleasant approach would be to let excessive users out of QoS jail with credit for good behavior – if their utilization drops to Skype level, let them out in a few seconds, because it’s clear they’ve turned off their file sharing program. This may be easier said than done, and it may raise the ire of Kevin Martin, given how irrational he is with this anti-cable vendetta.
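Reduced to a sketch, the trigger logic as described works something like this; the thresholds come from the description above, but the code itself is mine:

```python
# Sketch of the "fifteen/fifteen" logic described above -- illustrative only,
# built from the public description rather than Comcast's actual code.

SEGMENT_TRIGGER = 0.70    # fraction of segment upstream capacity
USER_TRIGGER = 0.70       # fraction of the user's provisioned rate
PENALTY_MINUTES = 15      # how long a heavy user's traffic is de-prioritized

def users_to_deprioritize(segment_util, user_utils):
    """segment_util: 15-minute average utilization of the segment (0..1).
       user_utils: {user: 15-minute average fraction of provisioned rate}."""
    if segment_util <= SEGMENT_TRIGGER:
        return {}                                 # no congestion, no management
    return {user: PENALTY_MINUTES                 # traffic marked best-effort
            for user, frac in user_utils.items()
            if frac > USER_TRIGGER}

print(users_to_deprioritize(0.82, {"alice": 0.95, "bob": 0.10, "carol": 0.72}))
# {'alice': 15, 'carol': 15} -- heavy users yield to everyone else for 15 minutes
```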

The user can prevent this situation from arising, of course, if he wants to. All he has to do is set the upload and download limits in BitTorrent low enough that he doesn’t consume enough bandwidth to land in the “heavy user” classification and he won’t have to put up with bad VoIP quality. I predict that P2P applications and home gateways are going to incorporate controls to enforce “Comcast friendly” operation to prevent de-prioritization. There are other more refined approaches to this problem, of course.

At the end of the day, Comcast’s fifteen/fifteen system provides users with the incentive to control their own bandwidth appetites, which makes it an “end-to-end” solution. The neutralitarians should be happy about that, but it remains to be seen how they’re going to react.

It looks pretty cool to me.

UPDATE: Comcast-hater Nate Anderson tries to explain the system at Ars Technica. He has some of it right, but doesn’t seem to appreciate any of its implications. While the new system won’t look at protocol headers (the evil “Deep Packet Inspection” that gets network neophytes and cranks so excited) and won’t use TCP Resets, that doesn’t mean P2P won’t be throttled; it will be.

That’s simply because P2P contributes most of the load on residential networks. So if you throttle the heaviest users, you’re in effect throttling the heaviest P2P users, because the set of heavy users and the set of heavy P2P users is the same set. So the “disparate impact” will remain even though the “disparate treatment” will end.

But the FCC has to like it, because it conforms to all of Kevin Martin’s rabbit-out-of-the-hat rules. The equipment Comcast has had to purchase for this exercise in aesthetic reform will have utility down the road, but for now it’s simply a tax imposed by out-of-control regulators.