Are the FCC Workshops Fair?

The FCC has run three days of workshops on the National Broadband Plan now, for the purpose of bringing a diverse set of perspectives on broadband technology and deployment issues to the attention of FCC staff. You can see the workshop agendas here. The collection of speakers is indeed richly varied. As you would expect, the session on eGov featured a number of government people and a larger collection of folks from the non-profit sector, all but one of whom has a distinctly left-of-center orientation. Grass-roots devolution arguments have a leftish and populist flavor, so who better to make the argument than people from left-of-center think tanks?

Similarly, the sessions on technology featured a diverse set of voices, but emphasized speakers with actual technology backgrounds. Despite the technology focus, a good number of non-technologists were included, such as media historian Sascha Meinrath, Dave Burstein, Amazon lobbyist Paul Misener, and veteran telephone regulator Mark Cooper. A number of the technology speakers came from the non-profit or university sector, such as Victor Frost of the National Science Foundation, Henning Schulzrinne of Columbia University and the IETF, and Bill St. Arnaud of Canarie. The ISPs spanned the range from big operators such as Verizon and Comcast down to ISPs with fewer than 2,000 customers.

Given these facts, it’s a bit odd that some of the public interest groups are claiming to have been left out. There aren’t more than a small handful of genuine technologists working for the public interest groups; you can practically count them on one hand without using the thumb, and there’s no question that their point of view was well represented on the first three days of panels. Sascha Meinrath’s comments at the mobile wireless session on European hobbyist networks were quite entertaining, although not particularly serious. Claiming that “hub-and-spoke” networks are less scalable and efficient than wireless meshes is not credible.
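For the sake of concreteness, here’s a back-of-the-envelope sketch of why the mesh claim doesn’t hold up, using the well-known Gupta-Kumar scaling result for multi-hop wireless networks. The channel rate and clients-per-AP figures are illustrative assumptions, not measurements; the point is the trend, not the exact numbers.

```python
# Back-of-the-envelope comparison of per-node throughput scaling:
# multi-hop wireless mesh vs. hub-and-spoke access points.
# The mesh figure uses the Gupta-Kumar result, under which per-node
# capacity falls off roughly as W / sqrt(n * log n); the hub-and-spoke
# figure assumes each AP serves a fixed-size cell on its own backhaul.
# Channel rate and cell size are illustrative assumptions.

import math

CHANNEL_RATE_MBPS = 54.0      # nominal shared channel rate (assumption)
NODES_PER_AP = 20             # clients per access point (assumption)

def mesh_per_node(n):
    """Approximate per-node throughput in an n-node multi-hop mesh."""
    return CHANNEL_RATE_MBPS / math.sqrt(n * math.log(n))

def hub_and_spoke_per_node(n):
    """Per-node throughput when capacity is added one AP per cell."""
    return CHANNEL_RATE_MBPS / NODES_PER_AP   # independent of n

for n in (50, 500, 5000):
    print(f"{n:>5} nodes: mesh ~{mesh_per_node(n):5.2f} Mbps/node, "
          f"hub-and-spoke ~{hub_and_spoke_per_node(n):5.2f} Mbps/node")
```

The mesh number shrinks as the network grows because every added node is both a source of traffic and a relay competing for the same shared channel; the hub-and-spoke number holds steady because each new access point brings its own wired backhaul.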

The complaint has the feel of “working the refs” in a basketball game: not so much a legitimate complaint as a tactical move to crowd out the technical voices on the panels to come.

I hope the FCC rolls its collective eyes and calls the game as it sees it. Solid policy positions aren’t contradicted by sound technical analysis; they’re reinforced by it. The advocates shouldn’t fear the FCC’s search for good technical data; they should embrace it.

Let a thousand flowers bloom, folks.

Cross-posted at CircleID.

Another Net Neutrality Meltdown

Over the weekend, a swarm of allegations hit the Internet to the effect that AT&T was blocking access to the 4chan web site. This report from Techcrunch was fairly representative:

As if AT&T wasn’t already bad enough. In an act that is sure to spark internet rebellions everywhere, AT&T has apparently declared war on the extremely popular imageboard 4chan.org, blocking some of the site’s most popular message boards, including /r9k/ and the infamous /b/. moot, who started 4chan and continues to run the site, has posted a note to the 4chan status blog indicating that AT&T is in fact filtering/blocking the site for many of its customers (we’re still trying to confirm from AT&T’s side).

4chan, in case you didn’t know, is a picture-sharing site that serves as the on-line home to a lovable band of pranksters who like to launch DOS attacks and other forms of mischief against anyone who peeves them. The infamous “Anonymous” DOS attack on the Scientology cult was organized by 4chan members, which is a feather in their cap from my point of view. So the general reaction to the news that AT&T had black-holed some of 4chan’s servers was essentially “woe is AT&T, they don’t know who they’re messing with.” Poke 4chan, they poke back, and hard.

By Monday afternoon, it was apparent that the story was not all it seemed. The owner of 4chan, a fellow known as “moot,” admitted that AT&T had good reason to take action against 4chan, which was actually launching what amounted to a DOS attack against some AT&T customers without realizing it:

For the past three weeks, 4chan has been under a constant DDoS attack. We were able to filter this specific type of attack in a fashion that was more or less transparent to the end user.

Unfortunately, as an unintended consequence of the method used, some Internet users received errant traffic from one of our network switches. A handful happened to be AT&T customers.

In response, AT&T filtered all traffic to and from our img.4chan.org IPs (which serve /b/ & /r9k/) for their entire network, instead of only the affected customers. AT&T did not contact us prior to implementing the block.

moot didn’t apologize in so many words, but he did more or less admit his site was misbehaving while still calling the AT&T action “a poorly executed, disproportionate response” and suggesting that it was a “blessing in disguise” because it renewed interest in net neutrality and net censorship. Of course, these subjects are never far from the radar given the renewed war over Internet regulation sparked by the comments on the FCC’s National Broadband Plan, but thanks for playing.

The 4chan situation joins a growing list of faux net neutrality crises that have turned out to be nothing once investigated for a few minutes:

* Tom Foremski claimed that Cox Cable blocked access to Craig’s List on June 6th, 2006, but it turned out to be a strange interaction between a personal firewall and Craig’s List’s odd TCP settings. Craig’s List ultimately changed their setup, and the software vendor changed theirs as well. Both parties had the power to fix the problem all along.

* Researchers at the U. of Colorado, Boulder claimed on April 9, 2008, that Comcast was blocking their Internet access when in fact it was their own local NAT that was blocking a stream that looked like a DOS attack. These are people who really should know better.

The tendency to scream “censorship” first and ask questions later doesn’t do anyone any good, so before the next storm of protest arises over a network management problem, let’s get the facts straight. There will be web accounts of AT&T “censoring” 4chan for months and years to come, because these rumors never get corrected on the Internet. As long as Google indexes by popularity, and the complaints are more widespread than the corrections, the complaints will remain the “real story.” I’d like to see some blog posts titled “I really screwed this story up,” but that’s not going to happen – all we’re going to see are some ambiguous updates buried at the end of the misleading stories.

UPDATE: It’s worth noting that AT&T wasn’t the only ISP or carrier to block 4chan’s misbehaving switch on Sunday. Another network engineer who found it wise to block the site until it had corrected its DDoS countermeasures posted this to the NANOG list:

Date: Sun, Jul 26, 2009 at 11:05 PM
Subject: Re: AT&T. Layer 6-8 needed.

There has been alot of customers on our network who were complaining about ACK scan reports coming from 207.126.64.181. We had no choice but to block that single IP until the attacks let up. It was a decision I made with the gentleman that owns the colo facility currently hosts 4chan. There was no other way around it. I’m sure AT&T is probably blocking it for the same reason. 4chan has been under attack for over 3 weeks, the attacks filling up an entire GigE. If you want to blame anyone, blame the script kiddies who pull this kind of stunt.

Regards,
Shon Elliott
Senior Network Engineer
unWired Broadband, Inc.

Despite the abundance of good reasons for shutting off access to a domain with a misbehaving switch, AT&T continues to face criticism for the action, some of it quite strange. David Reed, a highly vocal net neutrality advocate, went black-helicopters on the story:

I’d be interested in how AT&T managed to block *only* certain parts of 4chan’s web content. Since DNS routing does not depend on the characters after the “/” in a URL in *any* way, the site’s mention that AT&T was blocking only certain sub-“directories” of 4chan’s content suggests that the blocking involved *reading content of end-to-end communications”.

If AT&T admits it was doing this, they should supply to the rest of the world a description of the technology that they were using to focus their blocking. Since AT&T has deployed content-scanning-and-recording boxes for the NSA in its US-based switching fabric, perhaps that is how they do it. However, even if you believe that is legitimate for the US Gov’t to do, the applicability of similar technology to commercial traffic blocking is not clearly in the domain of acceptable Internet traffic management.

What happened, of course, was that a single IP address inside 4chan’s network was blocked. This IP address – 207.126.64.181 – hosts the /b/ and /r9k/ discussion and upload boards at 4chan, and DNS has nothing to do with it. Reed is one of the characters who complains about network management practices before all the relevant bodies, but one wonders if he actually understands how IP traffic is routed on the modern Internet.
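For readers wondering how little “reading of content” such a block requires, here is a minimal sketch of a destination-IP filter of the sort the NANOG post above describes. The Packet class and function names are illustrative, not AT&T’s actual implementation; the only detail taken from the story is the blocked address.

```python
# Minimal sketch of a destination-IP filter of the kind described in
# the NANOG post above. The decision is made entirely at the IP layer,
# so every board hosted on the blocked address disappears for the
# ISP's customers regardless of URL path; no deep packet inspection
# is involved. The Packet class and names are illustrative.

from dataclasses import dataclass

BLOCKED_IPS = {"207.126.64.181"}   # the img.4chan.org address cited above

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    payload: bytes = b""           # never examined by the filter

def should_drop(pkt: Packet) -> bool:
    """Drop traffic to or from a blocked address; pass everything else."""
    return pkt.src_ip in BLOCKED_IPS or pkt.dst_ip in BLOCKED_IPS

# Requests for /b/ and /r9k/ look identical to this filter: same
# destination address, so both are dropped without reading a byte
# of the payload.
print(should_drop(Packet("10.0.0.5", "207.126.64.181")))   # True
print(should_drop(Packet("10.0.0.5", "198.51.100.7")))     # False
```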

And as I predicted, new blog posts are still going up claiming that AT&T is censoring 4chan. Click through to Technorati to see some of them.

Is Broadband a Civil Right?

Sometimes you have to wonder if people appreciate the significance of what they’re saying. On Huffington Post this morning, I found an account of a panel at the Personal Democracy Forum gathering on the question of who controls the Internet’s optical core. The writer, Steve Rosenbaum, declares that Broadband is a Civil Right:

If the internet is the backbone of free speech and participation, how can it be owned by corporate interests whose primary concern isn’t freedom or self expression or political dissent? Doesn’t it have to be free?

OK, that’s a reasonable point to discuss. Unfortunately, the example that’s supposed to back up this argument is the role that broadband networks played in the Iranian protests. Does anyone see the problem here? Narrow-band SMS on private networks was a big problem for the government of Iran in the recent protests, but broadband not so much, because the government could control it easily through a small number of filters.

If broadband infrastructure isn’t owned by private companies, it’s owned by governments; the networks are too big to be owned any other way. So in the overall scheme of things, if I have to choose who’s more likely to let me protest the government from among: A) The Government; or B) Anybody Else, my choice is pretty obviously not the government.

Isn’t this obvious?


Recycling Garbage Abroad

Advocates of network neutrality regulations have been largely unsuccessful in advancing their agenda in the US. The one case in which they claim to have secured a victory was the Vuze vs. Comcast action in the FCC, which was severely tainted by Vuze turning to porn to resuscitate its dying business:

In a bid to increase their revenue, among other things, Vuze has added a catalog of HD adult videos to their BitTorrent client. For a few dollars a month Vuze users can subscribe to the latest hotness. Of course, all torrents on the erotica network are well seeded.

The same FCC commissioners who levied an unlawful fine against CBS for the Janet Jackson wardrobe malfunction ordered Comcast to give free bandwidth to a porn site. (Feeling good about that, Chairman Copps? [ed: OK, that was a cheap shot, but Copps and I know each other.])

Not deterred by this spotty track record, wannabe neutrality regulator Cory Doctorow trots out the well-worn arguments for the overseas audience in a Guardian column that stinks of Dow Chemical’s overseas pesticide dumping:

Take the Telcoms Package now before the EU: among other things, the package paves the way for ISPs and Quangos to block or slow access to websites and services on an arbitrary basis. At the same time, ISPs are instituting and enforcing strict bandwidth limits on their customers, citing shocking statistics about the bandwidth hogs who consume vastly more resources than the average punter.

Between filtering, fiddling connection speeds and capping usage, ISPs are pulling the rug out from under the nations that have sustained them with generous subsidies and regulation.

Doctorow supports his arguments with a series of fanciful metaphors since there aren’t any real abuses for UK subjects to be upset about. Here’s a portion of my reaction in the comments:

Let’s take a closer look at Doctorow’s non-metaphoric claims:

“Between these three factors – (1) reducing the perceived value of the net, (2) reducing the ability of new entrants to disrupt incumbents, and (3) penalizing those who explore new services on the net – we are at risk of scaring people away from the network, of giving competitive advantage to firms in better-regulated nations, of making it harder for people to use the net to weather disasters, to talk to their government and to each other.”

I’ve numbered them for easy reference. So where’s the proof that these things are happening? For (1) we have this:

“ISPs would also like to be able to arbitrarily slow or degrade our network connections depending on what we’re doing and with whom. In the classic “traffic shaping” scenario, a company like Virgin Media strikes a deal with Yahoo…”

How do we know that ISPs want to slow or degrade our access, which would seem to drive us to a different ISP? The metaphoric example is offered as the proof. See the relevance?

For problem (2), Doctorow offers:

“Unless, that is, the cost of entry into the market goes up by four or five orders of magnitude, growing to encompass the cost of a horde of gladhanding negotiators who must first secure the permission of gatekeepers at the telcoms giants…”

The problem with this, of course, is that the barriers to entry for new search and video services are the edge caches Google would like to install in the ISP networks, which do in fact give them a fast lane to the consumer (why else would Google want them?) and raise obstacles to start-ups. But American neutralists say these entry barriers are good because their friend Google wants to erect them, not a telco. Double standard.

And for (3), the evils of metered billing, we have this lovely little thing:

“Before you clicked on this article, you had no way of knowing how many bytes your computer would consume before clicking on it. And now that you’ve clicked on it, chances are that you still don’t know how many bytes you’ve consumed..”

Please. Metered billing systems aren’t going to operate on the differences between web pages. If Doctorow believed what he said about the Pareto Curve, he’d certainly be able to appreciate the difference between reading a thousand web pages vs watching a thousand videos. High bandwidth consumers aren’t doing anything “innovative,” they’re most likely downloading free porn. Who is this guy kidding?

Doctorow’s fiction may be very enjoyable, but his understanding of the Internet and his policy prescriptions are nonsense. Read the book, take a pass on the law.
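To put rough numbers on the web-pages-versus-videos point above, here’s a quick calculation. The per-item sizes are illustrative assumptions rather than measurements; the orders-of-magnitude gap between the two usage patterns is the point.

```python
# Rough arithmetic behind the web-pages-versus-videos comparison.
# Per-item sizes are illustrative assumptions, not measurements.

PAGE_MB = 1.0          # a typical web page, images included (assumption)
VIDEO_MB = 350.0       # an hour-ish standard-definition video (assumption)
ITEMS = 1000

pages_gb = ITEMS * PAGE_MB / 1024
videos_gb = ITEMS * VIDEO_MB / 1024

print(f"1,000 web pages : ~{pages_gb:6.1f} GB")
print(f"1,000 videos    : ~{videos_gb:6.1f} GB "
      f"(~{videos_gb / pages_gb:.0f}x the web pages)")
```

A metering scheme that works in whole gigabytes never notices the difference between one article and another; it only notices the difference between browsing and bulk video.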

What’s especially sad is how Doctorow tries to pander to the overseas audience by using a tonne of Brit slang, going on about “punters,” “Quangos,” pounds and pence, and making a tube reference; NN is all about tribal ID, and he gets just that much of it.

What slows down your Wi-Fi?

The Register stumbled upon an eye-opening report commissioned by the UK telecom regulator, Ofcom, on sources of Wi-Fi interference in the UK:

What Mass discovered (pdf) is that while Wi-Fi users blame nearby networks for slowing down their connectivity, in reality the problem is people watching retransmitted TV in the bedroom while listening to their offspring sleeping, and there’s not a lot the regulator can do about it.

Outside central London that is: in the middle of The Smoke there really are too many networks, with resends, beacons and housekeeping filling 90 per cent of the data frames sent over Wi-Fi. This leaves only 10 per cent for users’ data. In fact, the study found that operating overheads for wireless Ethernet were much higher than anticipated, except in Bournemouth for some reason: down on the south coast 44 per cent of frames contain user data.

When 90% of the frames are overhead, the technology itself has a problem, and in this case it’s largely the fact that Wi-Fi carries such a heavy backward-compatibility burden. Older versions of the protocol weren’t designed for obsolescence, so newer systems have to take expensive steps to ensure the older systems can see them; otherwise collisions happen, and that’s not good for anybody. Licensed spectrum can deal with the obsolescence problem by replacing older equipment; open spectrum has to bear the costs of compatibility forever. So this is one more example of the fact that “open” is not always better.
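To see what those frame counts mean in practice, here’s a crude calculation that applies the study’s own figures, 10% user data in central London versus 44% in Bournemouth, to a nominal channel rate. It counts frames rather than airtime, which understates the real penalty, since management frames are often sent at slow legacy rates.

```python
# Crude illustration of what the Ofcom/Mass frame counts imply for
# useful throughput. Counting frames rather than airtime understates
# the penalty (management frames often go out at low legacy rates),
# so treat these as optimistic figures.

NOMINAL_RATE_MBPS = 54.0   # 802.11g headline rate, for illustration

def goodput(user_data_fraction, rate=NOMINAL_RATE_MBPS):
    """Share of the nominal rate left over for user data."""
    return rate * user_data_fraction

print(f"Central London (10% user frames): ~{goodput(0.10):4.1f} Mbps")
print(f"Bournemouth    (44% user frames): ~{goodput(0.44):4.1f} Mbps")
```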

Interlocking Directorates

The New York Times reports that regulators have an interest in the structure of the Apple and Google boards of directors:

The Federal Trade Commission has begun an inquiry into whether the close ties between the boards of two of technology’s most prominent companies, Apple and Google, amount to a violation of antitrust laws, according to several people briefed on the inquiry.

I doubt this will go very far, as the interlocking directors (Eric Schmidt and former Genentech CEO Arthur Levinson) will simply resign before any enforcement action is imminent, but it does raise some interesting questions about the market for mobile phone operating systems, currently split among Apple, Google, Microsoft, Palm, and a few others. These systems are rife with limitations, each of which could be considered a network neutrality violation when viewed in just the right way.

I imagine Apple itself might wish to give Dr. Schmidt his walking papers before he becomes an anti-trust problem, which he actually isn’t at this point. The FTC’s interest in this obscure situation is probably a signal that the Administration wants to be viewed as an anti-trust hawk without doing anything substantial.

But this is what the law calls an “occasion of sin.” Dear me.

Time Warner Cable bides its time

Not surprisingly, Time Warner Cable has decided to put its consumption-based billing trials on hold:

Time Warner Cable Chief Executive Officer Glenn Britt said, “It is clear from the public response over the last two weeks that there is a great deal of misunderstanding about our plans to roll out additional tests on consumption based billing. As a result, we will not proceed with implementation of additional tests until further consultation with our customers and other interested parties, ensuring that community needs are being met. While we continue to believe that consumption based billing may be the best pricing plan for consumers, we want to do everything we can to inform our customers of our plans and have the benefit of their views as part of our testing process.”

Time Warner Cable also announced that it is working to make measurement tools available as quickly as possible. These tools will help customers understand how much bandwidth they consume and aid in the dialog going forward.

The public response was somewhat less public than it may appear, as most of it was ginned-up by a few activist bloggers and the interest groups that are generally in the middle of these things, such as Free Press’ “Save the Internet” blog. In this case, the Internet was saved from a plan that Free Press’ chairman Tim Wu had previously lauded for its fairness in allocating network resources:

“I don’t quite see [metering] as an outrage, and in fact is probably the fairest system going — though of course the psychology of knowing that you’re paying for bandwidth may change behavior,” said Tim Wu, a law professor at Columbia University and chairman of the board of public advocacy group Free Press.

Of course, the “psychology of knowing that you’re paying for bandwidth” is actually meant to change behavior.

Free Press is now crowing that the postponement of the trial signals a great victory for the Internet:

“We’re glad to see Time Warner Cable’s price-gouging scheme collapse in the face of consumer opposition. Let this be a lesson to other Internet service providers looking to head down a similar path. Consumers are not going to stand idly by as companies try to squeeze their use of the Internet.”

The Freeps should have chosen their words a bit more carefully. The dilemma that TWC faces does indeed relate to “squeezing,” but the squeezing doesn’t originate exclusively (or even primarily) at the cable company’s end of the bargain. TWC’s consumption per user has been increasing roughly 40% per year, and there’s no reason to assume it will do anything but increase as more HDTV content becomes available on the web, people connect more devices, and video calling becomes more popular. TWC’s capital expenditures are 20% of income, and the company lost $7.3 billion in the course of spinning out from Time Warner, Inc. last year. Some of TWC’s critics have charged that its bandwidth is free (or nearly so), citing “high speed data costs of $146 million.” In reality, TWC pays six times that much for the interest on its capital expenditures alone ($923 million).
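For anyone who wants to check the arithmetic, here’s a quick run through the figures cited above. The five-year projection simply compounds the 40% growth figure and is not a forecast.

```python
# Quick arithmetic on the figures cited above: the $146M "high speed
# data cost" number critics like to quote, the ~$923M TWC pays in
# interest on its capital spending, and 40% annual growth in per-user
# consumption.

data_cost_m = 146.0
interest_m = 923.0
print(f"Interest vs. quoted data costs: {interest_m / data_cost_m:.1f}x")

usage = 1.0   # today's per-user consumption, normalized
for year in range(1, 6):
    usage *= 1.40
    print(f"Year {year}: per-user consumption ~{usage:.1f}x today's level")
```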

Heavy users squeeze light users by leaving less bandwidth on the table, and the flat-rate pricing system squeezes the light users even more by making them pay a larger share of the costs of bandwidth upgrades than the heavy users who actually drive them. No fair-minded and rational person can look at the costs of operating a network and conclude that flat-rate pricing for a single Quality of Service level is the best we can do.
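Here’s a toy illustration of that squeeze. The consumption figures and the shared monthly cost are invented purely to show the skew; they are not drawn from TWC’s books.

```python
# Toy illustration of the cross-subsidy described above: a handful of
# users sharing an upgrade/transit cost, billed flat-rate vs. in
# proportion to usage. All figures are invented for illustration.

users = {"light_1": 5, "light_2": 8, "light_3": 10,   # GB per month
         "medium": 40, "heavy": 400}

monthly_cost = 200.0                    # shared cost to recover (assumption)
total_gb = sum(users.values())

flat_share = monthly_cost / len(users)  # everyone pays the same
for name, gb in users.items():
    usage_share = monthly_cost * gb / total_gb
    print(f"{name:8s} {gb:4d} GB  flat ${flat_share:6.2f}  "
          f"usage-based ${usage_share:6.2f}")
```

Under the flat rate, the three light users pay the same $40 as the 400 GB user; under usage-based pricing, their shares drop to a few dollars while the heavy user picks up most of the tab.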

Continuous upgrades are a fact of life in the broadband business, and aligning their costs with the revenues carriers collect is one of the keys to creating an economically sustainable broadband ecosystem. We’ll take that up in another post.

UPDATE: Dig into the comments for some discussion of transit and peering prices.


Thinking about Caps

Time-Warner’s bandwidth metering plan continues to attract attention, in part because a couple of prominent tech journalists are taking an activist position against it: Nate Anderson is unabashedly opposed to most revenue-enhancing plans that come from ISPs and carriers, and Stacey Higginbotham imagines she’ll be personally affected since she lives in one of the trial cities, Austin. The latest development is a threat by Rep. Eric Massa of upstate New York to ban usage-based pricing by law:

Massa has wasted no time backing the issue, sending out two statements last week about his displeasure with TWC’s caps. “I am taking a leadership position on this issue because of all the phone calls, emails and faxes I’ve received from my district and all over the country,” he said in one. “While I favor a business’s right to maximize their profit potential, I believe safeguards must be put in place when a business has a monopoly on a specific region.”

TWC’s plan to meter usage, which differs from Comcast’s cap system in several significant respects*, wouldn’t seem odd in most of the world: volume-based service tiers are the norm for commercial Internet services in the US, and for residential services in most of the world. This is largely because the cost of providing Internet service is significantly related to volume, owing to the interconnect costs borne by ISPs (the cost isn’t continuously variable; it’s more like a step function that ratchets upward in chunks as new hardware has to be added to keep up with peak load). ISPs essentially buy wholesale interconnection to the larger Internet through a transit provider or a carrier. If they’re too small to build an extensive private network, they buy transit; if they’re larger, they pay for circuits to and from peering centers, which aren’t free even if you build them yourself (they take parts to build, and parts aren’t free).
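The step-function point is easier to see with a sketch, shown below. Port capacity and pricing are placeholder assumptions; the ratcheting behavior is the part that matters.

```python
# Sketch of the "step function" cost behavior described above: an ISP
# buys interconnect capacity in discrete chunks (ports, circuits, line
# cards) sized to peak load, so cost jumps when demand crosses a
# threshold rather than scaling smoothly. Port size and price are
# placeholder assumptions.

import math

PORT_CAPACITY_GBPS = 10.0     # one transit/peering port (assumption)
PORT_COST_PER_MONTH = 8000.0  # recurring cost per port (assumption)

def monthly_interconnect_cost(peak_demand_gbps):
    """Cost ratchets up in whole-port increments as peak demand grows."""
    ports = max(1, math.ceil(peak_demand_gbps / PORT_CAPACITY_GBPS))
    return ports * PORT_COST_PER_MONTH

for demand in (4, 9, 11, 19, 21, 35):
    print(f"peak {demand:2d} Gbps -> ${monthly_interconnect_cost(demand):,.0f}/month")
```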

It’s not unreasonable to tie pricing to volume in principle, given that some users consume hundreds or thousands of times more bandwidth than others; we certainly charge 18-wheelers more to use the freeways than Priuses. The argument is over what’s a reasonable fee.

And to answer that question, we have to understand the role that Internet service plays in paying for the infrastructure that supports it. There has never been a case in the United States or any other country where Internet service alone generated enough revenue for a carrier to cover the cost of building an advanced fiber optic network extending all the way from the core to the detached single-family residence, even in the muni fiber networks toward which the neutralists are so partial; in places like Burlington, VT, Lafayette, LA, and Morristown, TN, the service the city offers over fiber is triple play (Internet, TV, and voice.) Without TV and voice, the take-up rate of the service is too low to retire the bonds. It’s simple economics.
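A toy model makes the take-up point concrete, though every figure in it is a placeholder assumption rather than data from any of those cities.

```python
# Toy model of the muni-fiber economics sketched above: whether the
# system can service its construction debt depends heavily on take-up
# rate and margin per subscriber. Every figure is a placeholder
# assumption chosen to show the shape of the problem, not real data.

homes_passed = 20_000
cost_per_home_passed = 1_500.0          # build cost (assumption)
annual_debt_service = 0.08 * homes_passed * cost_per_home_passed

def annual_margin(take_rate, monthly_margin_per_sub):
    """Gross margin available for debt service at a given take rate."""
    subs = homes_passed * take_rate
    return subs * monthly_margin_per_sub * 12

for label, take_rate, margin in (("Internet only", 0.25, 30.0),
                                 ("Triple play  ", 0.45, 70.0)):
    covered = annual_margin(take_rate, margin) >= annual_debt_service
    print(f"{label}: take {take_rate:.0%}, ${margin:.0f}/sub/month -> "
          f"{'covers' if covered else 'misses'} "
          f"${annual_debt_service:,.0f}/yr debt service")
```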

So what happens when triple-play customers decide to download all their TV programs from the Internet and replace their phone service with a combination of cell and Skype? Revenues plummet, obviously. So the cable company wants to hedge its bets by replacing triple-play revenue with a higher bill for the higher usage of the remaining indispensable service. That doesn’t seem evil to me, as long as there’s some competition in the market, and the infrastructure is continually upgraded. Over time, the infrastructure will be paid for, and the price per byte will decline.

One of the problems we have with broadband policy in the US is the lack of connection between infrastructure costs and service prices. TWC seems to be trying to solve that problem, and I’d like them to have some freedom to experiment without every member of Congress within striking distance of a camera crew giving them grief.

In the meantime, TWC would help themselves a great deal if they adopted the policy of printing each customer’s monthly usage on the bill. They shouldn’t do anything about it for the time being; just show the amount for the next six months. At the end of that period, if they want to run a trial or two, consumers will be able to put the service levels in perspective, and there will be a lot less whining. If service levels are adopted, there also needs to be a policy of re-evaluating them every year. If TWC had done these two things, this whole brouhaha could have been avoided. And yes, I’d be glad to sign on as a consultant and keep them out of trouble.

*Comcast has an elastic cap that can’t be increased by paying higher fees. If you exceed it for three months in a row, you’re ejected. It’s elastic because it takes three simultaneous conditions to activate.


See you in Washington

I’ve been asked to join a panel at the Congressional Internet Caucus’ short conference on the State of the Mobile Net on April 23rd. I’ll be on the last panel:

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Richard Bennett, [bio forthcoming]
Harold Feld, Public Knowledge [bio]
Alexander Hoehn-Saric, U.S. Senate Commerce Committee [bio]
Larry Irving, Internet Innovation Alliance [bio]
Blair Levin, Stifel Nicolaus [bio]
Ben Scott, Free Press [bio]
Kevin Werbach, Wharton School of Business [bio]

I suspect we’ll spend the bulk of our time on the interaction between regulatory agencies, standards bodies, and industry groups. One case study is how the process worked for Wi-Fi, with the FCC opening up some junk spectrum, the IEEE 802.11 working group writing the rules, and the Wi-Fi Alliance developing compliance tests. In the UWB world, the model was a novel set of rules for high-quality spectrum, followed by the collapse of IEEE 802.15.3a and the subsequent attempt by the WiMedia Alliance to save it. We probably will have UWB someday (wireless USB and Bluetooth 4.0 will both use it), but the failure of the standards body was a major impediment.

With White Spaces up for grabs, we’d like to have something that’s at least as good as 802.11, but we really need to do a lot better.

Another topic of interest is whether mobile Internet access services should be regulated the same way that wireline services are regulated, and how we go about drafting that set of rules. The current state of the art is the four or five prongs of the FCC’s Internet Policy Statement, but those principles leave a lot to the imagination on all of the interesting questions about network management, QoS-related billing, third-party payments, and the various forms of disclosure that may or may not be interesting.

The Internet is troubled by the fact that it’s worked pretty damn well for the past 25 years, so there’s been no need to make major changes in its services model. It’s clear to me that some fairly disruptive upgrades are going to be needed in the near future, and we don’t want to postpone them by applying a legacy regulatory model to a network that’s not fully formed yet.