What slows down your Wi-Fi?

The Register stumbled upon an eye-opening report commissioned by the UK telecom regulator, Ofcom, on sources of Wi-Fi interference in the UK:

What Mass discovered (pdf) is that while Wi-Fi users blame nearby networks for slowing down their connectivity, in reality the problem is people watching retransmitted TV in the bedroom while listening to their offspring sleeping, and there’s not a lot the regulator can do about it.

Outside central London that is: in the middle of The Smoke there really are too many networks, with resends, beacons and housekeeping filling 90 per cent of the data frames sent over Wi-Fi. This leaves only 10 per cent for users’ data. In fact, the study found that operating overheads for wireless Ethernet were much higher than anticipated, except in Bournemouth for some reason: down on the south coast 44 per cent of frames contain user data.

When 90% of the frames are overhead, the technology itself has a problem, and in this case the problem is largely Wi-Fi’s heavy backward-compatibility burden. Older versions of the protocol weren’t designed for obsolescence, so newer systems have to take expensive steps to make sure the older systems can see them; otherwise collisions happen, and that’s not good for anybody. Licensed spectrum can deal with the obsolescence problem by replacing older equipment; open spectrum has to bear the costs of compatibility forever. So this is one more example of the fact that “open” is not always better.
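
To put a rough number on that compatibility tax, here’s a back-of-the-envelope sketch using nominal 802.11b/g timing values. When an 802.11g station protects a transmission with a CTS-to-self sent at a legacy rate so that 802.11b radios can hear it, the protection can cost more airtime than the data it protects; the frame size and rates below are illustrative and aren’t drawn from the Ofcom study.

```python
# Rough airtime estimate for one 1500-byte frame from an 802.11g station
# on a channel shared with legacy 802.11b devices. Timing values are the
# nominal ones from the standards; real-world overhead varies.

LEGACY_PREAMBLE_US = 192      # 802.11b long preamble + PLCP header
CTS_BITS = 14 * 8             # CTS-to-self control frame
LEGACY_RATE_MBPS = 1.0        # legacy rate the CTS is sent at
OFDM_PREAMBLE_US = 20         # 802.11g OFDM preamble + SIGNAL field
DATA_RATE_MBPS = 54.0
SIFS_US = 10
PAYLOAD_BITS = 1500 * 8

def airtime_us(protection: bool) -> float:
    """Airtime in microseconds for one data frame, with or without
    CTS-to-self protection for legacy stations."""
    data_time = OFDM_PREAMBLE_US + PAYLOAD_BITS / DATA_RATE_MBPS
    if not protection:
        return data_time
    cts_time = LEGACY_PREAMBLE_US + CTS_BITS / LEGACY_RATE_MBPS
    return cts_time + SIFS_US + data_time

bare = airtime_us(False)
protected = airtime_us(True)
print(f"no protection:    {bare:.0f} us")
print(f"with CTS-to-self: {protected:.0f} us ({protected / bare:.1f}x the airtime)")
```

Multiply that ratio across a crowded channel full of beacons, retransmissions, and mixed-generation gear, and the 90 per cent figure stops looking mysterious.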

Interlocking Directorates

The New York Times reports that regulators have an interest in the structure of the Apple and Google boards of directors:

The Federal Trade Commission has begun an inquiry into whether the close ties between the boards of two of technology’s most prominent companies, Apple and Google, amount to a violation of antitrust laws, according to several people briefed on the inquiry.

I doubt this will go very far, as the interlocking directors (Eric Schmidt and former Genentech CEO Arthur Levinson) will simply resign before any enforcement action becomes imminent, but it does raise some interesting questions about the market for mobile phone operating systems, currently split among Apple, Google, Microsoft, Palm, and a few others. These systems are rife with limitations, each of which could be considered a network neutrality violation when viewed in just the right way.

I imagine Apple itself might wish to give Dr. Schmidt his walking papers before he becomes an antitrust problem, which he actually isn’t at this point. The FTC’s interest in this obscure situation is probably a signal that the Administration wants to be viewed as an antitrust hawk without doing anything substantial.

But this is what the law calls an “occasion of sin.” Dear me.

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Here’s the video of the panel I was on at the Congressional Internet Caucus Advisory Committee’s “State of the Mobile Net” conference in DC last Thursday. This was the closing panel of the conference, where all the loose ends were tied together. For those who don’t live and breathe Washington politics, I should do what moderator Blair Levin didn’t do and introduce the panel. Levin was the head of the TIGR task force for the Obama transition, the master group for the review of the regulatory agencies and the administration’s use of technology. Kevin Werbach is a professor at the Wharton School, and took part in the FCC review for the transition along with Susan Crawford; he runs the Supernova conference. Larry Irving was part of the review of NTIA for the transition, and is a former Assistant Secretary of Commerce. Ben Scott is the policy guy at Free Press, and Alex Hoehn-Saric is legal counsel to the Senate Committee on Commerce, Science and Transportation.

Regulatory policy needs to be technically grounded, so I emphasized the tech side of things.

What I Did This Morning

While California was sleeping, I enjoyed a bit of broadband politics in the heart of the beast, testifying before the House Subcommittee on Communications, Technology, and the Internet at its hearing on “Communications Networks and Consumer Privacy: Recent Developments”:

The Subcommittee on Communications, Technology, and the Internet held a hearing titled, “Communications Networks and Consumer Privacy: Recent Developments” on Thursday, April 23, 2009, in 2322 Rayburn House Office Building. The hearing focused on technologies that network operators utilize to monitor consumer usage and how those technologies intersect with consumer privacy. The hearing explored three ways to monitor consumer usage on broadband and wireless networks: deep packet inspection (DPI); new uses for digital set-top boxes; and wireless Global Positioning System (GPS) tracking.

Witness List

* Ben Scott, Policy Director, Free Press
* Leslie Harris, President and CEO, Center for Democracy and Technology
* Kyle McSlarrow, President and CEO, National Cable and Telecommunications Association
* Dorothy Attwood, Chief Privacy Officer and Senior Vice President, Public Policy, AT&T Services, Inc.
* Brian R. Knapp, Chief Operating Officer, Loopt, Inc.
* Marc Rotenberg, Executive Director, The Electronic Privacy Information Center
* Richard Bennett, Publisher, BroadbandPolitics.com

It went pretty well, all in all; it’s really good to be last on a panel, and the Reps aren’t as snarky as California legislators. I’ll have more on this later.


Google’s Sweet Book Deal

If you read books, you’ll want to know what Robert Darnton has to say about the pending Google book deal in “Google & the Future of Books” in The New York Review of Books. Here’s a teaser:

As an unintended consequence, Google will enjoy what can only be called a monopoly—a monopoly of a new kind, not of railroads or steel but of access to information. Google has no serious competitors. Microsoft dropped its major program to digitize books several months ago, and other enterprises like the Open Knowledge Commons (formerly the Open Content Alliance) and the Internet Archive are minute and ineffective in comparison with Google. Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

A policy change of this magnitude should not be negotiated behind closed doors to the detriment of every purveyor of information except Google.

Time Warner Cable bides its time

Not surprisingly, Time Warner Cable has decided to put its consumption-based billing trials on hold:

Time Warner Cable Chief Executive Officer Glenn Britt said, “It is clear from the public response over the last two weeks that there is a great deal of misunderstanding about our plans to roll out additional tests on consumption based billing. As a result, we will not proceed with implementation of additional tests until further consultation with our customers and other interested parties, ensuring that community needs are being met. While we continue to believe that consumption based billing may be the best pricing plan for consumers, we want to do everything we can to inform our customers of our plans and have the benefit of their views as part of our testing process.”

Time Warner Cable also announced that it is working to make measurement tools available as quickly as possible. These tools will help customers understand how much bandwidth they consume and aid in the dialog going forward.

The public response was somewhat less public than it may appear, as most of it was ginned up by a few activist bloggers and the interest groups that are generally in the middle of these things, such as Free Press’ “Save the Internet” blog. In this case, the Internet was saved from a plan that Free Press’ chairman Tim Wu had previously lauded for its fairness in allocating network resources:

“I don’t quite see [metering] as an outrage, and in fact is probably the fairest system going — though of course the psychology of knowing that you’re paying for bandwidth may change behavior,” said Tim Wu, a law professor at Columbia University and chairman of the board of public advocacy group Free Press.

Of course, the “psychology of knowing that you’re paying for bandwidth” is actually meant to change behavior.

Free Press is now crowing that the postponement of the trial signals a great victory for the Internet:

“We’re glad to see Time Warner Cable’s price-gouging scheme collapse in the face of consumer opposition. Let this be a lesson to other Internet service providers looking to head down a similar path. Consumers are not going to stand idly by as companies try to squeeze their use of the Internet.”

The Freeps should have chosen their words a bit more carefully. The dilemma that TWC faces does indeed relate to “squeezing,” but the squeezing doesn’t originate exclusively (or even primarily) at the cable company’s end of the bargain. TWC’s consumption per user has been increasing roughly 40% per year, and there’s no reason to expect anything but further increases as more HDTV content becomes available on the web, people connect more devices, and video calling becomes more popular. TWC’s capital expenditures are 20% of income, and the company lost $7.3 billion in the course of spinning out from Time Warner, Inc. last year. Some of TWC’s critics have charged that their bandwidth is free (or nearly so), citing “high speed data costs of $146 million.” In reality, TWC pays six times that much for the interest on its capital expenditures alone ($923M).

Heavy users squeeze light users by leaving less bandwidth on the table, and the flat-rate pricing system squeezes them even more by making them pay a larger share of the costs of bandwidth upgrades than the heavy users who actually drive them. No fair-minded and rational person can look at the costs of operating a network and conclude that flat-rate pricing for a single Quality of Service level is the best we can do.
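
A toy example makes the cross-subsidy concrete. Every figure below is invented for the sake of illustration; none of it comes from TWC’s books.

```python
# Toy illustration of the cross-subsidy under flat-rate pricing.
# All figures are invented; none of them come from TWC.

users = {"light": 2, "typical": 20, "heavy": 400}   # GB per month
flat_bill = 40.00                                   # everyone pays the same
cost_per_gb = 0.10                                  # pretend volume-driven cost

total_gb = sum(users.values())
print(f"every account pays ${flat_bill:.2f}; "
      f"total volume-driven cost ${total_gb * cost_per_gb:.2f}")

for name, gb in users.items():
    drives = gb / total_gb      # share of volume-driven cost this user creates
    pays = 1 / len(users)       # share of revenue this user contributes
    print(f"{name:8s} {gb:4d} GB  drives {drives:5.1%} of volume cost, "
          f"pays {pays:5.1%} of the revenue")
```

The light user funds a third of the revenue while driving a half a percent of the volume-related cost; the heavy user does the reverse.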

Continuous upgrades are a fact of life in the broadband business, and aligning their costs with the revenues carriers collect is one of the keys to creating an economically sustainable broadband ecosystem. We’ll take that up in another post.

UPDATE: Dig into the comments for some discussion of transit and peering prices.


Thinking about Caps

Time-Warner’s bandwidth metering plan continues to attract attention, in part because a couple of prominent tech journalists are taking an activist position against it: Nate Anderson is unabashedly opposed to most revenue-enhancing plans that come from ISPs and carriers, and Stacey Higginbotham imagines she’ll be personally affected since she lives in one of the trial cities, Austin. The latest development is a threat by Rep. Eric Massa of upstate New York to ban usage-based pricing by law:

Massa has wasted no time backing the issue, sending out two statements last week about his displeasure with TWC’s caps. “I am taking a leadership position on this issue because of all the phone calls, emails and faxes I’ve received from my district and all over the country,” he said in one. “While I favor a business’s right to maximize their profit potential, I believe safeguards must be put in place when a business has a monopoly on a specific region.”

TWC’s plan to meter usage, which differs from Comcast’s cap system in several significant respects*, wouldn’t seem odd in most of the world: volume-based service tiers are the norm for commercial Internet services in the US, and for residential services in most other countries. This is largely because the costs of providing Internet service are significantly related to volume, owing to the interconnect costs borne by ISPs (the cost isn’t continuously variable; it’s more like a step function that ratchets upward in chunks as new hardware has to be added to keep up with peak load). These folks are essentially wholesalers who buy an interconnect to the larger Internet through a transit provider or a carrier. If they’re too small to build an extensive private network, they buy transit; if they’re larger, they pay for circuits to and from peering centers, which aren’t free even if you build them yourself (they take parts to build, and parts aren’t free).
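
A minimal sketch of that step function, assuming capacity is bought in port-sized chunks at a hypothetical price, shows why cost planning follows peak load rather than average volume:

```python
import math

# Sketch of the step-function cost curve: interconnect capacity comes in
# port-sized chunks, so cost jumps whenever peak load crosses a port
# boundary. Port size and price are hypothetical.

PORT_CAPACITY_GBPS = 10.0      # e.g., one 10 GigE transit or peering port
PORT_COST_PER_MONTH = 8000.0   # invented figure

def monthly_port_cost(peak_load_gbps: float) -> float:
    ports_needed = math.ceil(peak_load_gbps / PORT_CAPACITY_GBPS)
    return ports_needed * PORT_COST_PER_MONTH

for peak in (4, 9, 11, 19, 21, 35):
    print(f"peak load {peak:2d} Gb/s -> ${monthly_port_cost(peak):,.0f}/month")
```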

It’s not unreasonable to tie pricing to volume in principle, given that some users consume hundreds or thousands of times more bandwidth than others; we certainly charge 18-wheelers more to use the freeways than Priuses. The argument is over what’s a reasonable fee.

And to answer that question, we have to understand the role that Internet service plays in paying for the infrastructure that supports it. There has never been a case, in the United States or any other country, where Internet service alone generated enough revenue for a carrier to cover the cost of building an advanced fiber optic network extending all the way from the core to the detached single-family residence, even in the muni fiber networks to which the neutralists are so partial; in places like Burlington, VT, Lafayette, LA, and Morristown, TN, the service the city offers over fiber is triple play (Internet, TV, and voice). Without TV and voice, the take-up rate of the service is too low to retire the bonds. It’s simple economics.

So what happens when triple-play customers decide to download all their TV programs from the Internet and replace their phone service with a combination of cell and Skype? Revenues plummet, obviously. So the cable company wants to hedge its bets by replacing triple-play revenue with a higher bill for the higher usage of the remaining indispensable service. That doesn’t seem evil to me, as long as there’s some competition in the market, and the infrastructure is continually upgraded. Over time, the infrastructure will be paid for, and the price per byte will decline.

One of the problems we have with broadband policy in the US is the lack of connection between infrastructure costs and service prices. TWC seems to be trying to solve that problem, and I’d like them to have some freedom to experiment without every member of Congress within striking distance of a camera crew giving them grief.

In the meantime, TWC would help themselves a great deal if they adopted the policy of printing each customer’s monthly usage on the bill. They shouldn’t do anything about it for the time being, just show the amount for the next six months. At the end of that period, if they want to run a trial or two, the consumers will be able to place the service levels in perspective, and there will be a lot less whining. If service levels are adopted, there also needs to be a policy of re-evaluating them every year. If TWC had done these two things, this whole brouhaha could have been avoided. And yes, I’d be glad to sign on as a consultant and keep them out of trouble.
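
Here’s a sketch of what “usage on the bill” might look like from the customer’s side, using invented meter readings and tier boundaries chosen to span the 5 GB to 100 GB range reported for the TWC trial:

```python
# Sketch of the "show usage on the bill" idea: for six months of metered
# usage, report the smallest tier that would have covered each month.
# Tier boundaries and meter readings below are invented for illustration.

TIERS_GB = [5, 10, 20, 40, 100]

def smallest_covering_tier(usage_gb: float):
    for tier in TIERS_GB:
        if usage_gb <= tier:
            return tier
    return None  # over the largest tier

six_months = [12.4, 18.9, 9.7, 22.3, 15.1, 31.8]  # invented readings, GB

for month, usage in enumerate(six_months, start=1):
    tier = smallest_covering_tier(usage)
    label = f"fits the {tier} GB tier" if tier else "exceeds the largest tier"
    print(f"month {month}: {usage:5.1f} GB used -> {label}")
```

A customer who sees six months of numbers like these knows immediately whether a 40 GB tier is a threat or an irrelevance, which is the whole point of showing the meter before enforcing it.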

*Comcast has an elastic cap that can’t be increased by paying higher fees. If you exceed it for three months in a row, you’re ejected. It’s elastic because it takes three simultaneous conditions to activate.


See you in Washington

I’ve been asked to join a panel at the Congressional Internet Caucus’ short conference on the State of the Mobile Net on April 23rd. I’ll be on the last panel:

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Richard Bennett, [bio forthcoming]
Harold Feld, Public Knowledge [bio]
Alexander Hoehn-Saric, U.S. Senate Commerce Committee [bio]
Larry Irving, Internet Innovation Alliance [bio]
Blair Levin, Stifel Nicolaus [bio]
Ben Scott, Free Press [bio]
Kevin Werbach, Wharton School of Business [bio]

I suspect we’ll spend the bulk of our time on the interaction between regulatory agencies, standards bodies, and industry groups. The case studies are how the process worked for Wi-Fi, with the FCC opening up some junk spectrum, the IEEE 802.11 working group writing some rules, and the Wi-Fi Alliance developing compliance tests. In the UWB world, the model was a novel set of rules for high-quality spectrum, followed by the collapse of IEEE 802.15.3a and the subsequent attempt by the WiMedia Alliance to save it. We probably will have UWB someday (wireless USB and Bluetooth 4.0 will both use it), but the failure of the standards body was a major impediment.

With White Spaces up for grabs, we’d like to have something that’s at least as good as 802.11, but we really need to do a lot better.

Another topic of interest is whether mobile Internet access services should be regulated the same way that wireline services are regulated, and how we go about drafting that set of rules. The current state of the art is the four (or five) prongs of the FCC’s Internet Policy Statement, but those principles leave a lot to the imagination on all of the interesting questions: network management, QoS-related billing, third-party payments, and the various forms of disclosure that may or may not be worthwhile.

The Internet is troubled by the fact that it’s worked pretty damn well for the past 25 years, so there’s been no need to make major changes to its services model. It’s clear to me that some fairly disruptive upgrades are going to be needed in the near future, and we don’t want to postpone them by applying a legacy regulatory model to a network that’s not fully formed yet.

Verizon’s Vision of the Internet

Despite the fact that I’ve been trying to explain why companies like Time Warner need to impose broadband usage caps on their systems before going to the capital markets for assistance in beefing up their innards, I’m not a fan of usage caps generally. They’re a very crude tool for imposing an equitable distribution of bandwidth, and one that ensures that the actual infrastructure in any given network will not be used efficiently. The key to network efficiency for a truly multi-service network like the Internet of the future is successful discrimination of application needs and traffic types. If the network can be made smart enough to follow orders, users can control their network usage according to their personal economics with no big surprises in the billing cycle. Network operators don’t need to manage traffic streams all the time; they need to manage them during periods of peak load (which had better not be all that often). And their best guidance in doing this comes from users and applications.

Many cities around the world manage access to the city core with something called congestion pricing: if you want to drive into the very heart of Singapore or London during peak hours, you have to pay a fee, which keeps traffic from gridlocking while permitting access by those who really need it. The Internet should work the same way: if you need low-latency service during peak load hours for Skype, you should be able to get it. And if you want to run P2P at the same time, you should be able to do so, but with higher latency (or at least higher jitter). Accounts can be provisioned to allow a certain amount of congestion-hour traffic for a flat rate, with additional portions available for an added fee. Users who demand a lot of transit from their networks should be able to get it, but at a reduced data rate relative to average loads or for an additional fee.
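
As a sketch of how that provisioning might be billed, assume each account gets a flat allotment of congestion-hour traffic, with additional congestion-hour gigabytes at a per-GB fee; both numbers are hypothetical.

```python
# Sketch of congestion-priced accounts: a flat allotment of traffic sent
# during congested hours, with overage billed per GB. Off-peak traffic
# never touches the congestion budget in this model. Numbers are invented.

CONGESTION_ALLOTMENT_GB = 10.0   # included in the flat rate
OVERAGE_PER_GB = 1.00            # fee for congestion-hour traffic beyond it

def congestion_charge(congestion_gb: float) -> float:
    overage = max(0.0, congestion_gb - CONGESTION_ALLOTMENT_GB)
    return overage * OVERAGE_PER_GB

for used in (4.0, 10.0, 17.5):
    print(f"{used:5.1f} GB at peak hours -> extra charge ${congestion_charge(used):.2f}")
```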

The point is that networks are never going to be so fat that they can’t be overloaded, and local congestion is always going to occur. So the trick in managing networks is to allocate resources fairly and transparently, and to let users control their use of whatever quota they have (not manually, but through home-router and application signaling to the network).

The least congested residential broadband service in the US today is Verizon FiOS. Verizon sells access at up to 50 Mb/s, and has the capacity to increase this as consumers demand more. They can do this because they’ve invested in a total infrastructure that consists of neighborhood loops, second-hop infrastructure, and core network links. Their current system can carry 100 Mb/s per user without any contention short of the core, which is rather awesome. This is why you never hear anything about caps or quotas for FiOS: the system can’t be overloaded short of the core.
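
The arithmetic behind that kind of claim is easy to sketch. Using the nominal GPON line rate and an assumed 32-way split (actual FiOS deployments vary, and smaller splits push the number higher), the shared access capacity per subscriber is large even before you account for the fact that nobody peaks at the same time:

```python
# Back-of-the-envelope estimate of per-subscriber capacity on a shared
# passive optical network. Line rate is the nominal GPON downstream rate;
# the split ratio is an assumption, and real deployments vary.

DOWNSTREAM_GBPS = 2.488    # nominal GPON downstream line rate
SPLIT_RATIO = 32           # assumed subscribers sharing one PON

per_sub_mbps = DOWNSTREAM_GBPS * 1000 / SPLIT_RATIO
print(f"~{per_sub_mbps:.0f} Mb/s per subscriber if everyone peaks at once")

# With realistic activity levels the effective headroom is far larger.
for active_fraction in (1.0, 0.5, 0.25):
    print(f"{active_fraction:4.0%} of subscribers active -> "
          f"~{per_sub_mbps / active_fraction:.0f} Mb/s each")
```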

Despite that, Verizon’s visionaries realize that network management is going to be a part of the Internet of the future:

In part because most of the attention in the early days of the Internet was on connectivity and ensuring networks and devices could interconnect and communicate successfully, security and quality of service techniques were not a focus of the discussions around network protocols and functionality. Such features have instead often been offered “over the top”, usually as attributes in applications or as functionalities in web sites or distributed services.

The complexity and volume of Internet traffic today – and the fact that much more of it than ever before is “real time” or time sensitive – means that the Internet’s traditional routing and processing schemes are challenged more than ever. It is no longer realistic to expect that all of the heavy lifting to make applications and services work well on the Internet in today’s “two-way, heavy content, complex applications” world can be done through the old models. More work needs to be done at all levels to ensure better quality and improved services. This includes the network level as well.

This need not threaten the basic foundation of the Internet – its ability to provide consumers with access to any content they wish to use and connect any device they want to a broadband network. Competition, broad commitment to openness by industry and advocates, and oversight by regulators helps ensure this foundation remains. But it does mean that enhanced network based features and functionalities should not be automatically viewed with concern. Such features can be an important aspect of the Internet’s improvement and future evolution.

Indeed we shouldn’t fear rational and transparent management; it’s part of what has always made these systems work as well as they have for us.


Pitchforks in Austin: Time-Warner’s Bandwidth Cap

The fledgling high-tech community in the smoky little hipster ghetto called Austin is apoplectic about Time Warner’s announcement that it’s testing bandwidth caps in central Texas:

When it comes to trialing its metered broadband service, Time Warner Cable’s choice to do so in the tech-savvy city of Austin, Texas, was no accident. And residents may not be able to do much about it.

According to TWC spokesman Jeff Simmermon, Austin’s dedication to all things digital was precisely why it was chosen as one of four cities where the company plans to trial consumption-based broadband plans, which range from 5 GB to 40 GB per month (TWC says it has plans for a 100 GB-per-month tier as well). “Austin is a passionate and tech-savvy city, and the spirit that we’re approaching this (metered broadband) test with is that if it’s going to work, it has to work in a tech-savvy market where the use patterns are different,” he told me.

So far, Austin isn’t impressed, but since the local cable franchise it grants only deals with video, there may not be much it can do. Chip Rosenthal, one of seven commissioners on the City of Austin’s Technology and Telecommunications Commission (a strictly advisory body), hopes that concerned citizens will show up at the meeting it’s holding at City Hall this Wednesday and talk about metered broadband. He wants to get the metered bandwidth issue added to the agenda of the commission’s May meeting as well.

Rosenthal, a contract programmer who likes open source, has a blog where he holds forth on the issue, calling its rationale a series of “red herrings,” and complaining that the caps of the present will hurt applications of the future. This is no doubt true, but ultimately another red herring. The caps of the future won’t necessarily be the caps of the present.

The general theory is that TWC wants to stamp out web video in order to keep TV customers in the VoD fold. I don’t doubt that TWC would like to do that, but I doubt they’re dumb enough to believe they could ever get away with it. Austin is a stoner’s throw from San Antonio, the world headquarters of AT&T and the beta site for U-verse, the IPTV service that rides into the home atop VDSL. While U-verse isn’t universally available in Austin yet, it’s under construction so there are alternatives.

TWC’s CEO has issued a blog post by way of clarification that’s not entirely helpful:

With regard to consumption-based billing, we have determined that as broadband usage and penetration grow, there are increasing differences in the amount of bandwidth our customers consume. Our current pricing plans require all users to pay the same amount, whether they check email once a month or download six movies a day. As the amount of usage has dramatically diverged among users, this is becoming inherently unfair and not the way most consumers want to pay for goods they consume.

Like Rosenthal’s post, it’s true as far as it goes, but it leaves runners in scoring position. Here’s the real story, as I see it: while Time Warner doesn’t have a large enough network to peer with the big boys (AT&T, Verizon, Qwest, Comcast, and L3), it does have some peering agreements that protect it from transit charges as long as it delivers its packets to convenient locations, as well as some straight-up transit charges to pay. Its aggregation network – the links that carry data between the Internet exchange points and its CMTSes – isn’t fat enough to support full-on DOCSIS 3 usage, and neither is its transit budget.

Consequently, they’re being hammered by the small number of high-bandwidth consumers in their network, and they’re looking to cut costs by running them off. While there are other ways to ensure fairness across user accounts, the cap is the best way to address the fraction of a percent who use something like half their available bandwidth.
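
To see why a fraction of a percent of accounts can matter so much, consider what a sustained 50 percent duty cycle adds up to over a month; the tier speed below is an assumption for illustration:

```python
# Monthly volume implied by different duty cycles on an assumed tier speed.
# The 10 Mb/s figure is illustrative, not a TWC number.

TIER_MBPS = 10.0
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_gb(duty_cycle: float) -> float:
    bits = TIER_MBPS * 1e6 * duty_cycle * SECONDS_PER_MONTH
    return bits / 8 / 1e9

print(f"50% duty cycle: ~{monthly_gb(0.50):,.0f} GB per month")
print(f" 1% duty cycle: ~{monthly_gb(0.01):,.0f} GB per month")
```

One account running flat out at half its line rate moves as much traffic as dozens of typical ones, which is why the heaviest handful show up so prominently in the transit bill.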

TWC is betting that they can find a cap level that discourages hogs and doesn’t bother more typical users. They’re going into an area close to the heart of AT&T with the experiment to get a good sense of where that limit is.

VoD has a little bit to do with this, but not all that much. TWC customers with TiVos already have unlimited VoD, and the rest of the VoD the company provides doesn’t cost transit dollars; it’s delivered over the local tree. DOCSIS 3 also doesn’t have much of anything to do with this, as it’s also a local service, albeit one with the potential to ring up big transit charges if not domesticated.

To a large extent, ISPs play a marketing game where they advertise super-fast services that aren’t backed up by enough transit or peering to sustain a heavy duty cycle. This isn’t a bad thing, of course, as the efficient sharing of capacity is actually the Internet’s secret sauce. If we wanted peak and minimum bandwidth to be the same, we would have stuck with narrowband modems on the PSTN. But we don’t, so we have to get hip to the statistical sharing of network resources.
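
The statistical-sharing arithmetic is simple to sketch: transit gets provisioned against busy-hour average demand, not against the sum of every subscriber’s advertised rate. Every figure below is an assumption for illustration.

```python
# Statistical sharing in one calculation: compare the sum of advertised
# rates with the transit actually provisioned for the busy hour.
# All figures are assumptions.

SUBSCRIBERS = 10_000
ADVERTISED_MBPS = 10.0
BUSY_HOUR_UTILIZATION = 0.03   # assumed average duty cycle at peak
HEADROOM = 1.5                 # engineering margin over the busy-hour average

sum_of_advertised_gbps = SUBSCRIBERS * ADVERTISED_MBPS / 1000
provisioned_gbps = sum_of_advertised_gbps * BUSY_HOUR_UTILIZATION * HEADROOM

print(f"sum of advertised rates: {sum_of_advertised_gbps:,.0f} Gb/s")
print(f"transit provisioned:     {provisioned_gbps:,.1f} Gb/s "
      f"({provisioned_gbps / sum_of_advertised_gbps:.1%} of the sum)")
```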

I’ll go out on a limb here and predict that the typical Austin consumer won’t switch to U-verse on account of TWC’s caps, but the heaviest users of gaming and BitTorrent will. And I’ll further predict that TWC’s bottom line will be glad to see them go.

The arguments against caps ultimately come down to the assertion that there’s some public good in making light users of Internet access capacity subsidize heavy users. Given that most of the heavy uses are either piracy or personal entertainment, I don’t happen to buy that argument, and moreover I find the alternatives to capping generally less attractive, as they typically involve duty-cycle restrictions of other types. The alternative that TWC should explore is peak/off-peak handling that allows downloaders to use less restrictive bandwidth budgets at off hours.

I’d prefer to have a network that allowed me to label all of my traffic with the service level I expected, and that scheduled and charged it accordingly. We don’t have that network yet, but we will one day, as long as neutrality regulations don’t get in the way. Alternatively, a fat pipe to a Tier 1 like Verizon would be a better deal, but we can’t all buy one today either.
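
For what it’s worth, applications can already express that label today by setting DiffServ code points on their sockets; whether any network along the path honors the marking is exactly the open question. A minimal sketch, using the standard EF and CS1 code points:

```python
import socket

# Mark a socket's traffic with a DiffServ code point (DSCP). This works on
# platforms that expose IP_TOS (e.g., Linux); honoring the mark is up to
# the networks along the path.

EF = 46    # expedited forwarding: low-latency class (e.g., VoIP)
CS1 = 8    # lower-effort / scavenger class (e.g., bulk downloads)

def open_marked_socket(dscp: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP occupies the upper six bits of the old TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

voip_sock = open_marked_socket(EF)    # ask for low latency
bulk_sock = open_marked_socket(CS1)   # volunteer for background handling
```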