What Policy Framework Will Further Enable Innovation on the Mobile Net?

Here’s the video of the panel I was on at the Congressional Internet Caucus Advisory Committee’s “State of the Mobile Net” conference in DC last Thursday. This was the closing panel of the conference, where all the loose ends were tied together. For those who don’t live and breathe Washington politics, I should do what moderator Blair Levin didn’t do and introduce the panel. Levin was the head of the TIGR task force for the Obama transition, the master group for the review of the regulatory agencies and the administration’s use of technology. Kevin Werbach is a professor at the Wharton School, took part in the FCC review for the transition along with Susan Crawford, and runs the Supernova conference. Larry Irving was part of the review of NTIA for the transition and is a former Assistant Secretary of Commerce. Ben Scott is the policy guy at Free Press, and Alex Hoehn-Saric is legal counsel to the Senate Committee on Commerce, Science, and Transportation.

Regulatory policy needs to be technically grounded, so I emphasized the tech side of things.

The Privacy Hearing

Here’s some news on Boucher’s privacy campaign:

It’s not clear how broad a law Boucher has in mind, though it’s likely to be some codification of generally accepted data-privacy practices. Those include telling people when you collect data and why, letting them choose whether to join in or not, using the data only for the reason you collected it, letting people see and correct the information, and destroying it when it’s no longer needed.

But engineer Richard Bennett argued that DPI and network management techniques were getting a bad name and are simply the logical extension of the tools used in the early days of the internet.

Hoping to convince the subcommittee not to write legislation, AT&T’s chief privacy officer Dorothy Attwood said that the committee’s previous hearings and investigations have led to “robust self-regulation,” code words for “no laws needed.” There’s some truth in that statement, since last summer the subcommittee single-handedly ended ISPs’ dreams of letting outside companies spy on their subscribers in exchange for a little more revenue.

If privacy is the problem, then privacy needs to be the focus of the bill, not any one of the many techniques that may be used to compromise it.

What I Did This Morning

While California was sleeping, I enjoyed a bit of broadband politics in the heart of the beast, testifying before the House Subcommittee on Communications, Technology, and the Internet at its hearing on “Communications Networks and Consumer Privacy: Recent Developments.”

The Subcommittee on Communications, Technology, and the Internet held a hearing titled, “Communications Networks and Consumer Privacy: Recent Developments” on Thursday, April 23, 2009, in 2322 Rayburn House Office Building. The hearing focused on technologies that network operators utilize to monitor consumer usage and how those technologies intersect with consumer privacy. The hearing explored three ways to monitor consumer usage on broadband and wireless networks: deep packet inspection (DPI); new uses for digital set-top boxes; and wireless Global Positioning System (GPS) tracking.

Witness List

* Ben Scott, Policy Director, Free Press
* Leslie Harris, President and CEO, Center for Democracy and Technology
* Kyle McSlarrow, President and CEO, National Cable and Telecommunications Association
* Dorothy Attwood, Chief Privacy Officer and Senior Vice President, Public Policy, AT&T Services, Inc.
* Brian R. Knapp, Chief Operating Officer, Loopt, Inc.
* Marc Rotenberg, Executive Director, The Electronic Privacy Information Center
* Richard Bennett, Publisher, BroadbandPolitics.com

It went pretty well, all in all; it’s really good to be last on a panel, and the Reps aren’t as snarky as California legislators. I’ll have more on this later.


Google’s Sweet Book Deal

If you read books, you’ll want to know what Robert Darnton has to say about the pending Google book deal, in Google & the Future of Books – The New York Review of Books. Here’s a teaser:

As an unintended consequence, Google will enjoy what can only be called a monopoly—a monopoly of a new kind, not of railroads or steel but of access to information. Google has no serious competitors. Microsoft dropped its major program to digitize books several months ago, and other enterprises like the Open Knowledge Commons (formerly the Open Content Alliance) and the Internet Archive are minute and ineffective in comparison with Google. Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

A policy change of this magnitude should not be negotiated behind closed doors to the detriment of all purveyors of information but Google.

Time Warner Cable bides its time

Not surprisingly, Time Warner Cable has decided to put its consumption-based billing trials on hold:

Time Warner Cable Chief Executive Officer Glenn Britt said, “It is clear from the public response over the last two weeks that there is a great deal of misunderstanding about our plans to roll out additional tests on consumption based billing. As a result, we will not proceed with implementation of additional tests until further consultation with our customers and other interested parties, ensuring that community needs are being met. While we continue to believe that consumption based billing may be the best pricing plan for consumers, we want to do everything we can to inform our customers of our plans and have the benefit of their views as part of our testing process.”

Time Warner Cable also announced that it is working to make measurement tools available as quickly as possible. These tools will help customers understand how much bandwidth they consume and aid in the dialog going forward.

The public response was somewhat less public than it may appear, as most of it was ginned-up by a few activist bloggers and the interest groups that are generally in the middle of these things, such as Free Press’ “Save the Internet” blog. In this case, the Internet was saved from a plan that Free Press’ chairman Tim Wu had previously lauded for its fairness in allocating network resources:

“I don’t quite see [metering] as an outrage, and in fact is probably the fairest system going — though of course the psychology of knowing that you’re paying for bandwidth may change behavior,” said Tim Wu, a law professor at Columbia University and chairman of the board of public advocacy group Free Press.

Of course, the “psychology of knowing that you’re paying for bandwidth” is actually meant to change behavior.

Free Press is now crowing that the postponement of the trial signals a great victory for the Internet:

“We’re glad to see Time Warner Cable’s price-gouging scheme collapse in the face of consumer opposition. Let this be a lesson to other Internet service providers looking to head down a similar path. Consumers are not going to stand idly by as companies try to squeeze their use of the Internet.”

The Freeps should have chosen their words a bit more carefully. The dilemma that TWC faces does indeed relate to “squeezing,” but the squeezing doesn’t originate exclusively (or even primarily) at the cable company’s end of the bargain. TWC’s consumption per user has been increasing roughly 40% per year, and there’s no reason to assume it will do anything but increase as more HDTV content becomes available on the web, people connect more devices, and video calling becomes more popular. TWC’s capital expenditures are 20% of income, and the company lost $7.3 billion in the course of spinning out from Time Warner, Inc. last year. Some of TWC’s critics have charged that their bandwidth is free (or nearly so), citing “high speed data costs of $146 million.” In reality, TWC pays six times that much for the interest on its capital expenditures alone ($923M).
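For readers who want to see the arithmetic, here’s a rough back-of-envelope sketch in Python using the figures quoted above. The starting per-user consumption figure is purely hypothetical; the point is the ratio and the compounding, not the exact numbers.

```python
# Back-of-envelope sketch of the cost argument above.
# The dollar figures come from the post; the starting consumption is invented.
data_costs = 146e6        # quoted "high speed data costs"
capex_interest = 923e6    # quoted interest on capital expenditures

print(f"Interest alone is {capex_interest / data_costs:.1f}x the quoted data costs")

# Compound ~40% annual growth in per-user consumption for five years.
consumption_gb = 10.0     # hypothetical starting point, GB per user per month
for year in range(1, 6):
    consumption_gb *= 1.40
    print(f"Year {year}: ~{consumption_gb:.0f} GB/month per user")
```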

Heavy users squeeze light users by leaving less bandwidth on the table, and the flat-rate pricing system squeezes them even more by making them pay a larger share of the costs of bandwidth upgrades than those who actually use them. No fair-minded and rational person can look at the costs of operating a network and conclude that flat-rate pricing for a single Quality of Service level is the best we can do.
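Here’s a toy illustration of that cross-subsidy, with made-up numbers rather than real TWC data: two light users and one heavy user split the cost of a bandwidth upgrade under flat-rate pricing versus usage-based pricing.

```python
# Toy cross-subsidy example, not real subscriber data: three users share the
# monthly cost of an upgrade that is driven almost entirely by the heavy user.
users = {"light_a": 5, "light_b": 5, "heavy": 90}   # GB/month, hypothetical
upgrade_cost = 30.0                                 # dollars/month to recover, hypothetical

flat_share = upgrade_cost / len(users)
total_gb = sum(users.values())
for name, gb in users.items():
    usage_share = upgrade_cost * gb / total_gb
    print(f"{name}: flat rate ${flat_share:.2f}/month, usage-based ${usage_share:.2f}/month")
```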

Continuous upgrades are a fact of life in the broadband business, and aligning their costs with the revenues carriers collect is one of the keys to creating an economically sustainable broadband ecosystem. We’ll take that up in another post.

UPDATE: Dig into the comments for some discussion of transit and peering prices.


Thinking about Caps

Time-Warner’s bandwidth metering plan continues to attract attention, in part because a couple of prominent tech journalists are taking an activist position against it: Nate Anderson is unabashedly opposed to most revenue-enhancing plans that come from ISPs and carriers, and Stacey Higginbotham imagines she’ll be personally affected since she lives in one of the trial cities, Austin. The latest development is a threat by Rep. Eric Massa of upstate New York to ban usage-based pricing by law:

Massa has wasted no time backing the issue, sending out two statements last week about his displeasure with TWC’s caps. “I am taking a leadership position on this issue because of all the phone calls, emails and faxes I’ve received from my district and all over the country,” he said in one. “While I favor a business’s right to maximize their profit potential, I believe safeguards must be put in place when a business has a monopoly on a specific region.”

TWC’s plan to meter usage, which differs from Comcast’s cap system in several significant respects*, wouldn’t seem odd in most of the world: volume-based service tiers are the norm for commercial Internet services in the US, and for residential services in most other countries. This is largely because the costs of providing Internet service are significantly related to volume, owing to the interconnect costs borne by ISPs (the relationship isn’t continuously variable; it’s more like a step function that ratchets upward in chunks as new hardware has to be added to keep up with peak load). ISPs essentially buy a wholesale interconnect to the larger Internet through a transit provider or a carrier. If they’re too small to build an extensive private network, they buy transit, and if they’re larger they pay for circuits to and from peering centers, which aren’t free even if you build them yourself (they take parts to build, and parts aren’t free).
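To make the “step function” point concrete, here’s a minimal sketch with invented port sizes and prices: capacity gets added in fixed-size chunks, so the monthly interconnect bill jumps each time peak load crosses another increment.

```python
import math

# Hypothetical figures: each additional 10 Gb/s of transit/peering capacity is a
# fixed monthly cost, so total cost steps up in chunks rather than varying smoothly.
PORT_CAPACITY_GBPS = 10
PORT_COST_PER_MONTH = 8_000   # invented price

def interconnect_cost(peak_load_gbps: float) -> int:
    ports_needed = max(1, math.ceil(peak_load_gbps / PORT_CAPACITY_GBPS))
    return ports_needed * PORT_COST_PER_MONTH

for load in (3, 9, 11, 19, 21, 35):
    print(f"peak load {load:>2} Gb/s -> ${interconnect_cost(load):,}/month")
```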

It’s not unreasonable to tie pricing to volume in principle, given that some users consume hundreds or thousands of times more bandwidth than others; we certainly charge 18-wheelers more to use the freeways than Priuses. The argument is over what’s a reasonable fee.

And to answer that question, we have to understand the role that Internet service plays in paying for the infrastructure that supports it. There has never been a case in the United States or any other country where Internet service alone generated enough revenue for a carrier to cover the cost of building an advanced fiber optic network extending all the way from the core to the detached single-family residence, even in the muni fiber networks toward which the neutralists are so partial; in places like Burlington, VT, Lafayette, LA, and Morristown, TN, the service the city offers over fiber is triple play (Internet, TV, and voice.) Without TV and voice, the take-up rate of the service is too low to retire the bonds. It’s simple economics.

So what happens when triple-play customers decide to download all their TV programs from the Internet and replace their phone service with a combination of cell and Skype? Revenues plummet, obviously. So the cable company wants to hedge its bets by replacing triple-play revenue with a higher bill for the higher usage of the remaining indispensable service. That doesn’t seem evil to me, as long as there’s some competition in the market, and the infrastructure is continually upgraded. Over time, the infrastructure will be paid for, and the price per byte will decline.

One of the problems we have with broadband policy in the US is the lack of connection between infrastructure costs and service prices. TWC seems to be trying to solve that problem, and I’d like them to have some freedom to experiment without every member of Congress within striking distance of a camera crew giving them grief.

In the meantime, TWC would help themselves a great deal if they adopted the policy of printing each customer’s monthly usage on the bill. They shouldn’t do anything about it for the time being, just show the amount for the next six months. At the end of that period, if they want to run a trial or two, the consumers will be able to place the service levels in perspective, and there will be a lot less whining. If service levels are adopted, there also needs to be a policy of re-evaluating them every year. If TWC had done these two things, this whole brouhaha could have been avoided. And yes, I’d be glad to sign on as a consultant and keep them out of trouble.
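As a sketch of what I mean, here’s roughly the kind of bill line I’d print, using the 5 GB, 40 GB, and 100 GB tier sizes reported for the trial purely as reference points; the wording and format are illustrative, not a TWC design.

```python
# Illustrative bill line: show the customer's monthly consumption and where it would
# fall among the reported trial tiers (5, 40, and 100 GB). Not an actual TWC format.
TIERS_GB = [5, 40, 100]

def bill_line(usage_gb: float) -> str:
    fitting = [tier for tier in TIERS_GB if usage_gb <= tier]
    placement = f"would fit the {fitting[0]} GB tier" if fitting else "exceeds the largest tier"
    return f"Your usage this month: {usage_gb:.1f} GB ({placement})"

for usage in (3.2, 27.5, 180.0):
    print(bill_line(usage))
```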

*Comcast has an elastic cap that can’t be increased by paying higher fees. If you exceed it for three months in a row, you’re ejected. It’s elastic because it takes three simultaneous conditions to activate.


See you in Washington

I’ve been asked to join a panel at the Congressional Internet Caucus’ short conference on the State of the Mobile Net on April 23rd. I’ll be on the last panel:

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Richard Bennett, BroadbandPolitics.com
Harold Feld, Public Knowledge
Alexander Hoehn-Saric, U.S. Senate Commerce Committee
Larry Irving, Internet Innovation Alliance
Blair Levin, Stifel Nicolaus
Ben Scott, Free Press
Kevin Werbach, Wharton School of Business

I suspect we’ll spend the bulk of our time on the interaction between regulatory agencies, standards bodies, and industry groups. The case studies are Wi-Fi, where the process worked: the FCC opened up some junk spectrum, IEEE 802.11 wrote the rules, and the Wi-Fi Alliance developed compliance tests. In the UWB world, the model was a novel set of rules for high-quality spectrum, followed by the collapse of IEEE 802.15.3a and the subsequent attempt by the WiMedia Alliance to save it. We probably will have UWB someday (wireless USB and Bluetooth 4.0 will both use it), but the failure of the standards body was a major impediment.

With White Spaces up for grabs, we’d like to have something that’s at least as good as 802.11, but we really need to do a lot better.

Another topic of interest is whether mobile Internet access services should be regulated the same way that wireline services are regulated, and how we would go about drafting that set of rules. The current state of the art is the four or five prongs of the FCC’s Internet Policy Statement, but those principles leave a lot to the imagination on all of the interesting questions: network management, QoS-related billing, third-party payments, and the various forms of disclosure that may or may not be worthwhile.

The Internet is troubled by the fact that it’s worked pretty damn well for the past 25 years, so there’s been no need to make major changes in its services model. It’s clear to me that some fairly disruptive upgrades are going to be needed in the near future, and we don’t want to postpone them by applying a legacy regulatory model to a network that’s not fully formed yet.

Verizon’s Vision of the Internet

Despite the fact that I’ve been trying to explain why companies like Time Warner need to impose broadband usage caps on their systems before going to the capital markets for assistance in beefing up their innards, I’m not a fan of usage caps generally. They’re a very crude tool for imposing an equitable distribution of bandwidth, and one that ensures that the actual infrastructure in any given network will not be used efficiently. The key to network efficiency for a truly multi-service network like the Internet of the future is successful discrimination of application needs and traffic types. If the network can be made smart enough to follow orders, users can control their network usage according to their personal economics, with no big surprises in the billing cycle. Network operators don’t need to manage traffic streams all the time; they need to manage them during periods of peak load (which had better not be all that often). And their best guidance in doing this comes from users and applications.

Many cities around the world manage access to the city core with something called congestion pricing: if you want to drive into the very heart of Singapore or London during peak hours, you have to pay a fee, which keeps traffic from gridlocking while permitting access by those who really need it. The Internet should work the same way: if you need low-latency service during peak load hours for Skype, you should be able to get it. And if you want to play P2P at the same time, you should be able to do so, but with higher latency (or at least higher jitter). Accounts can be provisioned to allow a certain amount of congestion traffic for a flat rate, with additional portions available for an added fee. Users who demand a lot of transit from their networks should be able to get it, but at a reduced rate relative to average loads or for an additional fee.
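A minimal sketch of that congestion-quota idea, with invented allowances and prices: the meter only runs while the network is actually congested, the flat rate includes a congestion allowance, and only the overage is billed.

```python
# Hypothetical congestion-quota accounting: bytes moved while the network is congested
# count against a flat-rate allowance, and only the overage is billed. Off-peak traffic
# costs nothing extra in this model, mirroring the congestion-pricing analogy above.
FREE_CONGESTION_GB = 2.0     # included in the flat rate (invented)
OVERAGE_PER_GB = 1.50        # invented price

def monthly_congestion_charge(samples):
    """samples: iterable of (gb_transferred, network_was_congested) tuples."""
    congestion_gb = sum(gb for gb, congested in samples if congested)
    overage_gb = max(0.0, congestion_gb - FREE_CONGESTION_GB)
    return congestion_gb, overage_gb * OVERAGE_PER_GB

usage = [(0.5, True), (3.0, False), (1.2, True), (0.8, True), (6.0, False)]
congestion_gb, charge = monthly_congestion_charge(usage)
print(f"Congestion-hour traffic: {congestion_gb:.1f} GB, overage charge: ${charge:.2f}")
```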

The point is that networks are never going to be so fat that they can’t be overloaded, and local congestion is always going to occur. So the trick in managing networks is to allocate resources fairly and transparently, and let users control their use of whatever quota they have (not manually, but through home router and application signaling to the network.)
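The signaling half of that already exists in rudimentary form: an application can mark its packets with a DSCP code point to declare what it needs, and the network can honor or ignore the marking. Here’s a minimal Python sketch; DSCP 46 (Expedited Forwarding) is the standard value for low-latency traffic, while the address and port are placeholders.

```python
import socket

# Mark a UDP socket's traffic as Expedited Forwarding (DSCP 46), the code point
# conventionally used for low-latency voice traffic. This only sets the bits in the
# IP header; whether routers along the path honor the marking is up to the operator.
EF_DSCP = 46
TOS_VALUE = EF_DSCP << 2          # DSCP occupies the upper six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"voice frame", ("192.0.2.1", 5004))   # placeholder documentation address
```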

The least congested residential broadband service in the US today is Verizon FiOS. Verizon sells access at up to 50 Mb/s, and has the capacity to increase this as consumers demand more. They can do this because they’ve invested money in a total infrastructure that consists of neighborhood loops, second-hop infrastructure, and core network links. Their current system can carry 100 Mb/s per user without any contention short of the core, which is rather awesome. This is why you never hear anything about caps or quotas for FiOS: the system can’t be overloaded short of the core.

Despite that, Verizon’s visionaries realize that network management is going to be a part of the Internet of the future:

In part because most of the attention in the early days of the Internet was on connectivity and ensuring networks and devices could interconnect and communicate successfully, security and quality of service techniques were not a focus of the discussions around network protocols and functionality. Such features have instead often been offered “over the top”, usually as attributes in applications or as functionalities in web sites or distributed services.

The complexity and volume of Internet traffic today – and the fact that much more of it than ever before is “real time” or time sensitive – means that the Internet’s traditional routing and processing schemes are challenged more than ever. It is no longer realistic to expect that all of the heavy lifting to make applications and services work well on the Internet in today’s “two-way, heavy content, complex applications” world can be done through the old models. More work needs to be done at all levels to ensure better quality and improved services. This includes the network level as well.

This need not threaten the basic foundation of the Internet – its ability to provide consumers with access to any content they wish to use and connect any device they want to a broadband network. Competition, broad commitment to openness by industry and advocates, and oversight by regulators helps ensure this foundation remains. But it does mean that enhanced network based features and functionalities should not be automatically viewed with concern. Such features can be an important aspect of the Internet’s improvement and future evolution.

Indeed we shouldn’t fear rational and transparent management; it’s part of what has always made these systems work as well as they have for us.


Pitchforks in Austin: Time-Warner’s Bandwidth Cap

The fledgling high-tech community in the smoky little hipster ghetto called Austin is apoplectic about Time Warner’s announcement that it’s testing bandwidth caps in central Texas:

When it comes to trialing its metered broadband service, Time Warner Cable’s choice to do so in the tech-savvy city of Austin, Texas, was no accident. And residents may not be able to do much about it.

According to TWC spokesman Jeff Simmermon, Austin’s dedication to all things digital was precisely why it was chosen as one of four cities where the company plans to trial consumption-based broadband plans, which range from 5 GB to 40 GB per month (TWC says it has plans for a 100 GB-per-month tier as well). “Austin is a passionate and tech-savvy city, and the spirit that we’re approaching this (metered broadband) test with is that if it’s going to work, it has to work in a tech-savvy market where the use patterns are different,” he told me.

So far, Austin isn’t impressed, but since the local cable franchise it grants only deals with video, there may not be much it can do. Chip Rosenthal, one of seven commissioners on the City of Austin’s Technology and Telecommunications Commission (a strictly advisory body), hopes that concerned citizens will show up at the meeting it’s holding at City Hall this Wednesday and talk about metered broadband. He wants to get the metered bandwidth issue added to the agenda of the commission’s May meeting as well.

Rosenthal, a contract programmer who likes open source, has a blog where he holds forth on the issue, calling its rationale a series of “red herrings,” and complaining that the caps of the present will hurt applications of the future. This is no doubt true, but ultimately another red herring. The caps of the future won’t necessarily be the caps of the present.

The general theory is that TWC wants to stamp out web video in order to keep TV customers in the VoD fold. I don’t doubt that TWC would like to do that, but I doubt they’re dumb enough to believe they could ever get away with it. Austin is a stoner’s throw from San Antonio, the world headquarters of AT&T and the beta site for U-verse, the IPTV service that rides into the home atop VDSL. While U-verse isn’t universally available in Austin yet, it’s under construction so there are alternatives.

TWC’s CEO has issued a blog post by way of clarification that’s not entirely helpful:

With regard to consumption-based billing, we have determined that as broadband usage and penetration grow, there are increasing differences in the amount of bandwidth our customers consume. Our current pricing plans require all users to pay the same amount, whether they check email once a month or download six movies a day. As the amount of usage has dramatically diverged among users, this is becoming inherently unfair and not the way most consumers want to pay for goods they consume.

Like Rosenthal’s post, it’s true as far as it goes, but it leaves runners in scoring position. Here’s the real story, as I see it: while Time Warner doesn’t have a large enough network to peer with the big boys (AT&T, Verizon, Qwest, Comcast, and L3), it does have some peering agreements that protect it from transit charges as long as it delivers its packets to convenient locations, as well as some straight-up transit charges to pay. Its aggregation network – the links that carry data between the Internet exchange points and its CMTSes – isn’t fat enough to support full-on DOCSIS 3 usage, and neither is its transit budget.

Consequently, they’re being hammered by the small number of high-bandwidth consumers on their network, and they’re looking to cut costs by running them off. While there are other ways to ensure fairness across user accounts, the cap is the best way to address the fraction of a percent of users who consume something like half of the available bandwidth.
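To see how a sliver of subscribers can account for that much traffic, here’s a toy calculation with a synthetic, heavily skewed usage distribution; the distribution parameters are invented, not real TWC data, but the shape is typical of what operators report.

```python
import random

# Synthetic, heavily skewed usage distribution (Pareto-shaped) -- not real subscriber
# data. Shows what share of total consumption the heaviest fraction of users drives.
random.seed(1)
usage_gb = [random.paretovariate(1.2) for _ in range(100_000)]   # arbitrary units

usage_gb.sort(reverse=True)
top_slice = usage_gb[: len(usage_gb) // 200]                     # top 0.5% of users
share = sum(top_slice) / sum(usage_gb)
print(f"Top 0.5% of users account for roughly {share:.0%} of total consumption")
```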

TWC is betting that they can find a cap level that discourages hogs and doesn’t bother more typical users. They’re going into an area close to the heart of AT&T with the experiment to get a good sense of where that limit is.

VoD has a little bit to do with this, but not all that much. TWC customers with TiVos already have unlimited VoD, and the rest of the VoD they provide doesn’t cost transit dollars; it’s delivered over their local tree. DOCSIS 3 also doesn’t have much of anything to do with this, as it’s also a local service, albeit one with the potential to ring up big transit charges if not domesticated.

To a large extent, ISPs play a marketing game where they advertise super-fast services that aren’t backed up by sufficient transit or peering to sustain a heavy duty cycle. This isn’t a bad thing, of course, as the efficient sharing of capacity is actually the Internet’s secret sauce. If we wanted peak and minimum bandwidth to be the same, we would have stuck with narrowband modems on the PSTN. But we don’t, so we have to get hip to statistical sharing of network resources.
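Statistical sharing in one small example: if each subscriber is only transmitting a small fraction of the time, a link sized well below the sum of the advertised speeds almost never saturates. The subscriber count, duty cycle, and oversubscription ratio below are all illustrative.

```python
from math import comb

# Crude statistical-multiplexing model: each of n subscribers is independently active
# (transmitting at full rate) with probability p. What are the odds that more of them
# are active at once than the shared link can carry?
def prob_overload(n: int, p: float, capacity_users: int) -> float:
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(capacity_users + 1, n + 1))

n_subscribers = 500
duty_cycle = 0.05        # each user busy 5% of the time (illustrative)
capacity_users = 50      # link sized for 50 simultaneous full-rate users, a 10:1 ratio

print(f"Chance the shared link is saturated at any instant: "
      f"{prob_overload(n_subscribers, duty_cycle, capacity_users):.2e}")
```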

I’ll go out on a limb here and predict that the typical Austin consumer won’t switch to U-verse on account of TWC’s caps, but the heaviest users of gaming and BitTorrent will. And I’ll further predict that TWC’s bottom line will be glad to see them go.

The arguments against caps ultimately come down to the assertion that there’s some public good in making light users of Internet access capacity subsidize heavy users. Given that most of the heavy uses are either piracy or personal entertainment, I don’t happen to buy that argument, and moreover I find the alternatives to capping generally less attractive, as they typically involve duty-cycle restrictions of other kinds. The alternative that TWC should explore is peak/off-peak handling that allows downloaders to use less restrictive bandwidth budgets during off hours.

I’d prefer to have a network that allowed me to label all of my traffic with the service level I expected, and scheduled and charged it appropriately. We don’t have that network yet, but we will one day as long as neutrality regulations don’t get in the way. Alternatively, a fat pipe to a Tier 1 like Verizon would be a better deal, but we can’t all buy one today either.

Baseball Forecast

Now for something really important, Major League Baseball. The key matchup between the A’s and the Anaheim team looks like it’s going to tilt in favor of the A’s this year; the machine predictions back this up, even if the human predictions don’t. Last year the A’s suffered from an acute lack of offense, which placed the pitching staff under undue stress. The situation is reversed this year as the A’s have a more potent offense than Anaheim, but there’s no denying that the Anaheim pitching staff is stronger than the A’s mostly-rookie rotation and mostly-discard bullpen. But the season covers 162 games and most of the results depend on which team’s key players stay healthy. This is where the A’s have the edge this year.

The Anaheim outfield is composed of players who are all on the down side of their careers, aging stars Bobby Abreu, Torii Hunter, and Vladdy Guerrero. While Vladdy is the only one with a significant injury history, the others are at the point where hamstrings, ACLs, and quads are more fragile. The A’s outfield, on the other hand, features slugger Matt Holliday at the peak of his career and youngsters Travis Buck and Ryan Sweeney on the verge of breakout. The A’s outfield is also stronger defensively, which is fortunate given the age of the pitching staff.

In the infield, the A’s have rectified their three major deficits by signing Orlando Cabrera at short and Jason Giambi at first and getting Eric Chavez healthy. Giambi is an old-timer, but he’s so happy to be back in Oakland after securing his retirement in New York that he’s going to have a second childhood. Cabrera is one of my favorite shortstops because he’s an all-around star, even if he’s been labeled a clubhouse cancer by bitter fans who’ve seen their favorite player benched when OC came along. One of those guys, Nomar Garciaparra (he of the annoying tics) is a backup infielder for the A’s this year. Eric Chavez appears to be healthy, hitting two homers in three games against the AAAA Giants last weekend, and his defense is stellar as ever. Jack Cust appears to be moving into a full-time DH role, suitable to his defensive skills and appropriate to his offensive ones.

Landon Powell, the new backup catcher, has injury problems but is a superior thrower and hitter to starter Kurt Suzuki. I’d take the A’s infield over the under-performing, weak hitting Anaheims any day of the week.

The pitching staff is the big question mark, however. Justin Duchsherer continues to have problems with his elbow, shoulder, and hip, indicative of bad mechanics. These problems don’t tend to sort themselves out without the intervention of a Rick Peterson. Two of the A’s starters haven’t pitched above AA, so they’re a wild card in the big leagues at best. The remaining starters, Braden, Eveland, and Outman, have thus far failed to distinguish themselves at the big league level, and the nominal closer is on the disabled list. Anaheim, on the other hand, has the most formidable starting five in baseball, when healthy. But fortunately for the A’s, every single one is an injury risk (the season starts with the three best on the disabled list), so we can reasonably expect that the Anaheim farm system will end up carrying a great deal of the load this year.

If the pitching staffs perform consistent with past results, including injury history, the A’s will win the division. If Anaheim’s witch doctors cast healing spells in favor of their geriatric outfield and rickety starting rotation, the A’s will once again sit out the post-season. The Anaheims will most likely go far in the post-season if they make it, but that’s another story. For the time being, it looks like the A’s will win the division by 4 games.