Thinking about Caps

Time Warner’s bandwidth metering plan continues to attract attention, in part because a couple of prominent tech journalists are taking an activist position against it: Nate Anderson is unabashedly opposed to most revenue-enhancing plans that come from ISPs and carriers, and Stacey Higginbotham imagines she’ll be personally affected since she lives in one of the trial cities, Austin. The latest development is a threat by Rep. Eric Massa of upstate New York to ban usage-based pricing by law:

Massa has wasted no time backing the issue, sending out two statements last week about his displeasure with TWC’s caps. “I am taking a leadership position on this issue because of all the phone calls, emails and faxes I’ve received from my district and all over the country,” he said in one. “While I favor a business’s right to maximize their profit potential, I believe safeguards must be put in place when a business has a monopoly on a specific region.”

TWC’s plan to meter usage, which differs from Comcast’s cap system in several significant respects*, wouldn’t seem odd in most of the world: volume-based service tiers are the norm for commercial Internet services in the US, and for residential services in most of the world. This is largely because the cost of providing Internet service is significantly related to volume, owing to the interconnect costs borne by ISPs. That cost isn’t continuously variable; it’s more like a step function that ratchets upward in chunks as new hardware has to be added to keep up with peak load. These folks essentially buy wholesale interconnection to the larger Internet through a transit provider or a carrier. If they’re too small to build an extensive private network, they buy transit; if they’re larger, they pay for circuits to and from peering centers, which aren’t free even if you build them yourself (they take parts to build, and parts aren’t free.)
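To make the step-function point concrete, here’s a toy calculation. The port size and price are hypothetical round numbers chosen for illustration, not any ISP’s actual figures:

```python
# Illustrative only: interconnect cost as a step function of peak load.
# Port capacity and price are made-up round numbers, not an ISP's actuals.
import math

PORT_CAPACITY_GBPS = 10   # hypothetical capacity of one interconnect port
COST_PER_PORT = 8000      # hypothetical monthly cost per port, in dollars

def monthly_interconnect_cost(peak_gbps: float) -> int:
    """Cost holds flat until peak load forces another port to be added."""
    ports_needed = max(1, math.ceil(peak_gbps / PORT_CAPACITY_GBPS))
    return ports_needed * COST_PER_PORT

for peak in (4, 9, 11, 25, 31):
    print(f"{peak:>3} Gbps peak -> ${monthly_interconnect_cost(peak):,}/month")
```

Note that 4 Gbps and 9 Gbps cost the same, while 11 Gbps jumps to the next step: the marginal byte is free until it isn’t.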

It’s not unreasonable to tie pricing to volume in principle, given that some users consume hundreds or thousands of times more bandwidth than others; we certainly charge 18-wheelers more to use the freeways than Priuses. The argument is over what’s a reasonable fee.

And to answer that question, we have to understand the role that Internet service plays in paying for the infrastructure that supports it. There has never been a case in the United States or any other country where Internet service alone generated enough revenue for a carrier to cover the cost of building an advanced fiber optic network extending all the way from the core to the detached single-family residence, even in the muni fiber networks to which the neutralists are so partial; in places like Burlington, VT, Lafayette, LA, and Morristown, TN, the service the city offers over fiber is triple play (Internet, TV, and voice). Without TV and voice, the take-up rate of the service is too low to retire the bonds. It’s simple economics.

So what happens when triple-play customers decide to download all their TV programs from the Internet and replace their phone service with a combination of cell and Skype? Revenues plummet, obviously. So the cable company wants to hedge its bets by replacing triple-play revenue with a higher bill for the higher usage of the remaining indispensable service. That doesn’t seem evil to me, as long as there’s some competition in the market, and the infrastructure is continually upgraded. Over time, the infrastructure will be paid for, and the price per byte will decline.

One of the problems we have with broadband policy in the US is the lack of connection between infrastructure costs and service prices. TWC seems to be trying to solve that problem, and I’d like them to have some freedom to experiment without every member of Congress within striking distance of a camera crew giving them grief.

In the meantime, TWC would help themselves a great deal if they adopted the policy of printing each customer’s monthly usage on the bill. They shouldn’t do anything about it for the time being, just show the amount for the next six months. At the end of that period, if they want to run a trial or two, the consumers will be able to place the service levels in perspective, and there will be a lot less whining. If service levels are adopted, there also needs to be a policy of re-evaluating them every year. If TWC had done these two things, this whole brouhaha could have been avoided. And yes, I’d be glad to sign on as a consultant and keep them out of trouble.

*Comcast has an elastic cap that can’t be increased by paying higher fees. If you exceed it for three months in a row, you’re ejected. It’s elastic because it takes three simultaneous conditions to activate.


FCC finally issues Comcast memo

Kevin Martin and his Democratic Party colleagues at the FCC have issued their Comcast order, available at this link. They find some novel sources of authority and apply some interesting interpretations of the facts. I’ll have some detailed commentary after I’ve read it all and checked the footnotes. It’s an amusing exercise, if you like that sort of thing.

For a good summary of the order, see IP Democracy.

The future of P2P

Nate Anderson writes an interesting blurb on the P2P-Next research project in the Netherlands. The researchers hope to build a platform suitable for live TV delivery over the Internet:

Dutch academic Dr. Johan Pouwelse knows BitTorrent well, having spent a year of his life examining its inner workings. Now, as the scientific director of the EU-funded P2P-Next team, Pouwelse and his researchers have been entrusted with €19 million from the EU and various partners, and what they want in return is nothing less than a “4th-generation” peer-to-peer system that will one day be tasked with replacing over-the-air television broadcasts.

P2P-Next is the largest publicly-funded team in the world working on such technology (though plenty of researchers at Microsoft, IBM, and countless tiny startups are also racing to deliver a better P2P experience), and today the team launched a trial program designed to test its progress to date.

What sets the project apart from the traditional BitTorrent architecture is its focus not on downloadable video, but on live streaming. Current BitTorrent implementations, focused as they are on offering easy access to downloadable content, aren’t well suited to delivering live streaming TV across the Internet, but Pouwelse is convinced that this is the future. There’s “no doubt that TV will come through the Internet in a few years,” he told Ars earlier this week. Obviously, deployment of such a system depends on consumer electronics firms and broadcasters, but Pouwelse’s job is to make sure that the technology is ready when they are.

P2P has a lot of problems as a delivery vehicle for live TV, so I don’t think this is a good approach, but a system that caches popular content in numerous places has the potential to distribute large, popular files with little redundant delivery. The important feature of such a system is its caching capability, however, not its “peer-to-peerness.”
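To see why caching does the heavy lifting, consider a minimal sketch of an edge cache; this illustrates the general principle only, not P2P-Next’s design. One origin fetch serves every subsequent request for a popular file:

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache sketch: popular files are fetched from the origin
    once and served locally thereafter (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()
        self.origin_fetches = 0

    def get(self, key: str) -> str:
        if key in self.store:
            self.store.move_to_end(key)      # mark as recently used
            return self.store[key]
        self.origin_fetches += 1             # redundant delivery happens here
        blob = f"<content of {key}>"         # stand-in for a real origin fetch
        self.store[key] = blob
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used
        return blob

cache = EdgeCache(capacity=2)
for request in ["ep1", "ep1", "ep2", "ep1", "ep3", "ep1"]:
    cache.get(request)
print(cache.origin_fetches)  # 3 origin fetches for 6 requests
```

Whether the cache lives in a CDN node, an ISP rack, or a swarm of set top boxes is an implementation detail; the savings come from serving the popular file from nearby storage instead of hauling it across the backbone again.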

See Torrent Freak for many more details.

TiVo rolling out YouTube support

Another sign of the ongoing convergence is TiVo’s new software, which enables Series 3 and TiVo HD customers to play YouTube videos directly from the TiVo:

As I’d suspected, TiVo support for YouTube is indeed hidden within the 9.4 software update. Series 3 and TiVo HD subscribers should start seeing the application show up as early as tomorrow (Thursday), though the rollout will be completed over the next few weeks. And in some form of meta-irony, I’ve shot a brief video of YouTube on TiVo… on YouTube.

Switched digital video and TCP remote control are also part of this release. TiVo is evolving into a bit of a nano data center, albeit a very limited one.
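The TCP remote control, incidentally, is easy to drive from a script. Here’s a minimal sketch, assuming the port number (31339) and IRCODE command syntax commonly reported for TiVo’s network remote protocol; treat both as assumptions rather than official documentation:

```python
import socket

# Hypothetical DVR address; 31339 is the port commonly reported for TiVo's
# network remote protocol (an assumption, not official documentation).
TIVO_ADDR = ("192.168.1.50", 31339)

def send_key(key: str) -> None:
    """Send a single remote-control keypress to the TiVo over TCP."""
    with socket.create_connection(TIVO_ADDR, timeout=5) as conn:
        conn.sendall(f"IRCODE {key}\r".encode("ascii"))

send_key("PAUSE")  # e.g., pause playback from anywhere on the LAN
```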

Let’s make data centers obsolete

We currently get most of our Internet content, especially video, from large data centers. The high cost of these data centers and their data comm lines is a huge barrier to entry for new content providers. This is why 20% of the Internet’s traffic today comes from a single source. So what options do we network architects have to bring about a shift in the Internet’s content architecture such that a few large companies don’t monopolize content?

One is the approach taken by NADA in Europe to create a universal network of P2P-enabled Nano Data Centers:

NADA is seeking to leverage advancements in Peer-to-Peer technology to connect the Nano Data Centers to enable them to work together to provide services to end users.

The set top box would essentially be split in two – one half facing the end user with all the typical functionality and services, while the other half acts as the Peer, or Nano Data Center.

“They isolate it using virtualization technologies, and that secure compartment is now talking to all the other set top boxes, co-ordinating and shifting stuff around. Each of the set top boxes has plenty of storage in it so we can put them together and build a massive data store for all those YouTube videos, Flickr pictures or whatever. We’re using Peer-to-Peer under the hood to provide a service,” Dr Ott said.

This approach, or something like it, has tremendous promise.

The server farm replacement needs to be an always-on device, separate from display machines like PCs and TV sets, inexpensive, easily expandable, and easily manageable. The devices that most resemble it today are home gateways and set top boxes, and the home gateway is actually a better leverage point than the set top box we have today.
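As a rough sketch of what the “peer half” of such a device might look like, consider an always-on process that serves locally cached chunks to neighboring peers over HTTP. The directory and port here are invented for illustration; NADA’s actual interfaces aren’t specified at this level of detail in what’s been published:

```python
# Rough sketch of the "peer half": an always-on chunk server on the gateway.
# The directory and port are hypothetical, invented for illustration.
import http.server
import pathlib

CHUNK_DIR = pathlib.Path("chunks")   # hypothetical local chunk store
CHUNK_DIR.mkdir(exist_ok=True)

class ChunkHandler(http.server.SimpleHTTPRequestHandler):
    """Serve only the chunk store, never the rest of the filesystem."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=str(CHUNK_DIR), **kwargs)

if __name__ == "__main__":
    # Always-on: neighboring peers fetch the chunks they lack from this box.
    server = http.server.ThreadingHTTPServer(("", 8642), ChunkHandler)
    server.serve_forever()
```

The real work, of course, is in the coordination layer that decides which box stores which chunks, which is exactly what the virtualized peer compartment Dr Ott describes would handle.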

I think I’ll build a prototype and see what happens.