Steal These Policies

ITIF released a report today on digital piracy, Steal These Policies: Strategies for Reducing Digital Piracy, co-authored by Dan Castro, Scott Andes, and yours truly. Here's the blurb:

It is time for the U.S. government to take global theft of U.S. intellectual property, especially digital content, much more seriously. A new ITIF report finds that the U.S. government can and should do more to support industry efforts to reduce digital piracy, a growing problem that threatens not only the robust production of digital content, but U.S. jobs. While there are no “silver bullets” to reducing digital piracy, there are a number of “lead bullets” that can and should be implemented. Specifically, ITIF calls on the federal government to not preclude those impacted by digital piracy, including copyright holders and ISPs, from taking steps, including implementing technical controls like digital fingerprinting, to reduce piracy. In addition, industry and government should consider bold steps to limit the revenue streams of those profiting from piracy by encouraging ISPs, search engines, ad networks and credit card companies to block piracy websites and refuse to do business with them. These options should be part of a broad dialogue that engages all stakeholders, including government, content owners, website operators, technology developers, and ISPs and other intermediaries, on how to improve the global response to piracy. Toward that end, this report recommends that policymakers:

And here’s the video of the launch event:

One point that comes across better from the live event than from the paper is that piracy isn't simply something that takes place between close personal friends; it's a business that profits from the unauthorized sale of other people's material. Whatever your views on Internet privacy and intellectual property rights may be, I think we can all agree that the business of piracy is wrong.


Guest Blog at GigaOm

My guest blog at GigaOm, How Video Is Changing the Internet, deals with paid peering and the proposed net neutrality regulations:

But paid peering may be forbidden by Question 106 of the FCC’s proposed Open Internet rules because it’s essentially two-tiered network access, Norton points out.

Paid peering illustrates how hard it is to write an anti-discrimination rule for the Internet that doesn’t have harmful side effects for all but the largest content networks. Paid peering is a better level of access to an ISP’s customers for a fee, but the fee is less than the price of generic access to the ISP via a transit network. The practice of paid peering also reduces the load on the Internet core, so what’s not to like? Paid peering agreements should be offered for sale on a non-discriminatory basis, but they certainly shouldn’t be banned.
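To put rough numbers on that excerpt, here's a toy comparison in Python; both per-Mbps prices are hypothetical placeholders, not quoted market rates:

```python
# Toy comparison of monthly delivery costs for a content network.
# Both per-Mbps prices are hypothetical placeholders, not market quotes.

TRANSIT_PRICE_PER_MBPS = 8.00       # generic access via a transit network
PAID_PEERING_PRICE_PER_MBPS = 3.00  # direct access to the ISP's customers

def monthly_cost(price_per_mbps, traffic_mbps):
    """Cost of delivering a sustained traffic level at a per-Mbps rate."""
    return price_per_mbps * traffic_mbps

traffic = 10_000  # 10 Gbps of video headed toward one ISP's customers

print(f"Transit:      ${monthly_cost(TRANSIT_PRICE_PER_MBPS, traffic):>9,.0f}/mo")
print(f"Paid peering: ${monthly_cost(PAID_PEERING_PRICE_PER_MBPS, traffic):>9,.0f}/mo")
# Paid peering here is both cheaper and a shorter path, and it keeps the
# traffic off the Internet core -- the "what's not to like" point above.
```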

There’s another good treatment of the subject at Digital Society, inspired by the same conversation with peering maven Bill Norton.

UPDATE: There's an incredible whine-a-thon in the comments to this article from Google's Director of Network Operations Vijay Gill and some of his friends from a network operators' IRC channel. Gill says I've got all the facts wrong because paid peering existed in a very limited way ten years ago under a different name. I don't dispute that; I simply note that the practice can run afoul of net neutrality regulations in some guises. The issue is whether the Internet of the Future will be a slave to the Internet of the Past's supposed insistence on a single service level for all peering agreements; in fact, no such regulation has ever existed.

UPDATE 2: One thing I was definitely unclear about is whether Arbor's estimate of traffic growth, 47%, is in line with the MINTS estimates. I conclude that overall growth is much higher than the MINTS figure because Arbor measures only inter-domain traffic at Internet Exchanges. There has obviously been a great deal of growth in the Akamai and Limelight CDNs, neither of which is measured by MINTS or Arbor, and growth in private peering (paid and unpaid) as well. MINTS measures more than public IX traffic, yet its figures are in line with Arbor's data from public sources only; the difference in method combined with the similarity of measurements suggests that MINTS may be understating total inter-domain traffic, depending on how the load falls out between public and private sources. Private connections are increasing, according to IX operators and heavy users.


Pitchforks in Austin: Time-Warner’s Bandwidth Cap

The fledgling high-tech community in the smoky little hipster ghetto called Austin is apoplectic about Time Warner's announcement that it's testing bandwidth caps in central Texas:

When it comes to trialing its metered broadband service, Time Warner Cable’s choice to do so in the tech-savvy city of Austin, Texas, was no accident. And residents may not be able to do much about it.

According to TWC spokesman Jeff Simmermon, Austin’s dedication to all things digital was precisely why it was chosen as one of four cities where the company plans to trial consumption-based broadband plans, which range from 5 GB to 40 GB per month (TWC says it has plans for a 100 GB-per-month tier as well). “Austin is a passionate and tech-savvy city, and the spirit that we’re approaching this (metered broadband) test with is that if it’s going to work, it has to work in a tech-savvy market where the use patterns are different,” he told me.

So far, Austin isn't impressed, but since the cable franchise the city grants deals only with video, there may not be much it can do. Chip Rosenthal, one of seven commissioners on the City of Austin's Technology and Telecommunications Commission (a strictly advisory body), hopes that concerned citizens will show up at the meeting the commission is holding at City Hall this Wednesday and talk about metered broadband. He wants the metered-bandwidth issue added to the agenda of the commission's May meeting as well.

Rosenthal, a contract programmer who likes open source, has a blog where he holds forth on the issue, calling the plan's rationale a series of "red herrings" and complaining that the caps of the present will hurt applications of the future. This is no doubt true, but ultimately another red herring. The caps of the future won't necessarily be the caps of the present.

The general theory is that TWC wants to stamp out web video in order to keep TV customers in the VoD fold. I don't doubt that TWC would like to do that, but I doubt they're dumb enough to believe they could ever get away with it. Austin is a stoner's throw from San Antonio, the world headquarters of AT&T and the beta site for U-verse, the IPTV service that rides into the home atop VDSL. While U-verse isn't universally available in Austin yet, it's under construction, so alternatives exist.

TWC’s CEO has issued a blog post by way of clarification that’s not entirely helpful:

With regard to consumption-based billing, we have determined that as broadband usage and penetration grow, there are increasing differences in the amount of bandwidth our customers consume. Our current pricing plans require all users to pay the same amount, whether they check email once a month or download six movies a day. As the amount of usage has dramatically diverged among users, this is becoming inherently unfair and not the way most consumers want to pay for goods they consume.

Like Rosenthal's post, it's true as far as it goes, but it leaves runners in scoring position. Here's the real story, as I see it: while Time Warner doesn't have a large enough network to peer with the big boys (AT&T, Verizon, Qwest, Comcast, and L3), it does have some peering agreements that protect it from transit charges as long as it delivers its packets to convenient locations, as well as some straight-up transit charges to pay. Its aggregation network – the links that carry data between the Internet exchange points and its CMTSes – isn't fat enough to support full-on DOCSIS 3 usage, and neither is its transit budget.

Consequently, the company is being hammered by the small number of high-bandwidth consumers on its network, and it's looking to cut costs by running them off. While there are other ways to ensure fairness across user accounts, the cap is the best way to address the fraction of a percent who use something like half the available bandwidth.

TWC is betting that it can find a cap level that discourages hogs and doesn't bother more typical users. It's running the experiment in an area close to the heart of AT&T to get a good sense of where that limit is.

VoD has a little bit to do with this, but not all that much. TWC customers with TiVos already have unlimited VoD, and the rest of the VoD the company provides doesn't cost transit dollars; it's delivered over the local tree. DOCSIS 3 doesn't have much of anything to do with this either, as it's also a local service, albeit one with the potential to ring up big transit charges if not domesticated.

To a large extent, ISPs play a marketing game in which they advertise super-fast services that aren't backed by enough transit or peering to sustain a heavy duty cycle. This isn't a bad thing, of course, as the efficient sharing of capacity is actually the Internet's secret sauce. If we wanted peak and minimum bandwidth to be the same, we would have stuck with narrowband modems on the PSTN. But we don't, so we have to get hip to the statistical sharing of network resources.
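Here's a back-of-the-envelope sketch of that statistical sharing; every figure is invented for illustration:

```python
# Back-of-the-envelope look at statistical sharing (oversubscription).
# Every figure here is invented for illustration.

subscribers = 500          # homes sharing one aggregation link
advertised_mbps = 50       # peak rate each home is sold
avg_demand_mbps = 0.5      # typical average demand per home
aggregation_link_mbps = 1_000

peak_sum = subscribers * advertised_mbps        # if everyone peaked at once
expected_load = subscribers * avg_demand_mbps   # what actually happens

print(f"Oversubscription ratio: {peak_sum / aggregation_link_mbps:.0f}:1")
print(f"Expected load: {expected_load:.0f} of {aggregation_link_mbps} Mbps")

# The model works because duty cycles are low. A handful of users running
# near their advertised rate around the clock is what breaks it -- which is
# exactly the behavior the caps are aimed at.
```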

I’ll go out on a limb here and predict that the typical Austin consumer won’t switch to U-verse on account of TWC’s caps, but the heaviest users of gaming and BitTorrent will. And I’ll further predict that TWC’s bottom line will be glad to see them go.

The arguments against caps ultimately come down to the assertion that there’s some public good in making light users of Internet access capacity subsidize heavy users. Given that most of the heavy uses are either piracy or personal entertainment, I don’t happen to buy that argument, and moreover I find the alternatives to capping are generally less attractive, as they typically involve duty cycle restrictions of other types. The alternative that TWC should explore is peak/off peak handling that allows downloaders to utilize less restrictive bandwidth budgets at off hours.

I'd prefer a network that allowed me to label all of my traffic with the service level I expect, and that scheduled and charged it accordingly. We don't have that network yet, but we will one day, as long as neutrality regulations don't get in the way. Alternatively, a fat pipe to a Tier 1 like Verizon would be a better deal, but we can't all buy one today either.
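For what it's worth, the labeling half of that wish exists today as DSCP marking. Here's a minimal sketch using the Linux socket API, with the usual caveat that nothing obliges any network to honor the marks:

```python
import socket

# Minimal sketch of labeling traffic with a service level via DSCP marking.
# This sets the bits; whether any ISP honors or charges for them is exactly
# the open question in the post.

DSCP_EF = 46           # Expedited Forwarding: low-delay class, e.g. VoIP
DSCP_BEST_EFFORT = 0   # default class, e.g. a background download

def labeled_udp_socket(dscp):
    """Open a UDP socket whose packets carry the given DSCP code point."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the upper six bits of the old IP TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

voip = labeled_udp_socket(DSCP_EF)
bulk = labeled_udp_socket(DSCP_BEST_EFFORT)
```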

FCC fills empty job

Kevin Martin’s FCC has hired a new chief technologist, Jon Peha:

Federal Communications Commission chairman Kevin Martin named John Peha chief technologist, the senior adviser post at the commission on technology issues, based out of the Office of Strategic Planning and Policy Analysis.

I'm a bit disappointed. Peha is the guy who delivered strong testimony denouncing Comcast's management of BitTorrent without bothering to study BitTorrent's use of TCP connections. His testimony was substantially wrong on the facts. Perhaps Peha can persuade me that he means well, but his performance so far has not been encouraging.

UPDATE: What am I talking about? Well, take a look at the comments Peha filed in the Comcast matter, which are online at the FCC's web site. He understands what's at stake:

In the debate over network neutrality, both sides can make points that deserve serious consideration from policymakers. Such consideration requires clear and accurate statements of the facts, to say nothing of the broader issues at stake. Unfortunately, the public debate has often been filled with hyperbole and spin from advocates on both sides. Such rhetoric, combined with issues of technical complexity and subtlety, has made it unnecessarily difficult for policymakers to make informed decisions.

So what did he do? He misrepresented the facts and engaged in advocacy spin, to wit:

Comcast sends Device A a reset packet, with parameters set such that Device A will believe the reset is coming from Device B. Device A is therefore led to believe (incorrectly) that Device B is unwilling or unable to continue the session. The same may be occurring at Device B. Thus, the devices determine that the session must be ended, and no further packets can be sent.

It is factually incorrect to say that the process described above merely delays P2P traffic.

Bzzzttt, wrong answer. BitTorrent "sessions" consist of multiple TCP connections, so terminating one, two, or any number short of the total a given instance of BitTorrent can use at any particular time is in fact "delaying" rather than "blocking." Peha assumes that BitTorrent "sessions" are the same as TCP "sessions," and they clearly aren't. Most of what makes BitTorrent troublesome, in fact, is the large number of TCP connections it uses. It's particularly outrageous that Peha charges Comcast with misrepresentation and then goes on to misrepresent in his own right.
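A crude simulation makes the delaying-versus-blocking point. This is an abstract model of a swarm, not the BitTorrent wire protocol, and the peer counts and probabilities are arbitrary:

```python
import random

# Crude model of why forged RSTs delay rather than block a BitTorrent
# download: the client holds many TCP connections and simply reconnects.
# Peer counts and probabilities are arbitrary.

PIECES_NEEDED = 1_000
PEERS = 40          # simultaneous connections a client typically maintains
RESET_RATE = 0.10   # chance a given connection eats a forged RST this round

def rounds_to_finish():
    have, rounds = 0, 0
    while have < PIECES_NEEDED:
        rounds += 1
        for _ in range(PEERS):
            if random.random() < RESET_RATE:
                continue   # connection torn down; the client reopens it later
            have += 1      # surviving connections keep delivering pieces
    return rounds

baseline = -(-PIECES_NEEDED // PEERS)   # rounds with no interference
print(f"{rounds_to_finish()} rounds with resets vs {baseline} without")
```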

He then goes on to contradict himself and admit that it’s really “delaying” after all:

After the flow of P2P from a given sender and recipient is blocked or terminated, the recipient is likely to seek some other source for the content. If the content is extremely popular, there are many options available. Consequently, this leads to a small delay, somewhat decreasing the rate at which this recipient can gather content.

So which is it, Dr. Peha, “blocking” or “delaying?” He can’t even make up his own mind. He then goes on to whack Comcast for targeting P2P:

Comcast has elected to employ mechanisms that degrade service for a particular application, i.e. P2P, instead of relying only on congestion control mechanisms that deal with traffic of all application types. Central to their justification of this approach has been the assertion that it is specifically P2P that has an adverse impact on other traffic. This assertion is untrue.

…and he goes on to talk about blue cars and red cars, a lot of nonsensical fluff. The fact remains that P2P is the only application with such a great ability to consume bandwidth on a non-stop basis as to degrade the Internet experience of web browsing, and that's what Comcast was trying to protect.

And more significantly, Peha fails to grasp that applications are not created equal in their tolerance for delay. P2P has no particular time constraints when running as a seeder (serving files to the rest of the Internet), but interactive applications like web browsing and VoIP have very little tolerance for delay. And now we have a standard in place that requires ISPs to ignore these technical distinctions, thanks largely to the inept analysis of people like Peha.
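To put rough numbers on the delay-tolerance point (ballpark figures for illustration, not normative values):

```python
# Rough delay budgets by application class. Figures are ballpark
# illustrations of the argument, not standards.

DELAY_TOLERANCE_MS = {
    "VoIP": 150,               # conversation degrades past ~150 ms one-way
    "web browsing": 1_000,     # users notice pages slower than a second
    "BitTorrent seeding": 3_600_000,  # an hour sooner or later is immaterial
}

def serve_order(flows):
    """A scheduler that serves the least delay-tolerant traffic first."""
    return sorted(flows, key=DELAY_TOLERANCE_MS.get)

print(serve_order(["BitTorrent seeding", "VoIP", "web browsing"]))
# ['VoIP', 'web browsing', 'BitTorrent seeding']
```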

In additional remarks he confesses his ignorance of network management techniques generally and compares the Comcast method to a "man in the middle attack." If that's what he thinks, really and truly, he's seriously under-informed. A "man in the middle attack" is a means of breaking into a system, typically by intercepting credentials. What system did Comcast break into, and what password did it use to do so?

In Kevin Martin's FCC, this outlandish foolishness is a job interview. Peha is smarter than Sarah Palin, but he's no Dave Farber. Surely the FCC can do better than to employ an advocate in a position that requires depth of technical knowledge and a commitment to impartiality. Kevin Martin has failed the American people again.

A more suitable candidate exists: Just a Girl in Short Shorts Talking about Whatever:

Comcast was regulating the download speeds of peer to peer networks, such as BitTorrent. I like to pirate movies as much as next cheapskate, but I do not think it is necessary that it be given equal priority with VoIP (voice over Internet).

That’s the level of insight we need in a Chief Technologist.


Your broadband service is going to get more expensive

See my article in The Register to understand why your broadband bill is going to rise:

Peer-to-peer file sharing just got a lot more expensive in the US. The FCC has ordered Comcast to refrain from capping P2P traffic, endorsing a volume-based pricing scheme that would “charge the most aggressive users overage fees” instead. BitTorrent, Inc. reacted to the ruling by laying-off 15 per cent of its workforce, while network neutrality buffs declared victory and phone companies quietly celebrated. Former FCC Chairman Bill Kennard says the legal basis of the order is “murky.”

Comcast will probably challenge on grounds that Congress never actually told the regulator to micro-manage the Internet. In the absence of authority to regulate Internet access, the Commission has never had a need to develop rules to distinguish sound from unsound management practice. The order twists itself into a pretzel in a Kafka-esque attempt to justify sanctions in the absence of such rules.
Technically speaking, they’re very confused

The FCC’s technical analysis is puzzling, to say the least.

The order describes an all-powerful IP envelope, seeking to evoke an emotional response to Deep Packet Inspection. The order claims the DPI bugaboo places ISPs on the same moral plane as authoritarian regimes that force under-aged athletes into involuntary servitude. But this is both uninformed and misleading. Network packets actually contain several “envelopes”, one for each protocol layer, nested inside one another like Russian dolls. Network management systems examine all envelopes that are relevant, and always have, because there’s great utility in identifying protocols.
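To illustrate the nested-envelope point, here's a sketch of the header-peeling that any packet classifier performs; it opens just the Ethernet, IPv4, and TCP envelopes:

```python
import struct

# Sketch of the "nested envelopes": peel Ethernet -> IPv4 -> TCP headers
# just far enough to name the transport protocol and ports.

def classify(frame: bytes) -> str:
    ethertype = struct.unpack("!H", frame[12:14])[0]   # outermost envelope
    if ethertype != 0x0800:
        return "not IPv4"
    ip = 14                                            # IPv4 header starts here
    header_len = (frame[ip] & 0x0F) * 4
    protocol = frame[ip + 9]                           # which envelope is inside
    if protocol != 6:
        return f"IPv4, protocol {protocol}"
    tcp = ip + header_len
    src, dst = struct.unpack("!HH", frame[tcp:tcp + 4])
    return f"TCP {src} -> {dst}"                       # e.g. "TCP 51234 -> 80"
```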

The FCC’s order is especially bad for people who use both P2P and Skype. The comments lack the usual snarkiness, and I don’t know if that’s good or bad.

UPDATE: Right on cue, a price war is breaking out between cable and phone companies, according to the Wall St. Journal. I wonder if the converts are going to be the high-volume users worried about the caps, or the nice, low volume grannies every carrier wants.


Comcast defines “excessive use”

Comcast has modified its terms of use to clarify that "excessive use" means 250 GB per month. If you download more than this and are on the list of heaviest users, you'll get a letter from Comcast telling you to dial it back. If you don't, you'll be canned. Over-limit fees are not part of the deal:

In May when the cap was first rumored, there was also buzz that Comcast might try to charge customers $15 for every 10GB they went over the limit. As far as we can tell from Comcast’s announcement and the accompanying FAQ page, that is not the case… yet, anyway. Even so, Comcast’s honesty with the 250GB cap will probably only go so far, and customers with the option to do so may end up turning to an ISP such as AT&T, Verizon, or Qwest that has the infrastructure available to offer broadband without bandwidth limits.

Most people will never come anywhere close to 250 GB, but there's a lot of adverse reaction to this plan, mainly because Comcast makes it hard for you to know how much you've used in the current month. You may be able to get this info from your home router, but there's no guarantee.
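If your gateway happens to be a Linux box, do-it-yourself metering is a few lines. This is a sketch under that assumption; consumer routers range from exposing similar counters in a web UI to exposing nothing at all:

```python
# DIY usage metering, assuming a Linux gateway whose WAN port is "eth0".
# Consumer routers vary: some expose similar counters via a web UI or
# SNMP, many expose nothing.

def wan_bytes(interface="eth0"):
    """Return (received, transmitted) byte counters for an interface."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, _, counters = line.partition(":")
            if name.strip() == interface:
                fields = counters.split()
                return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise ValueError(f"no interface named {interface!r}")

rx, tx = wan_bytes()
print(f"{(rx + tx) / 1e9:.1f} GB since the counters last reset, of a 250 GB cap")
# Caveat: these counters reset on reboot, so monthly totals need logging.
```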

As for the threat that the heaviest users will take their business elsewhere, I'm guessing Comcast won't miss them a whole lot.


FCC finally issues Comcast memo

Kevin Martin and his Democratic Party colleagues at the FCC have issued their Comcast order, available at this link. They find some novel sources of authority and apply some interesting interpretations of the facts. I’ll have some detailed commentary after I’ve read it all and checked the footnotes. It’s an amusing exercise, if you like that sort of thing.

For a good summary of the order, see IP Democracy.