What’s the real cost of internet video?

Check this article from The Register on the cost of Internet video:

Users want it, but today, the business models give operators the incentive to throttle, rather than encourage, high-bandwidth uses of the internet. MIT calls this the ‘Broadband Incentive Problem’.

Last July, my company IP Development published research into the cost of 1080p HDTV [PDF, 128k] delivered over a UK LLU network and came to a figure of £2.10 per two-hour film. This research was of interest to a wide community, from ISPs who bear this cost to internet evangelists who believed that we were somehow in the pocket of the big telcos in the Net Neutrality debate.

(We were not paid by anyone for that research – but the conclusions then and now clearly support the view that Net Neutrality is likely to neuter the internet.)

The point is that such figures are not economically viable, and if this is the best the net can do, then so long and thanks for all the fish…
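For anyone who wants to sanity-check a figure like that, the arithmetic is simple. Here's a back-of-envelope sketch; the stream bitrate and the per-gigabyte network cost are my illustrative assumptions, not numbers taken from the research itself:

```python
# Back-of-envelope check on a per-film delivery cost figure.
# The bitrate and per-GB network cost are illustrative assumptions,
# not figures from the IP Development research.
BITRATE_MBPS = 8.0          # assumed 1080p stream bitrate
DURATION_S = 2 * 60 * 60    # two-hour film
COST_PER_GB = 0.29          # assumed network cost in GBP per gigabyte

gigabytes = BITRATE_MBPS * DURATION_S / 8 / 1000   # Mb -> MB -> GB (decimal)
cost = gigabytes * COST_PER_GB
print(f"{gigabytes:.1f} GB transferred, about £{cost:.2f} per film")
```

With those assumptions the film is about 7 GB on the wire, and a per-GB cost around 29p lands you right in the neighbourhood of the £2.10 figure.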

And note the utilization graph, with spikes at odd times of the day.

Tim Wu’s Bafflement

Writing on the AT&T discussion of pirated material, Professor Tim Wu professes amazement:

No one knows exactly what AT&T is proposing to build. But if the company means what it says, we’re looking at the beginnings of a private police state. That may sound like hyperbole, but what else do you call a system designed to monitor millions of people’s Internet consumption? That’s not just Orwellian; that’s Orwell.

We’d like to help. Wu assumes, incorrectly I think, that AT&T would implement this plan by snooping traffic and separating legal from illegal packets on the network. This accounts for the fears he expresses about police states and Orwellian jack boots and that sort of thing.

But let’s suppose AT&T went about finding the pirated movies the same way downloaders do, at the source. They would check Mininova and The Pirate Bay every day for top downloads, and the associated trackers would gladly give them a list of all the uploaders, because that’s what trackers do. From that information, all AT&T would have to do is match IP addresses with their customers and take appropriate action.

I’m not saying that AT&T should do this, simply that if they were to target pirated content they wouldn’t do it the way Wu imagines; snooping packets won’t work in the general case because they can be encrypted. But the whole purpose of a BitTorrent tracker is to point you to the source of the pirated stuff, so the information is just sitting there, ripe for the picking.
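To make the point concrete, here's a sketch of that last matching step. The address blocks and uploader IPs below are drawn from the reserved documentation ranges, purely illustrative, not anybody's real customers:

```python
# Sketch of the matching step described above: checking a tracker's list
# of uploader IPs against an ISP's own address blocks. All addresses are
# illustrative (RFC 5737 documentation ranges), not real customers.
import ipaddress

customer_blocks = [ipaddress.ip_network("198.51.100.0/24"),
                   ipaddress.ip_network("203.0.113.0/24")]

uploader_ips = ["198.51.100.42", "192.0.2.7", "203.0.113.9"]

def on_network(ip_str, blocks):
    ip = ipaddress.ip_address(ip_str)
    return any(ip in block for block in blocks)

# Keep only the uploaders who fall inside this ISP's address space
matches = [ip for ip in uploader_ips if on_network(ip, customer_blocks)]
print(matches)
```

That's the whole job: no packet inspection anywhere, just a list comparison against the addresses the tracker hands out for free.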

Toll Roads

I’ll be speaking at the Toll Roads Symposium on the legal and political debate over network neutrality in Frisco on the 26th. My panel is:

Panel 3: Social Control: Issues of Privacy & Autonomy

• Scott Cleland, Founder of Precursor LLC and NetCompetition.org
• Kevin Bankston, Staff Attorney, Electronic Frontier Foundation
• Bob Frankston, Net Neutrality Squad, Project of People for Internet Responsibility
• Colette Vogele, Vogele and Associates
• Richard Bennett

Moderator: Susan Freiwald, Professor, University of San Francisco School of Law

Cleland is a solid guy, Frankston is a character, and I don’t know the others.

Net Neutrality Backlog

I need to find some time to respond to the Electronic Frontier Foundation’s defense of packet-drop in some detail, but for now it’s sufficient to note that they refuse to admit that Layer Two networks have traffic problems.

The last Register piece has some people at the Net Neutrality Squad excited, most notably Bob Frankston, who says: “a great engineer questions the constraints and gets fired because the constraints serve a policy need higher than mere science or reality.” Far be it from me to let “mere reality” bias my perspective.

The folks at Public Knowledge and Open Left (Matt Stoller) are up in arms about the success of Connect Kentucky, a perfectly benign public-interest project bringing broadband to the rural residents of the namesake state. The fuss is explained by Dr. Robert Atkinson. It illustrates the principle that no good deed should go unpunished.

Scott Cleland has a fascinating piece on the parallels between process scheduling in operating systems and packet scheduling in cloud computing that’s amazingly insightful. Scott’s not a software engineer, but he thinks like one of us.
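For the software engineers in the audience, the parallel is easy to make literal: the same round-robin discipline an OS scheduler uses to share a CPU among processes will share a link among packet queues. A toy sketch, with made-up flow names:

```python
# The parallel in a nutshell: the round-robin discipline that an OS uses
# to share a CPU among processes is the same one that shares a link
# among per-flow packet queues.
from collections import deque

def round_robin(queues, quantum=1):
    """Serve up to `quantum` items (packets or timeslices) per queue per pass."""
    order = []
    queues = [deque(q) for q in queues]
    while any(queues):
        for q in queues:
            for _ in range(quantum):
                if q:
                    order.append(q.popleft())
    return order

# Three "flows" (or processes) with unequal backlogs: the heavy one
# can't starve the light ones, it just takes more passes to drain.
print(round_robin([["a1", "a2", "a3"], ["b1"], ["c1", "c2"]]))
```

Swap "timeslice" for "transmission opportunity" and the two problems are nearly the same problem, which is presumably why Cleland's instincts serve him well here.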

Andrew Odlyzko’s latest paper on net neutrality is severely flawed, with significant errors of fact. It needs a serious examination, but nobody’s happy with it on either side of the debate.

Tim Swanson has an interesting piece on the economics of networking on the Ludwig Mises web site that’s fine reading. Market solutions to these problems are always preferable.

Ok, back to work.

Who Manages the First Mile?

A discussion at CES concerning the load that pirated movies place on carrier networks has generated a bit of controversy, beginning at the NY Times:

For the last 15 years, Internet service providers have acted – to use an old cliche – as wide-open information super-highways, letting data flow uninterrupted and unimpeded between users and the Internet.

But I.S.P.’s may be about to embrace a new metaphor: traffic cop.

At a small panel discussion about digital piracy at NBC’s booth on the Consumer Electronics Show floor, representatives from NBC, Microsoft, several digital filtering companies and the telecom giant AT&T discussed whether the time was right to start filtering for copyrighted content at the network level.

Of course, most of us do know that the Internet and its related privately-owned carrier networks have never really been as wild and free as our network-romantic set would have us believe, but yes, carriers are dealing with extraordinary loads in the upstream direction today, and as most of the “content” is illegal, there is a convergence of interests between carriers and copyright holders.

As far as I gather, this was a hypothetical discussion, but that doesn’t stop the prophets of doom from printing currency against it. The most interesting discussion turned up in the comments at David Weinberger’s blog, in a conversation between Seth Finkelstein, David Isenberg, and Harold Feld. The conclusion that the Davids and Harold reached is that end users should administer the Layer Two network:

So rather than turn traffic shaping and QoS over to the carriers, or to third parties whose choices will distort the market away from true user preferences, why not turn QoS decisions over to the users themselves?

We have already seen the most primitive forms of this idea in the development of edge-based QoS solutions and metered pricing. Things like caching technology (move the content closer), distributed computing (distribute the work among many more computers), and virtual private networks (control of security and privacy by the communicating machines at the edges) are all ways in which end users of various kinds achieve the quality of service they want. Certainly these are not perfect solutions, and network operators can replicate them. But, rather like the magicians of Pharaoh replicating the trick of Moses and Aaron of turning a staff into a snake, the fact that network operators can replicate these technologies is not the point. The point is that these primitive first steps at end-user managed QoS rather than network provided QoS are a sign that the folks on the edge do not need to remain in bondage to the telcos and cable cos in order to enjoy QoS. Let end users go and they will provide for themselves.

I don’t see that as practical, but there is a workable middle way. More on that later.
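For what it's worth, the primitive behind most edge-based traffic shaping, whoever operates it, is the token bucket, and it's simple enough to sketch. The rates here are arbitrary illustrations:

```python
# A token bucket, the primitive behind most traffic shaping: tokens
# accumulate at a fixed rate, and a packet may be sent only if enough
# tokens are on hand, bounding both long-term rate and burst size.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # long-term rate allowed
        self.burst = burst_bits     # how much can go out in one burst
        self.tokens = burst_bits    # start with a full bucket
        self.last = 0.0

    def allow(self, now, packet_bits):
        # refill tokens for elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True          # packet conforms: send it
        return False             # over the rate: delay or drop it

bucket = TokenBucket(rate_bps=1_000_000, burst_bits=12_000)
print(bucket.allow(0.0, 12_000))   # full bucket, burst allowed
print(bucket.allow(0.001, 12_000)) # only ~1,000 tokens refilled: blocked
```

The interesting policy question isn't whether this mechanism works, it's who sets the rate and burst parameters: the user, the carrier, or both by agreement.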

Net Neutrality Loses New Hampshire

Matt Stoller, the most intense of the pro-regulation, net neutrality advocates, crowed after the un-democratic Iowa caucus. Net Neutrality Wins Iowa:

Right now the telecom lobbyists that control the Republican Party and the Clinton wing of the Democratic Party should be extremely worried. On Thursday, they were soundly thumped in the most important caucus of the year, in both parties.

If that were true, then surely the New Hampshire victories for Clinton and McCain must be the death-knell of the regulation he desires. CNET’s Anne Broache nails it in a fine piece on the distinct lack of passion New Hampshirites have for obscure technical regulation, New Hampshire voters: Net neutrality? Huh?

At a booth across the chrome-accented restaurant, Kelly Parsons, 32, cradled her infant son, Christian, and admitted she’d never heard of Net neutrality either. Parsons professed to be reasonably tech-savvy but said technology policy issues had nothing to do with her decision to support Mitt Romney. Illegal immigration and terrorism were among her top concerns for the next president to confront.

Stoller famously ranked net neutrality as the number one issue for the Democrats in Congress following their takeover of the House, ahead of wages, health care, the environment, or Iraq:

On the one hand, we have no legislative agenda except for net neutrality. Since we locked that down as an important issue before the election, our chances are pretty good (though it’s not by any means a slam dunk). This means that we are free to pick our fights, flexible, and not bogged down by a long list of people to satisfy. We can ride public opinion to get what we want, with agility and intelligence.

New Hampshire voters clearly don’t share his agenda, nor do Democrats generally.

How silly is this?

An Op-Ed about net neutrality in last week’s Seattle Times by Avis Yates Rivers makes all the obvious points: solution in search of a problem, treatment worse than the disease, etc., including this one:

Because a network’s bandwidth is a finite resource, the management tools function like traffic lights and yield signs. They seek an orderly way to allow heavy P2P-like traffic to flow without interfering with other users. At peak times, these tools send a signal to a high-bandwidth user that they will find the requested content when a lane opens on the information highway.

But wonders never cease, and David Isenberg found it wrong and offensive:

So mostly Yates Rivers is wrong when she says that bandwidth is finite. Where it is finite, the blame lies at the feet of the telcos . . . well, not really, they wouldn’t be so stupid as to build such abundance that they have nothing to sell anymore. The blame lies with our limited vision — we have affordable, mature technology that would make bandwidth scarcity as obsolete as horsepower from horses.

Can Isenberg really be this stupid? He worked for Bell Labs for 12 years, presumably doing something more technical than sweeping floors, but he still makes bonehead statements like this. I can only conclude that he’s lying deliberately.

Yes, Virginia, bandwidth is finite and it always will be. Even when we have gigabit access connections, we’re still counting on everybody not using theirs full-tilt at the same time. For every consumer of data there’s a producer, and for every pair of consumer/producers there’s a carrier, and every link has its limit. Beef up the core, and the access network becomes a bottleneck. Beef up the access network and the core becomes a bottleneck. That’s life.
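The arithmetic behind "counting on everybody not using theirs full-tilt" is worth making explicit. The subscriber counts and link speeds below are illustrative, not any particular carrier's:

```python
# Why gigabit access doesn't abolish scarcity: the arithmetic of
# oversubscription. All figures are illustrative.
SUBSCRIBERS = 500            # homes sharing one aggregation link
ACCESS_MBPS = 1000           # each home's access speed
UPLINK_MBPS = 10_000         # the shared link behind them

contention = SUBSCRIBERS * ACCESS_MBPS / UPLINK_MBPS
per_user_at_peak = UPLINK_MBPS / SUBSCRIBERS

print(f"contention ratio {contention:.0f}:1")
print(f"if everyone ran full-tilt: {per_user_at_peak:.0f} Mb/s each")
```

A 50:1 contention ratio is perfectly sound engineering precisely because everybody doesn't transmit at once, and the moment they do, some link somewhere is the bottleneck. That's the point.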

Internet Fairness, or Not

My latest piece for The Register is up: Dismantling a Religion: The EFF’s Faith-Based Internet. In it, I explore the difference between the way the EFF wants to manage the Internet and the new way the IETF folks are discussing.

Bottom line: the Internet has never had a user-based fairness system, and it needs one. All networks need one, actually.

On that note, the TCP-Friendly folks remind us:

The network will soon begin to require applications to perform congestion control, and those applications which do not perform congestion control will be harshly penalized by the network (probably in the form of preferentially dropping their packets during times of congestion).

An actively-managed Internet is a functional Internet.
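"Perform congestion control" means, concretely, something like TCP's additive-increase/multiplicative-decrease (AIMD) rule: probe gently for more bandwidth, back off hard on a loss signal. A toy simulation of one flow's sending window, with an arbitrary capacity figure:

```python
# TCP's additive-increase/multiplicative-decrease (AIMD) rule, sketched
# as a toy simulation of a single flow's sending window. A window above
# "capacity" stands in for a congestion signal (packet loss).
def aimd(rounds, capacity, window=1.0):
    history = []
    for _ in range(rounds):
        history.append(window)
        if window > capacity:
            window /= 2          # loss detected: back off hard
        else:
            window += 1          # no loss: probe for more bandwidth
    return history

# The window saw-tooths around the link capacity instead of overrunning it.
trace = aimd(rounds=12, capacity=8.0)
print([round(w, 1) for w in trace])
```

Applications that follow a rule like this share a congested link tolerably; applications that don't are exactly the ones the TCP-Friendly folks propose to penalize.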

Faster, More Symmetric Networking

Would you like to have a fat Internet connection to your home? If we can agree that 100 Mb/s in both directions would qualify as “fat,” you should be able to have your way in a year or two, three at the most. Here’s a quick survey of the alternatives.

First, we have a clue as to why Comcast still uses relatively pokey DOCSIS 1.1: it’s skipping the faster and more symmetric DOCSIS 2.0 and going straight to the nirvana of even faster connections with DOCSIS 3.0:

SANTA CLARA, Calif. — Comcast Corp. (Nasdaq: CMCSA, CMCSK) plans to have a Docsis 3.0 infrastructure in place in about 20 percent of its footprint by the end of 2008, teeing up cable modem services capable of delivering shared Internet speeds in excess of 100 Mbit/s.

The nation’s largest MSO will be 3.0-capable in one-in-five homes by the end of next year, according to Comcast Chief Technology Officer Tony Werner, the keynoter here Wednesday morning at the first CableNEXT conference.

(H/T Engadget)

This should make them competitive with FTTH for a good while, but not forever:

While we’ve seen all sorts of blazing feats over fiber here lately, it’s not often that such wide open bandwidth gets piped directly to a home, but a 75-year old Swede recently changed all that when she had a 40Gbps connection installed in her domicile.

She can download a DVD in two seconds.

Closer to home, Verizon is going faster and more symmetric with FiOS:

With the help of the symmetrical services, users can benefit from equally fast downstream and upstream connections of up to 15 megabits per second (Mbps) or up to 20 Mbps, based on the state where the service is sold.

DSL over copper isn’t sitting still either:

University of Melbourne research fellow Dr John Papandriopoulos is in the throes of moving to Silicon Valley after developing an algorithm to reduce the electromagnetic interference that slows down ADSL connections.

Most ADSL services around the world are effectively limited to speeds between 1 and 20Mbps, but if Dr Papandriopoulos’s technology is successfully commercialised that speed ceiling would be closer to 100Mbps.

Papandriopoulos is joining ASSIA, a company founded by DSL inventor John Cioffi (and named after his wife and EVP.) ASSIA currently develops tools, but I suspect that will change. (Assia, on the other hand, is an aesthete.)

And wireless is on the move as well. Corporate products conforming to the new ~100 Mb/s (real speed) 802.11n standard are starting to roll out in trials, 4G cellular network deployments are starting, and UWB for the home is available at multi-hundred megabit/sec rates.
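To put the survey's numbers on one scale, here's the time to move a 4.7 GB DVD image at each advertised rate, assuming decimal units and ideal conditions (no protocol overhead, no sharing):

```python
# Time to transfer a 4.7 GB DVD image at each advertised rate.
# Decimal units, ideal conditions: no overhead, no contention.
DVD_BITS = 4.7e9 * 8

for name, mbps in [("FiOS symmetric tier", 20),
                   ("DOCSIS 3.0 (shared)", 100),
                   ("802.11n (real rate)", 100),
                   ("40 Gb/s fibre", 40_000)]:
    seconds = DVD_BITS / (mbps * 1e6)
    print(f"{name:22s} {seconds:8.1f} s")
```

The 40 Gb/s figure works out to just under a second per DVD in the ideal case, which squares with the "two seconds" claim about the Swedish connection once you allow for real-world overhead.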

One has the feeling that the residential network regulators are already fighting yesterday’s war, and that the network pessimists have no basis for their fears.