Guest Blog at GigaOm

My guest blog at GigaOm, How Video Is Changing the Internet, deals with paid peering and the net neutrality regulations:

But paid peering may be forbidden by Question 106 of the FCC’s proposed Open Internet rules because it’s essentially two-tiered network access, Norton points out.

Paid peering illustrates how hard it is to write an anti-discrimination rule for the Internet that doesn’t have harmful side effects for all but the largest content networks. Paid peering gives a content network a better level of access to an ISP’s customers for a fee, and that fee is typically less than the price of generic access to the ISP via a transit network. The practice also reduces the load on the Internet core, so what’s not to like? Paid peering agreements should be offered for sale on a non-discriminatory basis, but they certainly shouldn’t be banned.
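The economics behind this can be sketched with a quick calculation. All of the prices below are hypothetical round numbers chosen for illustration, not quotes from any actual provider:

```python
# Illustrative comparison of transit vs. paid peering costs for a
# content network delivering traffic to one ISP's customers.
# Unit prices are hypothetical assumptions for the sake of the sketch.

TRANSIT_PRICE = 10.00      # $/Mbps/month for generic access via a transit network
PAID_PEERING_PRICE = 4.00  # $/Mbps/month for a direct paid peering connection

def monthly_cost(mbps: float, price_per_mbps: float) -> float:
    """Monthly cost of delivering a sustained traffic level at a unit price."""
    return mbps * price_per_mbps

traffic = 5_000  # Mbps of sustained traffic toward the ISP (assumed)

transit_cost = monthly_cost(traffic, TRANSIT_PRICE)
peering_cost = monthly_cost(traffic, PAID_PEERING_PRICE)

print(f"Transit:      ${transit_cost:,.2f}/month")
print(f"Paid peering: ${peering_cost:,.2f}/month (direct path, off the Internet core)")
```

Under these assumed prices, the direct paid-peering path is both cheaper for the content network and lighter on the Internet core, which is the point of the argument above.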

There’s another good treatment of the subject at Digital Society, inspired by the same conversation with peering maven Bill Norton.

UPDATE: There’s an incredible whine-a-thon in the comments to this article by Google’s Director of Network Operations Vijay Gill and some of his friends from a network operators’ IRC channel. Gill says I’ve got all the facts wrong because paid peering existed in a very limited way ten years ago under a different name. I don’t dispute that; I simply note its potential conflicts with net neutrality regulations in some guises. The issue is whether the Internet of the Future will be a slave to the Internet of the Past’s supposed insistence on a single service level for all peering agreements, not whether there has ever been such a regulation.

UPDATE 2: One thing I definitely was unclear about is whether Arbor’s estimate of traffic growth, 47%, is in line with the MINTS estimates. I conclude that overall growth is much higher than the MINTS figure because Arbor measures only inter-domain traffic at Internet Exchanges. There has obviously been a great deal of growth in the Akamai and Limelight CDNs, neither of which is measured by MINTS or Arbor, and in private peering (paid and unpaid) as well. MINTS measures more than public IX traffic, yet its figures are in line with Arbor’s data from public sources only; this difference in method combined with the similarity of the measurements suggests that MINTS may be understating total inter-domain traffic, depending on how the load falls out between public and private sources. Private connections are increasing, according to IX operators and heavy users.
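The reasoning here can be made concrete with a blended-growth calculation. The 47% figure is Arbor’s; the traffic shares and the private-link growth rate below are purely illustrative assumptions, since the whole point is that nobody measures the private side:

```python
# Hypothetical decomposition of inter-domain traffic growth.
# If public-IX traffic grows at Arbor's measured rate while private
# peering and CDN traffic (unmeasured) grows faster, total inter-domain
# growth exceeds what any public measurement shows.

public_share = 0.6     # fraction of traffic crossing public IXs (assumed)
private_share = 0.4    # fraction on private peering and CDNs (assumed)

public_growth = 0.47   # Arbor's measured annual growth at public IXs
private_growth = 0.80  # assumed faster growth on private links

total_growth = public_share * public_growth + private_share * private_growth
print(f"Blended inter-domain growth: {total_growth:.0%}")
```

Under these assumptions the blended figure lands around 60%, well above the public-only measurement; the larger the unmeasured private share, the more the public numbers understate the total.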


What Will the Internet of the Future Look Like?

If you’re in Washington, stop by the Cannon Bldg. next Monday for a discussion of the Internet of the Future:

Internet regulations pending in the United States can either facilitate or impede Internet evolution, depending on detailed definitions of packet discrimination, traffic shaping, network management, and carrier business models. A panel of distinguished engineers will share their views on the changes that must be made to Internet service to support such applications as pervasive networking, video conferencing, immersive gaming, telemedicine, and the Internet of Things. Join us for a discussion of the tension between regulation and innovation in the Internet context.

Date: Monday, November 2, 2009
Time: 10:00 AM – 11:30 AM
Location: Cannon Office Building, Room 121

* Richard Bennett
Research Fellow, Information Technology and Innovation Foundation; co-inventor of the Ethernet hub and elements of the Wi-Fi protocols

* Dr. David Farber
Distinguished Professor of Computer Science and Public Policy at the School of Computer Science, Heinz College at Carnegie Mellon University

* Dr. Charles Jackson
Adjunct Professor of Electrical Engineering, George Washington University; formerly with the FCC and House Communications Subcommittee

* Dr. Jon Peha
Chief Technologist, Federal Communications Commission and Professor of Computer Science at Carnegie Mellon University

The issues we’re going to discuss will include changes to Internet architecture and operation to support new apps and the impact, positive and negative, of pending regulations.


New Media, New Networks

I’m doing an event at the George Washington University Graduate School of Political Management in Washington, DC, on the 29th, and I expect you all to attend. It’s called New Media, New Networks: The Evolution of Content on the Internet:

In the wake of the FCC Broadband NOI, the broadband workshops on content and cybersecurity – as well as Genachowski’s recent announcement on net neutrality – several well-respected experts will gather to talk about their viewpoints on network policy – problems, opportunities and common ground. This is the most important string of events centered on this topic in over a year, and we encourage you to take part in the discussion.

My panel features Dave Farber, Harold Feld, and Robb Topolski. We’ll be discussing “Network Management and Delivering for the Consumer – The evolving role of the networks – better, smarter, faster.”

I think this should be a lively panel and a morning well-spent.


Harold Feld’s Response to “Designed for Change”

Public Knowledge’s legal director, Harold Feld, has posted an interesting response to my paper, Designed for Change: End to End Arguments, Internet Innovation, and the Net Neutrality Debate on his personal blog, Harold Feld’s Tales of the Sausage Factory. This isn’t PK’s official response, in other words.

Harold grasps my argument tolerably well, which is kind of gratifying as the paper is more technical than the typical policy tome:

Bennett’s essential argument, if I grasp it correctly, is that certain difficulties most agree are substantial problems would be far easier to solve if we gave the network operators greater freedom to manipulate traffic. While possibly true in the abstract, I am much less convinced it will play out that way in reality. For one thing, when Comcast was forced to disclose its network management practices, it turned out that Comcast was not actually experiencing significant network congestion. Instead, it was proactively solving the fear of future network congestion by going after the top 1000 users every month and targeting what it considered the most significant applications that could cause congestion in the future. That had the virtue of cheap efficiency for Comcast, but it had significant costs to others.

Here’s my response:

Thanks for the write-up, Harold; you seem to grasp the points I tried to make in the paper extremely well. I’m trying to add some technical depth to the net neutrality discussion, not necessarily answer all the questions. And I do say in the paper that the NN debate encompasses a number of issues about equities and ownership that are far outside the skill set of engineers. I’m urging caution about being too eager to offer regulatory prescriptions that aggravate the emerging technical issues. While the Internet is 35 years old, we’re facing some issues today that have never been faced before, so in some respects it might as well be only a few months old. The Internet’s applications are more diverse than ever, and so is its user population. So some regulatory frameworks that seemed sensible in the past may not have great utility in the future, and could end up limiting the Internet’s utility as an engine of free speech and political organizing.

We already have a situation on our hands where effective video distribution requires the use of a CDN like Akamai or YouTube, and even YouTube doesn’t deliver consistently good streaming quality. There are underlying technical issues that cause this to be the case, and we can’t resolve them merely by clamping down on the ISPs. Developing sensible two-way communication between national telecom regulators such as the FCC and its counterparts and the IETF may help move the ball down the field. Adding services to the network core in a principled and sound way should actually increase the value of the Internet for users as well as operators.


Designed for Change

I released a report on the Internet’s technical history Friday at the ITIF World Headquarters in Washington, DC. Thanks to a great turnout and a fantastic panel of very smart people commenting on the paper, we kicked off a lively new thread in the Net Neutrality discussion, so the launch was a success.

FCC Chairman Genachowski helped create interest in the report by unveiling his net neutrality program earlier in the week. I take issue with the Chairman’s notion that the Internet is fair to all applications; explaining why leads to a discussion of the weakness of the net neutrality arguments and the need to keep innovation alive in the network itself. You can download the report and get a video of the discussion by clicking this link to the ITIF’s web site. Enjoy.


Net Neutrality Regulations Coming

In FCC Chairman Genachowski’s long-anticipated statement on net neutrality rulemaking today, the Chairman claimed that the Internet architecture is both unbiased and future-proof. However, as ITIF notes in a forthcoming report, “Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate”, the Internet’s architecture doesn’t make it future-proof; the process of experimentation and continual improvement does. Rulemaking can seriously jeopardize Internet flexibility unless it’s undertaken with great care. In addition, it’s important to note that the Internet has always preferred some applications over others; it favors content over communication, for example. Network management is necessary as a means to overcome the Internet’s structural bias, so strict rules limiting network management to the mitigation of spam, malware, and attacks are not good enough. Carriers must be empowered to enable communications applications to compete equitably with content applications; only the carriers can provide fair access to diverse applications and users.

The approach to Internet regulation that focuses exclusively on the rights of consumers and the responsibilities of carriers obscures the fact that the Internet invests substantial network control at the intelligent edge; the Internet gives each of us the power to be a producer as well as a consumer, and with that power comes responsibility. We can innovate without permission, but we all have to behave responsibly. It goes without saying that open access networks are desirable, so the real test of the FCC’s rulemaking will come from its assessment of both user behavior and operator management practices. We have every confidence that the Commission will undertake a serious, rigorous, and fact-based rulemaking. The Internet enables innovation to the extent that carriers provide robust and reliable transport services to applications; if this capability is preserved and enhanced by a sensible network management framework, innovation will win.

How Markey III Hurts the Internet

Take a look at my analysis of Congressman Markey’s latest foray into Internet management on Internet Evolution. It’s the Big Report that will be up for a week or so. Here’s a teaser:

Reading the latest version of Congressman Ed Markey’s (D-MA) Internet Freedom Preservation Act of 2009 is like going to your high school reunion: It forces you to think about issues that once appeared to be vitally important but which have faded into the background with time.

When the first version of this bill appeared, in 2005, the Internet policy community was abuzz with fears that the telcos were poised to make major changes to the Internet. Former SBC/AT&T chairman Ed Whitacre was complaining about Vonage and Google “using his pipes for free,” and former BellSouth CEO Bill Smith was offering to accelerate Internet services for a fee.

Our friends in the public interest lobby warned us that, without immediate Congressional action, the Internet as we knew it would soon be a thing of the past.

In the intervening years, Congress did exactly nothing to shore up the regulatory system, and the Internet appears to be working as well as it ever has: New services are still coming online, the spam is still flowing, and the denial-of-service attacks are still a regular occurrence.


Nostalgia Blues

San Jose Mercury News columnist Troy Wolverton engaged in a bit of nostalgia in Friday’s paper. He pines for the Golden Age of dial-up Internet access, when Internet users had a plethora of choices:

A decade ago, when dial-up Internet access was the norm, you could choose from dozens of providers. With so many rivals, you could find Internet access at a reasonable price all by itself, without having to buy a bundle of other services with it.

There was competition because regulators forced the local phone giants to allow such services on their networks. But regulators backed away from open-access rules as the broadband era got under way. While local phone and cable companies could permit other companies to use their networks to offer competing services, regulators didn’t require them to do so and cable providers typically didn’t.

Wolverton’s chief complaint is that the DSL service he buys from Earthlink is slow and unreliable. He acknowledges that he could get cheaper service from AT&T and faster service from Comcast, but doesn’t choose to switch because he doesn’t want to “pay through the nose.”

The trouble with nostalgia is that the past never really was as rosy as we tend to remember it, and the present is rarely as bad as it appears through the lens of imagination. Let’s consider the facts.

Back in the dial-up days, there were no more than three first-class ISPs in the Bay Area: Best Internet, Netcom, and Rahul. They charged $25-30/month, on top of the $15-20 we also paid for a phone line dedicated to Internet access; we didn’t want our friends to get a busy signal when we were on-line. So we paid roughly $45/month to access the Internet at 40 Kb/s download and 14 Kb/s or so upstream.

Now that the nirvana of dial-up competition (read: several companies selling Twinkies and nobody selling steak) has ended, what can we get for $45/month? One choice in the Bay Area is Comcast, which will gladly provide you with a 15 Mb/s service for a bit less than $45 ($42.95 after the promotion ends), or a 20 Mb/s service for a bit more, $52.95. If this is “paying through the nose,” then what were we doing when we paid the same prices for 400 times less performance back in the Golden Age? And if you don’t want or need this much speed, you can get reasonable DSL-class service from a number of ISPs at roughly half the price of dial-up and 40 times the speed.
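The arithmetic behind that comparison works out like this, using only the figures from the text:

```python
# Price/performance comparison from the text: Bay Area dial-up c. 1999
# vs. Comcast cable c. 2009, at roughly the same monthly price.

dialup_price = 45.0          # $/month (ISP fee plus dedicated phone line)
dialup_down_kbps = 40.0      # dial-up download speed in Kb/s

cable_price = 42.95          # $/month post-promotion for the 15 Mb/s tier
cable_down_kbps = 15_000.0   # 15 Mb/s expressed in Kb/s

speedup = cable_down_kbps / dialup_down_kbps
price_per_mbps_then = dialup_price / (dialup_down_kbps / 1000)
price_per_mbps_now = cable_price / (cable_down_kbps / 1000)

print(f"Speed increase: {speedup:.0f}x")             # 375x, i.e. roughly 400x
print(f"Then: ${price_per_mbps_then:,.2f} per Mb/s") # $1,125.00 per Mb/s
print(f"Now:  ${price_per_mbps_now:,.2f} per Mb/s")  # $2.86 per Mb/s
```

The per-megabit price fell by a factor of several hundred at an essentially flat monthly bill, which is why the “paying through the nose” complaint doesn’t survive contact with the numbers.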

Wolverton’s column is making the rounds of the Internet mailing lists and blogs where broadband service is discussed, to mixed reviews. Selective memory fails to provide a sound basis for broadband policy, and that’s really all that Wolverton provides.
