Remaking the Internet

The Time Warner Research Program has published my essay on Internet architecture, Remaking the Internet: Taking Network Architecture to the Next Level. The essay examines the issues that make it difficult, if not impossible, for the Internet to continue to grow and meet the challenges that innovation throws at it. The main issues, as I see them, are:

  1. IPv4 address exhaustion
  2. Routing scalability
  3. Security
  4. Congestion management

Let’s look at them one at a time.

Addressing

The problem with IPv4 addressing isn’t so much that a 32-bit address can only identify about 4 billion systems; it’s that each device that provides a service on the Internet (or on any network connected to the Internet) needs to publish its address within the context of a routing system that knows how to reach it with no preconditions and doesn’t adapt well to variations in routing. The alternative is for each system to hold only the address of a system within its own administrative domain that knows how to reach the other system. This sort of arrangement is similar to what we do with our home gateways today, but it can be made much more sophisticated.
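
To make the home-gateway analogy a little more concrete, here is a minimal sketch of that kind of indirection in Python; the class, names, and addresses are my own illustration rather than anything proposed in the essay. Outside hosts only ever see the gateway’s single published address, while the gateway keeps a private table mapping each published port to the machine inside its domain that actually provides the service.

    # Sketch of NAT-style indirection: outsiders see one published address;
    # the gateway maps a public port to the private host behind it.

    class Gateway:
        def __init__(self, public_ip):
            self.public_ip = public_ip
            self.port_map = {}   # public port -> (private ip, private port)

        def publish(self, public_port, private_ip, private_port):
            """Advertise a service without giving the server a global address."""
            self.port_map[public_port] = (private_ip, private_port)

        def deliver(self, public_port, payload):
            """Forward an inbound packet to the private host behind the mapping."""
            private_ip, private_port = self.port_map[public_port]
            return (private_ip, private_port, payload)

    gw = Gateway("203.0.113.7")                 # documentation-range addresses
    gw.publish(443, "192.168.1.20", 8443)       # web server on the home LAN
    print(gw.deliver(443, b"GET /"))            # ('192.168.1.20', 8443, b'GET /')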

Routing

The Internet’s routing problem is related to its addressing problem. It should be possible for a system to be either mobile or multi-homed (directly connected to more than one network at a time). Multi-homed systems would be capable of surviving failures on any of their connections, so a server attached to both Global Crossing and Verizon, for example, would have more resilience and greater capacity than one with a single connection. Large servers do this today, but they require a unique IP address for each network interface, and that defeats much of the potential of multi-homing. This is an example of the benefit of giving each system its own identity on the Internet independent of location. The phone network manages to do this with portable phone numbers but the Internet doesn’t.
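
Here is a toy sketch of the identity/locator separation that argument points toward: the server keeps one stable identity while its locators, one address per upstream network, come and go. The mapping, names, and addresses below are invented for illustration and don’t correspond to any particular protocol.

    # Toy identity/locator mapping: one stable identity, several locators.
    # If one upstream link fails, the identity is unchanged and traffic
    # simply shifts to a surviving locator.

    locators = {
        "server.example": ["198.51.100.9",     # address on provider A
                           "203.0.113.44"],    # address on provider B
    }
    down = {"198.51.100.9"}                    # provider A's link has failed

    def resolve(identity):
        """Return a reachable locator for a stable identity, if any survives."""
        for addr in locators[identity]:
            if addr not in down:
                return addr
        raise ConnectionError("no reachable locator for " + identity)

    print(resolve("server.example"))           # -> 203.0.113.44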

Security

Internet security is troublesome for a number of reasons, but the main one is the idea that security should be enforced only at the network endpoints. Security at the endpoints is valuable (and probably indispensable), but it’s better to check for authorization and authenticity at multiple points in the path between one end system and another. Denial of Service attacks are a key vulnerability that the present system of purely end-to-end security creates. If it were necessary to communicate authorization at each network boundary, DoS attacks could be stifled at the edge of the source network. This doesn’t impose an end to free speech, but it does provide a way for networks to limit inbound traffic and to throttle systems that are known to be abusive.
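
One simple form that boundary-level throttling could take is per-source admission control at the edge of the source network. The sketch below is my own illustration with made-up rate numbers: each source gets a token bucket, so a host flooding the network exhausts its tokens and has its traffic dropped before it crosses the boundary, while well-behaved sources are unaffected.

    # Sketch of admission control at a network boundary: each source is
    # policed by a token bucket, so abusive senders are throttled at the
    # edge before their traffic reaches the wider network.

    import time
    from collections import defaultdict

    RATE = 100.0     # packets per second each source is authorized to send
    BURST = 200.0    # short bursts above the rate are tolerated up to this

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def admit(source_ip):
        """Return True if the packet may cross the boundary, False to drop it."""
        b = buckets[source_ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False

    print(admit("198.51.100.77"))              # True for a well-behaved source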

Congestion Management

As Internet applications become more diverse, the variations in their cost sensitivity and performance requirements become apparent, but the current convention of moving packets across network boundaries at the same service level prevents rational pricing from becoming a reality. This harms consumers, especially those who are first to adopt innovative new gaming and communication platforms and those who use high-volume data backup services and the like. There is some great work being done in the Internet Engineering Task Force (IETF) on congestion management that may never see the light of day because the Internet’s architecture makes it difficult to explore alternative approaches.

There is a lot of talk about the horrors of monopoly among Internet policy scholars in the law schools, but I’ve yet to see any of them address the disadvantages of the “design monopoly” represented by TCP/IP. Like any other technical system, this one was created by human beings with limited knowledge of the future, so there are bound to be mistakes. When there is one and only one protocol to rule them all (IP), every mistake is magnified to the utmost possible degree. That’s why the things that I’ve mentioned have serious consequences even though they may seem minor to the untutored eye.

The paper addresses a number of other issues that are outside the scope of TCP/IP but nonetheless have big side-effects on network cost and utility. It’s probably hard reading, but it covers some important ground. Once again, the link is Remaking the Internet: Taking Network Architecture to the Next Level.

Has the FCC Created a Stone Too Heavy for It to Lift?

After five years of bickering, the FCC passed an Open Internet Report & Order on a partisan 3-2 vote this week. The order is meant to guarantee that the Internet of the future will be just as free and open as the Internet of the past. Its success depends on how fast the Commission can transform itself from an old school telecom regulator wired to resist change into an innovation stimulator embracing opportunity. One thing we can be sure about is that the order hasn’t tamped down the hyperbole that’s fueled the fight to control the Internet’s constituent parts for all these years.

Advocates of net neutrality professed deep disappointment that the FCC’s rules weren’t more proscriptive and severe. Free Press called the order “fake net neutrality,” Public Knowledge said it “fell far short,” Media Access Project called it “inadequate and riddled with loopholes,” and New America Foundation accused the FCC of “caving to telecom lobbyists.” These were their official statements to the press; their Tweets were even harsher.

Free marketers were almost as angry: Cato denounced the order as “speech control,” Washington Policy Center said it “fundamentally changes many aspects of the infrastructure of the Internet,” and the Reason Foundation said it will lead to “quagmire after quagmire of technicalities, which as they add up will have a toll on investment, service and development.”

Republican Congressional leaders made no secret of their displeasure with the FCC’s disregard for their will: Rep. Fred Upton (R, Michigan), the incoming Commerce Committee Chairman, called it a “hostile action against innovation that can’t be allowed to stand,” Rep. Greg Walden (R, Oregon), incoming Chairman of the Subcommittee on Communications and Technology, called it a “power grab” and vowed to hold hearings to overturn it, while Sen. Kay Bailey Hutchison (R, Texas), Ranking Member of the Senate Commerce, Science, and Transportation Committee, said the order “threatens the future economic growth of the Internet.” Setting Internet policy is indeed a Congressional prerogative rather than an agency matter, so the longer-term solution must come from the Hill, and sooner would be better than later.

Contrary to this criticism and to snarky blogger claims, not everyone was upset with the FCC’s action, coming as it did after a year-long proceeding on Internet regulation meant to fulfill an Obama campaign pledge to advance net neutrality. The President himself declared the FCC action an important part of his strategy to “advance American innovation, economic growth, and job creation,” and Senator John Kerry (D, Massachusetts) applauded the FCC for reaching consensus.

Technology industry reaction ranged from positive to resigned: Information Technology Industry Council President and CEO Dean Garfield declared the measure “ensures continued innovation and investment in the Internet,” TechNet supported it, and National Cable and Telecommunications Association head Kyle McSlarrow said it could have been much worse. At the Information Technology and Innovation Foundation, we were pleased by the promise of a relatively humble set of rules, less so with the final details; we remain encouraged by the robust process the FCC intends to create for judging complaints, one that puts technical people on the front lines. In the end, the order got the support of the only majority that counts: three FCC commissioners.

Most of us who reacted favorably acknowledged the FCC’s order wasn’t exactly as we would have written it, but accepted it as a pragmatic political compromise that produces more positives than negatives. The hoped-for closing of the raucous debate will have immense benefits on its own, as simply bringing this distracting chapter in the Internet’s story to an end will allow more time for sober discussion about the directions we’d like the Internet to take in its future development. There is no shortage of policy issues that have been cramped by the tendency to view net neutrality as the one great magic wand with the power to solve all the Internet’s problems: The FCC has work to do on freeing up spectrum for mobile networking, the Universal Service Fund needs to be reformed, and the National Broadband Plan needs to be implemented.

If the FCC’s approach proves sound, it might well be exported to other countries, forming the basis of a consistent international approach to the oversight of an international network developed on consistent standards of its own. Such an outcome would have positive consequences for the Internet standards community, which has its own backlog of unfinished business such as scalable routing, congestion management, security, and the domestication of peer-to-peer file sharing and content delivery networks to resolve. This outcome is far from inevitable; last minute rule changes make it less likely than it might have been.

The most important thing the FCC can do in implementing its system of Internet oversight is to elevate process over proscriptive rules. The traditional approach to telecom regulation is to develop a thick sheath of regulations that govern everything from the insignias on the telephone repair person’s uniform to the colors of the insulators on RJ11 cables and apply them in top-down, command-and-control fashion. Many of those on the pro-net neutrality side are steeped in telecom tradition, and they expected such an approach from the FCC for the Internet; theirs are the angry reactions.

But the Internet isn’t a telecom network, and a foot-high stack of regulations certainly would produce the negative consequences for innovation and progress the FCC’s critics have forecast. The appropriate way to address Internet regulation is to follow the model the Internet has developed for itself, based on a small number of abstract but meaningful principles (each of which is subject to change for good reason) applied by a broad-based community of experts in a collaborative, consultative setting. Internet standards are not devised in an adversarial setting populated by angels and devils locked in mortal combat; they come from a process that values “rough consensus and running code.”

The specifics of the FCC’s order nevertheless give pause to those well-schooled in networking. A few hours before the Commission’s vote, Commissioner Copps persuaded Chairman Genachowski to reverse the Waxman Bill’s presumption regarding the premium transport services that enable Internet TV and video conferencing to enjoy the same level of quality as cable TV. Where the early drafts permitted these services as long as they were offered for sale on a non-discriminatory basis, the final rule arbitrarily presumes them harmful.

The order makes a hash of the relationship between the content accelerators provided by Akamai and others and the presumptively impermissible communication accelerators that ISPs might provide one day in order to enable HD group video conferencing and similar emerging applications. The Commission majority fears that allowing network operators to offer premium transport to leading-edge apps will put the squeeze on generic transport, but it fails to consider that such potential downsides of well-accepted technical practices for Quality of Service can be prevented by applying a simple quota limit on the percentage of a pipe that can be sold as “premium.” This fact, which is obvious to skilled protocol engineers, goes unmentioned in the order.
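
To show how simple such a quota could be, here is a sketch in Python with numbers of my own choosing (nothing like this appears in the order): premium packets get strict priority, but only up to a fixed fraction of the link in each scheduling interval; anything over the quota competes as ordinary best-effort traffic.

    # Sketch of a premium quota: strict priority for premium packets, but
    # only up to PREMIUM_SHARE of the link per interval; over-quota premium
    # traffic falls back into the best-effort queue.

    LINK_CAPACITY = 1000     # packets the link can carry per interval
    PREMIUM_SHARE = 0.20     # at most 20% of the pipe may be sold as premium

    def schedule(premium, best_effort):
        """Pick this interval's packets; returns (sent_premium, sent_best_effort)."""
        quota = int(LINK_CAPACITY * PREMIUM_SHARE)
        sent_premium = premium[:quota]
        leftover = premium[quota:]                      # over-quota premium...
        remaining = LINK_CAPACITY - len(sent_premium)
        sent_be = (leftover + best_effort)[:remaining]  # ...competes as best effort
        return sent_premium, sent_be

    p, be = schedule(["p"] * 300, ["b"] * 900)
    print(len(p), len(be))    # 200 800: premium is capped at 20% of the pipe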

The poor reasoning for this rule casts doubt on the FCC’s ability to enforce it effectively without outside expertise. By rejecting Internet standards such as RFC 2475 and IEEE standards such as 802.1Q that don’t conform to the telecom activists’ nostalgic, “all packets are equal” vision of the Internet, the FCC chose to blind itself to one of the central points in Tim Wu’s “Network Neutrality, Broadband Discrimination” paper that started the fight: A neutral Internet favors content applications, as a class, over communication applications and is therefore not truly an open network. The only way to make a network neutral among all applications is to differentiate loss and delay among applications; preferably, this is done by user-controlled means. That’s not always possible, so other means are sometimes necessary as well.
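
For a sense of what “user-controlled means” can look like in practice, here is a minimal sketch of an application marking its own packets under the DiffServ architecture of RFC 2475; the Expedited Forwarding code point comes from RFC 3246, and the address and port are placeholders. The marking is a request, not a guarantee: each network along the path decides whether to honor it, and IP_TOS is only settable on platforms that expose it.

    # Minimal sketch of application-set DiffServ marking: a latency-sensitive
    # app marks its traffic Expedited Forwarding; networks may honor or
    # ignore the mark.

    import socket

    EF_DSCP = 0x2E             # Expedited Forwarding (RFC 3246), low delay
    TOS_VALUE = EF_DSCP << 2   # DSCP sits in the top six bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))   # placeholder peer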

All in all, the Commission has built a stone too heavy for it to lift all by itself. The rules have just enough flexibility that the outside technical advisory groups that will examine complaints may be able to correct the order’s errors, but to be effective, the advisors need much deeper technical knowledge than the FCC staffers who wrote the order can provide.

It’s difficult to ask the FCC – an institution with its own 75-year tradition in which it has served as the battleground for bitter disputes between monopolists and public interest warriors – to turn on a dime and embrace a new spirit of collaboration, but without such a far-reaching institutional transformation its Internet regulation project will not succeed. Those of us who work with the FCC are required to take a leap of faith that the Commission is committed to transforming itself from a hidebound analog regulator into a digital-age shepherd of innovation. Now that the Open Internet Report & Order has passed, we have no choice but to put our shoulders to the rock and help push it along. There’s no turning back now.

[cross-posted from the Innovation Policy Blog]

Premium Services

See my post at High Tech Forum, A Question of Priorities, on a discussion Jerry Brito of Mercatus started yesterday:

A very interesting part of Jerry’s argument is that because the Internet is a best-effort network, it must be impossible to prioritize across it. That leads to this speculation about changing the “dozens or hundreds of networks a packet traverses in its travels from sender to recipient.” In fact, the typical Internet packet crosses about 18 hops between source and destination, but only three or four networks on average.

A lot of people have the idea that the Internet is some sort of warm and fuzzy cloud of altruism in which people carry packets as a public service; Jonathan Zittrain promotes the idea that it’s like passing hot dogs down the line at a baseball game. According to this notion, when a Verizon customer in Boston sends a packet to an AT&T customer in California, a completely unrelated group of organizations carry the packet without any economic interest in it. So the prioritization scheme would need to be endorsed by all of them or it wouldn’t work.

This is wrong, actually.

That’s clear enough, isn’t it?
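
The arithmetic behind that hop-versus-network distinction is easy to picture. In the toy path below (the network labels are invented for illustration), a packet crosses eighteen routers but only three networks, and each of those networks has a direct commercial relationship with the one before it.

    # Toy path: many router hops, few networks. Each hop is tagged with the
    # network (AS) that operates it; the labels are made up for illustration.

    path = (["access-ISP"] * 6) + (["transit-provider"] * 6) + (["destination-network"] * 6)

    print(len(path), "hops across", len(set(path)), "networks")   # 18 hops across 3 networks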

We’ve heard this before

Check out this essay from The Atlantic, “Closing the Digital Frontier”:

Digital freedom, of the monetary and First Amendment varieties, may in retrospect have become our era’s version of Manifest Destiny, our Turner thesis. Embracing digital freedom was an exaltation, a kind of noble calling. In a smart essay in the journal Fast Capitalism in 2005, Jack Shuler shows how similar the rhetoric of the 1990s digital frontier was to that of the 19th-century frontier era. It’s a short jump from John L. O’Sullivan in 1839—“The far-reaching, the boundless will be the era of American greatness. In its magnificent domain of space and time, the nation of many nations is destined to manifest to mankind the excellence of divine principles”—to Kevin Kelly, the pioneering conceptualizer of the “hive mind” and a founding editor of Wired, writing in Harper’s in 1994, “A recurring vision swirls in the shared mind of the Net, a vision that nearly every member glimpses, if only momentarily: of wiring human and artificial minds into one planetary soul.” Two years later Barlow, a self-described advocate for “online colonists,” got down on bended knee, doublet unbraced, to beseech us mere analog mortals: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone … You have no sovereignty where we gather.”

Are there any new ideas?

Internet Congestion

I posted the second part of my Internet congestion article on High Tech Forum:

This is the second part of an examination of the nature of congestion on packet-switched networks such as the Internet. In the first part, Internet Congestion 101, we looked at an idea expressed on Chris Marsden’s blog regarding the assumption of a “reasonable level of backhaul.” As Chris acknowledges in a comment, the task of pinning down the level of shared capacity (backhaul is shared by its nature) that’s reasonable falls on the regulator rather than the engineer. The reason for this is that the way supply and demand are brought into balance on packet-switched networks is dynamic; on a circuit-switched network, demand is static per call, so the operator simply has to provision enough shared capacity to serve the number of subscribers that are likely to make calls at the network peak (probably Mother’s Day afternoon in the US). The consequence of demand exceeding supply is the inability to make calls, and that’s clearly unacceptable.
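
The circuit-switched half of that comparison can be made concrete with the classic Erlang B formula, which the excerpt doesn’t name but which is the standard tool for this kind of static provisioning: given the expected peak calling load, it tells the operator how many trunks are needed to keep the blocking probability below a target. The numbers below are purely illustrative.

    # Static provisioning on a circuit-switched network: Erlang B gives the
    # probability a call is blocked when `load` erlangs of peak traffic are
    # offered to `trunks` circuits.

    def erlang_b(trunks, load):
        """Blocking probability for `load` erlangs offered to `trunks` circuits."""
        b = 1.0
        for n in range(1, trunks + 1):
            b = (load * b) / (n + load * b)
        return b

    # How many trunks keep peak-hour blocking under 1% for 100 erlangs of load?
    trunks = 1
    while erlang_b(trunks, 100.0) > 0.01:
        trunks += 1
    print(trunks)    # roughly 117 trunks for these illustrative numbers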

Read the whole thing, slacker.

Wrong Way

The FCC’s “Third Way” rhetoric is especially interesting to ITIF because the notion that a third way was needed is something ITIF president Rob Atkinson and current Obama advisor Phil Weiser introduced in a 2006 paper. The rhetoric of the third way doesn’t align with the use of a Title II classification, however, because Section 202 has the simplistic “anti-discrimination” construction that’s telephone-specific. Packet-switched networks employ discrimination to do constructive things, so the policy issues are around the sale and transparency of discrimination as a service, not the mere fact of its existence.

The FCC is also usurping the Congressional role and defining its own mandate. See the ITIF statement:

The Federal Communications Commission, the government agency charged by Congress with regulating communications by air and wire, announced today a sweeping new program that goes far beyond its mandate. The FCC’s move is likely to lead to a lengthy and unnecessary legal battle, create needless uncertainty in the market, and detract from the FCC’s important work in implementing the recently unveiled national Broadband Plan. While the FCC is attempting to create a regulatory framework suitable for the ever changing Internet ecosystem, its proposal is tantamount to going duck hunting with a cannon.

This is a story that has become all too familiar. In the recent past, the courts have struck down punitive FCC orders against the Super Bowl “wardrobe malfunction” and, on April 6, an overwrought ruling against cable operator Comcast, which sought to preserve good Internet performance for those of its customers who use Voice over Internet Protocol (VoIP) services such as Skype and Vonage. This most recent example of FCC over-reach is a proposal that would take broadband Internet services out of their present status as lightly regulated “information services” (Title I) and plunk them into a regulatory system devised for the monopoly telephone networks of the 1930s (Title II).

Read the whole thing.

FCC Regulates Internet, Film Here

News leaked out earlier today to the effect that the FCC has decided to pursue a Title II regulatory program for the Internet, treating it in effect as if it were a telephone network. Others have called this approach “the nuclear option,” but I think it’s less severe, more like the 9/11 attacks on New York and Washington. Telecom lawyers will prosper from it, as a move of this kind is likely to take many years of court battles to squelch. Here’s a little discussion I had with a small circle of friends at the TechCrunch pad this afternoon.

Enjoy.

Going Mobile: Technology and Policy Issues in the Mobile Internet

I’m presenting a report on the Mobile Internet at the ITIF Global Command Center in Washington bright and early Tuesday morning:

The Internet is changing. In a few short years, Internet use will come predominantly from mobile devices such as smartphones and tablets rather than traditional PCs using fixed broadband. A fully mobile broadband Internet offers exciting opportunities for innovation in networks, devices, and applications, with enormous benefits for the economy and society.

The shift from a wire-centric Internet to a mobile one has profound implications for technology, policy, and applications. A new report by ITIF Research Fellow Richard Bennett explains how mobile networks are changing as they become part of the Internet, the implications mobile networking has for public policy, and how policymakers can facilitate the transition to mobile broadband.

Join us for the presentation of the report and a panel discussion among leading representatives of diverse viewpoints on Internet policy.

Date: Tuesday, March 2, 2010
Time: 9:00 am - 10:30 am
Location: 1101 K Street, Suite 610A, Washington, DC 20005

Presenter

Richard Bennett
Research Fellow, The Information Technology and Innovation Foundation
Respondents

Harold Feld
Legal Director, Public Knowledge

Morgan Reed
Executive Director, Association for Competitive Technology

Barbara Esbin
Senior Fellow and Director, Center for Communications and Competition Policy, PFF

Click here to RSVP.

Open Internet Rules

Incidentally, ITIF filed comments with the FCC in the Open Internet rule-making:

The FCC should proceed with caution in conducting its inquiry into Open Internet rules, according to comments filed by the Information Technology and Innovation Foundation today. All the evidence suggests that the Internet is thriving: network operators are investing and new applications, devices, services, and content are emerging at a dizzying rate. While there is a need to clarify the confused state of Internet regulation in the United States, there’s no compelling public interest for the FCC to adopt a stringent new regulatory framework. The Commission would do well to follow the example of fellow regulators in Canada and Europe who have recently concluded that the most sensible course for national regulators is to emphasize disclosure of terms of service and oversight of business and technical practices.

ITIF rejects the argument that the FCC lacks jurisdiction to regulate the Internet, but urges the Commission to carefully consider the evidence before enacting new regulations on Internet access services. The Internet is a complex “virtual network” designed to serve a variety of needs, and as such it does not readily lend itself to traditional telecom regulatory models. The Internet requires regulators to take a fresh approach. The first step for the Commission is to conduct a fair and probing analysis about how the Internet works today.

ITIF applauds the Commission for committing to an open process and feels that careful examination will lead to the conclusion that the Internet is fundamentally healthy.

The big issues here are that we’re not done with network engineering, nor are we done with developing the business models that make the most of network investments. So the companies who develop the insides of the Internet need to continue cooperating with the people who develop the outsides. The Verizon/Google, Comcast/BitTorrent and AT&T/Apple partnerships are instructive.
