What is the Internet?

In a defensive mode following FCC Commissioner Ajit Pai’s press conference, agency spokesperson Kim Hart repeated a common misconception to the effect that the service provided by Internet Service Providers is not part of the Internet, but simply provides access to it. “Broadband providers are not the Internet,” Hart told reporters. “The draft order ensures cable companies and broadband providers don’t have the power to act as gatekeepers to the Internet.”

Rhetorically, the false dichotomy that ISPs are either “the Internet” or “gatekeepers to the Internet” is easily discarded because they are both – and neither. Internet Service Providers are in fact parts of the Internet, just as content delivery networks, transit networks, and office networks are parts of the Internet. This is a fact of technology.

The Internet Society says the Internet is a network of networks that, in their totality, interconnect computers with each other. When we browse the web from a laptop, the software on the laptop interacts with software on the web server to fetch text and pictures from various sources (not just the server that hosts the web site), display them on the laptop, and potentially inform other sites (such as advertisers) about the interaction.
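
To make that concrete, here's a minimal sketch (my illustration, not the Internet Society's) that fetches a single page and lists the other hosts its markup references; the URL is just a placeholder, but the point holds for most commercial sites.

```python
# A toy illustration: one page view references resources on several hosts,
# not just the server that hosts the page. URL below is only a placeholder.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ResourceHosts(HTMLParser):
    """Collect the hostnames referenced by src/href attributes in a page."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if host:
                    self.hosts.add(host)

page = urlopen("https://www.example.com/").read().decode("utf-8", "replace")
parser = ResourceHosts()
parser.feed(page)
print("Hosts referenced by this page:", sorted(parser.hosts))
```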

When we conference with Skype, both parties use laptops that form parts of the vast international mesh of networks that connect us. When we share files with peer-to-peer software, our computers function both as clients and servers of files.  We attach our computers to the Internet when we plug in our Ethernet cables or when we connect to our Wi-Fi access points. The purpose of the connection is to join the Internet, not simply to consume it.

The Internet is not a unified bulletin board or a centralized system like AOL; it’s a flat, distributed mesh of co-equal systems and networks that cooperate to move information from points of origin to points of interest. The Internet is therefore not a single physical object, but an information mesh in which all end points are equal. If this were not the case, the concept of neutral networks would have no meaning.

As users, it’s important to understand what the Internet is and what it isn’t, or we will never be able to perform the tasks demanded of us in terms of network security or realize the opportunities afforded by the Internet’s unique organization. It’s obviously vital for those who regulate the Internet in whole or in part to understand how it is organized, even if doing so may erode their authority.

The FCC’s proposed “Open Internet” rules address interconnection agreements between networks, and surely even a naïve press spokesperson should acknowledge that these agreements apply to mechanisms deeply embedded in the fabric of the network of networks.

If the FCC isn’t addressing the Internet’s norms, practices, and standards of conduct, why does it call its rules “Open Internet” orders?

 

The Agenda for the Fall

What’s the House Energy and Commerce Committee doing? Check The Hill’s Hillicon Valley:

Cybersecurity, process reform at the Federal Communications Commission, privacy and spectrum policy issues will dominate the technology agenda for the House Energy and Commerce Committee this fall.

“The American public is desperate for more jobs, and Energy and Commerce Republicans understand the fundamental economic principle that reducing regulatory burdens is imperative to spur job growth and economic recovery,” said Chairman Fred Upton in a statement accompanying the fall agenda.

“Our future depends on our economic growth, and our economic growth depends on our regulatory future.”

Spectrum, spectrum, spectrum. And more spectrum.

Remaking the Internet

The Time Warner Research Program has published my essay on the Internet architecture, Remaking the Internet: Taking Network Architecture to the Next Level. The essay examines the issues that make it difficult if not impossible for the Internet to continue to grow and meet the challenges that innovation throws at it. The main issues, as I see them, are:

  1. IPv4 address exhaustion
  2. Routing scalability
  3. Security
  4. Congestion management

Let’s look at them one at a time.

Addressing

The problem with IPv4 addressing isn’t so much that 32-bit IPv4 addresses can only identify about 4 billion systems; it’s that each device that provides a service on the Internet (or on any network connected to the Internet) needs to publish its address within the context of a routing system that knows how to reach it with no preconditions and doesn’t adapt well to variations in routing. The alternative is for each system to hold the address of a system within its own administrative domain that knows how to reach the other system. This sort of arrangement is similar to what we do with our home gateways today, but it can be made much more sophisticated.
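
Here's a rough sketch of the home-gateway-style arrangement I have in mind; the addresses and ports are invented for illustration, and a real gateway is considerably more involved.

```python
# A toy translation table that lets many private hosts share one public
# address, gateway-style. Addresses are drawn from documentation ranges.
import ipaddress

total_ipv4 = 2 ** 32                      # ~4.29 billion possible addresses
print(f"IPv4 address space: {total_ipv4:,} addresses")

public_address = ipaddress.IPv4Address("203.0.113.7")
next_public_port = 40000
translations = {}                          # (private_ip, private_port) -> public_port

def outbound(private_ip, private_port):
    """Assign a public port for a private host's connection."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in translations:
        translations[key] = next_public_port
        next_public_port += 1
    return str(public_address), translations[key]

print(outbound("192.168.1.10", 51515))    # -> ('203.0.113.7', 40000)
print(outbound("192.168.1.11", 51515))    # -> ('203.0.113.7', 40001)
```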

Routing

The Internet’s routing problem is related to its addressing problem. It should be possible for a system to be either mobile or multi-homed (directly connected to more than one network at a time). Multi-homed systems would be capable of surviving failures on any of their connections, so a server attached to both Global Crossing and Verizon, for example, would have more resilience and greater capacity than one with a single connection. Large servers do this today, but they require a unique IP address for each network interface, and that defeats much of the potential of multi-homing. This is an example of the benefit of giving each system its own identity on the Internet independent of location. The phone network manages to do this with portable phone numbers, but the Internet doesn’t.
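
As a toy illustration of identity independent of location (my sketch, not anything from the paper), consider a host with one stable identity mapped to two locators, one per provider; the names and addresses are made up.

```python
# One stable host identity, mapped to whichever network attachments
# (locators) are currently alive. Names and addresses are illustrative only.

host_identity = "server.example"          # stable identity, independent of location
locators = {
    "server.example": [
        {"provider": "Provider A", "address": "198.51.100.10", "up": True},
        {"provider": "Provider B", "address": "192.0.2.20",    "up": True},
    ]
}

def reachable_addresses(identity):
    """Return every live locator for a multi-homed host, in preference order."""
    return [loc["address"] for loc in locators.get(identity, []) if loc["up"]]

print(reachable_addresses(host_identity))     # both paths usable
locators[host_identity][0]["up"] = False      # Provider A link fails
print(reachable_addresses(host_identity))     # traffic survives on Provider B
```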

Security

Internet security is troublesome for a number of reasons, but the main one is the idea that security should be enforced only at the network endpoints. Endpoint security is valuable (and probably indispensable), but it’s better to check for authorization and authenticity at multiple points in the path between one end system and another. Denial of Service attacks are a key vulnerability that the present system of end-to-end security creates. If it were necessary to communicate authorization at each network boundary, DoS attacks could be stifled at the edge of the source network. This doesn’t spell an end to free speech, but it does provide a way for networks to limit inbound traffic and to throttle systems that are known to be abusive.
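
Here's a hedged sketch of what boundary-level throttling could look like: a per-source token bucket that admits well-behaved senders and chokes off a flood. The rates and host names are invented, and a real implementation would live in routers, not Python.

```python
# Per-source rate limiting at a network boundary, token-bucket style.
import time

class TokenBucket:
    """Allow up to `rate` packets per second per source, with a small burst."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}
def admit(source):
    bucket = buckets.setdefault(source, TokenBucket(rate=100, burst=20))
    return bucket.allow()

# A well-behaved host gets through; a flooding host is quickly rate-limited.
print(sum(admit("host-normal") for _ in range(10)))     # all 10 admitted
print(sum(admit("host-flooder") for _ in range(10000))) # only a small fraction admitted
```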

Congestion Management

As Internet applications become more diverse, the variations in their cost sensitivity and performance requirements become apparent, but the current convention of moving packets across network boundaries at the same service level prevents rational pricing from becoming a reality. This harms consumers, especially those who are first to adopt innovative new gaming and communication platforms and those who use high-volume data backup services and the like. There is some great work being done in the Internet Engineering Task Force (IETF) on congestion management that may never see the light of day because the Internet’s architecture makes it difficult to explore alternative approaches.
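
A small sketch of what differentiated service levels mean in practice: a weighted scheduler that drains a latency-sensitive queue more often than a bulk queue. The class names and weights are illustrative only, not any particular IETF mechanism.

```python
# Weighted round-robin between a latency-sensitive class and a bulk class.
from collections import deque

queues = {
    "interactive": deque(f"game-pkt-{i}" for i in range(5)),   # e.g. gaming, calls
    "bulk":        deque(f"backup-pkt-{i}" for i in range(5)), # e.g. data backup
}
weights = {"interactive": 3, "bulk": 1}   # 3:1 share of transmission slots

def schedule():
    """Yield packets in proportion to each class's weight."""
    while any(queues.values()):
        for cls, weight in weights.items():
            for _ in range(weight):
                if queues[cls]:
                    yield cls, queues[cls].popleft()

for cls, pkt in schedule():
    print(cls, pkt)
```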

There is a lot of talk about the horrors of monopoly among Internet policy scholars in the law schools, but I’ve yet to see any of them address the disadvantages of the “design monopoly” represented by TCP/IP. Like any other technical system, this one was created by human beings with limited knowledge of the future, so there are bound to be mistakes. When there is one and only one protocol to rule them all (IP) every mistake is magnified to the utmost possible degree. That’s why the things that I’ve mentioned have serious consequences even though they may seem minor to the untutored eye.

The paper addresses a number of other issues that are outside the scope of TCP/IP but nonetheless have big side-effects on network cost and utility. It’s probably hard reading, but it covers some important ground. Once again, the link is Remaking the Internet: Taking Network Architecture to the Next Level.

Has the FCC Created a Stone Too Heavy for It to Lift?

After five years of bickering, the FCC passed an Open Internet Report & Order on a partisan 3-2 vote this week. The order is meant to guarantee that the Internet of the future will be just as free and open as the Internet of the past. Its success depends on how fast the Commission can transform itself from an old school telecom regulator wired to resist change into an innovation stimulator embracing opportunity. One thing we can be sure about is that the order hasn’t tamped down the hyperbole that’s fueled the fight to control the Internet’s constituent parts for all these years.

Advocates of net neutrality professed deep disappointment that the FCC’s rules weren’t more proscriptive and severe. Free Press called the order “fake net neutrality,” Public Knowledge said it “fell far short,” Media Access Project called it “inadequate and riddled with loopholes,” and New America Foundation accused the FCC of “caving to telecom lobbyists.” These were their official statements to the press; their Tweets were even harsher.

Free marketers were almost as angry: Cato denounced the order as “speech control,” Washington Policy Center said it “fundamentally changes many aspects of the infrastructure of the Internet,” and the Reason Foundation said it will lead to “quagmire after quagmire of technicalities, which as they add up will have a toll on investment, service and development.”

Republican Congressional leaders made no secret of their displeasure with the FCC’s disregard for their will: Rep. Fred Upton (R, Michigan), the incoming Commerce Committee Chairman, called it a “hostile action against innovation that can’t be allowed to stand”; Rep. Greg Walden (R, Oregon), incoming Chairman of the Subcommittee on Communications and Technology, called it a “power grab” and vowed to hold hearings to overturn it; and Sen. Kay Bailey Hutchison (R, Texas), Ranking Member of the Senate Commerce, Science, and Transportation Committee, said the order “threatens the future economic growth of the Internet.” Setting Internet policy is indeed a Congressional prerogative rather than an agency matter, so the longer-term solution must come from the Hill, and sooner would be better than later.

Contrary to this criticism and to snarky blogger claims, not everyone was upset with the FCC’s action, coming as it did after a year-long proceeding on Internet regulation meant to fulfill an Obama campaign pledge to advance net neutrality. The President himself declared the FCC action an important part of his strategy to “advance American innovation, economic growth, and job creation,” and Senator John Kerry (D, Massachusetts) applauded the FCC for reaching consensus.

Technology industry reaction ranged from positive to resigned: Information Technology Industry Council President and CEO Dean Garfield declared the measure “ensures continued innovation and investment in the Internet,” TechNet supported it, and National Cable and Telecommunications Association head Kyle McSlarrow said it could have been much worse. At the Information Technology and Innovation Foundation, we were pleased by the promises of a relatively humble set of rules, less so with the final details; we remain encouraged by the robust process the FCC intends to create for judging complaints, one that puts technical people on the front lines. In the end, the order got the support of the only majority that counts: three FCC commissioners.

Most of us who reacted favorably acknowledged the FCC’s order wasn’t exactly as we would have written it, but accepted it as a pragmatic political compromise that produces more positives than negatives. The hoped-for closing of the raucous debate will have immense benefits on its own, as simply bringing this distracting chapter in the Internet’s story to an end will allow more time for sober discussion about the directions we’d like the Internet to take in its future development. There is no shortage of policy issues that have been cramped by the tendency to view net neutrality as the one great magic wand with the power to solve all the Internet’s problems: The FCC has work to do on freeing up spectrum for mobile networking, the Universal Service Fund needs to be reformed, and the National Broadband Plan needs to be implemented.

If the FCC’s approach proves sound, it might well be exported to other countries, forming the basis of a consistent international approach to the oversight of an international network developed on consistent standards of its own. Such an outcome would have positive consequences for the Internet standards community, which has its own backlog of unfinished business such as scalable routing, congestion management, security, and the domestication of peer-to-peer file sharing and content delivery networks to resolve. This outcome is far from inevitable; last minute rule changes make it less likely than it might have been.

The most important thing the FCC can do in implementing its system of Internet oversight is to elevate process over proscriptive rules. The traditional approach to telecom regulation is to develop a thick sheath of regulations that govern everything from the insignias on the telephone repair person’s uniform to the colors of the insulators on RJ11 cables and apply them in top-down, command-and-control fashion. Many of those on the pro-net neutrality side are steeped in telecom tradition, and they expected such an approach from the FCC for the Internet; theirs are the angry reactions.

But the Internet isn’t a telecom network, and a foot-high stack of regulations certainly would produce the negative consequences for innovation and progress the FCC’s critics have forecast. The appropriate way to address Internet regulation is to follow the model that the Internet has developed for itself, based on a small number of abstract but meaningful principles (each of which is subject to change for good reason) applied by a broad-based community of experts in a collaborative, consultative setting. Internet standards are not devised in an adversarial setting populated by angels and devils locked into mortal combat; they come from a process that values “rough consensus and running code.”

The specifics of the FCC’s order nevertheless give pause to those well-schooled in networking. A few hours before the Commission’s vote, Commissioner Copps persuaded Chairman Genachowski to reverse the Waxman Bill’s presumption regarding the premium transport services that enable Internet TV and video conferencing to enjoy the same level of quality as cable TV. Where the early drafts permitted these services as long as they were offered for sale on a non-discriminatory basis, the final rule arbitrarily presumes them harmful.

The order makes a hash of the relationship between the content accelerators provided by Akamai and others and the presumptively impermissible communication accelerators that ISPs might one day provide to enable HD group video conferencing and similar emerging applications. The Commission majority fears that allowing network operators to offer premium transport to leading-edge apps will put the squeeze on generic transport, but it fails to consider that such potential downsides of well-accepted technical practices for Quality of Service can be prevented by applying a simple quota limit on the percentage of a pipe that can be sold as “premium.” This fact, which is obvious to skilled protocol engineers, goes unmentioned in the order.
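
For the record, here's roughly what such a quota limit amounts to; the link rate and the 20 percent cap are assumed numbers of my own, not anything in the order.

```python
# Cap the share of a link that premium traffic may occupy so generic
# traffic can never be squeezed out. Figures are assumptions for illustration.
LINK_CAPACITY_MBPS = 100
PREMIUM_CAP_FRACTION = 0.20               # at most 20% of the pipe may be premium

def allocate(premium_demand_mbps, generic_demand_mbps):
    """Serve premium traffic first, but never beyond its quota of the link."""
    premium_served = min(premium_demand_mbps,
                         PREMIUM_CAP_FRACTION * LINK_CAPACITY_MBPS)
    generic_served = min(generic_demand_mbps, LINK_CAPACITY_MBPS - premium_served)
    return premium_served, generic_served

print(allocate(premium_demand_mbps=10, generic_demand_mbps=95))  # (10, 90)
print(allocate(premium_demand_mbps=50, generic_demand_mbps=95))  # (20.0, 80.0)
```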

The poor reasoning for this rule casts doubt on the FCC’s ability to enforce it effectively without outside expertise. By rejecting Internet standards such as RFC 2475 and IEEE standards such as 802.1Q that don’t conform to the telecom activists’ nostalgic, “all packets are equal” vision of the Internet, the FCC chose to blind itself to one of the central points in Tim Wu’s “Network Neutrality, Broadband Discrimination” paper that started the fight: A neutral Internet favors content applications, as a class, over communication applications and is therefore not truly an open network. The only way to make a network neutral among all applications is to differentiate loss and delay among applications; preferably, this is done by user-controlled means. That’s not always possible, so other means are sometimes necessary as well.
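
For what it's worth, here's a sketch of one user-controlled means of differentiation: an application marking its own packets with a DiffServ code point (RFC 2475) so that networks which honor the marking can give delay-sensitive traffic appropriate treatment. Whether a given network honors the marking is, of course, a separate question, and the address and port below are placeholders.

```python
# Application-side DiffServ marking via the IP TOS byte.
# IP_TOS is exposed on most Unix-like platforms; Windows may ignore it.
import socket

EF_DSCP = 46                      # Expedited Forwarding, for low-delay traffic
TOS_VALUE = EF_DSCP << 2          # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Packets sent on this socket now carry the EF marking in their IP header.
sock.sendto(b"voice sample", ("192.0.2.50", 5004))   # placeholder address/port
```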

All in all, the Commission has built a stone too heavy for it to lift all by itself. The rules have just enough flexibility that the outside technical advisory groups that will examine complaints may be able to correct the order’s errors, but to be effective, the advisors need much deeper technical knowledge than the FCC staffers who wrote the order can provide.

It’s difficult to ask the FCC – an institution with its own 75-year tradition in which it has served as the battleground for bitter disputes between monopolists and public interest warriors – to turn on a dime and embrace a new spirit of collaboration, but without such a far-reaching institutional transformation its Internet regulation project will not be successful. Those of us who work with the FCC are required to take a leap of faith to the effect that the Commission is committed to transforming itself from a hidebound analog regulator into a digital age shepherd of innovation. Now that the Open Internet Report & Order has passed, we have no choice but to put our shoulders to the rock to help push it along. There’s no turning back now.

[cross-posted from the Innovation Policy Blog]

Premium Services

See my post at High Tech Forum, A Question of Priorities, on a discussion Jerry Brito of Mercatus started yesterday:

A very interesting part of Jerry’s argument is that as the Internet is a best-efforts network, it must be impossible to prioritize across it. That leads to this speculation about changing the “dozens or hundreds of networks a packet traverses in its travels from sender to recipient.” In fact, the typical Internet packet crosses about 18 hops between source and destination, but only three or four networks on average.

A lot of people have the idea that the Internet is some sort of warm and fuzzy cloud of altruism in which people carry packets as a public service; Jonathan Zittrain promotes the idea that it’s like passing hot dogs down the line at a baseball game. According to this notion, when a Verizon customer in Boston sends a packet to an AT&T customer in California, a completely unrelated group of organizations carry the packet without any economic interest in it. So the prioritization scheme would need to be endorsed by all of them or it wouldn’t work.

This is wrong, actually.

That’s clear enough, isn’t it?

Does Free Press Coordinate Strategy with Google?

The ethics complaints against former Google lobbyist Andrew McLaughlin aren’t going away.

Over a two-week period in February 2010, McLaughlin exchanged numerous emails with Free Press director Ben Scott, another prominent advocate for Net neutrality who has coordinated policy strategy with Google and attended joint meetings with Google at the FCC and White House on numerous occasions. They agreed to meet outside the White House at a nearby coffee shop to discuss Internet policy.

It’s interesting to see this level of coordination, given all of Free Press’ claims of independence. But it’s not unusual for members of a coalition to talk to each other; they’re often directed to do so by the government. In a former life, I was often in that position, so this still has a “gambling in Casablanca” feel.

We’ve heard this before

Check out this essay from The Atlantic, “Closing the Digital Frontier”:

Digital freedom, of the monetary and First Amendment varieties, may in retrospect have become our era’s version of Manifest Destiny, our Turner thesis. Embracing digital freedom was an exaltation, a kind of noble calling. In a smart essay in the journal Fast Capitalism in 2005, Jack Shuler shows how similar the rhetoric of the 1990s digital frontier was to that of the 19th-century frontier era. It’s a short jump from John L. O’Sullivan in 1839—“The far-reaching, the boundless will be the era of American greatness. In its magnificent domain of space and time, the nation of many nations is destined to manifest to mankind the excellence of divine principles”—to Kevin Kelly, the pioneering conceptualizer of the “hive mind” and a founding editor of Wired, writing in Harper’s in 1994, “A recurring vision swirls in the shared mind of the Net, a vision that nearly every member glimpses, if only momentarily: of wiring human and artificial minds into one planetary soul.” Two years later Barlow, a self- described advocate for “online colonists,” got down on bended knee, doublet unbraced, to beseech us mere analog mortals: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone … You have no sovereignty where we gather.”

Are there any new ideas?

Internet Congestion

I posted the second part of my Internet congestion article on High Tech Forum:

This is the second part of an examination of the nature of congestion on packet switched networks such as the Internet. In the first part, Internet Congestion 101, we looked at an idea expressed on Chris Marsden’s blog regarding the assumption of a “reasonable level of backhaul.” As Chris acknowledges in a comment, the task of pinning down the level of shared capacity (backhaul is shared by its nature) that’s reasonable falls on the regulator rather than the engineer. The reason for this is that the way supply and demand are brought into balance on packet switched networks is dynamic; on a circuit switched network, demand is static per call, so the operator simply has to provision enough shared capacity to serve the number of subscribers that are likely to make calls at the network peak (probably Mother’s Day afternoon in the US). The consequence of demand exceeding supply is the inability to make calls, and that’s clearly unacceptable.
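
For the curious, here's a worked sketch of that circuit-switched provisioning arithmetic using the standard Erlang B model; the subscriber counts and per-line load are invented for illustration.

```python
# Erlang B: blocking probability for a trunk group offered a given load.
def erlang_b(traffic_erlangs, circuits):
    """Probability a call is blocked when `circuits` trunks carry the offered load."""
    blocking = 1.0
    for k in range(1, circuits + 1):
        blocking = (traffic_erlangs * blocking) / (k + traffic_erlangs * blocking)
    return blocking

# Say 1,000 subscribers each average 0.1 Erlangs at the Mother's Day peak:
offered_load = 1000 * 0.1                 # 100 Erlangs of peak demand
for circuits in (100, 110, 120):
    print(circuits, "circuits ->", round(erlang_b(offered_load, circuits), 4), "blocking")
```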

Read the whole thing, slacker.

Check out my New Blog

I’ve got a new technology blog called High Tech Forum where I publish news and analysis of technology developments that affect networking and communications.

It’s a multi-user enterprise, so I’m happy to run articles by others, regardless of point of view, as long as they’re informative. It’s not a policy-oriented blog, it’s a “just the facts, ma’am” blog. So far I’ve got articles by Larry Roberts and Chuck Jackson, so you could be next.