Does the Internet need saving?

Doc Searls is writing a follow-up on last year’s Saving the Net piece and he wants your suggestions:

So I just decided I’ll run a first anniversary follow-up on the piece, over at Linux Journal. But first I’d like to hear from the rest of y’all. Tag your posts savingthenet and I’ll find them.

Mine is simple: what makes us think the Internet needs saving? All the empirical measures say it’s thriving: there are more users than ever before, more web sites, more blogs, more broadband, lower prices, and more ways to get broadband thanks to EVDO, public WiFi, and WiMax (coming soon to an ISP near you).

The biggest threat to the Internet – indeed, the only one – is the misguided attempt to regulate ISPs in order to prevent the imaginary threat to the imaginary principle of net neutrality, but it’s unlikely to go anywhere, even if the Dems take back the Senate.

I’d be looking at things like terrorist and criminal uses of the Internet, including spam and phishing, because we’re more likely to see a real encroachment on personal freedom of expression over the Internet in response to the real abuses of bad actors than for any other reason.

But the bottom line is that the Internet is fundamentally healthy, and anybody who tells you otherwise probably has a personal agenda, because the only way to sustain the “Internet at Risk” argument is to give more weight to the future than to the present. We’ve been hearing “Internet at Risk” arguments for ten years (if not longer), and nothing of that nature has come to pass. It’s simply crying wolf at this point, so get back to me when you have evidence of harm and not just imagination.

A Tube Full of News

Doc Searls is puzzling over new ways to do old things on the Internet, prompted by Dave Winer’s river of news concept:

“River of news” usefully combines three metaphorical frames: place, transport and publishing. Using all three, it proposes an approach to publishing that respects the fact that more and more people are going to want to get fresh newsy information on handheld Web devices.

The River of News metaphor not only speaks a new kind of sense to the NYTimes and BBCs of the world. It speaks to a new blog sensibility as well. I’m starting to think about how I might want to change my blog to be more Webphone-friendly. Can I live without all the junk on the left and right margins, for example? (Probably. They’re worse than useless to readers with Treos and Blackberries.) Alternatively, should I have a special feed just for Webphones?

Whatever the answers, I’m not thinking about my blog, or what it does, as a “site”. Meanwhile, that’s how most big publishers think about what they do on the Web. That’s why their sites are often so chock full of… stuff. They’re all about being sticky and holding your eyeballs inside the sitewalls. That might be fine on a computer screen, which is big and placelike in the sense that it usually isn’t moving around when you’re using it. But a Blackberry or a Treo or a Nokia 770 is different. It’s mobile. It’s going somewhere. You use it in a much different way.

This is an interesting concept, but it’s too much like old wine in new bottles for me. The World Wide Web, the TCP/IP protocol suite, and the personal computer are dead, in the sense that they’re tapped out for innovation and have been passed by as new technologies of mobile communication mature. The Web has never been much more than an easier way than its precursors, Archie and Gopher, to reach archives of stored information. The TCP/IP suite assumes that devices never move, and desktop PCs don’t know how to move.

But people, you see, do move. So the electronic devices that mean the most to us these days are the ones that either move with us or enable themselves to be accessed wherever we are.

That pretty much means that content is less important than communication, web sites that serve up static pages are less valuable than feeds that give us updates to topics of interest (a valuable idea), and modes of communication that depend on our being in a set location – like old-fashioned phone numbers and e-mail accounts – are less important than those that know how to reach us wherever we are.
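To make the feed idea concrete, here’s a minimal sketch in Python of what a “river” amounts to: pull a handful of feeds, merge their items into one reverse-chronological stream, and print plain text with none of the sidebar junk. The feed URLs are placeholders, and the use of the feedparser library is my own assumption for illustration, not anything Dave or Doc has specified.

    # A minimal sketch of the "river of news" idea: pull several feeds,
    # merge their items into one newest-first stream, and emit plain
    # text with none of the margin junk. The URLs below are placeholders;
    # feedparser is assumed to be installed (pip install feedparser).
    import time
    import feedparser

    FEEDS = [
        "http://example.com/atom.xml",   # hypothetical blog feed
        "http://example.org/rss",        # hypothetical news feed
    ]

    def river(feed_urls, limit=30):
        items = []
        for url in feed_urls:
            parsed = feedparser.parse(url)
            for entry in parsed.entries:
                stamp = entry.get("published_parsed") or entry.get("updated_parsed")
                if stamp is None:
                    continue  # skip items we can't place in the river
                items.append((time.mktime(stamp), entry.get("title", ""), entry.get("link", "")))
        items.sort(reverse=True)   # one stream, newest first, no "site"
        return items[:limit]

    if __name__ == "__main__":
        for stamp, title, link in river(FEEDS):
            print(time.strftime("%H:%M", time.localtime(stamp)), title)
            print("    ", link)

Everything a Treo or Blackberry needs is in that one stream; the “site,” with its sticky sidebars, never enters into it.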

The “river of news” is a crude first step toward realizing that our communication and networking needs changed about 10 years ago, but better late than never. If only the name didn’t remind me of the great Fugs song, Wide Wide River.

PS – Why is it that every idea that comes along with the promise to take us to a better world is invariably wrapped up in attacks on the cluelessness of the establishment? That kind of stuff bores the hell out of me, even in the cases where it’s actually true. If you have a great new idea, it stands on its own. Don’t worry about the NY Times or the BBC; they’ll come to see your brilliance in due course, if you have any. Just lay out your plan and go do it.

The trouble with end-to-end

We’ve all heard that the Internet is an end-to-end network. This means that it’s different in some major way from the telephone network, and different from other types of packet networks that we might very well be dependent upon had it not been for the success of the Internet, which of course is a result of its superior end-to-end architecture. An end-to-end network is one in which “intelligence” is concentrated at the end points, with the network itself operating as a “magic cloud” that simply delivers packets as the end points dictate. End points control delivery destination and rate, and the network simply passes the packets through.

But is this really true? In many important respects the Internet actually gives the end user much less control than the telephone network does. Routing, for example, is not conducted under end-point control. It could be, if we used the technique known as “source routing,” where the transmitter of a packet doesn’t just specify where a packet should go but how to get there, hop by hop. The IBM Token Ring used this technique, and there’s a form of it in the RFCs that describe the Internet’s operation, but it’s never been more than experimental. The phone network actually allows the user much more control over the routing of calls than the Internet does. I can choose any long-distance carrier I want for each call that I make by dialing a certain prefix before the phone number. So I can use one carrier for regional long distance, another for national long distance, and different ones for each country I dial. That’s end-user control.

If I had that kind of end-to-end control on the Internet, I could select one NSP for bulk data transfers such as BitTorrent, where all I care about is cheap capacity, and another NSP for VoIP, where what matters is regular, predictable delivery.
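To make the contrast concrete, here’s a toy sketch in Python – not any real router’s code, and every name in it is invented – of the two models: in destination-based routing the network’s table picks the next hop, while in source routing the sender’s hop list does, the way a dialing prefix picks a long-distance carrier.

    # Toy illustration of the two routing models discussed above; the
    # networks, tables, and hop lists are invented for the example.

    # Destination-based routing: the network owns the path decision.
    ROUTING_TABLE = {
        "net-B": "router-2",   # next hop chosen by the network operator
        "net-C": "router-3",
    }

    def forward_destination_routed(packet):
        return ROUTING_TABLE[packet["dest_net"]]

    # Source routing: the sender dictates the path, hop by hop.
    def forward_source_routed(packet):
        route = packet["source_route"]      # hop list supplied by the end point
        hop = route[packet["hop_index"]]
        packet["hop_index"] += 1            # advance to the next chosen hop
        return hop

    pkt = {"dest_net": "net-C",
           "source_route": ["cheap-bulk-NSP", "router-7", "net-C"],
           "hop_index": 0}

    print(forward_destination_routed(pkt))  # the network's choice: router-3
    print(forward_source_routed(pkt))       # the sender's choice: cheap-bulk-NSP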

The Internet puts control of network congestion at the end-points, but that doesn’t do anything for the user as it’s all a magic cloud to him. It compromises the integrity of the network, however, as the health of thousands of internal links – selected by the network and not by the user – is dependent on good behavior at all of the end points. We’ve talked about how this works before. When a queue overflows, TCP eventually notices packet loss and throttles back its send rate, which eventually alleviates the overload condition. It’s the same logic that’s supposed to operate when the electric grid is overloaded because we’re all air-conditioning like mad. The power company tells us to turn off our air-conditioners and enjoy the heat. Some do, and others don’t. TCP’s good neighbor policy is just as easily defeated as the power company’s, so the good neighbors have to throttle back twice as hard to make up for those who don’t throttle back at all.
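Here’s a toy simulation of that dynamic, with made-up numbers and no pretense of being a real TCP implementation: one sender follows the additive-increase, multiplicative-decrease rule, and the other ignores congestion signals entirely.

    # Toy simulation of the "good neighbor" problem described above.
    # The numbers are invented; this only sketches the additive-increase /
    # multiplicative-decrease logic against a sender that never backs off.
    LINK_CAPACITY = 100.0   # packets per tick the shared link can carry

    def simulate(ticks=20):
        polite = 10.0    # AIMD sender's rate
        greedy = 60.0    # defector's rate: ignores congestion signals
        for t in range(ticks):
            offered = polite + greedy
            congested = offered > LINK_CAPACITY     # queue overflow -> loss
            if congested:
                polite = max(polite / 2.0, 1.0)     # multiplicative decrease
            else:
                polite += 1.0                       # additive increase
            print(f"tick {t:2d}: polite={polite:5.1f}  greedy={greedy:5.1f}  "
                  f"{'LOSS' if congested else 'ok'}")

    simulate()
    # The polite flow absorbs nearly all of the cuts while the greedy flow
    # keeps its 60 packets per tick -- the air-conditioner problem in code.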

So it’s actually quite easy to argue that the Internet has botched the location of major control functions. Routing has great significance to the user and less to the network, but it’s all under network control, while congestion is just the opposite.

This dubious assignment of functions is exactly what net neutrality is meant to protect. It has real impact on future applications. We use a lot of mobile devices today, a big departure from the way we did things in the 70s when the Internet was designed and the PC was not even a pipe dream. Mobile devices – laptops and phones – should be reachable wherever they’re attached, but the Internet doesn’t allow this as their physical location is encoded into the addresses they use. Your IP address isn’t just a device address, it’s a network attachment point address. This is not a sound way to do things today, but having the network keep track of where you are is a “smart network” feature, a heresy in the religion of end-to-end.
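Here’s a small sketch, with invented prefixes and addresses, of what “attachment point address” means in practice: the network routes on the prefix, so a device that moves either takes a new address or needs something inside the network – a Mobile-IP-style forwarding entry – to chase it around.

    # Toy sketch of why an IP address pins a device to an attachment point.
    # Prefixes, addresses, and the "home agent" table are invented.
    import ipaddress

    # The network routes on prefixes, not on device identities.
    PREFIX_TO_NETWORK = {
        ipaddress.ip_network("203.0.113.0/24"): "office LAN",
        ipaddress.ip_network("198.51.100.0/24"): "coffee-shop WiFi",
    }

    def where_packets_go(addr):
        addr = ipaddress.ip_address(addr)
        for prefix, place in PREFIX_TO_NETWORK.items():
            if addr in prefix:
                return place
        return "unknown"

    laptop = "203.0.113.7"            # address assigned at the office
    print(where_packets_go(laptop))   # "office LAN"

    # Carry the laptop to the coffee shop and keep the old address: packets
    # for 203.0.113.7 still land at the office, because the address names
    # the attachment point, not the device. The "smart network" fix is a
    # forwarding table that tracks where the device actually is.
    CURRENT_LOCATION = {"203.0.113.7": "198.51.100.42"}   # home agent's view

    def deliver(addr):
        return CURRENT_LOCATION.get(addr, addr)   # forward to the care-of address

    print(where_packets_go(deliver(laptop)))      # "coffee-shop WiFi"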

These tradeoffs may have appeared sensible in the 1970s, but they don’t any longer, and no religion should force us to accept them indefinitely.

Cisco weighs in on net neutrality

This is historical by now, but I was curious about it so I checked:

“We strongly support the principle of an open Internet,” Cisco CEO John Chambers wrote in a letter to Congressman Joe Barton, who chairs the House Energy and Commerce Committee. “We must, however, balance the fact that innovation inside the network is just as important as innovation in services and devices connected to the Internet. Broadband Internet access service providers should remain free to engage in pro-competitive network management techniques to alleviate congestion, ameliorate capacity constraints and enable new services.”

Chambers makes one excellent point: most of the talk about Internet innovation in DC is about services attached to the Internet, not about the system of lines and routers itself. The neutrality regulations would stifle innovation in the core structure of the Internet, which would eventually lead to stagnation in the services space, even worse than the stagnation we’ve seen since the Bubble burst. That can’t be good.

Full text of COPE Act

The COPE Act is in Thomas now, and the lies about it fly fast and furious in Nutrialand. See the bill here, and notice this:

‘‘(3) ADJUDICATORY AUTHORITY.—The Commission shall have exclusive authority to adjudicate any complaint alleging a violation of the broadband policy statement and the principles incorporated therein. The Commission shall complete an adjudicatory proceeding under this subsection not later than 90 days after receipt of the complaint. If, upon completion of an adjudicatory proceeding pursuant to this section, the Commission determines that such a violation has occurred, the Commission shall have authority to adopt an order to require the entity subject to the complaint to comply with the broadband policy statement and the principles incorporated therein. Such authority shall be in addition to the authority specified in paragraph (1) to enforce this section under titles IV and V. In addition, the Commission shall have authority to adopt procedures for the adjudication of complaints alleging a violation of the broadband policy statement or principles incorporated herein.

Nutria claim this means the FCC lacks the authority to punish broadband abuse. Right.

The rules they’ll enforce are in Appropriate Framework for Broadband Access to the Internet over Wireline Facilities:

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to access the lawful Internet content of their choice.

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement.

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to connect their choice of legal devices that do not harm the network.

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to competition among network providers, application and service providers, and content providers.

That doesn’t seem too complicated.

NRO likes the new network

Here’s a great editorial from National Review Online on the cool new network:

Where the telecoms predict bold innovations, advocates of net neutrality see Orwellian nightmares. They argue that if telecoms are allowed to speed up the delivery of some content, there is nothing to stop them from slowing down or blocking content they don’t like. But such anti-consumer behavior is unlikely in a competitive market. Let’s say George Soros somehow took over Verizon and made troublemaking websites like National Review Online disappear from his network. Competition from other broadband providers would discourage him from thus breaking his customers’ hearts.

Net-neutrality advocates argue that there isn’t enough competition among broadband providers to ensure that service degradation would be punished or that telecoms would charge Internet companies fair prices for faster service. Most U.S. broadband consumers are forced to choose between their local cable and local phone companies, they argue, giving these telecoms a “virtual duopoly” in the broadband market.

Leave aside the FCC’s finding, noted in the Supreme Court’s ruling on this matter, of “robust competition . . . in the broadband market,” including not just cable and DSL, but burgeoning satellite, wireless, and broadband-over-powerline technologies. Ignore also the argument that net-neutrality legislation could actually entrench the bigger players at the expense of new technologies that might otherwise compete by differentiating their services.

They make an important point: there’s a lot of doubt about the business plan for new broadband, but that’s all the more reason to let the market sort it out. The Nutria want to abort it before we ever get a chance to see what it can do. That would be a dreadful mistake.

The Daily Neut – Part II

Recent developments on the neut front have the New York Times showing a failure to grasp the concept:

“Net neutrality” is a concept that is still unfamiliar to most Americans, but it keeps the Internet democratic. Cable and telephone companies that provide Internet service are talking about creating a two-tiered Internet, in which Web sites that pay them large fees would get priority over everything else. Opponents of these plans are supporting Net-neutrality legislation, which would require all Web sites to be treated equally. Net neutrality recently suffered a setback in the House, but there is growing hope that the Senate will take up the cause.

And Web inventor Tim Berners-Lee flying off into a socialist Neverland:

It is of the utmost importance that, if I connect to the Internet, and you connect to the Internet, that we can then run any Internet application we want, without discrimination as to who we are or what we are doing. We pay for connection to the Net as though it were a cloud which magically delivers our packets. We may pay for a higher or a lower quality of service. We may pay for a service which has the characteristics of being good for video, or quality audio. But we each pay to connect to the Net, but no one can pay for exclusive access to me.

There’s actually nothing magical about how the Internet delivers packets; it’s a machine that follows a strict set of rules. The Net Neutrality advocates are indeed hostile to levels of service that are good for video or good for audio, and nobody is even thinking about a service that blocks access to anybody; in actual fact the COPE Act that was passed by the Energy and Commerce Committee expressly forbids that. So this is simply another strawman argument from somebody who should know better.
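For the record, the “strict set of rules” that would give video or audio a better class of service is as mundane as a priority queue. This little sketch (invented traffic classes, nothing like a real router’s DiffServ machinery) serves latency-sensitive packets first without blocking anyone.

    # Toy priority scheduler: the kind of non-magical rule that gives
    # latency-sensitive traffic better treatment. The classes and packets
    # are invented; real routers use DiffServ-style per-hop behaviors.
    import heapq

    PRIORITY = {"voice": 0, "video": 1, "best-effort": 2}   # lower = served first

    queue = []
    arrival = 0
    for kind in ["best-effort", "video", "best-effort", "voice", "video"]:
        heapq.heappush(queue, (PRIORITY[kind], arrival, kind))
        arrival += 1

    # Drain the queue: voice leaves first, then video, then bulk traffic,
    # regardless of arrival order -- a service tier, not a blocked site.
    while queue:
        _, came_in, kind = heapq.heappop(queue)
        print(f"send {kind:12s} (arrived #{came_in})")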

Porn drives the web

The porn business may be the decider in the Blu-Ray vs. HD-DVD standards battle for hi-def DVDs, according to PC World. But some aren’t so sure:

“I love the whole pornography concept simply because porn is still the number one money-making use of the Internet,” Duplessie said. “But I don’t believe the porn industry will drive the format. Like any other industry, it will supply what the consumer wants.”

Nonetheless, it’s taken for granted that porn drives the web. Work that out for net neutrality if you can.

The Daily Neut

Here are two items from the neut front. First an article in Salon by Big Neut Tim Wu in which he makes a little sense:

None of this is to say that a good network-neutrality rule must be absolute, or even close to absolute. It’s an open secret that AT&T and Verizon want to become more like cable television companies. If Verizon wants to build a private network to sell TV, that would justify broad powers to control the network, a precondition to providing the service at all. No neutrality rule should be a bar to building better networks that do more.

OK, so why are you trying to do just that: prevent American companies from building better networks?

And the other is a piece by Mumon, a very prominent figure in the world of wireless networks:

The real issue for "net neutrality" is that an advanced internet needs to be built, financed, and initiated through government help, like it is in Korea, Japan, and China. That’s why our access charges are so steep relative to these places. Put big pipes everywhere, and the high-class QoS services can easily coexist with the best-effort folks. That’s an issue of capital infrastructure deployment and build-out, which in the US, with its lack of centralized planning for such things, doesn’t exist. Hopefully rapid deployment of true competitive access schemes (Broadband Power Line, WiMax) might alleviate this problem. But that takes a new policy, commitment, and intervention, with a quid-pro-quo of warranties of operability.

Mumon cites the WaPo’s editorial today, and essentially agrees with it:

Yet perhaps without realizing it, those who are now advocating "net neutrality" – the notion that those who shell out the big bucks to build new much higher speed networks can’t ask the websites that will use the networks intensively to help pay for them – could keep this new world from becoming a reality. Further, they could deprive the websites themselves of the benefits of being able to use the networks to deliver their data-heavy content.

That’s all I have time for today.