Internet History Lesson

See Vanity Fair for a nice synopsis of Internet history, based on interviews with key contributors, from Paul Baran and Larry Roberts down to the social networking people. Here’s their article summary:

Fifty years ago, in response to the surprise Soviet launch of Sputnik, the U.S. military set up the Advanced Research Projects Agency. It would become the cradle of connectivity, spawning the era of Google and YouTube, of Amazon and Facebook, of the Drudge Report and the Obama campaign. Each breakthrough—network protocols, hypertext, the World Wide Web, the browser—inspired another as narrow-tied engineers, long-haired hackers, and other visionaries built the foundations for a world-changing technology. Keenan Mayo and Peter Newcomb let the people who made it happen tell the story.

It’s long, but parts of it are very interesting, and there are audio clips and a nice little slideshow.

Internet Fairness, or Not

My latest piece for The Register is up: Dismantling a Religion: The EFF’s Faith-Based Internet. In it, I explore the difference between the way the EFF wants to manage the Internet and the new approach the IETF folks are discussing.

Bottom line: the Internet has never had a user-based fairness system, and it needs one. All networks need one, actually.

On that note, the TCP-Friendly folks remind us:

The network will soon begin to require applications to perform congestion control, and those applications which do not perform congestion control will be harshly penalized by the network (probably in the form of preferentially dropping their packets during times of congestion).
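
For the curious, here’s a toy sketch (in Python) of what “preferentially dropping their packets” could look like inside a router queue: track how much each flow has been sending, and when the queue is full, make room at the expense of the heaviest sender. The flow IDs, the capacity, and the drop rule are all invented for illustration; no real router works exactly this way.

```python
import random
from collections import defaultdict

# Toy "penalty" queue: during congestion, the flow that has been
# sending the most (i.e. not backing off) loses packets first.
# Everything here is illustrative, not a real router algorithm.
class PenaltyQueue:
    def __init__(self, capacity):
        self.capacity = capacity              # max packets held
        self.queue = []                       # (flow_id, packet) pairs
        self.arrivals = defaultdict(int)      # per-flow send counts

    def enqueue(self, flow_id, packet):
        self.arrivals[flow_id] += 1
        if len(self.queue) < self.capacity:
            self.queue.append((flow_id, packet))
            return True
        # Queue is full: identify the heaviest (least polite) flow.
        heaviest = max(self.arrivals, key=self.arrivals.get)
        if flow_id != heaviest:
            victims = [i for i, (f, _) in enumerate(self.queue) if f == heaviest]
            if victims:
                # Evict one of the heavy flow's packets to admit this one.
                self.queue[random.choice(victims)] = (flow_id, packet)
                return True
        return False  # the impolite flow eats the drop
```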

An actively-managed Internet is a functional Internet.

The Internet’s Big Idea

Because of current events, it strikes me that we need to review the fundamental idea that the Internet was based on, packet switching. This goes back to the dawn of time in the 1960s, before any corporations were involved, and is the pure essence of the thing. Only by understanding the basic idea can we see who’s true to it today and who isn’t.

Packet switching is not a hard notion to grasp, as it involves the spirit of cooperation, a commons, and mutual benefit. Recall that communications networks of the earlier sort allocated bandwidth strictly. On the telephone network you always got the same slice of bandwidth, neither more nor less. On some rare occasions like Mother’s Day you couldn’t make a call right away, but for all practical purposes it was always there and always the same.

This isn’t a very efficient way to allocate bandwidth, however, because much of it goes to waste. When you’re on a call, you consume just as much bandwidth when you’re speaking as when you’re not, and a great deal of bandwidth sits idle for most of the day because it’s simply a reserve for peak calling times. So the designers of the early Internet – it was called ARPANET back then – wondered what would happen if they built a network where bandwidth was a common pool that each user would draw from when he needed it, as he needed it, instead of being strictly divided in the old-fashioned way. In this scheme, during periods of low usage each user would get tons of bandwidth, so the network would appear to be really, really fast, and during periods of high demand the pool would be divided up fairly, just like the phone network, or so it seemed. So they launched this great experiment to see what had to be done to make a network that would scale up in performance under light load and scale down to fairness under heavy load. The method is called “packet switching” to differentiate it from the “circuit switching” technology in the phone network, and the ARPANET became the Internet in its second incarnation of protocols.
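
A toy simulation makes the payoff concrete. The numbers below are invented for illustration: ten bursty users who are idle most of the time share one link, first phone-style (a fixed slice each, used or not) and then packet-style (one common pool split among whoever is active):

```python
import random

random.seed(1)
LINK, USERS, TICKS = 100, 10, 1000   # illustrative capacities, not real data

circuit_served = packet_served = total_demand = 0
for _ in range(TICKS):
    # Bursty demand: each user is idle 3 ticks out of 4, then wants a lot.
    demands = [random.choice([0, 0, 0, 30]) for _ in range(USERS)]
    total_demand += sum(demands)
    # Circuit switching: every user owns a fixed 1/USERS slice, used or not.
    circuit_served += sum(min(d, LINK // USERS) for d in demands)
    # Packet switching: the whole link is a common pool shared equally
    # among whoever happens to be active right now.
    active = [d for d in demands if d > 0]
    if active:
        share = LINK / len(active)
        packet_served += sum(min(d, share) for d in active)

print(f"circuit switching carried {circuit_served / total_demand:.0%} of demand")
print(f"packet switching carried {packet_served / total_demand:.0%} of demand")
```

Run it and the fixed slices carry only about a third of the offered traffic, while the common pool carries the vast majority of it, which is the whole argument for the Big Idea in two dozen lines.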

Packet switching is the single most important idea in the design of the Internet, even more than universal connectivity; after all, the phone network gave us the ability to reach out and annoy anyone on the planet long ago. Packet switching as a way to manage bandwidth is the Big Idea.

It always strikes me as odd that there’s so little understanding of the Big Idea at the base of the Internet’s design pyramid among our would-be Internet regulators and ISP critics. They’re always complaining about the deceptiveness of “unlimited access” rates and “all you can eat” deals that don’t guarantee any constant or minimum rate. (Duh, we tried that already.) This is an experiment in another direction, where the deal is that it’s going to be faster at some times than at other times, but overall it’s going to be much better and much cheaper than guaranteed bandwidth. And sure enough, it works: you can even make phone calls over the Internet of exceptional quality anywhere in the world for peanuts. It’s marvelous.

Well, mostly marvelous. Throughout the Internet’s history, even when it was a closed garden for the research world and long before the great unwashed were allowed on it, the “fairness” problem has proved very difficult to resolve, because each user and each application has a different appetite for bandwidth and a different demand for response time. In the early days, the interactive terminal protocol “telnet” was often stymied by the bulk data transfer protocol “ftp”, and today Skype has to work around BitTorrent.

In theory, it shouldn’t be hard to fit the needs of programs that communicate small chunks of data on a tight timeline around programs that move massive amounts of data with no particular time requirement for any one chunk. In theory, we should be able to design networks that do that, either by booking reservations for the call or by giving Skype priority over BitTorrent. And in fact we have a number of experimental protocols that will do just that, especially within the confines of a private network in a business, an organization, or a home. And they all depend on a prioritizing or scheduling function having a clear idea of which packets belong to which program, and on the programs being willing to settle for less than what they want for various periods of time. And that’s the way things were on the Internet before it went commercial.
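
Here’s a minimal sketch of such a prioritizing function, assuming the hard part (knowing which packets belong to which program) has already been done; the queue names and packet labels are made up:

```python
from collections import deque

# Strict-priority scheduler sketch: time-sensitive traffic always goes
# first, and bulk traffic gets whatever is left over (which it can
# tolerate). The two classes and their labels are invented.
class PriorityScheduler:
    def __init__(self):
        self.queues = {"voice": deque(), "bulk": deque()}

    def enqueue(self, klass, packet):
        self.queues[klass].append(packet)

    def dequeue(self):
        for klass in ("voice", "bulk"):      # highest priority first
            if self.queues[klass]:
                return klass, self.queues[klass].popleft()
        return None                          # nothing waiting

sched = PriorityScheduler()
sched.enqueue("bulk", "torrent-chunk-1")
sched.enqueue("voice", "skype-frame-1")
print(sched.dequeue())  # ('voice', 'skype-frame-1'), though it arrived second
```

Note that the whole scheme stands or falls on the classifier, and on the bulk application graciously waiting its turn, exactly as described above.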

In the mid-80s, we saw Congestion Collapse (“Internet meltdown”) during periods of heavy ftp usage. The quick fix that was cobbled together required TCP to voluntarily throttle back the amount of data it transmitted when messages weren’t delivered. This “overloaded” dropped packets, giving them two meanings: either a packet was hit by noise and corrupted, or a network queue was full and the packet was discarded because there was no more room in the line for it. Error rates were low (there was no WiFi back then), so it was fine to react to every loss as if the network were overloaded. And we could count on everybody being polite and graciously accepting slow response times until the overall load went down.
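
The rule that came out of that fix, and that survives in TCP to this day, is additive increase, multiplicative decrease: creep the sending rate up while things are being delivered, cut it in half when they aren’t. Roughly sketched below; real TCP is far more elaborate, and the constants are just the classic textbook values.

```python
# Sketch of TCP's voluntary throttle-back (additive increase,
# multiplicative decrease). cwnd is the congestion window: how much
# unacknowledged data the sender allows itself in flight, in packets.
cwnd = 1.0

def on_ack():
    """A packet was delivered: probe gently for more bandwidth."""
    global cwnd
    cwnd += 1.0 / cwnd        # roughly +1 packet per round trip

def on_loss():
    """A packet went missing: assume congestion and back off hard."""
    global cwnd
    cwnd = max(1.0, cwnd / 2)
```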

This could have been a fairly crappy solution, as it didn’t distinguish the requirements of our interactive application from those of our bulk data application, but implementation did what design failed to do: in practice, telnet data came in much shorter packets than ftp data, and since the packet droppers were looking for space in network queues, they got more space out of dropping long packets than short ones. So voilà, in one step you’ve got priority enforcement and congestion control.
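
Here’s a sketch of that accidental mechanism: if the queue’s capacity is counted in bytes rather than packets, a long packet simply needs more free room than a short one, so under pressure the long packets are the ones refused. The sizes are illustrative.

```python
# A byte-limited queue: long packets need more free space, so they
# are the ones dropped when the buffer is nearly full. Sizes invented.
class ByteQueue:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.packets = []

    def enqueue(self, name, size):
        if self.used + size > self.capacity:
            return False                 # no room: packet dropped
        self.packets.append((name, size))
        self.used += size
        return True

q = ByteQueue(capacity_bytes=1200)
q.enqueue("ftp", 1000)                   # a bulk packet fills most of the buffer
print(q.enqueue("ftp", 1000))            # False: the long packet is refused
print(q.enqueue("telnet", 50))           # True: the short one still fits
```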

And it’s all going to be fine until the next generation of protocols comes around and our assumptions once again have to be revised. I’ll go into that tomorrow.

(Updated by removing some stuff about a blog post that inspired me to write this stuff. That material now has its own post, right below.)

Try this article for a little insight into changes afoot inside the Internet.

Only on the Internet

From the annals of modern technology:

A Bosnian couple are getting divorced after finding out they had been secretly chatting each other up online under fake names.

Sana Klaric, 27, and husband Adnan, 32, from Zenica, poured out their hearts to each other over their marriage troubles, and both felt they had found their real soul mate…

“To be honest I still find it hard to believe that the person, Sweetie, who wrote such wonderful things to me on the internet, is actually the same woman I married and who has not said a nice word to me for years.”

What can I say?

Cuban ruffles feathers

Mark Cuban is both entertaining and insightful, the latest example being his remarks on the dead and boring Internet:

A lot of people are all up and upset about my comments that the Internet is dead and boring. Well guess what, it is. Every new technological, mechanical or intellectual breakthrough has its day, days, months and years. But they don’t rule forever. That’s the reality.

Every generation has its defining breakthrough. Cars, TV, Radio, Planes, highways, the wheel, the printing press, the list goes on forever. I’m sure in each generation to whom the invention was a breakthrough it may have been heretical to consider those inventions “dead and boring”. The reality is that at some point they stop changing. They stop evolving. They become utilities or utilitarian and are taken for granted.

Some of you may not want to admit it, but that’s exactly what the net has become. A utility. It has stopped evolving. Your Internet experience today is not much different than it was 5 years ago.

Cuban is right, of course, and even Om Malik (broadband cheerleader extraordinaire) admits as much:

But the bigger question Cuban is asking is whether the Internet’s infrastructure is sufficient to keep the innovation cycle moving forward. And the answer is no.

And why? Om quotes Nortel’s CTO:

John Roese, chief technology officer of Nortel (NT), is of the same school of thought. “If you look at the progress made from 300-baud modems to 10-Gigabit Ethernet,” said Roese, “the cost per bit has declined by a factor of 22 million to one. But that isn’t reflected in the consumer Internet experience.”

The fault is in the asymmetric nature of the Internet. The downstream speeds are getting higher, but upstream speeds are still being controlled in a miserly fashion by ISPs, thus acting as a brake on truly interactive applications.

An asymmetric Internet is good for disseminating information – after all, pulling down information (or YouTube videos) moves packets in one direction. This is perhaps the point Cuban is trying to make when he says that the Internet is like a utility and therefore boring. Electricity, after all, also works as a one-way service — it comes into our house and we use it for everything from stereos to air conditioners. Today’s Web and Internet applications are doing precisely the same on our desktops.

I don’t know that a symmetrical Internet is any less boring than the one we have, as it’s probably just harder to administer. Today we upload our videos to services like YouTube to asymmetrically distribute them, and I don’t see them getting more interesting if we distribute them directly from our homes and offices. Maybe somebody can explain that to me.

What is Wikipedia?

I hope this clears things up:

Wikipedia is a Massively Multiplayer Online Role-Playing Game (MMORPG) in which participants play editors of a hypothetical online encyclopedia, where they try to insert misinformation that they are randomly assigned when they create their accounts, while preventing contrary information from being entered by others. Players with similar misinformation to promote will generally form “guilds” in order to aid each other.

The source is a very rude little wiki, the Encyclopedia Dramatica.

Spreading money

Viacom sues Google over YouTube for a cool billion bucks and Jeff Jarvis is predictably upset:

I’ve been reading Viacom’s boneheaded $1 billion complaint against YouTube. Viacom complains about YouTube but, in truth, they’re complaining about their own viewers. They whine about theft but, in fact, they’re whining about recommendation, about their audience finding them more audience. Viacom is trying, singlehandedly, to turn the TV industry into the music industry. They are trying to spread stupid.

Let me suggest another point of view. I believe Viacom is upset over the fact that the TV programming they produce has been appropriated by another company for the purpose of substituting the other company’s ads for Viacom’s. Viacom depends on ad sales to cover the costs of production and delivery of their programming, and when their shows end up on YouTube, Google makes all the money for the ads they sell alongside Viacom’s programming. Presumably, if Google were willing to equitably share their ad revenues with Viacom, this case would never have gone to court.

So who’s entitled to this ad revenue, Google or Viacom? And who’s “spreading stupid” here?

Open Patent Office

This is a promising application of Wiki technology for a knowledgeable group of real people:

The government is about to start opening up the process of reviewing patents to the modern font of wisdom: the Internet.

The Patent and Trademark Office is starting a pilot project that will not only post patent applications on the Web and invite comments but also use a community rating system designed to push the most respected comments to the top of the file, for serious consideration by the agency’s examiners. A first for the federal government, the system resembles the one used by Wikipedia, the popular user-created online encyclopedia.

“For the first time in history, it allows the patent-office examiners to open up their cubicles and get access to a whole world of technical experts,” said David J. Kappos, vice president and assistant general counsel at IBM.

This will be good if and only if the citizen reviewers are expert and accountable, and under those conditions I’m enthusiastically for it, especially since professional reviewers have the last word.

Ericsson buys Redback

Now here’s a “holy mother of god” moment:

Telefon AB L.M. Ericsson’s agreement to buy Redback Networks Inc. for $2.1 billion reflects how the explosion in video and other multimedia services over the Internet and a surge in broadband subscribers are driving phone and cable companies to upgrade their networks and spurring big makers of networking hardware to unite.

Ericsson, a Swedish provider of equipment and services for telecommunications infrastructure, said it will pay holders of the Silicon Valley seller of routing equipment $25 a share, or 18% more than Redback’s share price before the announcement. Ericsson characterized the price as a 60% premium above the 90-day average for Redback stock.

The companies said the deal will make them more effective suppliers for the fast-moving market. It also could help Redback contend with larger competitors such as Cisco Systems Inc. and Juniper Networks Inc.

At the end of the day, don’t the phone companies always buy the Internet companies?

Techdirt reader explains the Internet

Finally, after all these years, I understand the Internet thanks to a comment on Techdirt:

Woot! First! by Rstr5105 on Nov 2nd, 2006 @ 8:00pm

This appears to be yet another case of the telcos trying to tell us how the internet is supposed to be withot bothering to take a second to trace the roots of the net.

For those of us that don’t know, the internet started as a way for universities to transmit data back and forth faster than the ol’ sneaker net method. This worked well so DARPA signed on and funded it for a while. Eventually the DoD built it’s own net, and DARPA funding ceased.

It was at this point that AT&T (as well as a few others) signed on and formed the W3C (World Wide Web Consortium(Don’t quote me on the consortium part) The W3C stated very clearly that the internet was to be used specifically for non-commercial gain. (IE Even E-Bay would not be allowed to operate under the original paramaters of the W3C.)

Then the Internet went public, I believe, although I’m not sure if this is correct, it started with a few professors and business men saying something along the lines of “Hey, this is a good thing, now if only I could connect to my computer at work from my computer at home”. It spiraled out from there.

I don’t know what caused the massive build up of the web that we saw in the nineties, but now everyone is “On Line” and looking to make a few bucks. It seems to me that although we have this powerful tool at our disposal, we are corrupting it by allowing it to remain in the hands of the telco’s.

It also seems to me, that under the terms of the original W3C, (I don’t know what it’s current rules are) the telco’s weren’t allowed to charge for the ability to connect to the net. YES, they had to run the cables to feed it, YES they have to run the servers we all log into and NO i don’t have a problem paying them to be able to connect to the net, but it seems against what the net started as for them to be able to say, “Unless you pay this much a month you’re going to be limited to seeing websites at a slower speed than somebody who pays $XX.YY a month.”

Okay sorry for the long post, but it’s my two (four?) cents on this issue.

Don’t quote me on that, of course, because none of it is true. This comment is an illustration of how net neutrality became a political issue in the US in this election year: a bunch of drooling morons have been empowered to run around spouting spew and not enough people are shooting them down. And where would you start, anyway?