Verizon’s Vision of the Internet

Although I’ve been trying to explain why companies like Time Warner need to impose broadband usage caps on their systems before going to the capital markets for help beefing up their innards, I’m not a fan of usage caps generally. They’re a very crude tool for imposing an equitable distribution of bandwidth, and one that ensures that the actual infrastructure in any given network will not be used efficiently. The key to efficiency in a truly multi-service network like the Internet of the future is successful discrimination among application needs and traffic types. If the network can be made smart enough to follow orders, users can control their network usage according to their personal economics with no big surprises in the billing cycle. Network operators don’t need to manage traffic streams all the time; they need to manage them during periods of peak load (which had better not be all that often). And their best guidance in doing this comes from users and applications.

Many cities around the world manage access to the city core with something called congestion pricing: if you want to drive into the very heart of Singapore or London during peak hours, you have to pay a fee, which keeps traffic from gridlocking while permitting access by those who really need it. The Internet should work the same way: if you need low-latency service for Skype during peak load hours, you should be able to get it. And if you want to run P2P at the same time, you should be able to do so, but with higher latency (or at least higher jitter). Accounts can be provisioned to allow a certain amount of congestion-period traffic for a flat rate, with additional portions available for an added fee. Users who demand a lot of transit from their networks should be able to get it, but at a reduced rate relative to average loads or for an additional fee.
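
A sketch helps make the idea concrete. The following toy Python scheduler (the quota figure and class names are invented for illustration, not anyone’s actual product) serves latency-sensitive traffic ahead of bulk traffic and debits bulk bytes sent during congested periods against a flat-rate allowance, which is roughly the accounting a congestion-pricing scheme implies:

    from collections import deque

    # Hypothetical flat-rate allowance of "congested-hour" bulk traffic.
    CONGESTION_QUOTA_BYTES = 2 * 10**9

    class Scheduler:
        """Two-class scheduler: latency-sensitive traffic first, bulk after."""
        def __init__(self):
            self.low_latency = deque()   # e.g. Skype, gaming
            self.bulk = deque()          # e.g. P2P, big downloads

        def enqueue(self, packet, latency_sensitive):
            (self.low_latency if latency_sensitive else self.bulk).append(packet)

        def dequeue(self, congested, account):
            if self.low_latency:                  # always served first
                return self.low_latency.popleft()
            if self.bulk:
                packet = self.bulk.popleft()      # packet is a bytes object here
                if congested:
                    # Bulk bytes moved during peak load draw down the allowance;
                    # anything beyond it is billable overage.
                    used = account.get("congestion_bytes", 0) + len(packet)
                    account["congestion_bytes"] = used
                    account["over_quota"] = used > CONGESTION_QUOTA_BYTES
                return packet
            return None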

The point is that networks are never going to be so fat that they can’t be overloaded, and local congestion is always going to occur. So the trick in managing networks is to allocate resources fairly and transparently, and to let users control their use of whatever quota they have (not manually, but through home router and application signaling to the network).

The least congested residential broadband service in the US today is Verizon FiOS. Verizon sells access at up to 50 Mb/s, and has the capacity to increase this as consumers demand more. They can do this because they’ve invested money in a total infrastructure that consists of neighborhood loops, second-hop infrastructure, and core network links. Their current system can carry 100 Mb/s per user without any contention short of the core, which is rather awesome. This is why you never hear anything about caps or quotas for FiOS: the system can’t be overloaded anywhere short of the core.

Despite that, Verizon’s visionaries realize that network management is going to be a part of the Internet of the future:

In part because most of the attention in the early days of the Internet was on connectivity and ensuring networks and devices could interconnect and communicate successfully, security and quality of service techniques were not a focus of the discussions around network protocols and functionality. Such features have instead often been offered “over the top”, usually as attributes in applications or as functionalities in web sites or distributed services.

The complexity and volume of Internet traffic today – and the fact that much more of it than ever before is “real time” or time sensitive – means that the Internet’s traditional routing and processing schemes are challenged more than ever. It is no longer realistic to expect that all of the heavy lifting to make applications and services work well on the Internet in today’s “two-way, heavy content, complex applications” world can be done through the old models. More work needs to be done at all levels to ensure better quality and improved services. This includes the network level as well.

This need not threaten the basic foundation of the Internet – its ability to provide consumers with access to any content they wish to use and connect any device they want to a broadband network. Competition, broad commitment to openness by industry and advocates, and oversight by regulators helps ensure this foundation remains. But it does mean that enhanced network based features and functionalities should not be automatically viewed with concern. Such features can be an important aspect of the Internet’s improvement and future evolution.

Indeed we shouldn’t fear rational and transparent management; it’s part of what has always made these systems work as well as they have for us.


Pitchforks in Austin: Time Warner’s Bandwidth Cap

The fledgling high-tech community in the smoky little hipster ghetto called Austin is apoplectic about Time Warner’s announcement that it’s testing bandwidth caps in central Texas:

When it comes to trialing its metered broadband service, Time Warner Cable’s choice to do so in the tech-savvy city of Austin, Texas, was no accident. And residents may not be able to do much about it.

According to TWC spokesman Jeff Simmermon, Austin’s dedication to all things digital was precisely why it was chosen as one of four cities where the company plans to trial consumption-based broadband plans, which range from 5 GB to 40 GB per month (TWC says it has plans for a 100 GB-per-month tier as well). “Austin is a passionate and tech-savvy city, and the spirit that we’re approaching this (metered broadband) test with is that if it’s going to work, it has to work in a tech-savvy market where the use patterns are different,” he told me.

So far, Austin isn’t impressed, but since the local cable franchise it grants only deals with video, there may not be much it can do. Chip Rosenthal, one of seven commissioners on the City of Austin’s Technology and Telecommunications Commission (a strictly advisory body), hopes that concerned citizens will show up at the meeting it’s holding at City Hall this Wednesday and talk about metered broadband. He wants to get the metered bandwidth issue added to the agenda of the commission’s May meeting as well.

Rosenthal, a contract programmer who likes open source, has a blog where he holds forth on the issue, calling its rationale a series of “red herrings,” and complaining that the caps of the present will hurt applications of the future. This is no doubt true, but ultimately another red herring. The caps of the future won’t necessarily be the caps of the present.

The general theory is that TWC wants to stamp out web video in order to keep TV customers in the VoD fold. I don’t doubt that TWC would like to do that, but I doubt they’re dumb enough to believe they could ever get away with it. Austin is a stoner’s throw from San Antonio, the world headquarters of AT&T and the beta site for U-verse, the IPTV service that rides into the home atop VDSL. While U-verse isn’t universally available in Austin yet, it’s under construction, so there are alternatives.

TWC’s CEO has issued a blog post by way of clarification that’s not entirely helpful:

With regard to consumption-based billing, we have determined that as broadband usage and penetration grow, there are increasing differences in the amount of bandwidth our customers consume. Our current pricing plans require all users to pay the same amount, whether they check email once a month or download six movies a day. As the amount of usage has dramatically diverged among users, this is becoming inherently unfair and not the way most consumers want to pay for goods they consume.

Like Rosenthal’s post, it’s true as far as it goes, but it leaves runners in scoring position. Here’s the real story, as I see it: while Time Warner doesn’t have a large enough network to peer with the big boys (AT&T, Verizon, Qwest, Comcast, and L3), it does have some peering agreements that protect it from transit charges as long as it delivers its packets to convenient locations, as well as some straight-up transit charges to pay. Its aggregation network – the links that carry data between the Internet exchange points and its CMTSs – isn’t fat enough to support full-on DOCSIS 3 usage, and neither is its transit budget.

Consequently, they’re being hammered by the small number of high-bandwidth consumers in their network, and they’re looking to cut costs by running them off. While there are other ways to ensure fairness across user accounts, the cap is the best way to address the fraction of a percent who use something like half their available bandwidth.

TWC is betting that they can find a cap level that discourages hogs and doesn’t bother more typical users. They’re going into an area close to the heart of AT&T with the experiment to get a good sense of where that limit is.

VoD has a little bit to do with this, but not all that much. TWC customers with TiVos already have unlimited VoD, and the rest of the VoD the company provides doesn’t cost transit dollars; it’s delivered over the local tree. DOCSIS 3 doesn’t have much of anything to do with this either, as it’s also a local service, albeit one with the potential to ring up big transit charges if not domesticated.

To a large extent, ISPs play a marketing game where they advertise super-fast services that aren’t backed up by enough transit or peering to sustain a heavy duty cycle. This isn’t a bad thing, of course, as the efficient sharing of capacity is actually the Internet’s secret sauce. If we wanted peak and minimum bandwidth to be the same, we would have stuck with narrow-band modems on the PSTN. But we don’t, so we have to get hip to statistical sharing of network resources.
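
Some quick arithmetic shows how different advertised peak rate and sustained duty cycle really are. Assuming, purely for illustration, a 40 GB monthly cap and a 50 Mb/s tier:

    cap_gb = 40           # illustrative monthly cap
    rate_mbps = 50        # advertised peak rate

    seconds_flat_out = (cap_gb * 8 * 10**9) / (rate_mbps * 10**6)
    print(seconds_flat_out / 3600)                    # about 1.8 hours at full speed
    print(seconds_flat_out / (30 * 24 * 3600) * 100)  # about 0.25% duty cycle over a month

In other words, the headline speed is only sellable because almost nobody runs it continuously.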

I’ll go out on a limb here and predict that the typical Austin consumer won’t switch to U-verse on account of TWC’s caps, but the heaviest users of gaming and BitTorrent will. And I’ll further predict that TWC’s bottom line will be glad to see them go.

The arguments against caps ultimately come down to the assertion that there’s some public good in making light users of Internet access capacity subsidize heavy users. Given that most of the heavy use is either piracy or personal entertainment, I don’t happen to buy that argument, and moreover I find the alternatives to capping generally less attractive, as they typically involve duty-cycle restrictions of other kinds. The alternative TWC should explore is peak/off-peak handling that allows downloaders to draw on less restrictive bandwidth budgets at off hours.
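
As a sketch of what that could look like (the hours and the discount are made-up numbers, not anyone’s actual tariff), the meter would simply weight off-peak bytes less heavily against the monthly budget:

    def metered_bytes(byte_count, hour, off_peak=range(0, 6), off_peak_weight=0.25):
        """Count bytes moved during off-peak hours at a discount against the quota."""
        return byte_count * off_peak_weight if hour in off_peak else byte_count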

I’d prefer to have a network that allowed me to label all of my traffic with the service level I expected, and scheduled and charged it appropriately. We don’t have that network yet, but we will one day as long as neutrality regulations don’t get in the way. Alternatively, a fat pipe to a Tier 1 like Verizon would be a better deal, but we can’t all buy one today either.

eComm Spectrum 2.0 Panel Video

Here’s the licensing panel from eComm live and in color. Seeing yourself on TV is weird; my immediate reaction is to fast for about a month.

On a related note, see Saul Hansell’s musings on spectrum.

The issue I wanted to raise at eComm, and couldn’t due to lack of time and the meandering speculations about collision-free networks, is spectrum sharing. Two-way communications systems all need a shared pipe at some level, and the means by which access to the pipe is mediated distinguishes one system from another. So far, the debate on white spaces in particular and open spectrum in general is about coding and power levels, the easy parts of the problem. The hard part is how the system decides which of a number of competing transmitters can access the pipe at any given time. The fact that speculative coding systems might permit multiple simultaneous connections on the same frequency in the same space/time moment doesn’t make this question go away, since they only help point-to-point communications. Internet access is inherently a point-to-multipoint problem, as these systems all aggregate wireless links in order to move traffic to the fiber backbone.

The advantage of licensing is that it provides the spectrum with an authorized bandwidth manager who can mediate among the desires of competing users and ensure fairness per dollar (or some similar policy). The idea that we can simply dispense with a bandwidth manager in a wide-area network access system remains to be proved.

So I would submit that one of the principles that regulators need to consider when deciding between licensed and unlicensed uses is the efficiency of access. The notion that efficiency can be discarded in favor of ever-fatter pipes is obviously problematic in relation to wireless systems; they’re not making more spectrum.
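
The textbook slotted-ALOHA result makes the point about efficiency of access: when transmitters contend at random with no coordination, the channel tops out at about 37% of its raw capacity, and that loss is exactly what a bandwidth manager exists to avoid. A quick Python sketch:

    import math

    def slotted_aloha_throughput(g):
        """Fraction of slots carrying exactly one frame when stations transmit
        at random with an aggregate attempt rate of g frames per slot."""
        return g * math.exp(-g)

    best = max(slotted_aloha_throughput(g / 100) for g in range(1, 501))
    print(round(best, 3))   # ~0.368: the rest is lost to collisions and idle slots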

Obama’s Missed Opportunity

According to National Journal, Susan Crawford is joining the Obama administration in a significant new role:

Internet law expert Susan Crawford has joined President Barack Obama’s lineup of tech policy experts at the White House, according to several sources. She will likely hold the title of special assistant to the president for science, technology, and innovation policy, they said.

This does not make me happy. Crawford is not a scientist, technologist, or innovator, and the job that’s been created for her needs to be filled by someone who is, and an exceptional one at that: a person with deep knowledge of technology, the technology business, and the dynamics of research and business that promote innovation. A life as a legal academic is not good preparation for this kind of job. Crawford is a sweet and well-meaning person who fervently believes that the policy agenda she’s been promoting is good for the average citizen and the general health of the democracy and that sort of thing, but she illustrates the adage that a little knowledge is a dangerous thing.

As much as she loves the Internet and all that it’s done for modern society, she has precious little knowledge of the practical realities of its operation. Her principal background is service on the ICANN Board, where she listened to debates on the number of TLDs that can dance on the head of a pin and similarly weighty matters. IETF engineers generally scoff at ICANN as a bloated, inefficient, and ineffective organization that deals with issues no serious engineer wants anything to do with. Her other qualification is an advisory role at Public Knowledge, a big player on the Google side of the net neutrality and copyright debates.

At my recent net neutrality panel discussion at MAAWG, I warned the audience that Crawford’s selection to co-manage the Obama transition team’s FCC oversight was an indication that extreme views on Internet regulation might become mainstream. It appears that my worst fears have been realized. Crawford has said that Internet traffic must not be shaped, managed, or prioritized by ISPs and core networking providers, which is a mistake of the worst kind. While work is being done all over the world to adapt the Internet to the needs of a more diverse mix of applications than it’s traditionally handled, Crawford harbors the seriously misguided belief that it already handles diverse applications well enough. Nothing could be farther from the truth, of course: P2P has interesting uses, but it degrades the performance of VoIP and video calling unless managed.

This is an engineering problem that can be solved, but which won’t be if the constraints on traffic management are too severe. People who harbor the religious approach to network management that Crawford professes have so far been an interesting sideshow in the network management wars, but if their views come to dominate the regulatory framework, the Internet will be in serious danger.

Creating a position for a special adviser on science, technology, and innovation gave President Obama the opportunity to lay the foundation of a strong policy in a significant area. Filling it with a law professor instead of an actual scientist, technologist, or innovator simply reinforces the creeping suspicion that Obama is less about transformational change than about business as usual. That’s a shame.

Cross-posted at CircleID.


Shutting down the Internet

The Internet is dying, according to advocacy group Free Press. The organization has published a report, Deep Packet Inspection: The End of the Internet as We Know It?, which claims that technology has evolved to the point where Internet carriers can control everything we read, see, and hear on the Internet, something they’ve never been able to do before. It’s the backdrop of a just-so story Free Press’s network guru, Robb Topolski, delivered to a House of Lords roundtable in the UK recently. It’s an outlandish claim, one that echoes the Groundhog Day predictions of the Internet’s imminent demise that Free Press has been making since 2005.

Suffice it to say it hasn’t exactly happened. Internet traffic continues to grow at the rate of 50-100% per year, more people than ever – some 1.5 billion – are using the Internet in more places and with more devices, and there hasn’t been an incident of an American ISP choking traffic since the dubiously alarming case of Comcast’s rationing of P2P bandwidth – mainly used for piracy – in 2007.

There are multiple errors of fact and analysis in the Free Press report, pretty much the same ones that the organization has been pumping since they jumped on the net neutrality bandwagon. There’s been no new breakthrough in Internet management. While it’s true that Moore’s Law makes computer chips run faster year after year, it’s also true that it makes networks run faster. So any reduction in the time it takes to analyze a packet on a network has to be balanced against the number of packets that cross the network in a given unit of time. Machines work faster. Some machines analyze Internet packets, and other machines generate Internet packets. They’re both getting faster, and neither is getting faster faster.

Network operators have been analyzing packets and rationing bandwidth for as long as there have been IP networks. The first one to go live was at Ford Aerospace, where the discovery was made, more or less instantly, that user access to the network had to be moderated so that users of bulk data transfer applications didn’t crowd out interactive uses. More sophisticated forms of this kind of helpful “discrimination” are the principal uses of DPI today.

The complaint by Free Press is more or less on par with the shocking discovery that the sun has both good and bad effects: it causes plants to grow, and it can also cause skin cancer. Shall we now pass a legislative ban on sunlight?

The important new trend on the Internet is an increasing diversity of applications. Until fairly recently, the Internet’s traffic management system was occupied almost exclusively with a set of applications that had very similar requirements: e-mail, web browsing, and short file transfers are all concerned with getting exact copies of files from point A to point B, with no particular concern for how long it takes, within seconds. Now we’ve added Skype to the mix, which needs millisecond delivery, and P2P transfers that can run for hours and involve gigabytes of data. Add in some gaming and some video calling, and you’ve got a healthy diversity of applications with unique requirements.

The sensible way to manage Internet diversity is to identify application needs and try to meet them, to create “the greatest good for the greatest number” of people. DPI is really, really good at this, and it’s a win for all Internet users when it’s used properly.
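
A toy sketch of what “identify application needs and try to meet them” means in code (the rules and numbers here are invented for illustration; a real classifier looks far deeper than port numbers):

    # Illustrative handling hints per traffic class.
    CLASSES = {
        "voip": {"target_latency_ms": 30,   "loss_tolerant": True},
        "web":  {"target_latency_ms": 300,  "loss_tolerant": False},
        "bulk": {"target_latency_ms": None, "loss_tolerant": False},  # P2P, backups
    }

    def classify(dst_port):
        """Crude port-based guess at the application class of a flow."""
        if dst_port in (5060, 5061):   # SIP signaling suggests a voice call
            return "voip"
        if dst_port in (80, 443):      # web browsing
            return "web"
        return "bulk"                  # everything else treated as background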

Free Press’s jihad against helpful technologies echoes their previous war against newspaper consolidation. With the recent closures and printing plant shutdowns of daily papers in Seattle, Denver, and elsewhere, it’s clear that these efforts at media reform have been less than helpful.

Let’s not send the Internet the way of the Seattle Post-Intelligencer. Rather than buying Free Press’s shameless scare tactics, reflect on your own experience. Do you see even the slightest shred of evidence to support the wild claim that the Internet is withering on the vine? I certainly don’t.

Congrats to Harold Feld

DC wonks are by now aware that Harold Feld has left MAP and joined Public Knowledge as Legal Director. While there’s no doubt that Harold is a card-carrying communist, he’s my favorite pinko so I’m glad to see he’s secured gainful employment. With any luck, he can slap a little sense into the more fanatical members of the PK staff and make them act a little bit more like adults. So congrats, Harold, and good luck to you. A little, anyway.

Speaking of communists, check out this breathtaking exercise in spin at Technology Liberation Front. Tim Lee trots out that sick “GNU/Linux operating system” trope. Nope: GNU and Linux are two different things, created by two different communities under very different assumptions. The FSF tried to create its own OS for many years and failed, but Torvalds did it right away because he’s a brainy and practical dude. Don’t count on fire-breathing ideologues to create your technology for you; there will be so many strings attached you won’t want to use it.

Notable debates in the House of Lords

We’re quite fond of Sir Tim Berners-Lee. As the first web designer, he personally converted the Internet from an odd curiosity of network engineering into a generally useful vehicle for social intercourse, changing the world. That this was a contribution of inestimable value goes without saying. It’s therefore distressing to read that he’s been mumbling nonsense in public fora about Internet management practices.

For all his brilliance, Sir Tim has never really been on top of the whole traffic thing. His invention, HTTP 1.0, did strange things to the Internet’s traffic handling system: his decision to chunk segments into 512-byte pieces tripled the number of packets the Internet had to carry per unit of information transferred, and his decision to open a unique TCP stream for every object (section of text or graphic image) on a web page required each part of each page to load in TCP’s “slow start” mode. Carriers massively expanded the capacity of their pipes in a vain attempt to speed up web pages, as poor performance was designed into Sir Tim’s protocol. Hence the term “world-wide wait” had to be coined to describe the system, and more experienced engineers had to produce HTTP 1.1 to eliminate the tortured delays. This is not to bash His Eminence, but rather to point out that all of us, even the geniuses, have limited knowledge.
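
A rough bit of arithmetic (the object size is illustrative) shows the scale of the problem: a small object cut into 512-byte pieces takes nearly three times as many data packets as the same object sent in full-size segments, and under HTTP 1.0 every object also pays for its own connection setup and its own slow-start ramp.

    import math

    object_bytes = 8 * 1024   # one image or block of text, size chosen for illustration

    print(math.ceil(object_bytes / 512))    # 16 data packets at 512 bytes each
    print(math.ceil(object_bytes / 1460))   # 6 data packets at a full Ethernet-size MSS
    # Under HTTP 1.0 each object also requires its own TCP handshake, and each new
    # connection begins in slow start, trickling out a segment or two per round trip.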

At a House of Lords roundtable last week, Sir Tim took up a new cause by way of complaining about one of the ways that personal information may be obtained on the Internet:

Speaking at a House of Lords event on the 20th anniversary of the invention of the World Wide Web, Berners-Lee said that deep packet inspection was the electronic equivalent of opening people’s mail.

“This is very important to me, as what is at stake is the integrity of the internet as a communications medium,” Berners-Lee said on Wednesday. “Clearly we must not interfere with the internet, and we must not snoop on the internet. If we snoop on clicks and data, we can find out a lot more information about people than if we listen to their conversations.”

Deep packet inspection involves examining both the data and the header of an information packet as it passes a ‘black box’ on a network, in order to reveal the content of the communication.

Like many opponents of the scary-sounding “deep packet inspection,” His Eminence confuses means and ends. There are many ways to obtain personal information on the Internet; the preceding post was about one of them. Given the choice, most of us would gladly surrender some level of information in order to obtain free services or simply better-targeted ads. As long as the Internet is considered a bastion of “free” (actually, advertising-supported) culture and information, personal information gathering will be the coin of the realm. So it doesn’t much matter whether my privacy is violated by a silly packet-snooping system that I can easily thwart by encrypting my data or by an overly invasive ad placement system; it’s gone either way. If he’s manic about privacy, he should address the practice of information gathering itself and not simply one means of doing it.

Nonsense is not unknown in the House of Lords, however. One of the most entertaining debates in the history of Western democracy took place in that august body, the infamous UFO debate:

The big day came on 18 January 1979 in the middle of a national rail strike. But the industrial crisis did nothing to dampen interest in UFOs. The debate was one of the best attended ever held in the Lords, with sixty peers and hundreds of onlookers – including several famous UFOlogists – packing the public gallery.

Lord Clancarty opened the three hour session at 7pm “to call attention to the increasing number of sightings and landings on a world wide scale of UFOs, and to the need for an intra-governmental study of UFOs.” He wound up his speech by asking the Government reveal publicly what they knew about the phenomenon. And he appealed to the Labour Minister of Defence, Fred Mulley, to give a TV broadcast on the issue in the same way his French counterpart, M. Robert Galley, had done in 1974.

The pro-UFO lobby was supported eloquently by the Earl of Kimberley, a former Liberal spokesman on aerospace, who drew upon a briefing by the Aetherius Society for his UFO facts (see obituary, FT 199:24). Kimberley’s views were evident from an intervention he made when a Tory peer referred to the Jodrell Bank radio telescope’s failure to detect a single UFO: “Does the noble Lord not think it conceivable that Jodrell Bank says there are no UFOs because that is what it has been told to say?”

More than a dozen peers, including two eminent retired scientists, made contributions to the debate. Several reported their own sightings including Lord Gainford who gave a good description of the Cosmos rocket, “a bright white ball” like a comet flying low over the Scottish hills on New Year’s Eve. Others referred to the link between belief in UFOs and religious cults. In his contribution the Bishop of Norwich said he was concerned the UFO mystery “is in danger of producing a 20th century superstition” that sought to undermine the Christian faith.

Perhaps their Lordships will invite His Eminence to observe an actual debate on Internet privacy, now that he’s set the stage with the roundtable. I think it would be absolutely smashing to see 40 of Bertie Wooster’s elderly uncles re-design the Web. Maybe they can add a comprehensive security model to the darned thing.

On a related note, Robb Topolski presented the worthies with a vision of the Web in a parallel universe that sent many scurrying back to their country estates to look after their hedgehogs. Topolski actually spoke about North American gophers, but the general discussion brings to mind the hedgehog’s dilemma of an open, advertising-supported Internet: a system that depends on making the private public is easily exploited.

UPDATE: Incidentally, Topolski’s revisionist history of the Web has been harshly slapped down by the Boing Boing readers who should be a friendly audience:

Huh? What a bizarre claim. Is he saying that network admins weren’t capable of blocking port 80 when HTTP was getting off its feet?!?

Wha? Even ignoring the fact that network admins at the time _did_ have the tools to block/filter this kind of traffic, this would still have little or nothing to do with endpoint computing power.

Oh, man. This is defintely junk.

Revisionist history in the name of greater freedom is still a lie.

Follow this link to a discussion from 1993 about how to make a Cisco firewall block or permit access to various Internet services by port. HTTP isn’t in the example, but the same rules apply. The power was clearly there.
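
The rules in that thread are router access lists, but the logic is simple enough to sketch in a few lines (the ports and policy below are illustrative, not taken from the linked discussion):

    # Ordered filter rules in the spirit of an early-'90s router access list:
    # (action, protocol, destination port); first match wins, implicit deny at the end.
    RULES = [
        ("deny",   "tcp", 23),    # block telnet
        ("deny",   "tcp", 80),    # block web traffic
        ("permit", "tcp", None),  # allow everything else
    ]

    def filter_packet(protocol, dst_port):
        for action, proto, port in RULES:
            if proto == protocol and (port is None or port == dst_port):
                return action
        return "deny"

    print(filter_packet("tcp", 80))    # deny
    print(filter_packet("tcp", 443))   # permit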

Welcome to the NAF, Robb, do your homework next time.


A little bit breathless

The UK has offered some language to the EU regulators on Internet services that would clarify the relationship between users and providers and require full disclosure of management practices by the latter. The measure addresses the prime source of friction between the package of end-user freedoms and the network management exception that we currently have in the US, absent a coherent regulatory framework for Internet services.

Most of us would probably say, after reading the whole package, that consumer rights are advanced by it. But most of us aren’t fire-breathing neutrality monsters who can’t be bothered with the practical realities of network operation. The actual document the Brits are circulating is here; pay special attention to the Rationale.

The operative language establishes the principle that there are in fact limits to “running the application of your choice” and “accessing and sharing the information of your choice” on the Internet, which is simply stating some of the facts of life. If you’re not allowed to engage in identity theft in real life, you’re also not allowed to do so on the Internet; if you’re not allowed to violate copyright in real life, you’re also not allowed to do so on the Internet; and so on. Similarly, while you’re allowed to access the legal content and services of your choice, you’re not allowed to access them at rates that exceed the capacity of the Internet or any of its component links at any given moment, nor without the finite delays inherent in moving a packet through a mesh of switches, nor with such frequency as to pose a nuisance to the Internet Community as a whole or to your immediate neighbors. Such is life.

In place of the current text, which touts the freedoms without acknowledging the existing legal and practical limits on them, the amendment would require carriers to disclose service-plan limits and actual management practices.

So essentially what you have here is the replacement of a statement that does not accurately describe reasonable expectations of the Internet experience with one that does. You can call it the adoption of a reality-based policy statement over a faith-based one. Who could be upset about this?

Plenty of people, as it turns out. A blog called IPtegrity is hopping mad:

Amendments to the Telecoms Package circulated in Brussels by the UK government, seek to cross out users’ rights to access and distribute Internet content and services. And they want to replace it with a ‘principle’ that users can be told not only the conditions for access, but also the conditions for the use of applications and services.

…as is science fiction writer and blogger Cory Doctorow:

The UK government’s reps in the European Union are pushing to gut the right of Internet users to access and contribute to networked services, replacing it with the “right” to abide by EULAs.

…and Slashdot contributor Glyn Moody:

UK Government Wants To Kill Net Neutrality In EU
…The amendments, if carried, would reverse the principle of end-to-end connectivity which has underpinned not only the Internet, but also European telecommunications policy, to date.’

The general argument these folks make is that the Internet’s magic end-to-end argument isn’t just a guideline for developers of experimental protocols (as I’ve always thought it was), but an all-powerful axiom that confers immunity from the laws of physics and economics as well as those of human legislative bodies. Seriously.

So what would you rather have, a policy statement that grants more freedoms to you than any carrier can actually provide, or one that honestly and truthfully discloses the actual limits to you? This, my friends, is a fundamental choice: live amongst the clouds railing at the facts or in a real world where up is up and down is down. Sometimes you have to choose.

H/T Hit and Run.

Perils of Content Neutrality

Via Scott Cleland I see that Adam O’Donnell has written a nice piece on the side effects of net neutrality regulation, “Why I am against pure net neutrality”:

While it may sound like treating all ISP traffic equally is a good idea, mandating strict net neutrality hurts computer security for all of us.

Adam was in the audience at last week’s MAAWG panel on net neutrality, and raised an interesting question about Random Early Discard. The moderator cut us off before we were able to address his point (he was anxious to catch a plane) but the question deserves a response.

RED is a method of packet discard that’s intended to avoid the problems inherent in a packet drop discipline that simply uses tail-drop to prevent buffer overflow in routers. The tail drop mechanism tends to cause cycles in packet delivery rates:

1. A buffer overflows, and a whole set of transmitters throttles back.
2. Link utilization drops to 50%.
3. The transmitters as a group increase rate together, until buffer overflow occurs again.
4. Repeat.

The net result of this cycling behavior is that congested links have their effective capacity reduced to about 70% of link speed. RED is an attempt to reduce transmission rates more selectively in order to push the link toward the upper limit of its capacity. RED algorithms have been under study since the late ’80s, and none is completely satisfactory. The IETF response was to draft an Internet Standard for something called ECN, which enables the network to signal end systems that congestion is building, but it remains undeployed due to Microsoft’s concerns about home router compatibility. The follow-on to ECN is Bob Briscoe’s Re-ECN, which I’ve written about on these pages and in The Register.
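
For the curious, the core of classic RED fits in a few lines. This is only a bare sketch of the drop-probability calculation; the thresholds and maximum probability are tuning knobs, and picking them well is the part nobody has found completely satisfactory:

    def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
        """Drop nothing below min_th, drop everything above max_th, and ramp
        the drop probability linearly in between, so senders back off gradually
        instead of all at once as they do under tail drop."""
        if avg_queue < min_th:
            return 0.0
        if avg_queue >= max_th:
            return 1.0
        return max_p * (avg_queue - min_th) / (max_th - min_th)

    def update_avg_queue(avg, current, weight=0.002):
        """RED works on a moving average of queue length, not the instantaneous value."""
        return (1 - weight) * avg + weight * current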

The bottom line is that Internet congestion protocols are an area that needs a lot of additional work, which the proposed Net Neutrality laws would hamper or prevent.

Van Jacobson realizes this, per the remarks he makes in an interview in the ACM Queue magazine this month:

Also, we use buffer memory in such a way that it’s valuable only if it’s empty, because otherwise it doesn’t serve as a buffer. What we do is try to forget what we learned as soon as we possibly can; we have to do that to make our buffer memory empty.

For the Olympics (not the most recent, but the previous one), we got some data from the ISP downstream of NBC. That router was completely congested; it was falling over, dropping packets like crazy. If you looked inside its buffers, it had 4,000 copies of exactly the same data, but you couldn’t tell that it was the same because it was 4,000 different connections. It was a horrible waste of memory, because the conversations were all different but what they were about was the same. You should be able to use that memory so you don’t forget until you absolutely have to—that is, go to an LRU (least recently used) rather than MRU (most recently used) replacement policy. It’s the same memory; you just change the way you replace things in it, and then you’re able to use the content.

It wouldn’t be necessary for carriers to put disks in routers. They could just start using the existing buffer memory in a more efficient way, and any time the data was requested more than once, they would see a bandwidth reduction.
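
What Jacobson describes is easy to sketch, if not to deploy: index buffered data by a hash of its content rather than by connection, keep one copy, and evict on a least-recently-used basis. A toy illustration (not how any particular router actually does it):

    import hashlib
    from collections import OrderedDict

    class ContentCache:
        """Toy content-addressed buffer with least-recently-used eviction."""
        def __init__(self, max_chunks=4096):
            self.max_chunks = max_chunks
            self.chunks = OrderedDict()          # content hash -> chunk bytes

        def put(self, data):
            key = hashlib.sha256(data).hexdigest()
            if key in self.chunks:
                self.chunks.move_to_end(key)     # 4,000 identical copies become one
            else:
                self.chunks[key] = data
                if len(self.chunks) > self.max_chunks:
                    self.chunks.popitem(last=False)   # evict the least recently used
            return key

        def get(self, key):
            data = self.chunks.get(key)
            if data is not None:
                self.chunks.move_to_end(key)
            return data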

Strict neutralism would prevent this system from being implemented: it involves Deep Packet Inspection, and the fanatics have warned us that DPI is a great evil. So we’re faced with a choice: networks that are cheap and efficient, or networks that are bloated with silly ideology. Take your pick; you only get one.