Verizon’s Vision of the Internet

Although I’ve been trying to explain why companies like Time Warner need to impose broadband usage caps on their systems before going to the capital markets for assistance in beefing up their innards, I’m not a fan of usage caps generally. They’re a very crude tool for imposing an equitable distribution of bandwidth, and one that ensures the actual infrastructure in any given network will not be used efficiently. The key to efficiency in a truly multi-service network like the Internet of the future is successful discrimination among application needs and traffic types. If the network can be made smart enough to follow orders, users can control their network usage according to their personal economics with no big surprises in the billing cycle. Network operators don’t need to manage traffic streams all the time; they need to manage them during periods of peak load (which had better not be all that often). And their best guidance in doing this comes from users and applications.

Many cities around the world manage access to the city core with something called congestion pricing: if you want to drive into the very heart of Singapore or London during peak hours, you have to pay a fee, which keeps traffic from gridlocking while permitting access by those who really need it. The Internet should work the same way: if you need low-latency service for Skype during peak load hours, you should be able to get it. And if you want to run P2P at the same time, you should be able to do that too, but with higher latency (or at least higher jitter). Accounts can be provisioned to allow a certain amount of congestion traffic for a flat rate, with additional portions available for an added fee. Users who demand a lot of transit from their networks should be able to get it, but at a reduced rate relative to average loads or for an additional fee.
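
To make the congestion-pricing analogy concrete, here’s a toy model of peak-hour pricing by traffic class. Every class name, rate, and price below is invented for illustration; it’s a sketch of the billing logic, not anyone’s actual tariff.

    # Toy congestion pricing: per-class surcharges apply only during
    # peak hours, the way London and Singapore price access to the
    # city core. All classes, rates, and prices are invented.

    PEAK_HOURS = range(18, 23)          # say, 6 PM through 10 PM

    # traffic class -> (per-GB base price, per-GB peak surcharge)
    PRICING = {
        "low_latency": (0.02, 0.05),    # e.g. Skype: pay up, stay snappy
        "best_effort": (0.02, 0.01),
        "bulk":        (0.02, 0.00),    # e.g. P2P: no surcharge, more jitter
    }

    def charge(traffic_class, gigabytes, hour):
        """Price a transfer, surcharging peak-hour use."""
        base, surcharge = PRICING[traffic_class]
        rate = base + (surcharge if hour in PEAK_HOURS else 0.0)
        return gigabytes * rate

    print(charge("low_latency", 1.0, 20))   # peak hour: 0.07
    print(charge("bulk", 1.0, 20))          # peak hour: 0.02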

The point is that networks are never going to be so fat that they can’t be overloaded, and local congestion is always going to occur. So the trick in managing networks is to allocate resources fairly and transparently, and to let users control their use of whatever quota they have (not manually, but through home router and application signaling to the network).
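
One concrete form that “application signaling to the network” can take is DiffServ marking: an application stamps its own packets with a code point, and any router configured to honor DiffServ queues them accordingly. A minimal sketch in Python; whether a given ISP honors the marking is, of course, the whole policy question.

    import socket

    # Mark a UDP socket's traffic as Expedited Forwarding (DSCP 46),
    # the DiffServ class conventionally used for voice. The IP TOS
    # byte carries the DSCP in its upper six bits, hence the shift.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
    sock.sendto(b"voice frame", ("192.0.2.1", 5060))   # test address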

The least congested residential broadband service in the US today is Verizon FiOS. Verizon sells access at up to 50 Mb/s, and has the capacity to increase this as consumers demand more. They can do this because they’ve invested in a total infrastructure consisting of neighborhood loops, second-hop infrastructure, and core network links. Their current system can carry 100 Mb/s per user without any contention short of the core, which is rather awesome. This is why you never hear anything about caps or quotas for FiOS: the system can’t be overloaded short of the core.
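
A back-of-the-envelope way to see what “no contention short of the core” means; every number below is a placeholder, not Verizon’s actual engineering.

    # Hypothetical contention check: can the links upstream of the
    # neighborhood carry every subscriber at full rate at once?
    # These figures are placeholders, not Verizon's real topology.

    users_per_loop  = 32          # subscribers on one neighborhood loop
    rate_per_user   = 100e6       # 100 Mb/s each, in bits/s
    second_hop_link = 10e9        # assumed aggregation link: 10 Gb/s

    demand = users_per_loop * rate_per_user        # 3.2 Gb/s worst case
    print(f"load factor: {demand / second_hop_link:.2f}")   # 0.32
    # Under 1.0, so every user can burst to full rate simultaneously
    # without queueing anywhere short of the core.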

Despite that, Verizon’s visionaries realize that network management is going to be a part of the Internet of the future:

In part because most of the attention in the early days of the Internet was on connectivity and ensuring networks and devices could interconnect and communicate successfully, security and quality of service techniques were not a focus of the discussions around network protocols and functionality. Such features have instead often been offered “over the top”, usually as attributes in applications or as functionalities in web sites or distributed services.

The complexity and volume of Internet traffic today – and the fact that much more of it than ever before is “real time” or time sensitive – means that the Internet’s traditional routing and processing schemes are challenged more than ever. It is no longer realistic to expect that all of the heavy lifting to make applications and services work well on the Internet in today’s “two-way, heavy content, complex applications” world can be done through the old models. More work needs to be done at all levels to ensure better quality and improved services. This includes the network level as well.

This need not threaten the basic foundation of the Internet – its ability to provide consumers with access to any content they wish to use and connect any device they want to a broadband network. Competition, broad commitment to openness by industry and advocates, and oversight by regulators helps ensure this foundation remains. But it does mean that enhanced network based features and functionalities should not be automatically viewed with concern. Such features can be an important aspect of the Internet’s improvement and future evolution.

Indeed, we shouldn’t fear rational and transparent management; it’s part of what has always made these systems work as well as they have for us.


Spectrum 2.0 panel from eComm

Courtesy of James Duncan Davidson, here’s a snap from the Spectrum 2.0 panel at eComm09.

Maura Corbett, Rick Whitt, Peter Ecclesine, Darrin Mylet, and Richard Bennett at eComm

The general discussion was about the lessons learned from light licensing of wireless spectrum in the US, the success of Wi-Fi and the failure of UWB, and what we can realistically hope to gain from the White Spaces licensing regime. As a person with a foot in both camps – technical and regulatory – it was an interesting exercise in the contrasting ways engineers and policy people deal with these issues. In general, hard-core RF engineer Peter Ecclesine and I were the most pessimistic about White Space futures, while the policy folks still see the FCC’s Report and Order as a victory.

In lobbying, you frequently run into circumstances where the bill you’re trying to pass becomes so heavily encumbered with amendments that it’s not worth passing. Rather than get your policy vehicle adopted in a crippled form, it’s better in such circumstances to take it off the table and work with the decision-makers to revive it in a future session without the shackles. While this is a judgment call – sometimes you go ahead and take the victory hoping to fix it later – it’s dangerous to pass crippled bills in a tit-for-tat system because you’re conceding a win in the next round to the other side.

I suggested that the FCC’s order was so badly flawed that the best thing for White Space Liberation would be for the court to void the order and the FCC to start over. This message wasn’t well received by Rick Whitt, but I had the feeling Peter was on board with it.

The problem with the White Spaces is that the FCC couldn’t make up its mind whether these bands are best used for home networking or for a Third (or is it fourth or fifth?) pipe. The power limits (40 milliwatts to 1 watt) doom it to home networking use only, which simply leads to more fragmentation in the home net market and no additional WAN pipes. That’s not the outcome the champions of open networks wanted, but it’s what they got.
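
For a feel of what those limits mean: under free-space propagation, received power falls with the square of distance, so range scales with the square root of transmit power. A toy calculation that ignores real-world propagation entirely:

    from math import sqrt

    # Under inverse-square propagation, range scales as sqrt(power),
    # so the whole spread of the White Space limits buys only a 5x
    # range difference -- a bigger LAN, not a WAN pipe.
    p_low, p_high = 0.040, 1.0        # watts: the 40 mW and 1 W limits

    print(f"range ratio: {sqrt(p_high / p_low):.0f}x")   # 5x
    # Walls, terrain, and interference margins shrink the practical
    # gap further.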

eComm, incidentally, is a terrific conference. The focus is very much on the applications people are developing for mobile phones, and it’s essential for people like me who build networks to see what people want to do with them, especially the things they can’t do very well today. Lee Dryburgh did a fantastic job of organization and selecting speakers, and is to be congratulated for putting on such a stellar meeting of the minds.

David Sohn of CDT makes the right points

Commenting on the pending FCC action against Comcast, the CDT’s David Sohn says most of what needs to be said:

In order to engage in enforcement, there needs to be either:

(1) An existing, articulated rule or standard against which to judge behavior;
or
(2) Authority for the enforcement body to adjudicate and issue rulings based on general notions of fairness/equity.

It is difficult to argue that number (1) is present here. The FCC expressly stated that its broadband principles are not rules. If they are not rules, then it is hard to see how the FCC can turn around and try to police violations of them as if they were . . . well . . . rules. Doing so would put the FCC on perilously shaky legal ground.

As for number (2), CDT believes that everyone with a stake in the Internet — which at the end of the day is pretty much everyone, period — should be extremely wary of any assertion of open-ended and highly discretionary FCC jurisdiction over broadband Internet service. Even those who may like what the FCC proposes regarding the Comcast question should consider that they may be far less happy with what some future FCC may do, once the door to largely unguided regulatory action is open. CDT believes that the FCC neither has nor should have open-ended authority to craft policies for the Internet out of whole cloth.

This is the problem with suggesting, as some commentators have, that Internet neutrality concerns could be addressed via case-by-case adjudication and enforcement rather than ex ante rules. You can’t adjudicate and gradually build up a body of common law unless there is some underlying standard to adjudicate against — or unless you have broad authority to make law from scratch. That’s why CDT continues to call for legislation in this area. Having the FCC initiate and craft the entire legal framework, without Congress setting the parameters, cedes too much authority to the agency.

It will be interesting to see how an eventual FCC order, if there is one, addresses the murky legal status of the FCC’s Policy Statement and what legal hook the agency tries to hang its action on.

One other thing I’d add is this: an ideal residential Internet access system needs to be managed in two different but equally important phases:

1) Allocate bandwidth fairly among competing accounts; and then

2) Prioritize streams within each account according to application requirements.

Phase 1 keeps you from being swamped by your neighbor, and keeps you from swamping him, and Phase 2 prevents your VoIP session from being swamped by your BitTorrent session.
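
Here’s a minimal sketch of the two phases as a scheduler: round-robin across accounts for phase 1, priority order within each account for phase 2. Real equipment does this with weighted fair queueing and token buckets; this toy version just illustrates the layering.

    from itertools import cycle

    # Phase 1: round-robin across accounts (inter-account fairness).
    # Phase 2: within an account, higher-priority streams go first,
    # so VoIP beats BitTorrent. A toy model, not production QoS.

    PRIORITY = {"voip": 0, "web": 1, "p2p": 2}   # lower sends sooner

    class Account:
        def __init__(self, name):
            self.name, self.queue = name, []     # (priority, packet)

        def enqueue(self, app, packet):
            self.queue.append((PRIORITY[app], packet))
            self.queue.sort()                    # phase 2: priority order

        def dequeue(self):
            return self.queue.pop(0)[1] if self.queue else None

    def schedule(accounts, budget):
        """Emit up to `budget` packets, one account per turn (phase 1)."""
        sent = []
        for account in cycle(accounts):
            if len(sent) == budget or not any(a.queue for a in accounts):
                return sent
            packet = account.dequeue()
            if packet is not None:
                sent.append(packet)

    alice, bob = Account("alice"), Account("bob")
    alice.enqueue("p2p", "alice-p2p"); alice.enqueue("voip", "alice-voip")
    bob.enqueue("web", "bob-web")
    print(schedule([alice, bob], 3))
    # ['alice-voip', 'bob-web', 'alice-p2p'] -- Bob isn't swamped by
    # Alice, and Alice's VoIP isn't swamped by her own BitTorrent.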

The problem with the Comcast Sandvine system is that it skips phase 1 and simply does phase 2, application-level traffic shaping. And the problem with the FCC order that Chairman Martin is floating about is that it makes phase 2 shaping illegal. It’s incredibly useful to manage streams for each user as he would want them managed if he had direct control over them. I think future home gateways will empower users to do this, but in the meantime it’s desirable for the ISP to manage sessions appropriately.

The first rule of regulation should be “do no harm,” and on that basis Martin’s prescription is bad medicine.

Internet Fairness, or Not

My latest piece for The Register is up: Dismantling a Religion: The EFF’s Faith-Based Internet. In it, I explore the difference between the way the EFF wants to manage the Internet and the new approach the IETF folks are discussing.

Bottom line: the Internet has never had a user-based fairness system, and it needs one. All networks need one, actually.

On that note, the TCP-Friendly folks remind us:

The network will soon begin to require applications to perform congestion control, and those applications which do not perform congestion control will be harshly penalized by the network (probably in the form of preferentially dropping their packets during times of congestion).
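
For reference, the congestion control in question is TCP’s additive-increase, multiplicative-decrease: probe gently for spare bandwidth, back off hard when the network drops a packet. A bare-bones sketch:

    # AIMD, the heart of TCP-friendly congestion control: grow the
    # window by a constant while all is well, halve it on loss.

    def aimd(window, lost, increase=1.0, decrease=0.5):
        """Return the next congestion window, in segments."""
        return window * decrease if lost else window + increase

    window = 10.0
    for rtt, lost in enumerate([False, False, False, True, False]):
        window = aimd(window, lost)
        print(f"rtt {rtt}: cwnd = {window:.1f}")
    # 11.0, 12.0, 13.0, then halved to 6.5 by the loss, then 7.5.
    # An application that skips the back-off grabs more than its
    # share, which is exactly what the penalty above is aimed at.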

An actively-managed Internet is a functional Internet.

Faster, More Symmetric Networking

Would you like to have a fat Internet connection to your home? If we can agree that 100 Mb/s in both directions qualifies as “fat,” you should be able to have your way in a year or two, three at the most. Here’s a quick survey of the alternatives.

First, we have a clue as to why Comcast still uses relatively pokey DOCSIS 1.1: it’s skipping the faster and more symmetric DOCSIS 2.0 and going straight to the nirvana of even faster connections with DOCSIS 3.0:

SANTA CLARA, Calif. — Comcast Corp. (Nasdaq: CMCSA, CMCSK) plans to have a Docsis 3.0 infrastructure in place in about 20 percent of its footprint by the end of 2008, teeing up cable modem services capable of delivering shared Internet speeds in excess of 100 Mbit/s.

The nation’s largest MSO will be 3.0-capable in one-in-five homes by the end of next year, according to Comcast Chief Technology Officer Tony Werner, the keynoter here Wednesday morning at the first CableNEXT conference.

(H/T Engadget)

This should make them competitive with FTTH for a good while, but not forever:

While we’ve seen all sorts of blazing feats over fiber here lately, it’s not often that such wide open bandwidth gets piped directly to a home, but a 75-year old Swede recently changed all that when she had a 40Gbps connection installed in her domicile.

She can download a DVD in two seconds.
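
The arithmetic checks out, at least at line rate:

    # Back-of-the-envelope: one single-layer DVD over a 40 Gb/s link.
    dvd_bits  = 4.7e9 * 8      # a 4.7 GB disc, in bits
    link_rate = 40e9           # 40 Gb/s, in bits per second

    print(f"{dvd_bits / link_rate:.1f} s")   # ~0.9 s at line rate
    # Call it two seconds once protocol overhead and the far end's
    # disk have their say.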

Closer to home, Verizon is going faster and more symmetric with FiOS:

With the help of the symmetrical services, users can benefit from equally fast downstream and upstream connections of up to 15 megabits per second (Mbps) or up to 20 Mbps, based on the state where the service is sold.

DSL over copper isn’t sitting still either:

University of Melbourne research fellow Dr John Papandriopoulos is in the throes of moving to Silicon Valley after developing an algorithm to reduce the electromagnetic interference that slows down ADSL connections.

Most ADSL services around the world are effectively limited to speeds between 1 to 20Mbps, but if Dr Papandriopoulos’s technology is successfully commercialised that speed ceiling would be closer to 100Mbps.

Papandriopoulos is joining ASSIA, a company founded by DSL inventor John Cioffi (and named after his wife and EVP). ASSIA currently develops tools, but I suspect that will change. (Assia, on the other hand, is an aesthete.)
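
The reason crosstalk matters so much is straight out of Shannon: achievable rate grows with the log of signal-to-noise-plus-interference, so cancelling interference directly raises the speed ceiling. A toy calculation with invented numbers:

    from math import log2

    # Shannon capacity of a DSL-like channel: C = B * log2(1 + S/(N+I)).
    # Cutting crosstalk (I) raises the ceiling. All numbers invented.
    def capacity(bandwidth_hz, signal, noise, interference):
        return bandwidth_hz * log2(1 + signal / (noise + interference))

    B = 2.2e6                  # ADSL2+ occupies roughly 2.2 MHz
    print(f"{capacity(B, 1.0, 1e-4, 1e-1) / 1e6:.0f} Mb/s noisy")      # ~8
    print(f"{capacity(B, 1.0, 1e-4, 0.0) / 1e6:.0f} Mb/s cancelled")   # ~29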

And wireless is on the move as well. Corporate products conforming to the new ~100 Mb/s (real speed) 802.11n standard are starting to roll out in trials, 4G cellular deployments are underway, and UWB for the home is available at multi-hundred-megabit rates.

One has the feeling that the residential network regulators are already fighting yesterday’s war, and that the network pessimists have no basis for their fears.

Internet over TV, maybe

There seems to be a huge disconnect on the nature of the magic box proposed to the FCC by the Usual Suspects to reclaim whitespace abandoned by analog TV:

A coalition of big technology companies wants to bring high-speed Internet access to consumers in a new way: over television airwaves. Key to the project is whether a device scheduled to be delivered to federal labs today lives up to its promise.

The coalition, which includes Microsoft and Google, wants regulators to allow idle TV channels, known as white space, to be used to beam the Internet into homes and offices. But the Federal Communications Commission first must be convinced that such traffic would not bleed outside its designated channels and interfere with existing broadcasts.

The six partners — Microsoft, Google, Dell, Hewlett-Packard, Intel and Philips — say they can meet that challenge. Today, they plan to give FCC officials a prototype device, built by Microsoft, that will undergo months of testing.

Is it a low-power, in-home system comparable to WiFi and UWB, or is it a high-power, long-distance service comparable to WiMax? Nobody seems to know, yet that’s critical to evaluating its usefulness. Anybody who knows, please clue me in.

Speaking of Cults…

The reaction of the Apple faithful to the disclosure of a security hole in the design of Apple OSX was amazing. A couple of guys figured out that you could trick OSX into executing some foreign code with root privilege by sending a malformed packet to a third-party wireless LAN card. The guys – David Maynor and Johnny Ellch – have been viciously attacked by the Kool-Aid drinking Apple faithful:

I was absolutely shocked when I ran across these stories on Digg. I had personally video interviewed Maynor and his partner Jon “Johnny Cache” Ellch and these two gentlemen were very honest and straightforward. But as soon as I read the stories, the stench began to rise. Maynor and SecureWorks had been telling the truth the entire time and they had falsified nothing. The only falsification going on was the stories themselves! Not only did Dalrymple and Chartier and others like them not follow the most basic of journalism principles to at least check with the source, they apparently didn’t even bother looking at the original video of David Maynor released by SecureWorks.

The Faithful claim Maynor and Ellch alleged something they didn’t allege, and are therefore out to get Apple.

The saga continues on George Ou’s ZDNet blog today. It seems to me that the flaw the dudes found depends on bad behavior from both the driver and the OS, and if it exists on one vendor’s product, it certainly can exist on others as well. So Apple and its faithful should simply fix the problem and stop smearing people.

Is that too much to ask?

Airgo re-writes the laws of physics

My friends in Palo Alto have topped themselves with a new chippie:

Airgo Networks today announced its third generation True MIMO chipset with support for data rates up to 240 Mbps. The company said its technology makes wire-free offices a reality…

“When MIMO was first unveiled, it reversed over 100 years of scientific thinking by harnessing natural radio wave distortions, which were previously perceived as interference, to deliver dramatically increased speed, range and reliability,” said Greg Raleigh, President and CEO of Airgo Networks. “With True MIMO Gen3 technology, our team has achieved a scientific milestone by proving that wireless can surpass wired speeds.”

WiFi+MIMO may indeed be like a rocket ship, but it isn’t really faster than all wired networks, or even as fast as UWB wireless networks. Still, Greg can dream.
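
For the record, the physics is less dramatic than the press release: MIMO doesn’t repeal Shannon, it multiplies the channel. With rich multipath and N antennas on each end you get roughly N parallel spatial streams, each an ordinary Shannon channel. A hedged sketch, with made-up link numbers:

    from math import log2

    # Idealized MIMO capacity: min(tx, rx) spatial streams, each a
    # Shannon channel. The multipath "distortion" is what separates
    # the streams. Channel and SNR figures here are invented.
    def mimo_capacity(tx, rx, bandwidth_hz, snr_linear):
        return min(tx, rx) * bandwidth_hz * log2(1 + snr_linear)

    B, SNR = 40e6, 100         # a 40 MHz channel at 20 dB SNR, say
    for n in (1, 2, 3):
        print(f"{n}x{n}: {mimo_capacity(n, n, B, SNR) / 1e6:.0f} Mb/s")
    # 1x1: ~266, 2x2: ~533, 3x3: ~799 -- in theory. Real MACs and
    # real rooms take a big bite, which is the point above.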