This is not a misprint

How many companies are actually increasing headcount these days? Not many, but AT&T is adding 3,000 jobs to expand its 3G network:

Despite a capex cut of up to $3 billion this year, AT&T Inc. (NYSE: T) made it clear today that it intends to spend to improve and expand its 3G network — adding 3,000 jobs in the process to support “mobility, broadband, and video.”

This comes on the heels of an announcement last December of a cut of 12,000 jobs, so it’s not quite as against-the-grain as it might seem. Still, it’s good news for 3,000 people and a counter-indicator of permanent global economic collapse.

A little bit breathless

The UK has offered some language to the EU regulators on Internet services that would clarify the relationship between users and providers and require full disclosure of management practices by the latter. The measure addresses the prime source of friction between the package of end-user freedoms and the network management exception that we currently have in the US, absent a coherent regulatory framework for Internet services.

Most of us would probably say, after reading the whole package, that consumer rights are advanced by it. But most of us aren’t fire-breathing neutrality monsters who can’t be bothered with the practical realities of network operation. The actual document the Brits are circulating is here; pay special attention to the Rationale.

The operative language establishes the principle that there are in fact limits to “running the application of your choice” and “accessing and sharing the information of your choice” on the Internet, which is simply stating some of the facts of life. If you’re not allowed to engage in identity theft in real life, you’re also not allowed to do so on the Internet; if you’re not allowed to violate copyright in real life, you’re also not allowed to do so on the Internet; and so on. Similarly, while you’re allowed to access the legal content and services of your choice, you’re not allowed to access them at rates that exceed the capacity of the Internet or any of its component links at any given moment, nor without the finite delays inherent in moving a packet through a mesh of switches, nor with such frequency as to pose a nuisance to the Internet Community as a whole or to your immediate neighbors. Such is life.

In place of the current text, which touts the freedoms without acknowledging the existing legal and practical limits on them, the amendment would require the carriers to disclose service plan limits and actual management practices.

So essentially what you have here is the replacement of a statement that does not accurately describe reasonable expectations of the Internet experience with one that does. You can call it the adoption of a reality-based policy statement over a faith-based one. Who could be upset about this?

Plenty of people, as it turns out. A blog called IPtegrity is hopping mad:

Amendments to the Telecoms Package circulated in Brussels by the UK government, seek to cross out users’ rights to access and distribute Internet content and services. And they want to replace it with a ‘principle’ that users can be told not only the conditions for access, but also the conditions for the use of applications and services.

…as is science fiction writer and blogger Cory Doctorow:

The UK government’s reps in the European Union are pushing to gut the right of Internet users to access and contribute to networked services, replacing it with the “right” to abide by EULAs.

…and Slashdot contributor Glyn Moody:

UK Government Wants To Kill Net Neutrality In EU
…The amendments, if carried, would reverse the principle of end-to-end connectivity which has underpinned not only the Internet, but also European telecommunications policy, to date.

The general argument these folks make is that the Internet’s magic end-to-end argument isn’t just a guideline for developers of experimental protocols (as I’ve always thought it was), but an all-powerful axiom that confers immunity from the laws of physics and economics as well as those of human legislative bodies. Seriously.

So what would you rather have, a policy statement that grants more freedoms to you than any carrier can actually provide, or one that honestly and truthfully discloses the actual limits to you? This, my friends, is a fundamental choice: live amongst the clouds railing at the facts or in a real world where up is up and down is down. Sometimes you have to choose.

H/T Hit and Run.

The Fiber Formula

In part three of Saul Hansell’s series on broadband in the Rest of the World, we learn that taxpayers in the fiber havens are doing all the heavy lifting:

But the biggest question is whether the country needs to actually provide subsidies or tax breaks to the telephone and cable companies to increase the speeds of their existing broadband service, other than in rural areas. Many people served by Verizon and Comcast are likely to have the option to get super-fast service very soon. But people whose cable and phone companies are in more financial trouble, such as Qwest Communications and Charter Communications, may well be in the slow lane to fast surfing. Still, it’s a good bet that all the cable companies will eventually get around to upgrading to the faster Docsis 3 standard and the phone companies will be forced to upgrade their networks to compete.

The lesson from the rest of the world is that if the Obama administration really wants to bring very-high-speed Internet access to most people faster than the leisurely pace of the market, it will most likely have to bring out the taxpayers’ checkbook.

None of this should come as a surprise to our regular readers. Businesses invest in fiber infrastructure on a 20-year basis, and government subsidies can compress the investment timeline to one tenth of that. And Hansell finds that a lot of the foreign spending is driven by nationalist pride rather than more prudent factors. The problem I have with massive government spending on ultra-high-speed fiber projects is one of conflicting priorities. I like fast networks, but I know that my tastes and interests aren’t the universal ones. And then there’s the question of utility: mobile networks aren’t as fast as locked-down fiber, but they’re an order of magnitude more useful.

So why don’t we strive to make the US number one in wireless, and leave the fiber race to the smaller nations? The long-term benefits of pervasive, high-speed wireless are much greater than those of heavily subsidized (and therefore heavily regulated) stationary networks.


Explaining the Price Gap

This is old news to those of you who read the other sources of broadband politics news on the new-fangled world wide computernet, but the esteemed Saul Hansell (a sometime reader of this blog) has released the second part of his analysis of American broadband, addressing the pricing issue. Broadband is cheaper in other countries due to subsidies and differences in demographics, but also because of unbundling, the practice of requiring incumbent carriers to offer competitors wholesale access to their networks:

Unbundling can be seen as a slightly disguised form of price regulation. Profits dropped. Many of the new entrants have found it difficult to build sustainable businesses, while margins for the incumbent phone companies have been squeezed as well.

It’s not exactly clear, however, that this approach is in the public’s long-term interest. Phone companies have less incentive to invest and upgrade their networks if they are going to be forced to share their networks.

Some argue that this is the main reason that there is little investment in bringing fiber to homes in Europe. “Investing in fiber is a huge risk,” Kalyan Dasgupta, a London-based consultant with LECG, wrote me in an e-mail, “and the prospect of taking that risk alone, but having to ’share’ the rewards with other players, is not a prospect that most rational businesses would consider.”

Britain, which has been the biggest proponent of line sharing, has decided to deregulate the wholesale price BT can charge for fiber, so long as it doesn’t favor its own brand of Internet service.

Like any form of price control, unbundling produces short-term gains in access diversity at the expense of long-term investment. Adopting this approach ultimately requires the government to bear the cost of infrastructure improvements, as it ceases to be a rational use of investor dollars to build out enhancements that don’t produce substantial returns in a non-monopoly market. Many of the folks seeking net neutrality regard broadband as a utility, and this becomes a self-fulfilling prophecy. If we treat it that way, that’s what it becomes.

Just as our electric utility networks include less-efficient generating plants that belch excessive amounts of CO2 into the air because the regulators won’t approve rate hikes to pay replacement costs, so too will price-capping broadband stifle innovation in transport networks.


Debunking the Broadband Gap

Today we learn, via Saul Hansell at Bits Blog, that the US isn’t as far behind the Rest of the World with broadband as was previously thought:

Even without any change in government policies, Internet speeds in the United States are getting faster. Verizon is wiring half its territory with its FiOS service, which strings fiber optic cable to people’s homes. FiOS now offers 50 Mbps service and has the capacity to offer much faster speeds. As of the end of 2008, 4.1 million homes in the United States had fiber service, which puts the United States right behind Japan, which has brought fiber directly to 8.2 million homes, according to the Fiber to the Home Council. Much of what is called fiber broadband in Korea, Sweden and until recently Japan, only brings the fiber to the basement of apartment buildings or street-corner switch boxes.

Actual download speeds are more important than raw signaling rates: the United States averages 5.2 Mbps, Japan 16.7 Mbps, Sweden 8.8 Mbps, and Korea 7.2 Mbps. There’s a gap all right, but it’s not nearly as large as we’ve been led to believe.

In fact, the gap is entirely consistent with population density and the extent of government subsidies.

Spectrum 2.0 panel from eComm

Courtesy of James Duncan Davidson, here’s a snap from the Spectrum 2.0 panel at eComm09.

Maura Corbett, Rick Whitt, Peter Ecclesine, Darrin Mylet, and Richard Bennett at eComm

The general discussion covered the lessons learned from light licensing of wireless spectrum in the US – the success of Wi-Fi and the failure of UWB – and what we can realistically hope to gain from the White Spaces licensing regime. As a person with a foot in both camps – technical and regulatory – I found it an interesting exercise in the contrast between the ways engineers and policy people deal with these issues. In general, hard-core RF engineer Peter Ecclesine and I were the most pessimistic about White Space futures, while the policy folks still see the FCC’s Report and Order as a victory.

In lobbying, you frequently run into circumstances where the bill you’re trying to pass becomes so heavily encumbered with amendments that it’s not worth passing. Rather than get your policy vehicle adopted in a crippled form, it’s better in such circumstances to take it off the table and work with the decision-makers to revive it in a future session without the shackles. While this is a judgment call – sometimes you go ahead and take the victory hoping to fix it later – it’s dangerous to pass crippled bills in a tit-for-tat system because you’re conceding a win in the next round to the other side.

I suggested that the FCC’s order was so badly flawed that the best thing for White Space Liberation would be to have the court void the order and the FCC to start over. This message wasn’t well-received by Rick Whitt, but I had the feeling Peter is on board with it.

The problem with the White Spaces is that the FCC couldn’t make up its mind whether these bands are best used for home networking or for a Third (or is it fourth or fifth?) pipe. The power limits (40 milliwatts to 1 watt) doom them to home networking use only, which simply leads to more fragmentation in the home net market and no additional WAN pipes. That’s not the outcome the champions of open networks wanted, but it’s what they got.

eComm, incidentally, is a terrific conference. The focus is very much on the applications people are developing for mobile phones, and it’s essential for people like me who build networks to see what people want to do with them, especially the things they can’t do very well today. Lee Dryburgh did a fantastic job of organization and selecting speakers, and is to be congratulated for putting on such a stellar meeting of the minds.

At long last, Genachowski

The long-awaited nomination of Julius Genachowski to the FCC chair finally came to pass yesterday, raising questions about the delay. If everybody with an interest in telecom and Internet regulation knew he was the choice months ago, why did the official announcement take so long? I have no inside information, so I’ll leave it to those who do to enlighten us on that question. Perhaps the Administration was just being extra-cautious after the debacles around a Commerce Secretary and others.

Neutralists are excited about the choice, naturally, as they view Genachowski as one of their own. And indeed, if network neutrality were actually a coherent policy and not just a rag-tag collection of Christmas wishes, they would have cause to be exhilarated. But given the range of restrictions that the movement seeks, it’s less than clear that any particular raft of regulations would satisfy them and leave broadband networks the ability to function, so we’ll see how this pans out. We’re already hearing rumblings from Boucher that there may not be any Congressional action on network neutrality this year in any case.

Genachowski brings an interesting (and potentially very dangerous) set of qualifications to the job. A college buddy of the President, he’s an inner circle member with the power to wield enormous influence. As a former FCC staffer, he’s imbued with the Agency’s culture, and as a former venture capitalist funding fluffy applications software, he’s something of a tech buff. But he resembles Kevin Martin in most of the important respects: he’s a Harvard lawyer who’s worked inside the regulatory system for most of his life, and he has strong alliances with an industry that seeks to exercise control over the nation’s network infrastructure for its own purposes. Whether those purposes resemble the public interest remains to be seen.

The largest problem with the FCC and similar agencies is the knowledge gap between regulators and the modern broadband networks that are the subject of their regulatory power. Martin didn’t have the training to appreciate the effect that his orders would have on the infrastructure, and neither does Genachowski. So the new Chairman is just as likely as the old chairman to make things worse while trying to make them better.

In a perfect world, the commissioners would be able to rely on the expert judgment of the Chief Technologist to stay out of trouble, but the current occupant of that job, Jon Peha, has a penchant for playing politics that renders him ineffective. The bizarre, quixotic inquiry the FCC made recently into the quality of service variations between Comcast’s voice service and over-the-top VoIP is an example. This isn’t a serious line of inquiry for a serious Commission, and Peha never should have let it happen. But it did, and that fact should remind us that the FCC is more a creature of politics than of technology.

I’ll see you at eComm

I’ll be speaking at the eComm2009: Emerging Communications Conference in San Francisco next week:

The world’s leading-edge telecom, Internet communications and mobile innovation event built to both showcase and accelerate innovation; and to explore radical new opportunities – together.

eComm deals with the Telco 2.0 world in which telephony is software and networks are multi-purpose and agile. A lot of great minds and influential movers and shakers will be converging in this space, including Martin Geddes, Doc Searls, Brough Turner, Brad Templeton, and Rick Whitt, so I highly recommend it.

Brough is moderating my panel, called Spectrum 2.0, on the move to release more unlicensed spectrum. I suspect we’ll touch on the 802.11y-style spectrum sharing etiquettes since Peter Ecclesine is on the panel, and the White Spaces issue since Whitt will be there as well.

Perils of Content Neutrality

Via Scott Cleland I see that Adam O’Donnell has written a nice piece on the side-effects of net neutrality regulation, Why I am against pure net neutrality:

While it may sound like treating all ISP traffic equally is a good idea, mandating strict net neutrality hurts computer security for all of us.

Adam was in the audience at last week’s MAAWG panel on net neutrality, and raised an interesting question about Random Early Discard. The moderator cut us off before we were able to address his point (he was anxious to catch a plane) but the question deserves a response.

RED is a method of packet discard that’s intended to avoid the problems inherent in a packet drop discipline that simply uses tail-drop to prevent buffer overflow in routers. The tail drop mechanism tends to cause cycles in packet delivery rates:

1. A buffer overflows, and a whole set of transmitters throttles back.
2. Link utilization drops to 50%.
3. The transmitters as a group increase rate together, until buffer overflow occurs again.
4. Repeat.

The net result of this cycling behavior is that congested links have their effective capacity reduced to about 70% of link speed. RED is an attempt to reduce transmission rates more selectively in order to push the link toward the upper limit of capacity. RED algorithms have been under study since the late ’80s, and none is completely satisfactory. The IETF response was to draft an Internet Standard for something called ECN (Explicit Congestion Notification) that enables the network to signal end systems that congestion is building, but it remains undeployed due to Microsoft’s concerns about home router compatibility. The follow-on to ECN is Bob Briscoe’s Re-ECN, which I’ve written about on these pages and in The Register.
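To make the contrast concrete, here is a minimal sketch of the two drop decisions. It’s an illustration only, not any production router’s code; the thresholds, weight, and names are hypothetical choices, and real RED implementations add refinements (idle-time correction, marking via ECN instead of dropping) that are left out here.

```python
import random

# Illustrative parameters only; real routers tune these per queue.
QUEUE_LIMIT = 100     # buffer capacity in packets
MIN_THRESH = 20       # below this average depth, RED never drops
MAX_THRESH = 80       # above this average depth, RED always drops
MAX_DROP_PROB = 0.1   # RED drop probability as the average nears MAX_THRESH
WEIGHT = 0.002        # weight of the moving average of queue depth

def tail_drop_admit(queue_depth):
    """Classic tail drop: accept every packet until the buffer is full, then
    drop everything that arrives -- which makes the senders back off in
    lockstep and produces the cycle described above."""
    return queue_depth < QUEUE_LIMIT

class RedQueue:
    """Simplified RED: drop a growing fraction of arrivals as the *averaged*
    queue depth climbs, so senders back off at different times."""

    def __init__(self):
        self.avg_depth = 0.0

    def admit(self, queue_depth):
        # Track a smoothed queue depth so short bursts aren't punished.
        self.avg_depth = (1 - WEIGHT) * self.avg_depth + WEIGHT * queue_depth
        if self.avg_depth < MIN_THRESH:
            return True      # light load: never drop
        if self.avg_depth >= MAX_THRESH:
            return False     # sustained congestion: always drop
        # In between, drop with a probability that rises linearly, which
        # desynchronizes the senders instead of throttling them all at once.
        drop_prob = MAX_DROP_PROB * (self.avg_depth - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        return random.random() >= drop_prob
```

The judgment calls are all in the parameters: set the thresholds or the averaging weight badly and RED behaves either like tail drop or like a link that is artificially short of capacity, which is one reason the research has never settled on a fully satisfactory variant.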

The bottom line is that Internet congestion protocols are an area that needs a lot of additional work, which the proposed Net Neutrality laws would hamper or prevent.

Van Jacobson realizes this, per the remarks he makes in an interview in the ACM Queue magazine this month:

Also, we use buffer memory in such a way that it’s valuable only if it’s empty, because otherwise it doesn’t serve as a buffer. What we do is try to forget what we learned as soon as we possibly can; we have to do that to make our buffer memory empty.

For the Olympics (not the most recent, but the previous one), we got some data from the ISP downstream of NBC. That router was completely congested; it was falling over, dropping packets like crazy. If you looked inside its buffers, it had 4,000 copies of exactly the same data, but you couldn’t tell that it was the same because it was 4,000 different connections. It was a horrible waste of memory, because the conversations were all different but what they were about was the same. You should be able to use that memory so you don’t forget until you absolutely have to—that is, go to an LRU (least recently used) rather than MRU (most recently used) replacement policy. It’s the same memory; you just change the way you replace things in it, and then you’re able to use the content.

It wouldn’t be necessary for carriers to put disks in routers. They could just start using the existing buffer memory in a more efficient way, and any time the data was requested more than once, they would see a bandwidth reduction.
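The bookkeeping change Jacobson describes can be sketched briefly. This is my own toy illustration of content-addressed buffering with LRU replacement, not his code or any vendor’s; the class name, hash choice, and capacity are hypothetical.

```python
import hashlib
from collections import OrderedDict

class ContentBuffer:
    """Toy buffer keyed by a hash of the payload rather than by connection,
    evicted least-recently-used: 4,000 connections carrying the same bytes
    occupy one slot instead of 4,000."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.store = OrderedDict()          # payload hash -> payload, LRU order

    def put(self, payload):
        key = hashlib.sha256(payload).hexdigest()
        if key in self.store:
            self.store.move_to_end(key)     # duplicate payload: just refresh it
            return key
        self.store[key] = payload
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry
        return key

    def get(self, key):
        payload = self.store.get(key)
        if payload is not None:
            self.store.move_to_end(key)     # cache hit: no upstream refetch needed
        return payload
```

Nothing exotic is required: the memory is the same, and only the replacement policy and the index change, which is exactly Jacobson’s point.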

Strict neutralism would prevent this system from being implemented: it involves Deep Packet Inspection, and the fanatics have warned us that DPI is a great evil. So we’re faced with this choice: networks that are cheap and efficient, or networks that are bloated with silly ideology. Take your pick; you only get one.

Neutralism is the new S-word

Scott Cleland has done some interesting quote-diving from the works of the neutralists, offering up his findings in a new study. Without using the term “socialism,” Scott provides the evidence that this is largely an anti-capitalist movement. The fact that a few highly-profitable capitalist enterprises have found a way to manipulate a rather traditional form of digital utopian socialism for their own ends is the real news, however.

Anyhow, enjoy Scott’s paper and think about the notion of a “digital kibbutz” while you’re doing it. Now that we live in a time when government owns the banking system, “socialism” isn’t automatically a bad word in all contexts, but we do have to understand that government-managed systems call for different expectations than privately-managed ones. It’s not as obvious to me as it is to the neutralists that government is more likely to give us universal high-speed connectivity than business is.

UPDATE: See comments for a critique of Scott’s analysis by Brett Glass.