The Fiber Formula

In part three of Saul Hansell’s series on broadband in the Rest of the World, we learn that taxpayers in the fiber havens are doing all the heavy lifting:

But the biggest question is whether the country needs to actually provide subsidies or tax breaks to the telephone and cable companies to increase the speeds of their existing broadband service, other than in rural areas. Many people served by Verizon and Comcast are likely to have the option to get super-fast service very soon. But people whose cable and phone companies are in more financial trouble, such as Qwest Communications and Charter Communications, may well be in the slow lane to fast surfing. Still, it’s a good bet that all the cable companies will eventually get around to upgrading to the faster Docsis 3 standard and the phone companies will be forced to upgrade their networks to compete.

The lesson from the rest of the world is that if the Obama administration really wants to bring very-high-speed Internet access to most people faster than the leisurely pace of the market, it will most likely have to bring out the taxpayers’ checkbook.

None of this should come as a surprise to our regular readers. Businesses invest in fiber infrastructure on a 20-year basis, and government subsidies can compress that investment timeline to a tenth of its length. And Hansell finds that a lot of the foreign spending is driven by nationalist pride rather than more prudent considerations. The problem I have with massive government spending on ultra-high-speed fiber projects is the conflict of priorities: I like fast networks, but I know that my tastes and interests aren’t universal. And then there’s the question of utility: mobile networks aren’t as fast as locked-down fiber, but they’re an order of magnitude more useful.
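
To put rough numbers on that arithmetic (the dollar figures below are my own illustrative assumptions, not figures from Hansell’s piece), a build that takes twenty years of subscriber margin to pay back takes only two when taxpayers cover ninety percent of the capital cost:

```python
# Back-of-the-envelope payback math with invented numbers: assume a fiber
# build costs $1,500 per home and returns $75/year of net margin per subscriber.
CAPEX_PER_HOME = 1500.0   # dollars, assumed
ANNUAL_MARGIN = 75.0      # dollars per year, assumed

def payback_years(capex, margin, subsidy_fraction=0.0):
    """Years of margin needed to recover the carrier's share of the build cost."""
    return capex * (1 - subsidy_fraction) / margin

print(payback_years(CAPEX_PER_HOME, ANNUAL_MARGIN))       # 20.0 years, unsubsidized
print(payback_years(CAPEX_PER_HOME, ANNUAL_MARGIN, 0.9))  # 2.0 years with a 90% subsidy
```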

So why don’t we strive to make the US number one in wireless, and leave the fiber race to the smaller nations? The long-term benefits of pervasive, high-speed wireless are much greater than those of heavily subsidized (and therefore heavily regulated) stationary networks.


Explaining the Price Gap

This is old news to those of you who read the other sources of broadband politics news on the new-fangled world wide computernet, but the esteemed Saul Hansell (a sometime reader of this blog) has released the second part of his analysis of American broadband, addressing the pricing issue. Broadband is cheaper in other countries due to subsidies and differences in demographics, but also because of unbundling, the practice of requiring incumbent carriers to sell competitors wholesale access to their lines:

Unbundling can be seen as a slightly disguised form of price regulation. Profits dropped. Many of the new entrants have found it difficult to build sustainable businesses, while margins for the incumbent phone companies have been squeezed as well.

It’s not exactly clear, however, that this approach is in the public’s long-term interest. Phone companies have less incentive to invest and upgrade their networks if they are going to be forced to share their networks.

Some argue that this is the main reason that there is little investment in bringing fiber to homes in Europe. “Investing in fiber is a huge risk,” Kalyan Dasgupta, a London-based consultant with LECG, wrote me in an e-mail, “and the prospect of taking that risk alone, but having to ‘share’ the rewards with other players, is not a prospect that most rational businesses would consider.”

Britain, which has been the biggest proponent of line sharing, has decided to deregulate the wholesale price BT can charge for fiber, so long as it doesn’t favor its own brand of Internet service.

Like any form of price control, unbundling produces short-term gains in access diversity at the expense of long-term investment. Adopting this approach ultimately requires the government to bear the cost of infrastructure improvements, as it ceases to be a rational use of investor dollars to build out enhancements that don’t produce substantial returns in a non-monopoly market. Many of the folks seeking net neutrality regard broadband as a utility, and this becomes a self-fulfilling prophecy: if we treat it that way, that’s what it becomes.
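
A toy net-present-value comparison makes the point; the figures here are invented for illustration, not drawn from any carrier’s books:

```python
# NPV of a per-home fiber build over 20 years at an 8% discount rate,
# comparing a builder who keeps the retail margin with one forced to
# resell at a regulated wholesale rate. All numbers are illustrative.
def npv(cash_flows, rate=0.08):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

CAPEX = -1500.0            # per-home build cost, assumed
RETAIL_MARGIN = 180.0      # yearly margin at retail prices, assumed
WHOLESALE_MARGIN = 90.0    # yearly margin at a capped wholesale rate, assumed
YEARS = 20

print(npv([CAPEX] + [RETAIL_MARGIN] * YEARS))     # ~ +267: the build pays for itself
print(npv([CAPEX] + [WHOLESALE_MARGIN] * YEARS))  # ~ -616: rational investors walk away
```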

Just as our electric utility networks include less-efficient generating plants that belch excessive amounts of CO2 into the air because regulators won’t approve rate hikes to cover replacement costs, so too will price-capping broadband stifle innovation in transport networks.


Debunking the Broadband Gap

Today we learn, via Saul Hansell at Bits Blog, that the US isn’t as far behind the Rest of the World with broadband as was previously thought:

Even without any change in government policies, Internet speeds in the United States are getting faster. Verizon is wiring half its territory with its FiOS service, which strings fiber optic cable to people’s homes. FiOS now offers 50 Mbps service and has the capacity to offer much faster speeds. As of the end of 2008, 4.1 million homes in the United States had fiber service, which puts the United States right behind Japan, which has brought fiber directly to 8.2 million homes, according to the Fiber to the Home Council. Much of what is called fiber broadband in Korea, Sweden and until recently Japan, only brings the fiber to the basement of apartment buildings or street-corner switch boxes.

Actual download speeds matter more than raw signaling rates: the United States averages 5.2 Mbps, Japan 16.7 Mbps, Sweden 8.8 Mbps, and Korea 7.2 Mbps. There’s a gap all right, but it’s not nearly as large as we’ve been led to believe.

In fact, the gap is entirely consistent with differences in population density and the extent of government subsidies.
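
For a sense of what those averages mean in practice, here’s how long a movie-sized download takes at each country’s quoted average speed (the speeds are from the figures above; the 700 MB file size is just an illustration):

```python
# Download time for a 700 MB file at the average speeds quoted above.
SPEEDS_MBPS = {"United States": 5.2, "Japan": 16.7, "Sweden": 8.8, "Korea": 7.2}
FILE_MEGABITS = 700 * 8  # 700 megabytes expressed in megabits

for country, mbps in SPEEDS_MBPS.items():
    minutes = FILE_MEGABITS / mbps / 60
    print(f"{country}: about {minutes:.0f} minutes")
```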

Spectrum 2.0 panel from eComm

Courtesy of James Duncan Davidson, here’s a snap from the Spectrum 2.0 panel at eComm09.

Maura Corbett, Rick Whitt, Peter Ecclesine, Darrin Mylet, and Richard Bennett at eComm

The general discussion was about the lessons learned from light licensing of wireless spectrum in the US, the success of Wi-Fi and the failure of UWB, and what we can realistically hope to gain from the White Spaces licensing regime. As a person with a foot in both camps – technical and regulatory – I found it an interesting exercise in contrasting the ways engineers and policy people deal with these issues. In general, hard-core RF engineer Peter Ecclesine and I were the most pessimistic about White Space futures, while the policy folks still see the FCC’s Report and Order as a victory.

In lobbying, you frequently run into circumstances where the bill you’re trying to pass becomes so heavily encumbered with amendments that it’s not worth passing. Rather than get your policy vehicle adopted in a crippled form, it’s better in such circumstances to take it off the table and work with the decision-makers to revive it in a future session without the shackles. While this is a judgment call – sometimes you go ahead and take the victory hoping to fix it later – it’s dangerous to pass crippled bills in a tit-for-tat system because you’re conceding a win in the next round to the other side.

I suggested that the FCC’s order was so badly flawed that the best thing for White Space Liberation would be for the court to void the order and the FCC to start over. This message wasn’t well received by Rick Whitt, but I had the feeling Peter was on board with it.

The problem with the White Spaces is that the FCC couldn’t make up its mind whether these bands are best used for home networking or for a third (or is it fourth or fifth?) pipe. The power limits (40 milliwatts to 1 watt) doom it to home networking use only, which simply leads to more fragmentation in the home net market and no additional WAN pipes. That’s not the outcome the champions of open networks wanted, but it’s what they got.
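
A rough link-budget sketch shows what those power limits buy. The receiver sensitivity and antenna gains below are my own assumptions, and the free-space model flatters real obstructed, indoor performance badly, but the relative picture holds: moving from 40 milliwatts to 1 watt only buys about five times the range.

```python
import math

def free_space_range_m(tx_power_dbm, rx_sensitivity_dbm, freq_hz):
    """Largest distance at which free-space path loss still closes the link.

    Friis free-space model only: 0 dBi antennas, no walls, no fading,
    so real-world ranges are far shorter than these figures.
    """
    max_path_loss_db = tx_power_dbm - rx_sensitivity_dbm
    # FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55, with d in meters and f in Hz
    log_d = (max_path_loss_db - 20 * math.log10(freq_hz) + 147.55) / 20
    return 10 ** log_d

FREQ_HZ = 600e6            # a mid-band TV white-space channel, roughly
SENSITIVITY_DBM = -85.0    # assumed receiver sensitivity

for label, tx_dbm in [("40 mW", 10 * math.log10(40)), ("1 W", 30.0)]:
    km = free_space_range_m(tx_dbm, SENSITIVITY_DBM, FREQ_HZ) / 1000
    print(f"{label}: ~{km:.0f} km line-of-sight in free space")
```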

eComm, incidentally, is a terrific conference. The focus is very much on the applications people are developing for mobile phones, and it’s essential for people like me who build networks to see what people want to do with them, especially the things they can’t do very well today. Lee Dryburgh did a fantastic job of organization and selecting speakers, and is to be congratulated for putting on such a stellar meeting of the minds.

At long last, Genachowski

The long-awaited nomination of Julius Genachowski to the FCC chair finally came to pass yesterday, raising questions about the delay. If everybody with an interest in telecom and Internet regulation knew he was the choice months ago, why did the official announcement take so long? I have no inside information, so I’ll leave it to those who do to enlighten us on that question. Perhaps the Administration was just being extra cautious after the debacles around its Commerce Secretary nominations and other appointments.

Neutralists are excited about the choice, naturally, as they view Genachowski as one of their own. And indeed, if network neutrality were actually a coherent policy and not just a rag-tag collection of Christmas wishes, they would have cause to be exhilarated. But given the range of restrictions the movement seeks, it’s not at all clear that any particular raft of regulations would satisfy them while leaving broadband networks able to function, so we’ll see how this pans out. We’re already hearing rumblings from Boucher that there may not be any Congressional action on network neutrality this year in any case.

Genachowski brings an interesting (and potentially very dangerous) set of qualifications to the job. A college buddy of the President, he’s an inner-circle member with the power to wield enormous influence. As a former FCC staffer, he’s imbued with the Agency’s culture, and as a former venture capitalist funding fluffy applications software, he’s something of a tech buff. But he resembles Kevin Martin in most of the important respects: he’s a Harvard lawyer who has worked inside the regulatory system for most of his life, and he has strong ties to an industry that seeks to exercise control over the nation’s network infrastructure for its own purposes. Whether those purposes resemble the public interest remains to be seen.

The largest problem with the FCC and similar agencies is the knowledge gap between regulators and the modern broadband networks they regulate. Martin didn’t have the training to appreciate the effect his orders would have on the infrastructure, and neither does Genachowski. So the new Chairman is just as likely as the old one to make things worse while trying to make them better.

In a perfect world, the commissioners would be able to rely on the expert judgment of the Chief Technologist to stay out of trouble, but the current occupant of that job, Jon Peha, has a penchant for playing politics that renders him ineffective. The bizarre, quixotic inquiry the FCC made recently into the quality-of-service variations between Comcast’s voice service and over-the-top VoIP is an example. This isn’t a serious line of inquiry for a serious Commission, and Peha never should have let it happen. But it did, and that fact should remind us that the FCC is more a creature of politics than of technology.