I’ll see you at eComm

I’ll be speaking at the eComm2009: Emerging Communications Conference in San Francisco next week:

The world’s leading-edge telecom, Internet communications and mobile innovation event built to both showcase and accelerate innovation; and to explore radical new opportunities – together.

eComm deals with the Telco 2.0 world in which telephony is software and networks are multi-purpose and agile. A lot of great minds and influential movers and shakers will be converging in this space, including Martin Geddes, Doc Searls, Brough Turner, Brad Templeton, and Rick Whitt, so I highly recommend it.

Brough is moderating my panel, called Spectrum 2.0, on the move to release more unlicensed spectrum. I suspect we’ll touch on the 802.11y-style spectrum sharing etiquettes since Peter Ecclesine is on the panel, and the White Spaces issue since Whitt will be there as well.

Perils of Content Neutrality

Via Scott Cleland I see that Adam O’Donnell has written a nice piece on the side-effects of net neutrality regulation, Why I am against pure net neutrality:

While it may sound like treating all ISP traffic equally is a good idea, mandating strict net neutrality hurts computer security for all of us.

Adam was in the audience at last week’s MAAWG panel on net neutrality and raised an interesting question about Random Early Discard. The moderator cut us off before we were able to address his point (he was anxious to catch a plane), but the question deserves a response.

RED is a packet discard method intended to avoid the problems inherent in simple tail-drop, in which a router discards arriving packets only when its buffer is already full. The tail-drop mechanism tends to cause cycles in packet delivery rates:

1. A buffer overflows, and a whole set of transmitters throttles back.
2. Link utilization drops to 50%.
3. The transmitters as a group increase rate together, until buffer overflow occurs again.
4. Repeat.

The net result of this cycling behavior is that congested links have their effective capacity reduced to about 70% of link speed. RED is an attempt to reduce transmission rates more selectively in order to push the link toward the upper limit of its capacity. RED algorithms have been under study since the late ’80s, and none is completely satisfactory. The IETF’s response was to draft an Internet Standard for ECN (Explicit Congestion Notification), which enables the network to signal end systems that congestion is building, but it remains undeployed due to Microsoft’s concerns about home router compatibility. The follow-on to ECN is Bob Briscoe’s Re-ECN, which I’ve written about on these pages and in The Register.
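For readers who haven’t bumped into it, the core of the RED idea fits in a few lines. Here is a minimal sketch in Python with purely illustrative thresholds and weights; real router implementations add refinements such as counting packets since the last drop, so treat this as the general technique rather than any particular vendor’s algorithm:

```python
import random

# Minimal sketch of the RED idea (illustrative parameters, not a production
# algorithm): track an exponentially weighted moving average of queue depth
# and start dropping packets probabilistically *before* the buffer fills,
# so senders back off at different times instead of all at once.

MIN_TH, MAX_TH, MAX_P, WEIGHT, CAPACITY = 5, 15, 0.1, 0.002, 20

class RedQueue:
    def __init__(self):
        self.queue = []
        self.avg = 0.0                     # EWMA of queue depth

    def enqueue(self, packet):
        self.avg = (1 - WEIGHT) * self.avg + WEIGHT * len(self.queue)

        if self.avg < MIN_TH and len(self.queue) < CAPACITY:
            self.queue.append(packet)      # light load: always accept
            return True
        if self.avg >= MAX_TH or len(self.queue) >= CAPACITY:
            return False                   # heavy load: drop (tail-drop fallback)

        # In between, drop probability rises linearly toward MAX_P, which
        # desynchronizes the senders' backoff and avoids the utilization dip.
        p_drop = MAX_P * (self.avg - MIN_TH) / (MAX_TH - MIN_TH)
        if random.random() < p_drop:
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```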

The bottom line is that Internet congestion protocols are an area that needs a lot of additional work, which the proposed Net Neutrality laws would hamper or prevent.

Van Jacobson realizes this, per the remarks he makes in an interview in the ACM Queue magazine this month:

Also, we use buffer memory in such a way that it’s valuable only if it’s empty, because otherwise it doesn’t serve as a buffer. What we do is try to forget what we learned as soon as we possibly can; we have to do that to make our buffer memory empty.

For the Olympics (not the most recent, but the previous one), we got some data from the ISP downstream of NBC. That router was completely congested; it was falling over, dropping packets like crazy. If you looked inside its buffers, it had 4,000 copies of exactly the same data, but you couldn’t tell that it was the same because it was 4,000 different connections. It was a horrible waste of memory, because the conversations were all different but what they were about was the same. You should be able to use that memory so you don’t forget until you absolutely have to—that is, go to an LRU (least recently used) rather than MRU (most recently used) replacement policy. It’s the same memory; you just change the way you replace things in it, and then you’re able to use the content.

It wouldn’t be necessary for carriers to put disks in routers. They could just start using the existing buffer memory in a more efficient way, and any time the data was requested more than once, they would see a bandwidth reduction.
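To make Jacobson’s LRU point concrete, here is a toy sketch in Python of buffer memory indexed by content hash rather than by connection, with least-recently-used replacement. It illustrates the idea he describes, not any shipping router; the class and its parameters are inventions for this example:

```python
import hashlib
from collections import OrderedDict

# Toy illustration: if buffer memory is keyed by *content* and replaced LRU
# instead of being forgotten as soon as a packet is forwarded, a second
# request for the same data can be answered from memory rather than stored
# as yet another of those 4,000 identical copies.

class ContentCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()            # content hash -> payload, oldest first

    def put(self, payload: bytes) -> str:
        key = hashlib.sha256(payload).hexdigest()
        if key in self.store:
            self.store.move_to_end(key)       # already cached: just refresh recency
            return key
        while self.used + len(payload) > self.capacity and self.store:
            _, evicted = self.store.popitem(last=False)   # evict least recently used
            self.used -= len(evicted)
        self.store[key] = payload
        self.used += len(payload)
        return key

    def get(self, key: str):
        payload = self.store.get(key)
        if payload is not None:
            self.store.move_to_end(key)       # a hit refreshes recency
        return payload
```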

Strict neutralism would prevent this system from being implemented: it involves Deep Packet Inspection, and the fanatics have warned us that DPI is a great evil. So we’re faced with this choice: networks that are cheap and efficient, or networks that are bloated with silly ideology. Take your pick; you only get one.

Neutralism is the new S-word

Scott Cleland has done some interesting quote-diving from the works of the neutralists, offering up his findings in a new study. Without using the term “socialism,” Scott provides the evidence that this is largely an anti-capitalist movement. The real news, however, is that a few highly profitable capitalist enterprises have found a way to manipulate a rather traditional form of digital utopian socialism for their own ends.

Anyhow, enjoy Scott’s paper and think about the notion of a “digital kibbutz” while you’re doing it. Now that we live in a time when government owns the banking system, “socialism” isn’t automatically a bad word in every context, but we do have to apply different expectations to government-managed systems than to privately-managed ones. It’s not as obvious to me as it is to the neutralists that government is more likely than business to give us universal high-speed connectivity.

UPDATE: See comments for a critique of Scott’s analysis by Brett Glass.

DTV Transition Starts, World Doesn’t End

Contrary to the expectations of Congress and the FCC, the first phase of the DTV transition took place without major incident. Some 23% of American TV stations stopped sending out analog signals Tuesday at midnight, and only 28,000 calls came into the centers the FCC and the cable and satellite providers have established for transition help. The biggest category, close to half of all calls, was from people unable to pick up the digital broadcasts at all, or picking them up only with very poor quality. A significant number didn’t know how to set up their converter boxes, or didn’t realize that the boxes have to scan for channels.

These numbers support a suspicion I’ve had for a while now: that the emphasis on converter boxes is misplaced. The problem most people are going to have is an inability to receive digital broadcasts at all, because they don’t have the right kind of antenna, the antenna isn’t oriented properly, or they live in the wrong place. Many stations are moving transmitter locations to alter service areas, and won’t be serving some traditional customers anymore. Others are reducing power, sometimes quite substantially. Digital broadcasts are more robust, so some reduction in power is quite sensible. But I suspect that over-the-air delivery of TV is such a small percentage of the overall market (well below 20%, and in some areas less than 10%) that it doesn’t make financial sense for stations to invest heavily in high-power transmitters.

The timing of the transition was very bad for this reason. A substantial number of OTA TV viewers are going to need upgrades to roof-mounted antennas, and in many cases they’re going to need multiple antennas pointing in different directions. Getting up on a roof in February is not a pleasant experience in much of America, so a May or June transition date would have been much more sensible. In any event, it’s a good time to buy stock in antenna companies.

I’ve been doing some experiments with roof-mounted antennas that I’ll be reporting on shortly. So far, I can only get five stations where I live, and four of them broadcast in Spanish. Perhaps the FCC needs a budget for bilingual education as well as for converter boxes and antennas.

Nice Outings

My talk at the Messaging Anti-Abuse Working Group went very well. It was a huge room, seating probably 500 or so, and over half-full. I talked about how some of the crazier ideas about net neutrality are potentially becoming mainstream thanks to the politics in the nation’s capital and some of the personnel choices made by the Obama Administration. The selection of Susan Crawford for the FCC Transition Team is a cause for alarm: Susan is as nice a person as you’ll ever want to meet, and quite bright and well-intentioned, but her position that ISPs and carriers have no business actively managing packets is poison. I got a healthy round of applause, and several people thanked me for my remarks afterwards. Very few people know how dependent e-mail is on the DNS Blacklists that members of this organization maintain, and that’s a real shame.
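Since DNS Blacklists came up, here is a hedged sketch of the mechanics for readers who haven’t seen them: a DNSBL check is just an ordinary DNS query, with the sender’s IPv4 address reversed and prepended to a blacklist zone. The zone name below is a placeholder for illustration, not any particular member’s list:

```python
import socket

# Sketch of a DNSBL lookup: reverse the sender's IPv4 octets, append the
# blacklist zone, and resolve the name. Any answer (conventionally in
# 127.0.0.0/8) means the address is listed; NXDOMAIN means it is not.

DNSBL_ZONE = "dnsbl.example.org"   # placeholder zone for illustration

def is_listed(sender_ip: str, zone: str = DNSBL_ZONE) -> bool:
    reversed_ip = ".".join(reversed(sender_ip.split(".")))
    query = f"{reversed_ip}.{zone}"
    try:
        socket.gethostbyname(query)   # an A record comes back: listed
        return True
    except socket.gaierror:
        return False                  # no record: not on this blacklist

# A mail server might call is_listed("192.0.2.10") before accepting a message.
```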

Last night I took the short trip up to Mountain View to see Jeff Jarvis’s talk about his book What Would Google Do? The audience, about 25 people, was a lot less impressed with Google than Jeff is, and it occurred to me that Google really is vulnerable on the search front. I can imagine a much more effective search methodology than the one Google employs, but getting the venture capital to build a rival infrastructure isn’t going to happen.

I told Jeff (an old friend of the blog who’s driven a lot of traffic this way over the years) that what he likes about Google isn’t Google so much as the inherent qualities of the Internet. He more or less knows that, but the packaging of open networks, distributed computing, and free expression is easier when you concretize it, and that’s what his book does. I read it as a sequel to Cluetrain.

Speaking at MAAWG in Frisco tomorrow

I’m on a panel tomorrow at the General Meeting of the Messaging Anti-Abuse Working Group, the organization that keeps the Internet from being overrun by spam and malware:

The Messaging Anti-Abuse Working Group is a global organization focusing on preserving electronic messaging from online exploits and abuse with the goal of enhancing user trust and confidence, while ensuring the deliverability of legitimate messages. With a broad base of Internet Service Providers (ISPs) and network operators representing almost one billion mailboxes, key technology providers and senders, MAAWG works to address messaging abuse by focusing on technology, industry collaboration and public policy initiatives

My panel is on Mail Filtering Transparency: The Impact of Network Neutrality on Combating Abuse:

Network Neutrality (NN) means different things to different people. In 2008, much of the debate was focused on protecting P2P applications from various network management practices. In 2009, the debate is likely to expand to explore the impact of NN concepts on other applications, particularly email. We have already seen the strong reaction by some parties at the IETF to attempts to standardize DNS xBLs, which some claimed were discriminatory and lacking in transparency. We have also heard of claims that when ISPs block certain domains and servers that this may be discriminatory and could run afoul of NN concepts. This panel will explore the question of what NN means to email anti‐abuse, the increasing scrutiny that anti‐abuse policies will be under, the motivations behind the drive for greater transparency regarding such policies, and how all of those things should be balanced against the need to enforce strong anti‐abuse techniques.

Dave Crocker is on the panel, and I’m looking forward to meeting him; I have it on good authority that Paul Vixie will be in attendance as well. The best thing about being an opinionated jerk like I am is the people you get to meet.

This organization is at the crossroads of “run any application you want” and “reasonable network management.” Spam prevention has always been a lightning rod because the very existence of spam highlights so many of the problems the current Internet architecture has. Its central assumption is that people will behave nicely all (or at least most) of the time, and the existence of botnets clearly calls that into question. It probably comes as no surprise that the filtering that spam reduction systems have to do makes net neuts nervous. Stupid networks may be nice in theory, but we live in a world of practice.

Court protects the right to bluff

In a rare move, the DC Circuit has upheld an FCC decision:

The cable industry has won a big legal victory in the fiercely competitive phone services market. An appeals court has supported the Federal Communications Commission in its ruling that phone carriers—in this case Verizon—can’t try to lure back customers after they’ve initiated a service switch but before their number has been transferred.

The FCC rarely prevails in court, of course, so this may be a sign that we’re living in the End Times. But we can take some comfort from the fact that it wasn’t totally unpredictable, given that Kevin Martin was on the losing side.

The case involved Verizon’s efforts to win back customers when notified by the new carrier that they had to release the phone number. Verizon took this as an occasion to offer sweeter deals, which the court ruled an unlawful violation of the customer’s privacy, despite the fact that Google’s entire business is based on this kind of snooping.

It’s a win for consumers because it preserves the right to bluff. In today’s economy, consumers can frequently get better deals on subscription services merely by threatening to cancel, whether we’re serious or not. As it happens, I got lower prices from Sports Illustrated and Illy Coffee by calling up to cancel my subscriptions, and in both cases the savings were substantial. DirecTV refused to offer me a sweetener last year when I was tired of their crappy DVR, so they lost my TV business to Comcast. It’s not entirely clear to the business whether any of these threats are serious, of course, so it’s in their interest to err on the side of caution and offer the customer a better deal when they have the chance. Efforts to win back a customer who’s already made a switch have to be harder to pull off.

But the Verizon deal stacked the cards a little too far in the company’s favor, because it allowed Verizon to play hardball until it was absolutely clear that the customer wasn’t bluffing: Verizon only gets a switchover notice for phone service once you’ve made a deal with the new carrier and scheduled a hookup date.

With no such deal in place, we all retain the right to bluff, and the company is going to have to guess just like any other poker player. That’s a good deal for the consumer.

Digital Britain and Hokey Tools

It’s helpful to see how other countries deal with the typically over-excited accusations of our colleagues regarding ISP management practices. Case in point is the Digital Britain Interim Report from the UK’s Department for Culture, Media and Sport and Department for Business, Enterprise and Regulatory Reform, which says (p. 27):

Internet Service Providers can take action to manage the flow of data – the traffic – on their networks to retain levels of service to users or for other reasons. The concept of so-called ‘net neutrality’, requires those managing a network to refrain from taking action to manage traffic on that network. It also prevents giving to the delivery of any one service preference over the delivery of others. Net neutrality is sometimes cited by various parties in defence of internet freedom, innovation and consumer choice. The debate over possible legislation in pursuit of this goal has been stronger in the US than in the UK. Ofcom has in the past acknowledged the claims in the debate but have also acknowledged that ISPs might in future wish to offer guaranteed service levels to content providers in exchange for increased fees. In turn this could lead to differentiation of offers and promote investment in higher-speed access networks. Net neutrality regulation might prevent this sort of innovation.

Ofcom has stated that, provided consumers are properly informed, such new business models could be an important part of the investment case for Next Generation Access.

On the same basis, the Government has yet to see a case for legislation in favour of net neutrality. In consequence, unless Ofcom find network operators or ISPs to have Significant Market Power and justify intervention on competition grounds, traffic management will not be prevented.

(Ofcom is the UK’s FCC.) Net neutrality is, in essence, a movement driven by fears of hypothetical harm that might be visited upon the Internet given a highly unlikely set of circumstances. Given that 1.4 billion people use the Internet every day, and that the actual instances of harmful discrimination by ISPs can be counted on one hand (and pale in comparison to the harm caused by malicious software and deliberate bandwidth hogging in any case), Ofcom’s stance is the only one that makes any sense: keep an eye on things, and don’t act without provocation. This position would have kept us out of Iraq, BTW.

Yet we have lawmakers in the US drafting bills full of nebulous language and undefined terms aimed at stemming this invisible menace.

Are Americans that much less educated than Brits, or are we just stupid? In fact, we have a net neutrality movement in the US simply because we have some well-funded interests manipulating a gullible public and a system of government that responds to emotion.

A good example of these forces at work is the freshly released suite of network test tools hosted on some of Google’s servers. Measurement Lab checks how quickly interested users can reach Google’s complex in Mountain View, breaking the process down into hops. As far as I can tell, this is essentially a dolled-up version of the Unix “traceroute” that speculates about link congestion and takes a very long time to run.
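For reference, the generic technique behind tools like this is small. The sketch below shows the classic traceroute approach in Python, sending probes with an increasing TTL and timing the ICMP “time exceeded” replies from each hop; it needs root privileges for the raw ICMP socket, and it illustrates the general idea, not how Measurement Lab’s tools are actually built:

```python
import socket
import time

# Classic traceroute sketch: each probe carries a larger IP TTL, and the
# router where the TTL expires sends back an ICMP "time exceeded" message,
# revealing that hop's address and round-trip time. Requires root for the
# raw ICMP receive socket.

def trace(dest_name: str, max_hops: int = 30, port: int = 33434) -> None:
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        recv_sock.settimeout(2.0)
        recv_sock.bind(("", port))

        start = time.time()
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv_sock.recvfrom(512)          # reply from the hop router
            elapsed_ms = (time.time() - start) * 1000
            print(f"{ttl:2d}  {addr[0]:15s}  {elapsed_ms:.1f} ms")
            if addr[0] == dest_addr:
                break                                  # reached the destination
        except socket.timeout:
            print(f"{ttl:2d}  *")                      # hop didn't answer in time
        finally:
            send_sock.close()
            recv_sock.close()
```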

The speed, latency, and consistency of access to Google are certainly an important part of the Internet experience, but they’re hardly definitive regarding who’s doing what to whom. The tech press loves this sort of thing, though, because it’s just mysterious enough in its operation to invite speculation and sweeping enough in its conclusions to get users excited. It’s early days for Measurement Lab, but I don’t have high expectations for its validity.

Doubts about Broadband Stimulus

The New York Times has a front page story today on the broadband stimulus bill which features an extensive quote from Brett:

Critics like Mr. Glass say the legislation being developed in Congress is flawed in various ways that could mean much of the money is wasted, or potentially not spent at all — arguably just as bad an outcome given that the most immediate goal of the stimulus measure is to pump new spending into the economy.

An “open access” requirement in the bill might discourage some companies from applying for grants because any investments in broadband infrastructure could benefit competitors who would gain access to the network down the line.

Meeting minimum speed requirements set forth in the House version could force overly costly investments by essentially providing Cadillac service where an economy car would be just as useful. And some worry that government may pay for technology that will be obsolete even before the work is completed.

“Really the devil is in the details,” Mr. Glass said. “Yes, there is $9 billion worth of good that we can do, but the bill doesn’t target the funds toward those needs.”

The bill is still very rough. Some critics cite the bill’s preference for grants to large incumbents; others highlight the amorphous “open access” provisions and the arbitrary speed provisions as weaknesses. The only interest groups that appear altogether happy with it are Google’s boosters, such as Ben Scott of Free Press. This is a flip-flop for Free Press, which only last week was urging members to call Congress and ask that the bill be killed.

A particularly odd reaction comes from friend of the blog Jeff Jarvis, who took time out from pitching his love letter to Google, What Would Google Do?, to tear into the article’s sourcing:

I found myself irritated by today’s story in the New York Times that asks whether putting money from the bailout toward broadband would be a waste. The question was its own answer. So was the placement of the story atop page one. The reporter creates generic groups of experts to say what he wants to say (I know the trick; I used to be a reporter): “But experts warn…. Other critics say…. Other supporters said…”

I wish that every time he did that, the words “experts,” “critics,” and “supporters” were hyperlinked to a page that listed three of each.

It’s an obvious case of a story with an agenda: ‘I’m going to set out to poke a hole in this.’

The odd bit is that five people are named and quoted, and the terms “expert” and “critic” clearly refer to these named sources. It’s boring to repeat names over and over, so the writer simply uses these terms to avoid the tedium. It’s clear that Brett and Craig Settles are the critics and experts. Jeff seems not to have read the article carefully and simply goes off on his defensive tirade without any basis.

It’s a given in Google’s world that massive government subsidies for broadband are a good thing because they will inevitably lead to more searches, more ad sales, and more revenue for the Big G. But while that’s clearly the case, it doesn’t automatically follow that what’s good for Google is good for America, so it behooves our policy makers to ensure that the money is spent wisely, without too many gimmicks in favor of one technology over another or too many strings attached that don’t benefit the average citizen.

Raising questions about pending legislation and trying to improve it is as American as baseball, and the article in the Times is a step in the right direction. It may not be what Google would do, but it’s good journalism.

I want to make sure that the broadband money is spent efficiently, so I would bag the open access requirement (nobody knows what it means anyway) and give credit for all improvements in infrastructure that increase speed and reduce latency.

The bill needs to support all technologies that have utility in the Internet access space (wireless, coax, and fiber), but it should encourage the laying of new fiber where it’s appropriate, and high-speed wireless in less-populated areas. Eventually, homes and businesses are pretty much all going to have fiber at the doorstep, but that doesn’t need to happen overnight.