Nostalgia Blues

San Jose Mercury News columnist Troy Wolverton engaged in a bit of nostalgia in Friday’s paper. He pines for the Golden Age of dial-up Internet access, when Internet users had a plethora of choices:

A decade ago, when dial-up Internet access was the norm, you could choose from dozens of providers. With so many rivals, you could find Internet access at a reasonable price all by itself, without having to buy a bundle of other services with it.

There was competition because regulators forced the local phone giants to allow such services on their networks. But regulators backed away from open-access rules as the broadband era got under way. While local phone and cable companies could permit other companies to use their networks to offer competing services, regulators didn’t require them to do so and cable providers typically didn’t.

Wolverton’s chief complaint is that the DSL service he buys from Earthlink is slow and unreliable. He acknowledges that he could get cheaper service from AT&T and faster service from Comcast, but doesn’t choose to switch because he doesn’t want to “pay through the nose.”

The trouble with nostalgia is that the past never really was as rosy as we tend to remember it, and the present is rarely as bad as it appears through the lens of imagination. Let’s consider the facts.

Back in the dial-up days, there were no more than three first-class ISPs in the Bay Area: Best Internet, Netcom, and Rahul. They charged $25-30/month, on top of the $15-20 we also paid for a phone line dedicated to Internet access; we didn’t want our friends to get a busy signal when we were on-line. So we paid roughly $45/month to access the Internet at 40 Kb/s downstream and 14 Kb/s or so upstream.

Now that the nirvana of dial-up competition (read: several companies selling Twinkies and nobody selling steak) has ended, what can we get for $45/month? One choice in the Bay Area is Comcast, which will gladly provide you with a 15 Mb/s service for a bit less than $45 ($42.95 after the promotion ends) or a 20 Mb/s service for a bit more, $52.95. If this is “paying through the nose,” then what were we doing when we paid the same prices for 400 times less performance back in the Golden Age? And if you don’t want or need this much speed, you can get reasonable DSL-class service from a number of ISPs that’s 40 times faster than dial-up at roughly half the price.
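For the arithmetic-minded, here’s a rough back-of-the-envelope version of that comparison. It uses only the prices and speeds quoted above; the little script and the per-megabit figures it prints are purely illustrative, not a market survey.

```python
# Rough price/performance comparison using the figures cited in the post.
# All numbers are approximate and taken from the text above, not from a price survey.

dialup_monthly_cost = 45.0     # ISP fee plus a dedicated phone line, dial-up era
dialup_down_kbps = 40.0        # typical real-world dial-up downstream

cable_monthly_cost = 42.95     # Comcast 15 Mb/s tier quoted above
cable_down_kbps = 15_000.0     # 15 Mb/s expressed in Kb/s

speedup = cable_down_kbps / dialup_down_kbps
dialup_price_per_mbps = dialup_monthly_cost / (dialup_down_kbps / 1000)
cable_price_per_mbps = cable_monthly_cost / (cable_down_kbps / 1000)

print(f"Speed increase: roughly {speedup:.0f}x for about the same monthly bill")
print(f"Dial-up: ${dialup_price_per_mbps:,.2f} per Mb/s per month")
print(f"Cable:   ${cable_price_per_mbps:,.2f} per Mb/s per month")
```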

Wolverton’s column is making the rounds of the Internet mailing lists and blogs where broadband service is discussed, to mixed reviews. Selective memory fails to provide a sound basis for broadband policy, and that’s really all that Wolverton provides.


Second Hearing on Internet Privacy tomorrow

From House Energy and Commerce:

Energy and Commerce Subcommittee Hearing on “Behavioral Advertising: Industry Practices and Consumers’ Expectations”

June 16, 2009

The Subcommittee on Communications, Technology and the Internet and the Subcommittee on Commerce, Trade, and Consumer Protection will hold a joint hearing titled, “Behavioral Advertising: Industry Practices and Consumers’ Expectations” on Thursday, June 18, 2009, in 2123 Rayburn House Office Building. The hearing will examine the potential privacy implications of behavioral advertising.

INVITED WITNESSES:

* Jeffrey Chester, Executive Director, Center for Digital Democracy
* Scott Cleland, President, Precursor LLC
* Charles D. Curran, Executive Director, Network Advertising Initiative
* Christopher M. Kelly, Chief Privacy Officer, Facebook
* Edward W. Felten, Professor of Computer Science and Public Affairs, Princeton University
* Anne Toth, Vice President of Policy, Head of Privacy, Yahoo! Inc.
* Nicole Wong, Deputy General Counsel, Google Inc.

WHEN: 10:00 a.m. on Thursday, June 18

WHERE: 2123 Rayburn House Office Building



This is the second in a series of hearings on the subject of behavioral advertising. I’ll predict that the Democrats will praise Google, the Republicans will criticize them, and nobody will pay much notice to Yahoo.

I only know four of the seven personally; I need to get out more.

FCC Comments due in National Broadband Plan

See IEEE Spectrum for a few observations on the FCC’s request for comments on the National Broadband Plan:

Comments are due Monday, June 8, at the FCC on the National Broadband Plan (NBP). The Notice of Inquiry lists some 120 questions that the Commission would like filers to address, running the gamut from goals and benchmarks to open access to privacy to entrepreneurial activity to job creation. Anyone who compiles a list of so many questions clearly hasn’t given much thought to the problem under discussion, so it’s clear upon reading the NOI that we’re many years away from a good NBP, although we may have some vague and probably counter-productive guidelines much sooner: the FCC is supposed to report a plan to Congress by next February. Bear in mind that it took the US 20 years to convert from analog to digital TV, and we’re not even there yet.

There’s more.

What slows down your Wi-Fi?

The Register stumbled upon an eye-opening report commissioned by the UK telecom regulator, Ofcom, on sources of Wi-Fi interference in the UK:

What Mass discovered (pdf) is that while Wi-Fi users blame nearby networks for slowing down their connectivity, in reality the problem is people watching retransmitted TV in the bedroom while listening to their offspring sleeping, and there’s not a lot the regulator can do about it.

Outside central London that is: in the middle of The Smoke there really are too many networks, with resends, beacons and housekeeping filling 90 per cent of the data frames sent over Wi-Fi. This leaves only 10 per cent for users’ data. In fact, the study found that operating overheads for wireless Ethernet were much higher than anticipated, except in Bournemouth for some reason: down on the south coast 44 per cent of frames contain user data.

When 90% of the frames are overhead, the technology itself has a problem, and in this case it’s largely the high backward-compatibility burden Wi-Fi carries. Older versions of the protocol weren’t designed for obsolescence, so newer systems have to take expensive steps to ensure that older systems can see them; otherwise collisions happen, and that’s not good for anybody. Licensed spectrum can deal with the obsolescence problem by replacing older equipment; open spectrum has to bear the costs of compatibility forever. So this is one more example of the fact that “open” is not always better.
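To get a feel for how quickly housekeeping eats into the frame count, here’s a rough sketch of a mixed 802.11b/g neighborhood. The traffic mix, retry rate, and number of visible access points are invented for illustration, and the per-frame accounting (one protection frame and one ACK per data frame) is a simplification of how g-mode protection typically works; the Ofcom/Mass study measured real networks rather than modeling them.

```python
# Rough frame-count sketch of 802.11 overhead in a mixed b/g environment.
# Every input below is an assumption chosen for illustration, not a measurement.

beacons_per_second = 10        # one beacon per ~100 ms from each visible access point
visible_access_points = 8      # a moderately crowded block (assumption)
data_frames_per_second = 200   # user traffic on our own network (assumption)
retry_rate = 0.15              # fraction of data frames that must be resent (assumption)

# In 802.11g protection mode, each data frame typically costs a CTS-to-self
# (so legacy 802.11b stations defer) plus an ACK from the receiver.
protection_frames = data_frames_per_second
ack_frames = data_frames_per_second
# Resends are counted as overhead here, matching how the study framed its numbers.
retry_frames = data_frames_per_second * retry_rate
beacon_frames = beacons_per_second * visible_access_points

total_frames = (data_frames_per_second + protection_frames + ack_frames
                + retry_frames + beacon_frames)
user_share = data_frames_per_second / total_frames

print(f"Frames on the air per second: {total_frames:.0f}")
print(f"Share carrying user data:     {user_share:.0%}")
```

Even before counting collisions or the neighbors’ traffic, well under a third of the frames in this toy model carry user payload, which is the flavor of result the study reported.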

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Here’s the video of the panel I was on at the Congressional Internet Caucus Advisory Committee’s “State of the Mobile Net” conference in DC last Thursday. This was the closing panel of the conference, where all the loose ends were tied together. For those who don’t live and breathe Washington politics, I should do what moderator Blair Levin didn’t do and introduce the panel. Levin was the head of the TIGR task force for the Obama transition, the master group for the review of the regulatory agencies and the administration’s use of technology. Kevin Werbach is a professor at the Wharton School, and took part in the FCC review for the transition along with Susan Crawford. He runs the Supernova conference. Larry Irving was part of the review of NTIA for the transition, and is a former Assistant Secretary of Commerce. Ben Scott is the policy guy at Free Press, and Alex Hoehn-Saric is legal counsel to the Senate Committee on Commerce, Science, and Transportation.

Regulatory policy needs to be technically grounded, so I emphasized the tech side of things.

Google’s Sweet Book Deal

If you read books, you’ll want to know what Robert Darnton has to say about the pending Google book deal, in Google & the Future of Books – The New York Review of Books. Here’s a teaser:

As an unintended consequence, Google will enjoy what can only be called a monopoly—a monopoly of a new kind, not of railroads or steel but of access to information. Google has no serious competitors. Microsoft dropped its major program to digitize books several months ago, and other enterprises like the Open Knowledge Commons (formerly the Open Content Alliance) and the Internet Archive are minute and ineffective in comparison with Google. Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

A policy change of this magnitude should not be negotiated behind closed doors to the detriment of all purveyors of information but Google.

Time Warner Cable bides its time

Not surprisingly, Time Warner Cable has decided to put its consumption-based billing trials on hold:

Time Warner Cable Chief Executive Officer Glenn Britt said, “It is clear from the public response over the last two weeks that there is a great deal of misunderstanding about our plans to roll out additional tests on consumption based billing. As a result, we will not proceed with implementation of additional tests until further consultation with our customers and other interested parties, ensuring that community needs are being met. While we continue to believe that consumption based billing may be the best pricing plan for consumers, we want to do everything we can to inform our customers of our plans and have the benefit of their views as part of our testing process.”

Time Warner Cable also announced that it is working to make measurement tools available as quickly as possible. These tools will help customers understand how much bandwidth they consume and aid in the dialog going forward.

The public response was somewhat less public than it may appear, as most of it was ginned-up by a few activist bloggers and the interest groups that are generally in the middle of these things, such as Free Press’ “Save the Internet” blog. In this case, the Internet was saved from a plan that Free Press’ chairman Tim Wu had previously lauded for its fairness in allocating network resources:

“I don’t quite see [metering] as an outrage, and in fact is probably the fairest system going — though of course the psychology of knowing that you’re paying for bandwidth may change behavior,” said Tim Wu, a law professor at Columbia University and chairman of the board of public advocacy group Free Press.

Of course, the “psychology of knowing that you’re paying for bandwidth” is actually meant to change behavior.

Free Press is now crowing that the postponement of the trial signals a great victory for the Internet:

“We’re glad to see Time Warner Cable’s price-gouging scheme collapse in the face of consumer opposition. Let this be a lesson to other Internet service providers looking to head down a similar path. Consumers are not going to stand idly by as companies try to squeeze their use of the Internet.

The Freeps should have chosen their words a bit more carefully. The dilemma that TWC faces does indeed relate to “squeezing,” but that doesn’t actually originate exclusively (or even primarily) at the cable company’s end of the bargain. TWC’s consumption per user has been increasing roughly 40% per year, and there’s no reason to assume it will do anything but increase as more HDTV content becomes available on the web, people connect more devices, and video calling becomes more popular. TWC’s capital expenditures are 20% of income, and the company lost $7.3 billion in the course of spinning out from Time Warner, Inc. last year. Some of TWC’s critics have charged that its bandwidth is free (or nearly so), citing “high speed data costs of $146 million.” In reality, TWC pays six times that much for the interest on its capital expenditures alone ($923M).

Heavy users squeeze light users by leaving less bandwidth on the table, and the flat-rate pricing system squeezes them even more by making them pay a larger share of the costs of bandwidth upgrades than the heavy users who actually drive them. No fair-minded and rational person can look at the costs of operating a network and conclude that flat-rate pricing for a single Quality of Service level is the best we can do.
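A toy example makes the cross-subsidy concrete. Every number below is invented purely for illustration; none of them are TWC’s actual costs, prices, or usage figures.

```python
# Toy illustration of how flat-rate pricing shifts upgrade costs onto light users.
# All figures are invented for the example; they are not TWC's actual numbers.

users = {
    "light": {"count": 800, "gb_per_month": 5},
    "heavy": {"count": 200, "gb_per_month": 120},
}

monthly_upgrade_cost = 20_000.0   # capacity upgrades driven by total traffic (assumption)

total_users = sum(u["count"] for u in users.values())
total_gb = sum(u["count"] * u["gb_per_month"] for u in users.values())

flat_rate_share = monthly_upgrade_cost / total_users   # everyone pays the same
cost_per_gb = monthly_upgrade_cost / total_gb          # usage-based alternative

for name, u in users.items():
    metered_share = u["gb_per_month"] * cost_per_gb
    print(f"{name:>5} user: flat-rate share ${flat_rate_share:.2f}/month, "
          f"usage-based share ${metered_share:.2f}/month")
```

In the flat-rate column the light users pay well above the cost their traffic generates and the heavy users well below it; that gap is the squeeze.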

Continuous upgrades are a fact of life in the broadband business, and aligning their costs with the revenues carriers collect is one of the keys to creating an economically sustainable broadband ecosystem. We’ll take that up in another post.

UPDATE: Dig into the comments for some discussion of transit and peering prices.


See you in Washington

I’ve been asked to join a panel at the Congressional Internet Caucus’ short conference on the State of the Mobile Net on April 23rd. I’ll be on the last panel:

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Richard Bennett, [bio forthcoming]
Harold Feld, Public Knowledge [bio]
Alexander Hoehn-Saric, U.S. Senate Commerce Committee [bio]
Larry Irving, Internet Innovation Alliance [bio]
Blair Levin, Stifel Nicolaus [bio]
Ben Scott, Free Press [bio]
Kevin Werbach, Wharton School of Business [bio]

I suspect we’ll spend the bulk of our time on the interaction between regulatory agencies, standards bodies, and industry groups. The case studies are how the process worked for Wi-Fi, with the FCC opening up some junk spectrum, the IEEE 802.11 working group writing the rules, and the Wi-Fi Alliance developing compliance tests. In the UWB world, the model was a novel set of rules for high-quality spectrum, followed by the collapse of IEEE 802.15.3a and the subsequent attempt by the Wi-Media Alliance to save it. We probably will have UWB someday (wireless USB and Bluetooth 4.0 will both use it), but the failure of the standards body was a major impediment.

With White Spaces up for grabs, we’d like to have something that’s at least as good as 802.11, but we really need to do a lot better.

Another topic of interest is whether mobile Internet access services should be regulated the same way that wireline services are regulated, and how we go about drafting that set of rules. The current state of the art is the four or five prongs of the FCC’s Internet Policy Statement, but those principles leave a lot to the imagination on all of the interesting questions: network management, QoS-related billing, third-party payments, and the various forms of disclosure that may or may not be useful.

The Internet is troubled by the fact that it’s worked pretty damn well for the past 25 years, so there’s been no need to make major changes in its services model. It’s clear to me that some fairly disruptive upgrades are going to be needed in the near future, and we don’t want to postpone them by applying a legacy regulatory model to a network that’s not fully formed yet.

Life in the Fast Lane

No more dirt roads to the Internet for me. Comcast came out and hooked up a DOCSIS 3.0 modem (it’s a Motorola) this morning, speeding up my downloads to 53 Mb/s per Speedtest.net and jacking up the upload to a bit over 4 Mb/s. Both of these numbers are about double what I had before with the Blast! service that’s advertised at 16/2. I had the dude put the modem in the living room to get my router closer to the center of the house in order to improve my Wi-Fi coverage, which only took a splitter off the TiVo’s feed. The old modem remains installed for phone service, but its MAC address has been removed from the DHCP authorization list. It turns out the backup battery had been installed incorrectly in the old modem, so he fixed that. The only incident that turned up in the install was the discovery that my TiVo HD is feeding back a noticeable voltage from the cable connection, which can apparently cause bad things to happen to the DOCSIS connection. He installed a voltage blocker of some kind to keep that at bay, but I’ll have to complain to TiVo about that feature.

As I had to go to the office as soon as the installation was completed, I haven’t had time to play with my privileged fast lane service, but I did enough to notice a fairly dramatic difference even in ordinary activities like reading e-mail. I use an IMAP server on the host that handles bennett.com, and its location in Florida tends to make for sluggish response when deleting mail or simply scanning a folder. It’s so fast now it’s like a local service. (People who use the more popular POP3 e-mail protocol won’t notice this difference: when you delete an e-mail with POP3 you’re deleting a local copy, but mine lives on the server.)
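For readers who haven’t used IMAP, the difference is that the mailbox lives on the server and every operation is a network round trip, which is why connection speed shows up even when deleting a message. Here’s a minimal sketch using Python’s standard imaplib; the host name and credentials are placeholders, not my actual server.

```python
# Minimal IMAP sketch: flag a message deleted on the *server*, then expunge it.
# Host, account, and password below are placeholders for illustration only.
import imaplib

with imaplib.IMAP4_SSL("mail.example.com") as conn:
    conn.login("user@example.com", "app-password")
    conn.select("INBOX")                         # every command here is a round trip
    typ, data = conn.search(None, "ALL")         # message numbers live server-side
    if data[0]:
        last = data[0].split()[-1]               # most recent message in the folder
        conn.store(last, "+FLAGS", r"(\Deleted)")    # mark it deleted on the server
        conn.expunge()                               # remove it for every client
```

A POP3 client, by contrast, downloads messages and works on local copies, so the server’s distance matters far less once the mail has been fetched.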

So the main effect of this super-fat Internet pipe is to make network services and content as readily accessible as local services and content. Which is a very wonderful thing for a couple of reasons: accessing content and services from the various machines I have connected to the Internet from home involves maintenance and security hassles that aren’t always worthwhile, so it’s convenient to outsource data to a system in the cloud that’s secure, well maintained, and backed up. It’s very easy to do that now, all the way around. And for the data that I still access locally, such as media files and the like, an off-site backup will be very painless.

One of the next exercises is going to be media streaming from my server in Florida to my TV in California, after I’ve got all my data encrypted and backed up. At this point, I’ve got three devices at home connected to the Internet that are something other than general-purpose computers: a TiVo, a Blu-Ray player that also does Netflix streaming, and a Blackberry that goes to the web via 802.11a/g Wi-Fi. At any given time, I’ve got two to four general-purpose computers on the ‘net as well (more if we count virtual machines), so it’s clear that the balance is turning in the direction of the special-purpose machines. This is what makes Zittrain sad, but it shouldn’t. It’s in the nature of general-purpose systems not to require much multiplication; one that’s fast but stationary, another that’s lighter and mobile, and one more that’s super light and ultra-mobile is about all you’ll ever need. But special-purpose machines multiply like rabbits, as more and more purposes are discovered for networked devices.

So the future is obviously going to embrace more specialized (“sterile tethered appliance”) machines than general purpose machines; that’s a given. The “Future of the Internet” question is actually whether the general-purpose machines also become more powerful and capable of doing more things than they do now. In other words, don’t just count machines, count functions and applications. The failure to understand this issue is Zittrain’s fundamental error. (Gee, the fast Internet made me smarter already.)

Attaching a controller/monitor to my aquarium that I can access across the Internet is the next exercise, and after that some security cameras and an outdoor Wi-Fi access point. It never ends.