The Next Big Thing

I started working on the system architecture and protocols for Wi-Fi in late 1990, when I consulted with Photonics, a little start-up in Los Gatos that had already built the first commercial wireless LAN. The initial Photonics product was a short-distance, infrared-based wire replacement for AppleTalk, and the second-generation system was Wi-Fi over infrared. Most people don’t remember that IEEE 802.11 was a single Medium Access Control protocol with multiple physical layers, one of them infrared and the others RF. The RF PHYs were obviously more successful than the IR version.

Photonics had two large customers, IBM and Toshiba, both of whom wanted to do the same thing with the wireless LAN: integrate it into touchscreen portable computers. IBM’s portable computer was called the ThinkPad, and as the name suggests, it was a tablet computer with which the user would interact through a stylus; Toshiba had a similar idea. The user interface was based on gestures and handwriting recognition, a very rough science in those days, and the underlying system was Windows 3.1 or so, a keyboard-and-mouse system. These systems were challenged by limited battery life – like 2 hours between charges – and the slow processors of the day. So we figured out how to build a wireless LAN, but didn’t quite end up with a system that could take advantage of it.

It wasn’t until nearly 10 years later that CPUs, PC technology in general, and batteries developed to the point that a fully portable computer was really practical, and by then Windows had established dominance in corporate America, so the clamshell design that laptops use today won out over the more personal concept of the tablet. Clamshells weren’t all that revolutionary as a concept by the 90s, given that clamshell prototypes were built at Texas Instruments, IBM, and other places in the 1970s. Just ’cause you can build a prototype doesn’t mean you can build a product, however: for portables, the technology has to be there to get decent performance, weight, and operating life from a charge. This is very clear in the case of handhelds, and all geeks are familiar with promising devices that never went anywhere because they failed on one of these three dimensions. The Google G1 phone drained its battery too fast, for example, so it was a total flop despite being an otherwise very nice device.

One interesting development that’s been taking place over the past couple of years is the convertible laptop, a clamshell computer with a touch-sensitive screen that swivels to turn the whole thing into a tablet. The Dell Latitude XT2 is the best example of this sort of system today.

This is a nice machine, with a dual-core Intel CPU, several hours of battery life, and both pen and multitouch input. It weighs about 4 pounds with the serious battery, which is great for a laptop, and has all the nice connectivity options such as 802.11n and 3G from various sources. Nicely equipped, it will set you back about $3,000, and it’s probably worth every penny if you need the features and functions it offers, especially handwriting recognition and a full-blown Windows platform. I’ve been tempted to buy one of these, to the extent of researching the refurbs you can get on eBay for half the retail price, but haven’t pulled the trigger. The price, the weight, and the general concept as a Windows extension ultimately turned me off.

So I was pretty excited to learn that Apple was building a tablet computer. Apple is the one company with all the capabilities in hardware, software, user interfaces, and vision to develop a personal computing device that breaks new ground and isn’t chained to the past. Of all the personal technology companies in the world, Apple is the one with the least reverence for the traditional ways of doing things and the greatest autonomy in system design. Dell, HP, and the others are ultimately constrained by their dependence on either Microsoft or Linux for software, and it’s hard to push either of those highly successful enterprises into a totally new space very quickly.

As the specs came out, it became clear that Apple gets the limitations of the tablets of the past: the weight is only 1.5 pounds, the battery lasts all day (doing serious work, like video rendering), and the connectivity options are all in place – dual-band 802.11n, plus 3G and GPS for the model shipping later this month. The storage is all solid-state, and the OS is a slightly scaled-up version of the scaled-down Mach-based system Apple calls OS X. The price is pretty appealing too, at $729 for the 3G model with a 32GB SSD.

The announcement was well in advance of the ship date, so I did what any sensible person would do: I bought some Apple stock. The stock has now gone up enough to pay for an iPad 3G with a nice set of accessories and to cover taxes on the gain. So I put in an order at Apple’s on-line store for delivery sometime later this month. We’ll see how it goes. If I don’t like it, I’ll send it back, probably without a vanity video about how disappointed I am in Steve Jobs. And yes, I expect that the second-generation model will be faster, cheaper, 4G-enabled, multi-tasking, and camera-equipped. By then, I will have had 6 months’ or a year’s worth of use from the one I’ve ordered, so that’s not too bad. I bought one of the first Macs within the first 100 days as well, since I liked all that bitmappy, mousey, windowey GUI stuff, and used it for 3 years or so before trading up to a Mac SE and then to a machine that would run Windows 95 at a decent speed.

I expect that we’re about three to five years away from general-purpose tablets that don’t need hand-curated app stores to ensure consistency and quality in their user interfaces, so at some point the prophets of gloom will be able to buy a lightweight, fully functional, open system that does all the things they need.

Contrary to popular opinion, the cycle of innovation doesn’t always move from open devices to closed ones, of course. What we really see in the long arc of platform innovation is that closed devices like the Xerox Star and the iPhone lead the way, only to be cannibalized over the long term by open systems built on generic technology like Wintel machines.

When a new paradigm is emerging, however, the trade-offs between power, usability, and cost are too fragile to accommodate the looseness in interface design and the over-engineering needed to support unknown apps. We’ll get to open tablets eventually, but only as the hardware and software of the underlying platforms develop to the point that we can afford the overhead of providing openness. And by then, Apple will be pioneering a whole new concept in personal communication, computing, and entertainment. This is as it should be, of course.

Second Hearing on Internet Privacy tomorrow

From House Energy and Commerce:

Energy and Commerce Subcommittee Hearing on “Behavioral Advertising: Industry Practices and Consumers’ Expectations”

June 16, 2009

The Subcommittee on Communications, Technology and the Internet and the Subcommittee on Commerce, Trade, and Consumer Protection will hold a joint hearing titled, “Behavioral Advertising: Industry Practices and Consumers’ Expectations” on Thursday, June 18, 2009, in 2123 Rayburn House Office Building. The hearing will examine the potential privacy implications of behavioral advertising.


* Jeffrey Chester, Executive Director, Center for Digital Democracy
* Scott Cleland, President, Precursor LLC
* Charles D. Curran, Executive Director, Network Advertising Initiative
* Christopher M. Kelly, Chief Privacy Officer, Facebook
* Edward W. Felten, Professor of Computer Science and Public Affairs, Princeton University
* Anne Toth, Vice President of Policy, Head of Privacy, Yahoo! Inc.
* Nicole Wong, Deputy General Counsel, Google Inc.

WHEN: 10:00 a.m. on Thursday, June 18

WHERE: 2123 Rayburn House Office Building

This is the second in a series of hearings on the subject of behavioral advertising. I’ll predict that the Democrats will praise Google, the Republicans will criticize them, and nobody will take much notice of Yahoo.

I only know four of the seven personally; I need to get out more.

How Hard is it to Find Authors?

One of the mind-boggling facts about the Google book deal is the number of so-called “orphan works” there are. According to Brewster Kahle, most books published since our current copyright regime was adopted in 1923 are orphan works:

But the settlement would also create a class that includes millions of people who will never come forward. For the majority of books — considered “orphan” works — no one will claim ownership. The author may have died; the publisher might have gone out of business or doesn’t respond to inquiries; the original contract has disappeared.

Google would get an explicit, perpetual license to scan and sell access to these in-copyright but out-of-print orphans, which make up an estimated 50 to 70 percent of books published after 1923. No other provider of digital books would enjoy the same legal protection. The settlement also creates a Book Rights Registry that, in conjunction with Google, would set prices for all commercial terms associated with digital books.

For the archivist who makes money by advertising and resale, orphan works are uniquely convenient: not only do you not have to obtain permission to republish, you also don’t have to share revenues with anyone. Taken together, those facts certainly don’t motivate digital book sellers to expend any effort to find the authors or their heirs.

Now imagine how this would change if someone developed a tool for searching the Internet. Surely the information is out there on most published authors, their heirs, and their whereabouts, so as long as someone is diligent enough to sift through it, evaluate it, and interpret it, they can be found. I wonder how long it will be until a bright young pair of graduate students in the computer science program of a major university set themselves to solve the problem of Internet search.

Not to be sarcastic or anything.

Interlocking Directorates

The New York Times reports that regulators have an interest in the structure of the Apple and Google boards of directors:

The Federal Trade Commission has begun an inquiry into whether the close ties between the boards of two of technology’s most prominent companies, Apple and Google, amount to a violation of antitrust laws, according to several people briefed on the inquiry.

I doubt this will go very far, as the interlocking directors (Eric Schmidt and former Genentech CEO Arthur Levinson) will simply resign before any enforcement action becomes imminent, but it does raise some interesting questions about the market for mobile phone operating systems, currently split among Apple, Google, Microsoft, Palm, and a few others. These systems are rife with limitations, each of which could be considered a network neutrality violation when viewed in just the right way.

I imagine Apple itself might wish to give Dr. Schmidt his walking papers before he becomes an antitrust problem, which he actually isn’t at this point. The FTC’s interest in this obscure situation is probably a signal that the Administration wants to be viewed as an antitrust hawk without doing anything substantial.

But this is what the law calls an “occasion of sin.” Dear me.

This is not a misprint

How many companies are actually increasing headcount these days? Not many, but AT&T is adding 3,000 jobs to expand its 3G network:

Despite a capex cut of up to $3 billion this year, AT&T Inc. (NYSE: T) made it clear today that it intends to spend to improve and expand its 3G network — adding 3,000 jobs in the process to support “mobility, broadband, and video.”

This comes on the heels of an announcement last December of a cut of 12,000 jobs, so it’s not quite as against-the-grain as it might seem. Still, it’s good news for 3,000 people and a counter-indicator of permanent global economic collapse.

Nice Outings

My talk at the Messaging Anti-Abuse Working Group went very well. It was a huge room, seating probably 500 or so, and over half-full. I talked about how some of the crazier ideas about net neutrality are potentially becoming mainstream thanks to the politics in the nation’s capital and some of the personnel choices made by the Obama Administration. The selection of Susan Crawford for the FCC Transition Team is a cause for alarm. Susan is as nice a person as you’ll ever want to meet, and quite bright and well-intentioned, but her position that ISPs and carriers have no business actively managing packets is poison. I got a healthy round of applause, and several people thanked me for my remarks afterwards. Very few people know how dependent e-mail is on the DNS blacklists that members of this organization maintain, and that’s a real shame.

Last night I took the short trip up to Mountain View to see Jeff Jarvis’s talk about his book What Would Google Do? The audience of about 25 people was a lot less impressed with Google than Jeff is, and it occurred to me that Google really is vulnerable on the search front. I can imagine a much more effective search methodology than the one Google employs, but getting the venture capital to build a rival infrastructure isn’t going to happen.

I told Jeff (an old friend of the blog who’s driven a lot of traffic this way over the years) that what he likes about Google isn’t Google so much as the inherent qualities of the Internet. He more or less knows that, but the packaging of open networks, distributed computing, and free expression is easier when you concretize it, and that’s what his book does. I read it as a sequel to Cluetrain.

Doubts about Broadband Stimulus

The New York Times has a front page story today on the broadband stimulus bill which features an extensive quote from Brett:

Critics like Mr. Glass say the legislation being developed in Congress is flawed in various ways that could mean much of the money is wasted, or potentially not spent at all — arguably just as bad an outcome given that the most immediate goal of the stimulus measure is to pump new spending into the economy.

An “open access” requirement in the bill might discourage some companies from applying for grants because any investments in broadband infrastructure could benefit competitors who would gain access to the network down the line.

Meeting minimum speed requirements set forth in the House version could force overly costly investments by essentially providing Cadillac service where an economy car would be just as useful. And some worry that government may pay for technology that will be obsolete even before the work is completed.

“Really the devil is in the details,” Mr. Glass said. “Yes, there is $9 billion worth of good that we can do, but the bill doesn’t target the funds toward those needs.”

The bill is still very rough. Some critics cite its preference for grants to large incumbents; others highlight the amorphous “open access” provisions and the arbitrary speed requirements as weaknesses. The only interest groups that appear altogether happy with it are Google’s boosters, such as Ben Scott of Free Press. This is a flip-flop for Free Press, which only last week was urging members to call Congress and ask that the bill be killed.

A particularly odd reaction comes from friend of the blog Jeff Jarvis, who took time out from pitching his love letter to Google, What Would Google Do?, to tear into the article’s sourcing:

I found myself irritated by today’s story in the New York Times that asks whether putting money from the bailout toward broadband would be a waste. The question was its own answer. So was the placement of the story atop page one. The reporter creates generic groups of experts to say what he wants to say (I know the trick; I used to be a reporter): “But experts warn…. Other critics say…. Other supporters said…”

I wish that every time he did that, the words “experts,” “critics,” and “supporters” were hyperlinked to a page that listed three of each.

It’s an obvious case of a story with an agenda: ‘I’m going to set out to poke a hole in this.’

The odd bit is that five people are named and quoted, and the terms “expert” and “critic” clearly refer to these named sources. It’s boring to repeat names over and over, so the writer simply uses these terms to avoid the tedium. It’s clear that Brett and Craig Settles are the critics and experts. Jeff seems not to have read the article carefully and simply goes off on his defensive tirade without any basis.

It’s a given in Google’s world that massive government subsidies for broadband are a good thing because they will inevitably lead to more searches, more ad sales, and more revenue for the Big G. But while that’s clearly the case, it doesn’t automatically follow that what’s good for Google is good for America, so it behooves our policy makers to ensure that the money is spent wisely, without too many gimmicks in favor of one technology over another or too many strings attached that don’t benefit the average citizen.

Raising questions about pending legislation and trying to improve it is as American as baseball, and the article in the Times is a step in the right direction. It may not be what Google would do, but it’s good journalism.

I want to make sure that the broadband money is spent efficiently, so I would bag the open access requirement (nobody knows what it means anyway) and give credit for all improvements in infrastructure that increase speed and reduce latency.

The bill needs to support all technologies that have utility in the Internet access space – wireless, coax, and fiber – but should encourage the laying of new fiber where it’s appropriate, and high-speed wireless in less-populated areas. Eventually, homes and businesses are pretty much all going to have fiber at the doorstep, but that doesn’t need to happen overnight.

The People’s Movement (for Google)

I did a podcast with the folks at The Technology Liberation Front on Google’s edge-caching system:

This week we saw a new kerfuffle of sorts develop over the revelation in a Monday front-page Wall Street Journal story that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content. What exactly is it that Google is proposing, and does it mean – as the Wall Street Journal and some others have suggested – that Google is somehow going back on its support for Net neutrality principles and regulation? More importantly, what does it all mean for the future of the Internet, network management, and consumers? That’s what we discussed on the TLF’s latest “Tech Policy Weekly” podcast.

Google’s genius at creating a citizens’ movement to boost its bottom line deserves more praise than it gets, but this is a start.