AT&T learning Google’s lesson

Just as Google is finally fessing up that video can kill the Internet, AT&T is learning a similar lesson (WSJ subs only):

AT&T’s big bet on using Internet technology to vault ahead of rival cable operators in the television-distribution business is beginning to look more like a long shot.

The telecom giant says it has rolled out its so-called U-verse service in 11 cities. But that’s four fewer than promised, and the technology seems to remain mostly in the trial phase. AT&T executives acknowledge they aren’t fully marketing U-verse because the service can’t yet handle a surge of customers. AT&T counted just 3,000 customers at the end of the fourth quarter, unchanged from three months earlier.

Meanwhile, AT&T executives last month admitted for the first time that there were problems with the software for U-verse provided by Microsoft, its primary vendor on the project. That’s a concern not just for AT&T, but for telecom companies world-wide that bought Microsoft technology to run TV services using Internet protocol, or IP, to transmit signals.

It isn’t clear how serious the problems are because AT&T and Microsoft executives won’t discuss them. An AT&T spokesman attempted to play down the situation, calling it “a little fine tuning.” A Microsoft spokesman said the technology was “on track.”

But the delays plaguing U-verse have fed criticism that AT&T and Microsoft overreached, trying to get more out of Internet technology than it’s capable of delivering at this time. The skeptics include vendors, former employees and competitors. Surprisingly, one of the challenges they believe has tripped up AT&T is something the earliest TV sets could do easily: switch channels instantaneously.

Partnering with MS was a big mistake, and the technical approach is a big leap. Unlike Verizon, which runs fiber to the home with most of the channels moving simultaneously in multicast streams, the U-verse system is video-on-demand over copper wire with a handful of channels to each home (like four). So they need massive numbers of servers, with channel-changing handled at the central office. I wrote a patent application once for a rapid channel-changing system on a network like this, and I can tell you it’s a hard problem (though not an insoluble one, heh). The basic problem is that you need to buffer up some data before you start displaying in order to have jitter protection, and the time it takes to fill that buffer causes delay in channel changing. MS demonstrated a fast channel-changing system at CES a year ago, but making something like that work on a real network is a very different problem from making it work in a demo.
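
To make the buffering tradeoff concrete, here’s a back-of-the-envelope model (the numbers are illustrative assumptions of mine, not AT&T’s or Microsoft’s): channel-change delay is roughly the time to join the new stream plus the time to fill the jitter buffer, and the only way to shrink the buffer-fill term is to burst data in faster than the playback rate.

    # Rough model of IPTV channel-change delay; all figures below are
    # illustrative assumptions, not measurements of U-verse.

    def channel_change_delay(buffer_seconds, stream_mbps, burst_mbps, join_ms=150):
        """Seconds from button press to first displayed frame.

        buffer_seconds -- jitter buffer depth, in seconds of video
        stream_mbps    -- nominal bitrate of the channel
        burst_mbps     -- bandwidth available to fill the buffer
        join_ms        -- time to signal the change and join the new stream
        """
        buffer_megabits = buffer_seconds * stream_mbps
        fill_time = buffer_megabits / burst_mbps          # seconds
        return join_ms / 1000.0 + fill_time

    # No headroom: a 2-second buffer on a 6 Mbps stream takes about 2.15 s.
    print(channel_change_delay(2.0, 6.0, 6.0))

    # With a 20 Mbps unicast burst the same buffer fills in about 0.75 s,
    # which is roughly the trick behind the fast-change demos.
    print(channel_change_delay(2.0, 6.0, 20.0))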

The Internet is great for personalized programming, and not so great for huge amounts of bulk data. AT&T had better find a better partner immediately, and some better wires down the road, if they’re ever going to get this thing to sing.

Firestorm coming

The Forbes article “Don’t Marry Career Women” is going to get some people excited:

Guys: A word of advice. Marry pretty women or ugly ones. Short ones or tall ones. Blondes or brunettes. Just, whatever you do, don’t marry a woman with a career.

Why? Because if many social scientists are to be believed, you run a higher risk of having a rocky marriage. While everyone knows that marriage can be stressful, recent studies have found professional women are more likely to get divorced, more likely to cheat, less likely to have children, and, if they do have kids, they are more likely to be unhappy about it. A recent study in Social Forces, a research journal, found that women–even those with a “feminist” outlook–are happier when their husband is the primary breadwinner.

Better don your flak jacket, Michael Noer, you’ve spoken the unspeakable.

See it in pictures here and see the blogospheric reaction here.

Breathtaking stupidity

There’s way too much stupidity in the world to comment on all of it, but sometimes you see something that sets a new standard. The Cato Institute has commissioned Jaron Lanier to explain the Internet, and his contribution makes all the silly drivel written about it in the past look downright serious. Lanier’s main point is that the Internet is a social construct:

I hope I have demonstrated that the Net only exists as a cultural phenomenon, however much it might be veiled by an illusion that it is primarily industrial or technical. If it were truly industrial, it would be impossible, because it would be too expensive to pay all the people who maintain it.

Now it’s silly enough when left-feminist academics say “gender is a social construct” but this is downright hilarious. Lanier had something to do with gaming goggles once upon a time, but he’s basically illiterate and has no special expertise in networking. Cato is obviously over-funded and intent on wasting your time.

If you want to read a futurist of merit, check out Ray Kurzweil, a man of learning and intelligence who certainly won’t waste your time with a bunch of new-age drivel.

Coyote at the Dog Show has read Lanier’s essay, and he’s not impressed either. He mentions Lanier’s seemingly senseless attack on the concept of the “file” in computers. The revolutionary alternative that Lanier proposes is a time-indexed file, something that’s commonplace for video servers. Not exactly revolutionary, and not exactly well-informed.

If you don’t like files, folders, directories, and symbolic links, fine, throw all your stuff into a single common file and be done with it.
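
For what it’s worth, a time-indexed file is about as mundane as data structures get; video servers have been appending timestamped records and seeking by time for years. A minimal sketch of the idea, purely my own illustration and not Lanier’s proposal or any real server’s format:

    import bisect, time

    class TimeIndexedFile:
        """Append-only store of (timestamp, payload) records, seekable by time.
        Illustrative only; a real video server keeps the index on disk."""

        def __init__(self):
            self._times = []    # sorted timestamps
            self._records = []  # payloads, parallel to _times

        def append(self, data, timestamp=None):
            t = time.time() if timestamp is None else timestamp
            self._times.append(t)
            self._records.append(data)

        def read_from(self, timestamp):
            """Return every record at or after the given time."""
            i = bisect.bisect_left(self._times, timestamp)
            return self._records[i:]

    log = TimeIndexedFile()
    log.append("frame 1", timestamp=0.0)
    log.append("frame 2", timestamp=1.0)
    log.append("frame 3", timestamp=2.0)
    print(log.read_from(1.0))  # ['frame 2', 'frame 3']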

Downloading Beethoven

People aren’t as dumb as they look. The BBC reports that people downloaded a whole lot more Beethoven than Bono in a fair test:

Forget Coldplay and James Blunt. Forget even Sgt Pepper’s Lonely Hearts Club Band, which, in the version performed at Live8 by Sir Paul McCartney and U2, has become the fastest online-selling song ever. Beethoven has routed the lot of them.

Final figures from the BBC show that the complete Beethoven symphonies on its website were downloaded 1.4m times, with individual works downloaded between 89,000 and 220,000 times. The works were each available for a week, in two tranches, in June.

Sgt Pepper could well end up as the best-selling online track of all time. But its sales figure of just 20,000 online in the two weeks since it has been available contrasts poorly with the admittedly free Beethoven symphonies. (Sgt Pepper cost 79p on the iTunes website.)

Beethoven rules, Bono drools.

How to feed cats with Linux

This guide to an automated cat-feeding system is essential to modern life, especially the vacation part:

We have to work, but that doesn’t mean our cats should have to go without stinky little fish, right? Why should our economic necessities have a negative effect on their treat times? Isn’t it our responsibility to build them an Internet-enabled, Linux-based, cat-feeding device?

The system involves microcontrollers, Python, and a serial port. And fish, typically dead ones, but the design could easily be upgraded to feed from an aquarium.
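
The control loop doesn’t have to be fancy, either. Here’s a minimal sketch of the scheduling side, assuming a feeder wired to a microcontroller that dispenses a treat whenever it sees a command byte on the serial line (the device path, baud rate, and command byte are my guesses, not details from the guide):

    import time
    import serial  # pyserial

    FEED_TIMES = ["08:00", "18:00"]   # treat times on the local clock
    PORT = "/dev/ttyUSB0"             # hypothetical serial device
    FEED_COMMAND = b"F"               # hypothetical "dispense one fish" byte

    def main():
        conn = serial.Serial(PORT, 9600, timeout=1)
        fed_today = set()
        while True:
            now = time.strftime("%H:%M")
            if now in FEED_TIMES and now not in fed_today:
                conn.write(FEED_COMMAND)  # microcontroller drops a stinky little fish
                fed_today.add(now)
            if now == "00:00":
                fed_today.clear()         # start fresh the next day
            time.sleep(20)

    if __name__ == "__main__":
        main()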

He can dish it out but he can’t take it

Cory Doctorow, the Andrea Dworkin of civil liberties, has threatened a parody site with legal action:

He can dish it out, but he can’t take it.

Well-known “copyfighter” and sci-fi novelist Cory Doctorow can sure complain when the MPAA and RIAA try to enforce their members’ copyrights, but the instant someone infringes on Cory’s copyrights and trademarks – watch out! – the threatening legal letters and lawsuits start flying. Case in point, the BoingBoing parody site BoringBoring:

To cap it off, he’s represented by the EFF, the organization founded by former levitation teacher Mitch Kapor to protect free speech.

Oh, the irony.

In related news, Amy Alkon fell for an April Fools prank recently. Some people are so clueless.

Scientific Method Man

Gordon Rugg has devised a simple method of solving scientific problems currently thought to be intractable, such as Alzheimer’s. Here’s how it works:

The verifier method boils down to seven steps: 1) amass knowledge of a discipline through interviews and reading; 2) determine whether critical expertise has yet to be applied in the field; 3) look for bias and mistakenly held assumptions in the research; 4) analyze jargon to uncover differing definitions of key terms; 5) check for classic mistakes using human-error tools; 6) follow the errors as they ripple through underlying assumptions; 7) suggest new avenues for research that emerge from steps one through six.

On its face it seems reasonable, so much so that it’s more or less exactly what many of us do already in the search for unique intellectual property.