UN talks Internet to villages; electricity can wait

The UN is going to hold a meeting to talk about wireless networking in the Third World, with help from The Wireless Internet Institute:

On June 26, 2003, the Wireless Internet Institute will join forces with the United Nations Information and Communication Technologies Task Force to host “The WiFi Opportunity for Developing Nations” at UN Headquarters in New York City. The conference will create the conditions for informal dialogue and brainstorming among industry practitioners, government representatives and international development experts. It will feature plenary sessions and structured brainstorming workshops to establish strategies to overcome obstacles as well as develop environments favorable to the broad deployment of WiFi infrastructures. Conference conclusions will serve as a blueprint for national consensus-building programs, spectrum-policy reform and infrastructure deployment.

Maybe now that Hans Blix is out of a job, he can inspect Third World nations for strategies to overcome obstacles to Internet connectivity, like, um, no computers and stuff. At least that’ll keep him out of real trouble.

Don’t get me wrong here, I’m trashing the UN, not WiFi. I love WiFi, and not just because I invented most of its MAC protocol for Photonics back in 1992 (beacons, segmentation, RTS/CTS, and addressing). WiFi is a great solution to the “last 100 feet” problem, but it’s not a backbone or wide-area mesh solution, because: a) there aren’t enough channels in the 802.11b spectrum for that, and b) 802.11a doesn’t go far enough. So we need better solutions to the infrastructure problem than 802.11, and we even need better solutions to the “last 100 feet” problem than the standard allows. As originally designed, the MAC supported the kinds of Quality of Service mechanisms needed for telephony, but the trio that shoehorned the standard through dropped this feature, and now we’ve got a mess on our hands.
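
The channel-shortage point is easy to check on the back of an envelope. Here’s a quick sketch (mine, not from the post, using the US channel plan): 2.4 GHz channels 1–11 sit only 5 MHz apart, but each 802.11b signal is roughly 22 MHz wide, so only channels whose centers are at least 22 MHz apart avoid overlapping each other.

```python
# Why 2.4 GHz WiFi can't carry a wide-area mesh: too few clean channels.

def center_mhz(ch):
    """Center frequency of 2.4 GHz channel ch (1-11, US plan)."""
    return 2412 + 5 * (ch - 1)

def non_overlapping(channels, width_mhz=22):
    """Greedily pick channels whose centers are at least width_mhz apart."""
    picked = []
    for ch in sorted(channels):
        if all(abs(center_mhz(ch) - center_mhz(p)) >= width_mhz for p in picked):
            picked.append(ch)
    return picked

print(non_overlapping(range(1, 12)))  # [1, 6, 11] -- just three clean channels
```

Three non-overlapping channels is plenty for covering the last 100 feet, and hopeless for a mesh backbone where every node needs spectrum of its own.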

So 802.11 is nice, but it’s time to go to the second generation before we get too hog-wild about implementing it everywhere. And you already knew that anything the UN’s up to these days is likely to be crap.

And just incidentally, if there’s no such thing as RF interference (as David Reed and David Weinberger claim), then why should the FCC free up more channels for WiFi?

WiFi without Relativity

Dave Weinberger’s Salon article claiming RF interference is a myth hasn’t gone over too well, according to Weinberger’s source, David Reed:

And of course, there are the usual angry letters that seem to think I’m claiming to have discovered the earth is flat, or that relativity is wrong (someone actually thought I was arguing that!)

Reed is most famous, perhaps, as one of the co-authors of the 1981 paper arguing for an architecture-neutral Internet. (If we’re going to start enumerating technical myths, I’d start with architecture neutrality; the Internet’s initial design wasn’t neutral, it was crippled with respect to real-time data transfer, but if you read this blog at all, you’ve seen that already.)

The weakest parts of Reed’s theories about RF signalling relate to non-informational sources of interference such as barriers, reflection, multipath, and entropy. Other than that, it’s a fine way to look at signalling in a vacuum, covering all the considerations that should be taken into account by the FCC the next time they deal with metaphysical policy.

Sometimes I think I could edit an entire blog devoted to nothing but debunking pseudo-technical BS.

No network vigilante bill

Hollywood’s man in Washington, ultra-liberal machine boss Howard Berman, has apparently decided to drop the network vigilante bill that got so many web elves upset when it was introduced last year:

This week, however, Berman said he may not revive the measure. For one thing, copyright holders may not need extra protection to combat file-sharing piracy, he said. And though Berman wasn’t deterred by complaints from consumer advocates, the concerns voiced by Hollywood studios — among the biggest beneficiaries of the bill, given their active anti-piracy efforts online — suggested that Berman was climbing out on a limb by himself.

This bill, as you may recall, would have allowed copyright holders to invade file-sharing computers and launch legally sanctioned denial-of-service attacks in order to protect their intellectual property. Hollywood reached a consensus that the liability risk of doing these things where they weren’t warranted outweighed the benefits.

The reaction to this bill underscored the confusion that reigns in the minds of many of our good tech-topians about the different business interests of telecom and Hollywood. The tech-topian tendency is to conflate telcos and Hollywood into a monolithic axis of evil, as they do in the World of Ends document that delivers a stern lecture to both on the (largely imaginary) differences between the Internet and the phone net. The organized opposition to the Berman bill (which Dave Winer wrongly attributed to co-sponsor Howard Coble) came from the telcos, especially Verizon, because they don’t want Hollywood messing with their Internet business.

In the real world, telcos and Hollywood have very different interests, of course.

Sharing spectrum

According to Scott Mace’s Radio Weblog and Boing Boing, the second day at the Spectrum conference was productive. Scott:

David P. Reed: “What happened with 802.11, a small group of companies developed all kinds of crazy technologies to run in that space. Most of the companies failed. Gradually, industry said, we need to work together. Some of those committees were IEEE committees. Despite the fact I don’t think 802.11 is the be-all and end-all, it’s an amazingly successful example of governing a commons.”

Boing:

Spectrum Etiquette: Two Proposals
Does the “unlicensed” spectrum band need etiquette rules at this time? Or should the FCC leave the space alone? This panel will address this general question, as well as specific etiquette proposals. Speakers from MSFT and Motorola, plus assorted commentators.

Two days on protocol regulation would be a good start.

Spectrum conference

I signed up for the Spectrum Conference at the Stanford Law School today and tomorrow, but decided to bail when FCC chairman Michael Powell bailed. Reading the blog accounts, like this one at Scripting News, maybe I didn’t miss much:

…these guys are part of a fraternity, they talk about things that mean nothing to me. I’m a stranger here. I don’t get it.

This is one of those deals where two worlds collide: policy makers and regulators don’t understand technology, and technologists don’t understand the policy and political issues. So they end up talking past each other, and don’t really say all that much of value anyhow. Plus, a couple blogs reported that clueless attendees were comparing cell phones and cameras, and neither my cell phone nor my camera is state of the art, so I would have felt bad.

As I’ve said before, there are problems with the way the FCC regulates spectrum, but they aren’t legal problems related to property rights vs. commons, they’re more technical. The FCC says how much power you can pump into a given frequency with or without a license, but they don’t say what you do with that power in terms that make any sense to computer networkers. They need to adopt an approach where they regulate not just the power but the protocols, because some protocols share bandwidth well and some don’t. Spectrum is a scarce resource, because God isn’t making any more of it. Yes, there are clever ways to share spectrum that need to be encouraged, and bad ways to use spectrum that don’t share well and need to be discouraged.
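
To make the power-not-protocol point concrete, here’s a sketch of what the FCC actually regulates in unlicensed 2.4 GHz. The figures are the commonly cited Part 15 point-to-multipoint caps (1 watt conducted, 36 dBm EIRP) — treat them as my assumption for illustration, not a legal reference. Note there’s nothing here about how the transmitter shares the channel.

```python
# What the FCC checks in unlicensed 2.4 GHz: raw power, not behavior.
# EIRP (effective isotropic radiated power) adds in dB: power + antenna gain.

TX_LIMIT_DBM = 30    # assumed max conducted transmitter power (1 watt)
EIRP_LIMIT_DBM = 36  # assumed max EIRP (4 watts)

def legal_unlicensed(tx_power_dbm, antenna_gain_dbi):
    """Check a radio+antenna combo against both caps."""
    eirp = tx_power_dbm + antenna_gain_dbi
    return tx_power_dbm <= TX_LIMIT_DBM and eirp <= EIRP_LIMIT_DBM

print(legal_unlicensed(30, 6))   # True: 1 W into a 6 dBi antenna, right at the cap
print(legal_unlicensed(30, 12))  # False: same radio, bigger antenna, over the cap
```

A polite listen-before-talk protocol and a rude continuous jammer can both pass this test at identical power levels, which is exactly the regulatory gap the post is complaining about.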

In unlicensed spectrum, protocols need to be regulated. The appropriate analogy isn’t the Internet, because its pipes don’t have an interference problem; the analogy that people can understand is spam. Some uses of spectrum are junk, and these need to be curtailed.

Connecting neighborhoods to the net with WiFi is a junk application, for example; there are better ways to do that.

Cool stuff for your Palm

This deal plugs your Palm Pilot into your car so you can figure out why the “service engine soon” light is on. It displays diagnostic codes, makes some cool graphs, and gets you through smog check.

Vivato’s smart antenna

Glenn Fleischman interviews Mr. Omninet Phil Belanger for Wi-Fi News: Vivato Unleashed and mines some interesting details about how the Vivato system works. It’s clearly a generation ahead of the other “smart antenna” systems in the works, and stands out for its combination of technology and channel marketing sense. If I were working for a company that thinks it has a smart antenna in the works, I’d be worried, but I’m not, so I’m amused at the prospect of coming trainwrecks. It’s about time Belanger had some success, BTW — he’s been slogging away at LANs since he was at Corvus in the early 80s.

The Internet is not a potato, it’s a napkin

David Weinberger has been thinking about the Internet, and the webheads and greedheads. He’s especially fascinated by a 20-year-old paper on network design:

I’ve been thinking about the end of the Internet. No, not its collapse, but as in the “End-to-End” (E2E) argument, put definitively by David P. Reed, J.H. Saltzer, and D.D. Clark in their seminal article, End-to-End Arguments in System Design. The concept is simple: whenever possible, services should not be built into a network but should be allowed to arise at the network’s ends.

Let me prick this bubble, if I may: the Internet was not designed correctly, especially from the standpoint of real-time services such as streaming audio and video. The fundamental problem is that the end-to-end model only works when timed delivery is unimportant, because it can’t manage the system-to-system, network-to-network, and router-to-router links that have to be managed for bandwidth to be reserved and used efficiently by real-time services. The Internet runs over telephony-based services such as ATM and SONET that provide real-time delivery, quality-of-service selection, and bandwidth-sensitive billing, but the Internet protocols, especially IPv4, mask access to the controls that run these links, making real-time delivery at best a matter of faith, prayer, and massively over-built datalinks.
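
A toy simulation (my own illustration, not the author’s) shows why endpoint cleverness can’t fix this. A voice packet has to arrive within a fixed playout budget; best-effort queueing delay varies per packet at every hop, so some packets miss the deadline no matter what the endpoints do, while a reserved path with a per-hop delay bound never does. All the numbers are made up for illustration.

```python
# Best-effort vs. reserved delivery for a real-time stream (toy model).
import random

random.seed(1)
DEADLINE_MS = 50  # assumed playout budget for one voice packet

def best_effort_delay():
    # 3 hops, each adding 0-30 ms of queueing depending on cross-traffic
    return sum(random.uniform(0, 30) for _ in range(3))

def reserved_delay():
    # a reserved (ATM CBR-style) path bounds each hop at 10 ms
    return 3 * 10

late = sum(best_effort_delay() > DEADLINE_MS for _ in range(10000))
print(f"best-effort: {late / 100:.1f}% of packets miss the {DEADLINE_MS} ms deadline")
print(f"reserved path: delay fixed at {reserved_delay()} ms, nothing late")
```

The endpoints can add jitter buffers and retransmission, but only the links themselves can bound the delay — which is the control that IPv4 hides.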

If the connection-oriented, end-to-end services provided by TCP had been implemented at the network layer instead of at the transport layer, the Internet would be poised to gracefully carry the next generation of services. But it wasn’t, so it’s not, and IPv6 doesn’t fully remedy the deficiencies. Don’t hold up any engineering exercise done twenty-five or thirty years ago as state-of-the-art, and don’t try to build a model of human morality on it – it’s a losing proposition.

Sun’s drive to stay relevant

EE Times – Sun employs startup’s technology in server blade processors

SAN FRANCISCO — Sun Microsystems Inc. hopes to leapfrog competitors such as IBM and Intel in the emerging area of server blades by launching multicore microprocessors that can handle as many as 32 separate threads. The technology will form the basis of new Sparc processors that will be discussed when Sun discloses its processor road map late this month.

These would be the first multi-core, multi-threaded CPUs in the business, if Sun can pull it off. It’s not clear how register sharing is going to work, or whether the part is actually manufacturable, but it sure sounds neat.
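
A rough model (my sketch, not Sun’s design) of why 32 hardware threads make sense for server workloads: a server thread spends most of its time stalled on memory, so the core can run another thread during each stall, and utilization climbs until there are enough threads to cover the stall latency. The cycle counts below are invented for illustration.

```python
# Why many hardware threads per core pay off on memory-bound workloads.

def core_utilization(threads, compute_cycles=4, stall_cycles=100):
    """Fraction of cycles doing useful work, ideal round-robin model.

    Each thread alternates compute_cycles of work with stall_cycles of
    waiting on memory; with T threads, up to T compute bursts can be
    overlapped with one thread's stall.
    """
    demand = threads * compute_cycles
    period = compute_cycles + stall_cycles
    return min(1.0, demand / period)

for t in (1, 4, 8, 32):
    print(f"{t:2d} threads -> {core_utilization(t):.0%} of the core utilized")
```

In this model a single thread leaves the core idle almost all the time, and 32 threads saturate it — which is presumably the bet Sun is making against the single-thread-performance race IBM and Intel are running.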

via Hack the Planet.

Interesting tidbit

CommsDesign – IEEE 802.16 spec could disrupt wireless landscape

COLORADO SPRINGS, Colo. — The IEEE Standards Authority on Wednesday (Jan. 29) approved the 802.16a specification for wireless metropolitan-area networks (MANs) in the 2- to 11-GHz range, giving a seal of approval to technology that one executive said could enable a disruptive change in communications.

Sounds intriguing – as in: WiFi has some competition for its more inappropriate applications.