Here’s a rough transcript of the Spectrum 2.0 – Exploring the Roots of Wireless Spectrum Controversy panel discussion from eComm 2009. Enjoy.
According to National Journal, Susan Crawford is joining the Obama administration in a significant new role:
Internet law expert Susan Crawford has joined President Barack Obama’s lineup of tech policy experts at the White House, according to several sources. She will likely hold the title of special assistant to the president for science, technology, and innovation policy, they said.
This does not make me happy. Crawford is not a scientist, technologist, or innovator, and the job that’s been created for her needs to be filled by someone who is, and an exceptional one at that: a person with deep knowledge of technology, the technology business, and the dynamics of research and business that promote innovation. A life as a legal academic is not good preparation for this kind of job. Crawford is a sweet and well-meaning person, who fervently believes that the policy agenda she’s been promoting is good for the average citizen and the general health of the democracy and that sort of thing, but she illustrates the adage that a little knowledge is a dangerous thing.
As much as she loves the Internet and all that it’s done for modern society, she has precious little knowledge about the practical realities of its operation. Her principal background is service on the ICANN Board, where she listened to debates on the number of TLDs that can dance on the head of a pin and similarly weighty matters. IETF engineers generally scoff at ICANN as a bloated, inefficient, and ineffective organization that deals with issues no serious engineer wants anything to do with. Her other qualification is an advisory role at Public Knowledge, a big player on the Google side of the net neutrality and copyright debates.
At my recent net neutrality panel discussion at MAAWG, I warned the audience that Crawford’s selection to co-manage the Obama transition team’s FCC oversight was an indication that extreme views on Internet regulation might become mainstream. It appears that my worst fears have been realized. Crawford has said that Internet traffic must not be shaped, managed, or prioritized by ISPs and core networking providers, which is a mistake of the worst kind. While work is being done all over the world to adapt the Internet to the needs of a more diverse mix of applications than it’s traditionally handled, Crawford harbors the seriously misguided belief that it already handles diverse applications well enough. Nothing could be farther from the truth, of course: P2P has interesting uses, but it degrades the performance of VoIP and video calling unless managed.
This is an engineering problem that can be solved, but which won’t be if the constraints on traffic management are too severe. People who harbor the religious approach to network management that Crawford professes have so far been an interesting sideshow in the network management wars, but if their views come to dominate the regulatory framework, the Internet will be in serious danger.
Creating a position for a special adviser on science, technology and innovation gave President Obama the opportunity to lay the foundation of a strong policy in a significant area. Filling it with a law professor instead of an actual scientist, technologist, or innovator simply reinforces the creeping suspicion that Obama is less about transformational change than about business as usual. That’s a shame.
Cross-posted at CircleID.
The Internet is dying, according to advocacy group Free Press. The organization has published a report, “Deep Packet Inspection: The End of the Internet as We Know It?”, that claims technology has evolved to the point that Internet carriers can control everything that we read, see, and hear on the Internet, something they’ve never been able to do before. It’s the backdrop of a just-so story Free Press’s network guru, Robb Topolski, delivered to a House of Lords roundtable in the UK recently. It’s an outlandish claim which echoes the Groundhog Day predictions about the Internet’s imminent demise Free Press has been making since 2005.
Suffice it to say it hasn’t exactly happened. Internet traffic continues to grow at the rate of 50-100% per year, more people than ever – some 1.5 billion – are using the Internet in more places and with more devices, and there hasn’t been an incident of an American ISP choking traffic since the dubiously alarming case of Comcast’s rationing of P2P bandwidth – mainly used for piracy – in 2007.
There are multiple errors of fact and analysis in the Free Press report, pretty much the same ones that the organization has been pumping since they jumped on the net neutrality bandwagon. There’s been no new breakthrough in Internet management. While it’s true that Moore’s Law makes computer chips run faster year after year, it’s also true that it makes networks run faster. So any reduction in the time it takes to analyze a packet on a network has to be balanced against the number of packets that cross the network in a given unit of time. Machines work faster. Some machines analyze Internet packets, and other machines generate Internet packets. They’re both getting faster, and neither is getting faster faster.
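To put rough numbers on that balance, here’s a back-of-the-envelope sketch; the packet sizes and CPU speeds are illustrative assumptions, not measurements:

```python
# The per-packet inspection budget shrinks as links get faster, but CPU
# throughput grows alongside it, so DPI gains no net advantage over time.
# All figures are illustrative assumptions.

def inspection_budget_ns(link_gbps, avg_packet_bytes=500):
    """Time between packet arrivals on a saturated link, in nanoseconds."""
    packets_per_sec = (link_gbps * 1e9 / 8) / avg_packet_bytes
    return 1e9 / packets_per_sec

budget_1g = inspection_budget_ns(1)    # 4000 ns per packet on a 1 Gb/s link
budget_10g = inspection_budget_ns(10)  # 400 ns per packet on a 10 Gb/s link

# If CPU speed also grows tenfold over the same period, the instruction
# budget per packet is unchanged: analyzers and generators keep pace.
assert budget_1g * 1 == budget_10g * 10
```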
Network operators have been analyzing packets and rationing bandwidth as long as there have been IP networks. The first one to go live was at Ford Aerospace, where the discovery was made, more or less instantly, that user access to the network had to be moderated so that users of bulk data transfer applications didn’t crowd out interactive uses. More sophisticated forms of this kind of helpful “discrimination” are the principal uses of DPI today.
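For the curious, the kind of per-user rationing described here is commonly implemented as a token bucket. The sketch below is a generic illustration, not the Ford Aerospace mechanism; the rate and burst figures are made up:

```python
import time

class TokenBucket:
    """Classic token-bucket rate limiter: bulk transfers get held to a
    sustained rate while short interactive bursts pass untouched."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens at the sustained rate, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # over budget: drop or queue it

# Interactive traffic (small, occasional packets) sails through; a bulk
# sender exhausts the bucket quickly and is held to the sustained rate.
bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=10_000)
print(bucket.allow(1500))  # a single full-size packet is within the burst
```

Whether an over-budget packet is dropped or merely queued is a policy choice; either way, the interactive users stop getting crowded out.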
The complaint by Free Press is more or less on par with the shocking discovery that the sun has both good and bad effects: it causes plants to grow, and it can also cause skin cancer. Shall we now pass a legislative ban on sunlight?
The important new trend on the Internet is an increasing diversity of applications. Until fairly recently, the Internet’s traffic management system was occupied almost exclusively with a set of applications that had very similar requirements: e-mail, web browsing, and short file transfers are all concerned about getting exact copies of files from point A to point B, with no particular concern for how long it takes, within a few seconds. Now we’ve added Skype to the mix, which needs millisecond delivery, and P2P transactions that can run for hours and involve gigabytes of data. Add in some gaming and some video calling, and you’ve got a healthy diversity of applications with unique requirements.
The sensible way to manage Internet diversity is to identify application needs and try to meet them, to create “the greatest good for the greatest number” of people. DPI is really, really good at this, and it’s a win for all Internet users when it’s used properly.
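To see why this is an engineering win rather than a conspiracy, consider a strict-priority scheduler, about the simplest form of needs-based management there is. The traffic classes and priority values below are hypothetical examples, not any carrier’s actual configuration:

```python
import heapq

# Strict-priority scheduling: latency-sensitive classes drain first,
# bulk transfers use whatever capacity is left over.
# The class-to-priority mapping is a hypothetical example.
PRIORITY = {"voip": 0, "gaming": 1, "web": 2, "bulk": 3}

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue("bulk", "p2p-chunk-1")
sched.enqueue("voip", "voice-frame-1")
sched.enqueue("web", "http-get")
print(sched.dequeue())  # voice-frame-1: VoIP jumps ahead despite arriving later
```

The bulk transfer still completes; it just waits a few milliseconds so the phone call doesn’t stutter. That’s the “greatest good” trade in miniature.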
Free Press’s jihad against helpful technologies echoes their previous war against newspaper consolidation. With the recent closures and printing plant shutdowns of daily papers in Seattle, Denver, and elsewhere, it’s clear that these efforts at media reform have been less than helpful.
Let’s not send the Internet the way of the Seattle Post-Intelligencer. Rather than buying Free Press’s shameless scare tactics, reflect on your own experience. Do you see even the slightest shred of evidence to support the wild claim that the Internet is withering on the vine? I certainly don’t.
No more dirt roads to the Internet for me. Comcast came out and hooked up a DOCSIS 3.0 modem (it’s a Motorola) this morning, speeding up my downloads to 53 Mb/s per Speedtest.net and jacking up the upload to a bit over 4 Mb/s. Both of these numbers are about double what I had before with the Blast! service that’s advertised at 16/2. I had the dude put the modem in the living room to get my router closer to the center of the house in order to improve my Wi-Fi coverage, which only took a splitter off the TiVo’s feed. The old modem remains installed for phone service, but its MAC address has been removed from the DHCP authorization list. It turns out the backup battery had been installed incorrectly in the old modem, so he fixed that. The only incident that turned up in the install was the discovery that my TiVo HD is feeding back a noticeable voltage from the cable connection, which can apparently cause bad things to happen to the DOCSIS connection. He installed a voltage blocker of some kind to keep that at bay, but I’ll have to complain to TiVo about that feature.
As I had to go to the office as soon as the installation was completed, I haven’t had time to play with my privileged fast lane service, but I did enough to notice a fairly dramatic difference even in ordinary activities like reading e-mail. I use an IMAP server on the host that handles bennett.com, and its location in Florida tends to make for sluggish response when deleting mail or simply scanning a folder. It’s so fast now it’s like a local service. (People who use the more popular POP3 e-mail protocol won’t see this effect, so don’t worry about it – when you delete a POP3 message you’re deleting a local copy, but with IMAP the operation happens on the server.)
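A toy model shows why the fat pipe helps even a chatty protocol like IMAP: folder operations pay a round-trip cost that bandwidth can’t fix, but message transfers scale directly with the pipe. All the figures below are illustrative guesses, not measurements of my actual connection:

```python
# Crude model of syncing an IMAP folder: serial round trips for commands
# plus payload transfer time for message bodies. Numbers are illustrative.

def sync_time_seconds(messages, avg_kb, rtt_ms, mbps):
    """Rough lower bound: one round trip per message plus transfer time."""
    latency = messages * rtt_ms / 1000          # RTT cost, bandwidth-proof
    transfer = messages * avg_kb * 8 / (mbps * 1000)  # payload cost
    return latency + transfer

# 100 messages of ~50 KB, California-to-Florida path (~80 ms RTT):
before = sync_time_seconds(100, avg_kb=50, rtt_ms=80, mbps=2)   # 8 + 20 = 28 s
after = sync_time_seconds(100, avg_kb=50, rtt_ms=80, mbps=50)   # 8 + 0.8 = 8.8 s
```

The latency term is why the service still isn’t literally local, but shrinking the transfer term from 20 seconds to under one is what makes it feel that way.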
So the main effect of this super-fat Internet pipe is to make network services and content as readily accessible as local services and content. Which is a very wonderful thing for a couple of reasons: accessing content and services from the various machines I have connected to the Internet from home involves maintenance and security hassles that aren’t always worthwhile, so it’s convenient to outsource data to a system in the cloud that’s secure, well maintained, and backed up. It’s very easy to do that now, all the way around. And for the data that I still access locally, such as media files and the like, an off-site backup will be very painless.
One of the next exercises is going to be media streaming from my server in Florida to my TV in California, after I’ve got all my data encrypted and backed up. At this point, I’ve got three devices at home connected to the Internet that are something other than general-purpose computers: a TiVo, a Blu-Ray player that also does Netflix streaming, and a Blackberry that goes to the web via 802.11a/g Wi-Fi. At any given time, I’ve got two to four general-purpose computers on the ‘net as well (more if we count virtual machines), so it’s clear that the balance is turning in the direction of the special-purpose machines. This is what makes Zittrain sad, but it shouldn’t. It’s in the nature of general-purpose systems not to require much multiplication; one that’s fast but stationary, another that’s lighter and mobile, and one more that’s super light and ultra-mobile is about all you’ll ever need. But special purpose machines multiply like rabbits, as more and more purposes are discovered for networked devices.
So the future is obviously going to embrace more specialized (“sterile tethered appliance”) machines than general purpose machines; that’s a given. The “Future of the Internet” question is actually whether the general-purpose machines also become more powerful and capable of doing more things than they do now. In other words, don’t just count machines, count functions and applications. The failure to understand this issue is Zittrain’s fundamental error. (Gee, the fast Internet made me smarter already.)
Attaching a controller/monitor to my aquarium that I can access across the Internet is the next exercise, and after that some security cameras and an outdoor Wi-Fi access point. It never ends.
DC wonks are by now aware that Harold Feld has left MAP and joined Public Knowledge as Legal Director. While there’s no doubt that Harold is a card-carrying communist, he’s my favorite pinko so I’m glad to see he’s secured gainful employment. With any luck, he can slap a little sense into the more fanatical members of the PK staff and make them act a little bit more like adults. So congrats, Harold, and good luck to you. A little, anyway.
Speaking of communists, check this breath-taking exercise in spin at Technology Liberation Front. Tim Lee trots out that sick “GNU/Linux operating system” trope. Nope, GNU and Linux are two different things created by two different communities under very different assumptions. The FSF tried to create its own OS for many years and failed, but Torvalds did it right away because he’s a brainy and practical dude. Don’t count on fire-breathing ideologues to create your technology for you; there will be so many strings attached you won’t want to use it.
There’s nothing like a horde of iPhone users to kill access to AT&T’s wireless network: my AT&T Blackberry Bold was nearly unusable at eComm because of the large number of iPhones in the room, and the situation at SxSW is roughly the same. The silver lining in Austin this week is that the show’s Wi-Fi network is working well. Part of the trick is the deployment of Cisco 1252 Access Points with 5 GHz support. Unlike the Bold, iPhones can’t operate on 5 GHz channels, so all that spectrum is free for the taking by Bolds and laptops that can operate on it. In a concession to Macbook users who aren’t allowed to select a Wi-Fi band, the show net had different ESSIDs for 2.4 and 5 GHz operation. It also has a load of reasonable restrictions:
Acceptable Use Policy
The Wireless network at the Convention Center is designed for blogging, e-mail, surfing and other general low bandwidth applications. It is not intended for streaming of any sort.
a) Peer-to-peer traffic such as bittorrent and the like, use a disproportionate amount of bandwidth and are unfair to other attendees. Please refrain from non-conference related peer-to-peer activities to minimize this effect.
b) Please be considerate and share the bandwidth with your fellow attendees. Downloading all of the videos from a video sharing service for example, is being a hog.
c) Please do not actively scan the network. Many of the tools for scanning an address range are too efficient at using as much bandwidth as possible, this will likely be noticed.
Despite this AUP, I can confidently predict that speakers will demand unrestricted use of wireless spectrum.
Slight disconnect, eh?
UPDATE: Om of GigaOm reports that AT&T is addressing the problems in Austin by switching on the 850 MHz band in their downtown Austin towers:
AT&T’s network choked and suddenly everyone was up in arms. And then Ma Bell got in touch with Stacey, who reported that AT&T was boosting its network capacity.
How did they do this? By switching on the 850 MHz band on eight cell towers to blanket the downtown Austin area. This was in addition to the existing capacity on the 1900 MHz band. AT&T is going to make the same arrangements in San Francisco and New York by end of 2009, AT&T Mobility CEO Ralph de la Vega told Engadget.
Not all of your AT&T devices support the 850 MHz band, but the Bold does. The larger takeaway, however, is that all wireless systems become victim to their own success. The more people use them, the worse they get. C’est la vie.
We’re quite fond of Sir Tim Berners-Lee. As the first web designer, he personally converted the Internet from an odd curiosity of network engineering into a generally useful vehicle for social intercourse, changing the world. That this was a contribution of inestimable value goes without saying. It’s therefore distressing to read that he’s been mumbling nonsense in public fora about Internet management practices.
For all his brilliance, Sir Tim has never really been on top of the whole traffic thing. His invention, HTTP 1.0, did strange things to the Internet’s traffic handling system: his decision to chunk segments into 512 byte pieces tripled the number of packets the Internet had to carry per unit of information transfer, and his decision to open a unique TCP stream for every object (section of text or graphic image) on a web page required each part of each page to load in TCP’s “slow start” mode. Carriers massively expanded the capacity of their pipes in a vain attempt to speed up web pages, as poor performance was designed into Sir Tim’s protocol. Hence the term “world-wide wait” had to be coined to describe the system, and more experienced engineers had to produce HTTP 1.1 to eliminate the tortured delay. This is not to bash His Eminence, but rather to point out that all of us, even the geniuses, have limited knowledge.
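A toy model makes the cost concrete. In slow start, TCP’s congestion window begins at one segment and doubles each round trip, so a fresh connection per object pays the startup tax over and over, while a persistent HTTP 1.1 connection pays it roughly once. This is a crude sketch with a made-up page composition, not a faithful TCP simulation:

```python
import math

# Toy model of why per-object TCP connections made the early Web slow.
# Assumptions: one handshake RTT per connection, slow start delivering
# 2**r - 1 segments after r round trips, 1460-byte segments.

MSS = 1460  # bytes per segment, a common Ethernet-era value

def slow_start_rounds(object_bytes):
    segments = math.ceil(object_bytes / MSS)
    # After r round trips, slow start has delivered 2**r - 1 segments.
    return math.ceil(math.log2(segments + 1))

def page_rtts_http10(object_sizes):
    # HTTP 1.0 style: a handshake plus a fresh slow start per object.
    return sum(1 + slow_start_rounds(s) for s in object_sizes)

def page_rtts_http11(object_sizes):
    # HTTP 1.1 style, crudely approximated: one handshake, one warm
    # connection, so later objects avoid restarting from one segment.
    return 1 + slow_start_rounds(sum(object_sizes))

# A hypothetical page: ten small images plus one HTML file.
page = [8000] * 10 + [30000]
print(page_rtts_http10(page), "vs", page_rtts_http11(page), "round trips")
```

Under these assumptions the per-object scheme costs 46 round trips against 8 for a persistent connection, which is the “world-wide wait” in a nutshell.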
At a House of Lords roundtable last week, Sir Tim took up a new cause by way of complaining about one of the ways that personal information may be obtained on the Internet:
Speaking at a House of Lords event on the 20th anniversary of the invention of the World Wide Web, Berners-Lee said that deep packet inspection was the electronic equivalent of opening people’s mail.
“This is very important to me, as what is at stake is the integrity of the internet as a communications medium,” Berners-Lee said on Wednesday. “Clearly we must not interfere with the internet, and we must not snoop on the internet. If we snoop on clicks and data, we can find out a lot more information about people than if we listen to their conversations.”
Deep packet inspection involves examining both the data and the header of an information packet as it passes a ‘black box’ on a network, in order to reveal the content of the communication.
Like many opponents of the scary-sounding “deep packet inspection,” His Eminence confuses means and ends. There are many ways to obtain personal information on the Internet; the preceding post was about one of them. Given the choice, most of us would gladly surrender some level of information in order to obtain free services or simply better-targeted ads. As long as the Internet is considered a bastion of “free-” (actually, “advertising-supported-“) culture and information, personal information gathering will be the coin of the realm. So it doesn’t much matter whether my privacy is violated by a silly packet-snooping system that I can easily thwart by encrypting my data or by an overly-invasive ad placement system; it’s gone either way. So if he’s manic about privacy, he should address the practice of information-gathering itself and not simply one means of doing it.
Nonsense is not unknown in the House of Lords, however. One of the most entertaining debates in the history of Western democracy took place in that august body, the infamous UFO debate:
The big day came on 18 January 1979 in the middle of a national rail strike. But the industrial crisis did nothing to dampen interest in UFOs. The debate was one of the best attended ever held in the Lords, with sixty peers and hundreds of onlookers – including several famous UFOlogists – packing the public gallery.
Lord Clancarty opened the three hour session at 7pm “to call attention to the increasing number of sightings and landings on a world wide scale of UFOs, and to the need for an intra-governmental study of UFOs.” He wound up his speech by asking the Government to reveal publicly what they knew about the phenomenon. And he appealed to the Labour Minister of Defence, Fred Mulley, to give a TV broadcast on the issue in the same way his French counterpart, M. Robert Galley, had done in 1974.
The pro-UFO lobby was supported eloquently by the Earl of Kimberley, a former Liberal spokesman on aerospace, who drew upon a briefing by the Aetherius Society for his UFO facts (see obituary, FT 199:24). Kimberley’s views were evident from an intervention he made when a Tory peer referred to the Jodrell Bank radio telescope’s failure to detect a single UFO: “Does the noble Lord not think it conceivable that Jodrell Bank says there are no UFOs because that is what it has been told to say?”
More than a dozen peers, including two eminent retired scientists, made contributions to the debate. Several reported their own sightings including Lord Gainford who gave a good description of the Cosmos rocket, “a bright white ball” like a comet flying low over the Scottish hills on New Year’s Eve. Others referred to the link between belief in UFOs and religious cults. In his contribution the Bishop of Norwich said he was concerned the UFO mystery “is in danger of producing a 20th century superstition” that sought to undermine the Christian faith.
Perhaps their Lordships will invite His Eminence to observe an actual debate on Internet privacy, now that he’s set the stage with the roundtable. I think it would be absolutely smashing to see 40 of Bertie Wooster’s elderly uncles re-design the Web. Maybe they can add a comprehensive security model to the darned thing.
On a related note, Robb Topolski presented the worthies with a vision of the Web in a parallel universe that sent many scurrying back to their country estates to look after their hedgehogs. Topolski actually spoke about North American gophers, but the general discussion brings to mind the hedgehog’s dilemma of an open, advertising-supported Internet: a system that depends on making the private public is easily exploited.
UPDATE: Incidentally, Topolski’s revisionist history of the Web has been harshly slapped down by the Boing-Boing readers who should be a friendly audience:
Huh? What a bizarre claim. Is he saying that network admins weren’t capable of blocking port 80 when HTTP was getting off its feet?!?
Wha? Even ignoring the fact that network admins at the time _did_ have the tools to block/filter this kind of traffic, this would still have little or nothing to do with endpoint computing power.
Oh, man. This is definitely junk.
Revisionist history in the name of greater freedom is still a lie.
Follow this link to a discussion from 1993 about how to make a Cisco firewall block or permit access to various Internet services by port. HTTP isn’t in the example, but the same rules apply. The power was clearly there.
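For reference, a filter of that vintage looked roughly like this. It’s a hypothetical sketch of a Cisco IOS extended access list; the list number and interface name are arbitrary:

```
! Block inbound web traffic (TCP port 80), permit everything else.
access-list 101 deny tcp any any eq www
access-list 101 permit ip any any
!
interface Serial0
 ip access-group 101 in
```

Swap `deny` and the service name and the same mechanism blocks FTP, Telnet, or anything else with a well-known port, which was exactly the point of the 1993 thread.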
Welcome to the NAF, Robb, do your homework next time.
Regular readers are aware that this blog used to feature Google ads. We never made serious money from Adsense, so it was easy to decide to drop it when the Terms and Conditions of Google’s new behavioral advertising campaign were released. Here’s what Google suggests re: a privacy disclosure:
* Google’s use of the DART cookie enables it to serve ads to your users based on their visit to your sites and other sites on the Internet.
We use third-party advertising companies to serve ads when you visit our website. These companies may use information (not including your name, address, email address, or telephone number) about your visits to this and other websites in order to provide advertisements about goods and services of interest to you. If you would like more information about this practice and to know your choices about not having this information used by these companies, click here.
You can find additional information in Appendix A of the NAI Self-Regulatory principles for publishers (PDF). Please note that the NAI may change this sample language at any time.
People don’t come to this site to buy stuff, and they shouldn’t have to undergo a vexing decision-making process before visiting this blog, so we’ve dropped Google as an advertiser. Not because Google is Evil, but simply because this is one too many hoops for our readers to jump through. Plus, the commission rate sucks.
So please continue to read Broadband Politics without fear of being reported to Big Brother.
How many companies are actually increasing headcount these days? Not many, but AT&T is adding 3,000 jobs to expand its 3G network:
Despite a capex cut of up to $3 billion this year, AT&T Inc. (NYSE: T) made it clear today that it intends to spend to improve and expand its 3G network — adding 3,000 jobs in the process to support “mobility, broadband, and video.”
This comes on the heels of an announcement last December of a cut of 12,000 jobs, so it’s not quite as against-the-grain as it might seem. Still, it’s good news for 3,000 people and a counter-indicator of permanent global economic collapse.
The UK has offered some language to the EU regulators on Internet services that would clarify the relationship between users and providers and require full disclosure of management practices by the latter. The measure addresses the prime source of friction between the package of end user freedoms and the network management exception that we currently have in the US, absent a coherent regulatory framework for Internet services.
Most of us would probably say, after reading the whole package, that consumer rights are advanced by it. But most of us aren’t fire-breathing neutrality monsters who can’t be bothered with the practical realities of network operation. The actual document the Brits are circulating is here; pay special attention to the Rationale.
The operative language establishes the principle that there are in fact limits to “running the application of your choice” and “accessing and sharing the information of your choice” on the Internet, which is simply stating some of the facts of life. If you’re not allowed to engage in identity theft in real life, you’re also not allowed to do so on the Internet; if you’re not allowed to violate copyright in real life, you’re also not allowed to do so on the Internet; and so on. Similarly, while you’re allowed to access the legal content and services of your choice, you’re not allowed to access them at rates that exceed the capacity of the Internet or any of its component links at any given moment, nor without the finite delays inherent in moving a packet through a mesh of switches, nor with such frequency as to pose a nuisance to the Internet Community as a whole or to your immediate neighbors. Such is life.
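The “finite delays” point isn’t rhetoric; it falls straight out of elementary queueing theory. Here’s a sketch using the textbook M/M/1 model, with an illustrative link speed; real traffic is burstier than the model assumes, so these delay figures are optimistic:

```python
# As offered load approaches link capacity, queueing delay grows without
# bound: "access at any rate you like" is physically impossible.
# Simple M/M/1 model; the link rate below is illustrative.

def mm1_delay_ms(service_rate_pps, arrival_rate_pps):
    """Mean time in system for an M/M/1 queue, in milliseconds."""
    if arrival_rate_pps >= service_rate_pps:
        return float("inf")  # demand exceeds capacity: the queue never drains
    return 1000 / (service_rate_pps - arrival_rate_pps)

link = 10_000  # packets per second
for load in (0.5, 0.9, 0.99):
    print(f"{load:.0%} utilization: {mm1_delay_ms(link, load * link):.1f} ms")
```

Delay rises gently until utilization nears 100%, then explodes, which is why every shared network on earth has some mechanism for restraining its heaviest users.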
In the place of the current text which touts the freedoms without acknowledging the existing legal and practical limits on them, the amendment would require the carriers to disclose service plan limits and actual management practices.
So essentially what you have here is a retreat from a statement that does not accurately describe reasonable expectations of Internet experience with one that does. You can call it the adoption of a reality-based policy statement over a faith-based statement. Who could be upset about this?
Plenty of people, as it turns out. A blog called IPtegrity is hopping mad:
Amendments to the Telecoms Package circulated in Brussels by the UK government seek to cross out users’ rights to access and distribute Internet content and services. And they want to replace it with a ‘principle’ that users can be told not only the conditions for access, but also the conditions for the use of applications and services.
…as is science fiction writer and blogger Cory Doctorow:
The UK government’s reps in the European Union are pushing to gut the right of Internet users to access and contribute to networked services, replacing it with the “right” to abide by EULAs.
…and Slashdot contributor Glyn Moody:
UK Government Wants To Kill Net Neutrality In EU
…The amendments, if carried, would reverse the principle of end-to-end connectivity which has underpinned not only the Internet, but also European telecommunications policy, to date.’
The general argument these folks make is that the Internet’s magic end-to-end argument isn’t just a guideline for developers of experimental protocols (as I’ve always thought it was) but an all-powerful axiom that confers immunity from the laws of physics and economics as well as those of human legislative bodies. Seriously.
So what would you rather have, a policy statement that grants more freedoms to you than any carrier can actually provide, or one that honestly and truthfully discloses the actual limits to you? This, my friends, is a fundamental choice: live amongst the clouds railing at the facts or in a real world where up is up and down is down. Sometimes you have to choose.
H/T Hit and Run.