Going Mobile: Technology and Policy Issues in the Mobile Internet

I’m presenting a report on the Mobile Internet at the ITIF Global Command Center in Washington bright and early Tuesday morning:

The Internet is changing. In a few short years, Internet use will come predominantly from mobile devices such as smartphones and tablets rather than traditional PCs using fixed broadband. A fully mobile broadband Internet offers exciting opportunities for innovation in networks, devices, and applications with enormous benefits for the economy and society.

The shift from a wire-centric Internet to a mobile one has profound implications for technology, policy, and applications. A new report by ITIF Research Fellow Richard Bennett explains how mobile networks are changing as they become part of the Internet, the implications mobile networking has for public policy, and how policymakers can facilitate the transition to mobile broadband.

Join us for the presentation of the report and a panel discussion among leading representatives of diverse viewpoints on Internet policy.

Date: Tuesday, March 2, 2010
Time: 9:00am- 10:30am
Location: 1101 K Street Suite 610A Washington, DC 20005

Presenter

Richard Bennett
Research Fellow, The Information Technology and Innovation Foundation
Respondents

Harold Feld
Legal Director, Public Knowledge

Morgan Reed
Executive Director, Association for Competitive Technology

Barbara Esbin
Senior Fellow and Director, Center for Communications and Competition Policy, PFF

Blackberry dominates the world

Everybody knows we have our first BlackBerry-toting president, but how many know that BlackBerry outsells Apple?

An aggressive “buy-one-get-one” promotion by Verizon Wireless helped RIM’s BlackBerry Curve move past Apple’s iPhone to become the best-selling consumer smartphone in the U.S. in the first quarter of 2009, according to market research firm The NPD Group.

RIM’s consumer smartphone market share increased 15 percent versus the prior quarter, reaching nearly half of the entire smartphone market in Q1 2009. Apple’s and Palm’s market shares each declined 10 percent.

Part of this is driven by the new iPhone due in June, but the Curve is not exactly state of the art in the BlackBerry world either. Take this as another example of conventional wisdom not being too wise.

See you in Washington

I’ve been asked to join a panel at the Congressional Internet Caucus’ short conference on the State of the Mobile Net on April 23rd. I’ll be on the last panel:

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Richard Bennett
Harold Feld, Public Knowledge
Alexander Hoehn-Saric, U.S. Senate Commerce Committee
Larry Irving, Internet Innovation Alliance
Blair Levin, Stifel Nicolaus
Ben Scott, Free Press
Kevin Werbach, Wharton School of Business

I suspect we’ll spend the bulk of our time on the interaction between regulatory agencies, standards bodies, and industry groups. The case studies are how the process worked for Wi-Fi, with the FCC opening up some junk spectrum, the IEEE 802.11 working group writing some rules, and the Wi-Fi Alliance developing compliance tests. In the UWB world, the model was a novel set of rules for high-quality spectrum, followed by the collapse of IEEE 802.15.3a and the subsequent attempt by the WiMedia Alliance to save it. We’ll probably have UWB someday (Wireless USB and Bluetooth 4.0 will both use it), but the failure of the standards body was a major impediment.

With White Spaces up for grabs, we’d like to have something that’s at least as good as 802.11, but we really need to do a lot better.

Another topic of interest is whether mobile Internet access services should be regulated the same way that wireline services are regulated, and how we go about drafting that set of rules. The current state of the art is the four (or five) prongs of the FCC’s Internet Policy Statement, but these principles leave a lot to the imagination, punting on all of the interesting questions about network management, QoS-related billing, third-party payments, and the various forms of disclosure that may or may not be worthwhile.

The Internet is troubled by the fact that it’s worked pretty damn well for the past 25 years, so there’s been no need to make major changes in its services model. It’s clear to me that some fairly disruptive upgrades are going to be needed in the near future, and we don’t want to postpone them by applying a legacy regulatory model to a network that’s not fully formed yet.

Life in the Fast Lane

No more dirt roads to the Internet for me. Comcast came out and hooked up a DOCSIS 3.0 modem (it’s a Motorola) this morning, speeding up my downloads to 53 Mb/s per Speedtest.net and jacking up the upload to a bit over 4 Mb/s. Both of these numbers are about double what I had before with the Blast! service that’s advertised at 16/2. I had the dude put the modem in the living room to get my router closer to the center of the house in order to improve my Wi-Fi coverage, which only took a splitter off the TiVo’s feed. The old modem remains installed for phone service, but its MAC address has been removed from the DHCP authorization list. It turns out the backup battery had been installed incorrectly in the old modem, so he fixed that. The only incident that turned up in the install was the discovery that my TiVo HD is feeding back a noticeable voltage from the cable connection, which can apparently cause bad things to happen to the DOCSIS connection. He installed a voltage blocker of some kind to keep that at bay, but I’ll have to complain to TiVo about that feature.
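
If you’d rather measure from a script than a browser, a rough equivalent of the Speedtest.net check can be done with the third-party speedtest-cli Python package. This is an illustrative sketch, not the tool Comcast or Speedtest.net ships, and your numbers will obviously vary:

```python
# Minimal throughput check, assuming the third-party speedtest-cli package
# (pip install speedtest-cli). It uses the same Speedtest.net servers the
# browser test does; results come back in bits per second.
import speedtest

st = speedtest.Speedtest()
st.get_best_server()            # pick the lowest-latency test server

down_bps = st.download()
up_bps = st.upload()

print(f"Download: {down_bps / 1e6:.1f} Mb/s")
print(f"Upload:   {up_bps / 1e6:.1f} Mb/s")
```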

As I had to go to the office as soon as the installation was completed, I haven’t had time to play with my privileged fast lane service, but I did enough to notice a fairly dramatic difference even in ordinary activities like reading e-mail. I use an IMAP server on the host that handles bennett.com, and its location in Florida tends to make for sluggish response when deleting mail or simply scanning a folder. It’s so fast now it’s like a local service. (People who use the more popular POP3 e-mail protocol won’t understand this, so don’t worry about it – when you delete an e-mail it’s a local copy, but mine is on the network.)
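
For the terminally curious, here’s roughly what an IMAP delete involves; every step below is a round trip to the server in Florida, which is why the fatter pipe is so noticeable. This is a minimal sketch using Python’s standard imaplib, with a placeholder host and credentials:

```python
import imaplib

# Connect to the remote IMAP server (placeholder host) and authenticate.
# Every operation below is a round trip to the server, which is why
# latency and bandwidth to the mail host matter so much for IMAP.
conn = imaplib.IMAP4_SSL("mail.example.com")
conn.login("user", "password")

# Select the folder; even "scanning a folder" means fetching data remotely.
conn.select("INBOX")
typ, data = conn.search(None, "ALL")

# Deleting a message is two server-side steps: flag it, then expunge.
# With POP3 the delete would be against a local copy instead.
msg_ids = data[0].split()
if msg_ids:
    conn.store(msg_ids[0].decode(), "+FLAGS", "\\Deleted")
    conn.expunge()

conn.logout()
```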

So the main effect of this super-fat Internet pipe is to make network services and content as readily accessible as local services and content. Which is a very wonderful thing for a couple of reasons: accessing content and services from the various machines I have connected to the Internet from home involves maintenance and security hassles that aren’t always worthwhile, so it’s convenient to outsource data to a system in the cloud that’s secure, well maintained, and backed up. It’s very easy to do that now, all the way around. And for the data that I still access locally, such as media files and the like, an off-site backup will be very painless.

One of the next exercises is going to be media streaming from my server in Florida to my TV in California, after I’ve got all my data encrypted and backed up. At this point, I’ve got three devices at home connected to the Internet that are something other than general-purpose computers: a TiVo, a Blu-ray player that also does Netflix streaming, and a BlackBerry that goes to the web via 802.11a/g Wi-Fi. At any given time, I’ve got two to four general-purpose computers on the ‘net as well (more if we count virtual machines), so it’s clear that the balance is turning in the direction of the special-purpose machines. This is what makes Zittrain sad, but it shouldn’t. It’s in the nature of general-purpose systems not to require much multiplication; one that’s fast but stationary, another that’s lighter and mobile, and one more that’s super light and ultra-mobile is about all you’ll ever need. But special-purpose machines multiply like rabbits, as more and more purposes are discovered for networked devices.

So the future is obviously going to embrace more specialized (“sterile tethered appliance”) machines than general purpose machines; that’s a given. The “Future of the Internet” question is actually whether the general-purpose machines also become more powerful and capable of doing more things than they do now. In other words, don’t just count machines, count functions and applications. The failure to understand this issue is Zittrain’s fundamental error. (Gee, the fast Internet made me smarter already.)

Attaching a controller/monitor to my aquarium that I can access across the Internet is the next exercise, and after that some security cameras and an outdoor Wi-Fi access point. It never ends.

Bye bye G1

After suffering with the Google phone for 4 weeks, I took it back to T-Mobile yesterday (the contract says you only have 14 days, but I live in California, where the time limit on an upgrade return is 30 days). Jeff Turner describes the G1 appropriately: Like Windows 2.0, it’s good enough that you can tell it’s going to become the standard some day, but it’s not really usable in its present form. The main gripes I had with it are, in no particular order: poor battery life, dropped calls, a crappy Bluetooth implementation, unusable e-mail, a pathetic keypad, and a dearth of applications. My previous phone was a BlackBerry Curve, which did everything that it did extremely well; if the Curve could do 3G I’d have got a replacement for the one I lost in London. But it doesn’t, so I’ve gone to a Sony Ericsson TM506, a feature phone that does phone things extremely well, has a built-in GPS (that doesn’t seem to work very well), and may possibly be used as a modem to tether a laptop to the 3G network (that feature seems to be controversial, as Sony Ericsson supports it and T-Mobile may not; see update below).

It’s basically a stop-gap until there’s a competent Blackberry for T-Mobile’s 3G network, which unfortunately uses oddball frequencies in the US.

The G1 has a high return rate owing to the generally pathetic implementation of Android by HTC. And I also don’t like sharing all the information about my personal life that Google wants. But that’s another story.

It’s clear that the iPhone has changed the game for mobile devices and the entrenched cell phone suppliers are struggling to catch up. I don’t doubt that Apple will continue to dominate the mobile device space for at least the next year or two, so I may just have to accede to reality and jump on that bandwagon.

UPDATE: Tethering works; I get close to 800 Kbps at home (the Bluetooth limit), but the quota is pathetic: 100 MB/mo, and that’s not going to last long. Presumably, it downgrades to EDGE when the 3G quota is exhausted. The phone doesn’t have a standard USB connector, so I tethered over Bluetooth using the very nice PC Suite from Sony Ericsson. It guides you through the Bluetooth hookup and makes accessing the Internet through the phone a point-and-click operation, even on a Mac.

It’s nice to use stuff that’s well engineered, isn’t oversold, and actually works (except for that GPS, which must be defective on my phone).

The 100 MB/mo quota for $20 for the TM506 makes no sense compared to the 10 GB/mo they sell for $25 to G1 customers unless Google is paying a subsidy to T-Mobile. If they are, Steve Jobs must be laughing all the way to the bank.
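
To put that in perspective, here’s the back-of-the-envelope arithmetic; the prices and quotas are the ones quoted above, and the script is just a throwaway sketch:

```python
# Back-of-the-envelope cost comparison of the two tethering plans
# mentioned above (prices and quotas as quoted in the post).
plans = {
    "TM506 tethering": {"price": 20.0, "quota_mb": 100},
    "G1 data plan":    {"price": 25.0, "quota_mb": 10 * 1024},
}

for name, p in plans.items():
    per_gb = p["price"] / (p["quota_mb"] / 1024)
    print(f"{name}: ${p['price']:.2f} for {p['quota_mb']} MB "
          f"= ${per_gb:.2f} per GB")

# Output: roughly $205/GB for the TM506 plan vs. $2.50/GB for the G1 plan,
# about an 80-fold difference in price per byte.
```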

Just Another Utility

Critics of Obama FCC transitioner Susan Crawford have correctly pointed out that she’s made some odd-sounding remarks to the effect that Internet access is just another utility, like water, power, or sewer service. If the intent of this remark is to suggest that everybody needs access to the Internet these days, just as much (or nearly as much) as we need electricity and running water, nobody has much of a problem with this observation, other than standard hyperbole objections.

But utilities in the USA tend to be provided by the government, so it’s reasonable to draw the implication from Crawford’s comparison that the government should be providing Internet access. This interpretation is underscored by her frequent complaints about the US’s ranking versus other countries in the broadband speed and price sweepstakes.

If you continually advocate more aggressive spending to win a supposed international broadband arms race, minimize the effectiveness of private investment, and tout the planet’s fastest and cheapest Internet service as a necessity of life, you’re going to be regarded as either a garden-variety socialist or an impractical dreamer.

Ultra-cool Computers

My next personal computer is going to be an ultra-portable tablet. I’ve never bought a laptop of my own, since my employers tend to shower me with them, and they’ve had so many drawbacks I couldn’t see any point in shelling out for one of my own. But recent research shows that we’re officially in the Dynabook Era, with great gear like the Dell Latitude XT Tablet, the Lenovo X200 Tablet, the Asus R1E, the Fujitsu LifeBook T5010, and the recently announced HP EliteBook 2730p.

What these babies have in common is light weight, sharp but small screens, long battery life, a wealth of connectivity features, and other goodies like web cams and mikes, GPS locators, touch-sensitive displays, and handwriting recognition. They’re more like Smartphones than traditional PCs, but without all the annoying limitations that make Blackberries better in the demo than in real life. Unlike pure slate computers that lack keyboards, they have swivel-mounted screens that can be twisted and folded to cover the laptop’s clamshell base, so you have a touch-sensitive display for when you need to jot notes or draw, and a regular keyboard for high-volume typing.

Each excels in some areas. The Dell seems to have the clearest screen and the best handwriting recognition, since it uses a capacitive touchscreen. It draws a bit more power, since capacitive touch keeps an electric field active across the screen, where the more common pen-digitizer approach relies on an electromagnetic stylus to alert the touch sensor that something’s happening. The stylus-activated system rules out using your finger as a pointing device, which is also unfortunate, and has a thicker overlay on the screen than the Dell. The iPhone uses a capacitive touch system.

Dell also has a nice graphics chip with some dedicated memory, which significantly outperforms the shared-memory systems that are commonplace. But Dell’s CPU options are at the low end of the scale: the 1.2 GHz Intel U7600, an ultra-low-voltage 65 nm dual-core part, is as good as it gets. This is apparently a soldered-in part that can’t be upgraded. Dell is also super-expensive.

The Lenovo is too new for much in the way of evaluation, but it has very nice specs and a great pedigree. While the XT Tablet is Dell’s first convertible, the X200 is Lenovo’s third or so, and the details show. If they would only stop white-listing their own wireless cards in the BIOS they’d be at the top of my list. The X200 Tablet uses a more substantial and higher-power Intel CPU, around 1.8 GHz, which makes it considerably faster than* the Dell. They also use Intel’s Centrino graphics, and suffer a bit for it, but that’s a classic engineering tradeoff. Lenovo has an amazing array of connectivity choices, including the UWB system, AKA Wireless USB. With an internal Wireless WAN card with GPS, internal Wi-Fi (including 3×3 11n), Bluetooth, and Wireless USB, this system has five kinds of wireless without a visible antenna, which is awfully sharp.

The Fujitsu and Asus convertibles have larger screens – 13.3 in. vs. 12.1 for the Dell and the Lenovo – and add a pound or so of weight. Asus is concentrating on their netbooks these days, and doesn’t seem to be serious about keeping up to date, while the Fujitsu makes some strange choices with noisy fans and heat.

To be avoided are the older HPs using the AMD chipset. AMD can’t keep up with Intel on power efficiency, so convertible systems that use their parts are only portable between one wall socket and another.

None of these little Dynabooks has made me swipe a card yet, but the collections of technology they represent say a lot about the future of networking. With all that wireless, the obligatory Gigabit Ethernet looks like an afterthought.

Which brings me to my point, gentle readers. What’s your experience with Wireless WANs in terms of service – between AT&T, Sprint, and Verizon, who’s got it going on? I get my cell phone service from friendly old T-Mobile, but they’re not a player in the 3G world. I like Verizon’s tiered pricing, as I doubt I’ll use 5 GB/mo of random wireless, as close as I tend to be to Wi-Fi hotspots, but it seems like a much nicer fall-back than using my BlackBerry Curve as a modem.

For a nice demonstration of the XT’s capacitive touch screen in comparison to the more primitive Lenovo, see Gotta Be Mobile.

*Edited. The X200 non-tablet has a faster processor than the X200 Tablet. The tablet sucks power out of the system, and Lenovo had to de-tune the CPU to provide it.

Comcast was right, FCC was wrong

A fellow named Paul Korzeniowski has written a very good, concise piece on the Comcast action at the FCC for Forbes, Feds And Internet Service Providers Don’t Mix. He manages to describe the controversy in clear and unemotional language, which contrasts sharply with the neutralists who constantly use emotionally-charged terms such as “blocking,” “Deep Packet Inspection,” “forgery,” and “monopoly” to describe their discomfort.

What Comcast actually did, and still does today, is simply limit the amount of free upstream bandwidth P2P servers can use to 50% of capacity. This isn’t “blocking” or “censorship,” it’s rational network management:

Cable giant Comcast is at the center of a very important controversy for small businesses. In the summer of 2007, it became clear that the carrier was putting restrictions on how much information selected customers could transmit. BitTorrent, a P2P application-sharing company, had been using lots of bandwidth, so the ISP throttled back some of its transmissions.

“Throttled back some of its transmissions” is correct. Comcast doesn’t throttle back P2P downloads, which you can prove to yourself if you happen to have a Comcast account: download a large file using P2P and notice that it moves faster than it possibly can on any flavor of DSL. My recent tests with Linux have files downloading at 16 Mb/s, the advertised maximum for my account.

Korzeniowski then explains the facts of life:

The reality is that all ISPs are overbooked–they have sold more bandwidth than they can support.

This overbooking has been an issue since the old Public Switched Telephone Network (PSTN) days. In that situation, individuals would receive a busy signal when the network was overloaded. Because the Internet has an antithetical design, ISPs don’t have a busy signal option.

ISPs actually do have a “busy signal option”: it’s the Reset packet that Comcast uses to limit active upstream sessions. But neutrality regulationists call it “forgery” and abhor it.

“Overbooking” bandwidth isn’t a bad thing, and in fact it’s central to the economics of packet switching. The PSTN forces each caller into a bandwidth ghetto where he is allocated a small chunk of bandwidth, 4 kHz, regardless of how much he currently requires. If you’re on the phone and have to set it down to check on your chili, you have 4 kHz. If you’re blasting files over a modem connection, you have 4 kHz. It doesn’t matter how many other callers are on-line and what they’re doing: you each get 4 kHz. That’s the law.

But packet switching, of which the Internet is an example, allows your bandwidth allocation to float depending on what you need to do and what other people are doing. You share network facilities with your neighbors (and this is true whether you use DSL or cable, you just share at different points on these technologies), so you can get a larger chunk of bandwidth when they’re idle than when they’re banging the net hard.

Overbooking allows you to use very large amounts of bandwidth for short periods of time, which is ideal for web surfing: you click on a link, you get a ton of graphics sent to your computer. While you’re reading, your neighbors get to use the bandwidth that would be wasted if you had PSTN connections. It works for everybody, most of the time. It works so well, in fact, that ISPs haven’t bothered to meter actual bandwidth use: the resource is so abundant, and the demands so few (especially in the upstream direction, where your clicks move) that there’s never been a need to control or meter it.
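
A toy simulation shows why overbooking works so well for bursty traffic like web surfing. The subscriber counts, rates, and activity levels below are made up for illustration, not drawn from any real cable plant:

```python
import random

# Toy model of statistical multiplexing on a shared segment.
# 50 subscribers, each provisioned for 16 Mb/s, but each actively pulling
# data in only ~5% of one-second intervals (web browsing is bursty: a
# burst of graphics, then silence while you read the page).
random.seed(1)
SUBSCRIBERS = 50
PEAK_RATE_MBPS = 16
ACTIVE_PROBABILITY = 0.05
SEGMENT_CAPACITY_MBPS = 160        # far less than the 50 * 16 = 800 "sold"

samples = []
for second in range(3600):         # simulate one hour, second by second
    demand = sum(PEAK_RATE_MBPS
                 for _ in range(SUBSCRIBERS)
                 if random.random() < ACTIVE_PROBABILITY)
    samples.append(demand)

overloaded = sum(1 for d in samples if d > SEGMENT_CAPACITY_MBPS)
print(f"Average demand: {sum(samples)/len(samples):.1f} Mb/s "
      f"on a {SEGMENT_CAPACITY_MBPS} Mb/s segment")
print(f"Seconds over capacity: {overloaded} of {len(samples)}")
```

Crank ACTIVE_PROBABILITY up toward 1.0 for even a handful of users, the way always-on P2P seeding does, and the overload count starts climbing; that’s exactly the change the next paragraph describes.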

Enter P2P, a method of moving large files across networks that relies on free upstream bandwidth. Now that abundant broadband upstream is constantly occupied: instead of an interactive application that sends a click now, another click 5 seconds from now, and another a minute from now, you’ve got applications that constantly stream traffic up the wire, to the detriment of the others in the neighborhood. Something has to give.

One approach is to cap upstream traffic:

However, the “all you can eat” model may no longer be viable–a change the government seems to be ignoring. ISPs could use the open salad bar model when users were mainly transmitting small textual data. But with video becoming more common, users increasingly transmit very large high-definition files.

In response, Comcast plans to cap customer usage at 250 GB of data each month. That translates to about 50 million e-mails, 62,500 songs, 125 standard-definition movies, or 25,000 high-resolution digital photos. That amount would seem to meet the needs of most customers, including small and midsize businesses. The only folks affected would be companies such as BitTorrent, that have based their business on the “all you can eat” model, and hackers, who routinely spew out tons of unwanted solicitations and malware.

Capping has its critics, mostly the same people who object to traffic management as well:

For whatever reason, some believe ISPs should not be able to put any restrictions on the volume of information that any user transmits. That’s absurd. Per-bit and per-byte pricing models have long been used for data transmissions. In trying to build and sustain their businesses, carriers constantly balance their attractiveness and viability versus unlimited usage pricing models. By government decree, they no longer have that option. In effect, the FCC has decided to tell ISPs how to run their networks.

Capping frees up bandwidth for sharing by taking free bandwidth off the table for P2P. But it’s not a technically elegant approach. Humans respond to caps month-by-month, but networks experience congestion and overload millisecond-by-millisecond. So the sensible engineering approach is to manage traffic in pretty much the way that Comcast does it today: identify the bandwidth requirements of applications and allocate bandwidth to those that need it the most, as we would with any scarce resource. That means granting transmission opportunities (a technical term we use in network architecture) to highly interactive applications such as VoIP ahead of non-interactive applications such as HDTV file transfers. This is sound practice, but the FCC has now said it’s illegal. The FCC is anti-consumer.
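
For the non-engineers, “granting transmission opportunities to interactive traffic first” is just strict-priority queuing. Here’s a minimal, generic sketch of the idea, not Comcast’s or anyone else’s actual implementation:

```python
import heapq

# Generic strict-priority scheduler: lower priority number transmits first.
# Interactive traffic (VoIP) outranks bulk transfers; the class names and
# priority values are illustrative.
PRIORITY = {"voip": 0, "web": 1, "bulk_transfer": 2}

class Scheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0          # tie-breaker preserves FIFO within a class

    def enqueue(self, app_class, packet):
        heapq.heappush(self._queue, (PRIORITY[app_class], self._seq, packet))
        self._seq += 1

    def next_transmission_opportunity(self):
        if not self._queue:
            return None
        _, _, packet = heapq.heappop(self._queue)
        return packet

sched = Scheduler()
sched.enqueue("bulk_transfer", "HDTV file chunk 1")
sched.enqueue("voip", "RTP voice frame")
sched.enqueue("web", "HTTP GET response")

# Despite arriving after the file chunk, the voice frame goes out first.
while (pkt := sched.next_transmission_opportunity()) is not None:
    print(pkt)
```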

Net neutrality supporters have pressured the FCC because they believe cable companies are unfairly monopolizing the Internet access marketplace. This conveniently ignores a couple of factors. First, there is no Internet access monopoly. A small or midsize business can get access from cable companies, telcos or wireless suppliers. True, there are not 50 choices, as you might have when buying a new pair of pants, but there is a reason why so few companies compete in the Internet access arena–it’s not a great business.

In fact, net neutrality advocates have turned a blind eye to the history of the dot-com bubble. Internet access start-ups burned through more cash with fewer positive results than any market sector in memory–and perhaps ever. Providing Internet access requires a lot of capital for the network and support infrastructure, and there’s not a lot of money to be made when customers pay about $20 a month for unlimited access.

The alternative to application-sensitive traffic management is a crude user-based system that treats all of each user’s traffic the same. This means, for example, that your VoIP streams get the same service from your ISP as your web clicks and your file transfers. This is insane.

Each Internet user should be able to multitask. We should be allowed to share files with P2P or any other non-standard protocol of our choice at the same time that we’re video-chatting or surfing the web. The heavy-handed FCC ruling that all packets must be treated the same undermines the economics of packet switching and delays the day when the Internet will make the PSTN and the cable TV systems obsolete.

Comcast was right to take the ruling to the courts to get it overturned. ISPs should be allowed to deploy a traffic system that combines elements of the protocol-aware system currently in use at Comcast with the new “protocol-agnostic” system that’s under test, such that each customer has a quota for each class of traffic. This is sound network engineering, but the current state of law makes it illegal.

This is not good.

Cross-posted to CircleID.

UPDATE: See Adam Thierer’s comments on this article at Tech Lib.


Comcast files their compliance plan

Today was the deadline for Comcast to tell the FCC how its existing congestion management system works, as well as how its “protocol agnostic” replacement is going to work. To the dismay of some critics, they’ve done just that in a filing that was hand-delivered as well as electronically filed today. It will be posted to the Comcast web site shortly.

The filing corrects some of the false allegations made by critics with respect to privacy, making it very clear that the existing system simply inspects protocol headers (“envelopes”) and not personal data. David Reed in particular got himself worked into a tizzy over the idea that Comcast was deciding which streams to delay based on content, but this is clearly not the case. Inside the IP envelope sits a TCP envelope, and inside that sits a BitTorrent envelope. User data is inside the BitTorrent (or equivalent) envelope, and Comcast doesn’t look at it.

The current system sets a bandwidth quota for P2P, and prevents P2P as a group from crossing that quota’s threshold (about 50% of total upstream bandwidth) with new unidirectional upload (i.e., file-server-like) streams by tearing down newly requested streams with the TCP Reset bit. The system is a bit heavy-handed, but reserving 50% of the network for one class of application seems pretty reasonable, given that no more than 20% of customers use P2P at all.
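
In rough pseudocode terms, the admission decision looks something like the sketch below. Only the 50% quota comes from the description above; the names, numbers, and data structures are purely illustrative:

```python
from dataclasses import dataclass

# Simplified model of the existing protocol-aware scheme described above:
# if P2P uploads as a class exceed ~50% of the segment's upstream capacity,
# newly requested unidirectional upload streams are torn down with a TCP
# Reset. Everything here is illustrative, not Comcast's actual code.
P2P_UPSTREAM_QUOTA = 0.50

@dataclass
class Segment:
    upstream_capacity_bps: float
    p2p_upstream_bps: float

@dataclass
class SessionRequest:
    is_p2p: bool
    is_unidirectional_upload: bool

def admit(session: SessionRequest, segment: Segment) -> str:
    """Admit the new stream, or reject it with a simulated TCP Reset."""
    p2p_share = segment.p2p_upstream_bps / segment.upstream_capacity_bps
    if (session.is_p2p and session.is_unidirectional_upload
            and p2p_share > P2P_UPSTREAM_QUOTA):
        return "send TCP RST"          # the packet network's "busy signal"
    return "admit"

# Hypothetical segment where P2P already holds ~55% of the upstream.
segment = Segment(upstream_capacity_bps=38_000_000, p2p_upstream_bps=21_000_000)
print(admit(SessionRequest(is_p2p=True, is_unidirectional_upload=True), segment))
# -> "send TCP RST"
```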

Nonetheless, the new system will not look at any headers, and will simply be triggered by the volume of traffic each user puts on the network and the overall congestion state of the network segment. If the segment goes over 70% utilization in the upload direction for a fifteen-minute sample period, congestion management will take effect.

In the management state, traffic volume measurement will determine which users are causing the near-congestion, and only those using high amounts of bandwidth will be managed. The way they’re going to be managed is going to raise some eyebrows, but it’s perfectly consistent with the FCC’s order.

High-traffic users – those who’ve used over 70% of their account’s limit for the last fifteen minutes – will have all of their traffic de-prioritized for the next fifteen minutes. While de-prioritized, they still have access to the network, but only after the conforming users have transmitted their packets. So instead of bidding on the first 70% of network bandwidth, they’ll essentially bid on the 30% that remains. This will be a bummer for people who are banging out files as fast as they can only to have a Skype call come in. Even if they stop BitTorrent, the first fifteen minutes of Skyping are going to be rough. A more pleasant approach would be to let excessive users out of QoS jail with credit for good behavior – if their utilization drops to Skype level, let them out in a few seconds, because it’s clear they’ve turned off their file sharing program. This may be easier said than done, and it may raise the ire of Kevin Martin, given how irrational he is with this anti-cable vendetta.
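
For the algorithmically inclined, the fifteen/fifteen classification reduces to something like this sketch. Only the 70% thresholds and the fifteen-minute window come from the description above; the names and data shapes are my own illustration:

```python
# Sketch of the "protocol-agnostic" scheme described above: when a segment
# runs above 70% upstream utilization over a 15-minute sample, users who
# consumed more than 70% of their own provisioned rate in that window are
# de-prioritized (served from leftover capacity) for the next 15 minutes.
SEGMENT_TRIGGER = 0.70      # fraction of segment upstream capacity
USER_TRIGGER = 0.70         # fraction of the user's provisioned upstream rate
WINDOW_MINUTES = 15

def classify(segment_utilization: float, user_utilizations: dict) -> dict:
    """Return a priority label for each user for the next window."""
    if segment_utilization <= SEGMENT_TRIGGER:
        return {user: "priority" for user in user_utilizations}
    return {
        user: ("best_effort" if used > USER_TRIGGER else "priority")
        for user, used in user_utilizations.items()
    }

# One congested 15-minute window: the segment is at 82% and two users have
# been pushing more than 70% of their own upstream rate the whole time.
print(classify(0.82, {"alice": 0.95, "bob": 0.74, "carol": 0.10}))
# -> alice and bob get "best_effort" for the next window, carol keeps priority
```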

The user can prevent this situation from arising, of course, if he wants to. All he has to do is set the upload and download limits in BitTorrent low enough that he doesn’t consume enough bandwidth to land in the “heavy user” classification and he won’t have to put up with bad VoIP quality. I predict that P2P applications and home gateways are going to incorporate controls to enforce “Comcast friendly” operation to prevent de-prioritization. There are other more refined approaches to this problem, of course.

At the end of the day, Comcast’s fifteen/fifteen system provides users with the incentive to control their own bandwidth appetites, which makes it an “end-to-end” solution. The neutralitarians should be happy about that, but it remains to be seen how they’re going to react.

It looks pretty cool to me.

UPDATE: Comcast-hater Nate Anderson tries to explain the system at Ars Technica. He has some of it right, but doesn’t seem to appreciate any of its implications. While the new system will not look at protocol headers (the evil “Deep Packet Inspection” that gets network neophytes and cranks so excited), and it won’t use TCP Resets, that doesn’t mean that P2P won’t be throttled; it will.

That’s simply because P2P contributes most of the load on residential networks. So if you throttle the heaviest users, you’re in effect throttling the heaviest P2P users, because the set of heavy users and the set of heavy P2P users is the same set. So the “disparate impact” will remain even though the “disparate treatment” will end.

But the FCC has to like it, because it conforms to all of Kevin Martin’s rabbit-out-of-the-hat rules. The equipment Comcast has had to purchase for this exercise in aesthetic reform will have utility down the road, but for now it’s simply a tax imposed by out-of-control regulators.