Like several other engineers, I’m disturbed by the white spaces debate. The White Space Coalition, and its para-technical boosters, argue something like this: “The NAB is a tiger, therefore the White Spaces must be unlicensed.” And they go on to offer the comparison with Wi-Fi and Bluetooth, arguing as Tom Evslin does on CircleID today that “If we got a lot of innovation from just a little unlicensed spectrum, it’s reasonable to assume that we’ll get a lot more innovation if there’s a lot more [unlicensed] spectrum available.”
According to this argument, Wi-Fi has been an unqualified success in every dimension. People who make this argument haven’t worked with Wi-Fi or Bluetooth systems in a serious way, or they would be aware that there are in fact problems, serious problems, with Wi-Fi deployments.
For one thing, Wi-Fi systems are affected by sources of interference they can’t detect directly, such as FM baby monitors, cordless phones, and wireless security cameras. Running Wi-Fi on the same channel as one of these devices causes extremely high error rates. If 2.4 and 5.x GHz devices were required to emit a universally detectable frame preamble, much of this nonsense could be avoided.
And for another, we have the problem of newer Wi-Fi devices producing frames that older gear (especially original 802.11 and 802.11b radios) can’t detect without a protection frame that reduces throughput substantially. If we could declare anything older than 802.11a and .11g illegal, we could use the spectrum we have much more efficiently.
For another, we don’t have enough contiguous spectrum to use the newest version of Wi-Fi, 40 MHz 802.11n, effectively in the 2.4 GHz band. Speed inevitably depends on channel width, and the white spaces offer little dribs and drabs of spectrum all over the place, much of it on non-adjacent frequencies.
But most importantly, Wi-Fi is the victim of its own success. As more people use Wi-Fi, we have to share the limited number of channels across more Access Points, and they are not required to share channel space with each other in a particularly efficient way. We can certainly expect a lot of collisions, and therefore packet loss, from any uncoordinated channel access scheme like Wi-Fi’s once it operates on a large geographic scale. This is the old “tragedy of the commons” scenario.
The problem of deploying wireless broadband is mainly a tradeoff of propagation, population, and bandwidth. The larger the population your signal covers, the greater the bandwidth needs to be in order to provide good performance. The nice thing about Wi-Fi is its limited propagation, because it permits extensive channel re-use without collisions. If the Wi-Fi signal in your neighbor’s house propagated twice as far, it would have four times as many chances to collide with other users. So high power and great propagation aren’t an unmitigated good.
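A back-of-the-envelope way to see that scaling, with purely illustrative ranges:

```python
import math

# Rough sketch (my own illustration): if collision opportunities scale with
# the area a signal covers, doubling the range quadruples the contention
# footprint, since area grows as pi * r^2.
for radius_m in (30, 60, 120):          # hypothetical Wi-Fi ranges in meters
    area = math.pi * radius_m ** 2
    print(f"range {radius_m:>4} m -> footprint {area/1e3:8.1f} thousand m^2")
```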
The advantage of licensing is that the license holder can apply authoritarian rules that ensure the spectrum is used efficiently. The disadvantage is that the license holder can over-charge for the use of such tightly-managed spectrum, and needs to in order to pay off the cost of his license.
The FCC needs to move into the 21st century and develop some digital rules for the use of unlicensed or lightly-licensed spectrum. The experiment I want to see concerns the development of these modern rules. We don’t need another Wi-Fi; we know how that worked out.
So let’s not squander the White Spaces opportunity with another knee-jerk response to the spectre of capitalism. I fully believe that people like Evslin, the White Space Coalition, and Susan Crawford are sincere in their belief that unlicensed White Spaces would be a boon to democracy; it’s just that their technical grasp of the subject matter is insufficient for their beliefs to amount to serious policy.
I’ve posted the following on Dave Farber’s IP list, but I’m posting it here as well because it’s vitally important to spectrum policy and to consumers who want broadband throughout the US.
Making the TV white spaces into Part 15-style unlicensed bands is a HORRIBLE idea. Why? Well, firstly, the spectrum isn’t needed for that purpose. With 500 MHz of unlicensed spectrum at 5 GHz, plus 50 MHz at 2.4 GHz, PLUS the 20 MHz of spectrum at 900 MHz, do we really need more spectrum for cordless phones and baby monitors? No. What we need the spectrum for is to deliver wireless broadband, on frequencies which are NOT polluted by an untold number of consumer devices.
Secondly, the laws of physics come into play here, in a big way. Signals transmitted on the former TV frequencies can penetrate walls and go for miles, interfering with all the other devices in town operating in the same band. Is this what we want for consumer devices? Of course not! If anything, we should allocate higher unlicensed frequencies — which have a HARDER time penetrating walls — so that each user’s home is his or her “spectrum castle.” If any more spectrum is needed for indoor wireless (though it’s not clear that it is), how about liberalizing the rules for the 24 GHz unlicensed “low millimeter wave” band so that it can be used in the home? This would make far more sense than turning consumer devices loose on the TV bands. The “white spaces,” in turn, could be reserved for services that deliver wireless broadband right through the walls of these “castles,” so as to connect them to the Internet without drilling holes or mounting outdoor antennas.
Thirdly, since the radios used on these frequencies have to be “cognitive” anyway (they have to make sure not to step on TV channels), they might as well use their intelligence to make better use of the airwaves. As I explain in my talk at
http://www.brettglass.com/CR/
Shannon’s Law dictates that the way to get the greatest utility out of spectrum is not to slice it up into little chunks and license it exclusively, but rather to share large swaths of spectrum, intelligently, using cognitive radio technology. In fact, if we do not do the latter, wireless will never be useful to deliver “ultra-broadband” — that is, speeds of 100 Mbps and up — to multiple users from a single transmitter. The laws of physics and information theory indisputably dictate this; again, see the slides at the link above.
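For reference, the bound being discussed is the standard AWGN capacity formula, C = B·log2(1 + P/(N0·B)), which at fixed transmit power grows ever more slowly as bandwidth widens. A rough sketch with purely illustrative numbers:

```python
import math

def awgn_capacity_bps(bandwidth_hz: float, power_w: float, n0_w_per_hz: float) -> float:
    """Shannon capacity of an AWGN channel: C = B * log2(1 + P / (N0 * B))."""
    return bandwidth_hz * math.log2(1 + power_w / (n0_w_per_hz * bandwidth_hz))

P = 1.0          # total transmit power (illustrative)
N0 = 1e-7        # noise power spectral density (illustrative)

# With fixed power, widening the channel gives diminishing returns (the "knee"):
for B in (1e6, 6e6, 30e6, 100e6):
    print(f"B = {B/1e6:5.0f} MHz -> C = {awgn_capacity_bps(B, P, N0)/1e6:7.1f} Mbps")
print(f"asymptotic limit P/(N0 ln 2) = {P/(N0*math.log(2))/1e6:7.1f} Mbps")
```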
Fourthly, using this spectrum to deliver service TO the home rather than for unlicensed devices WITHIN the home would help ensure that this use would not interfere with television broadcasters, because providers’ antennas — which would be located outdoors on towers and buildings and would be very sensitive — would be much better situated to detect any broadcast signals with which they might interfere.
So, what we really need is NONEXCLUSIVE licensing of the TV white spaces — similar to the licensing which has been established for the 3.65 GHz band, but with a mandatory spectrum etiquette that allows the spectrum to be shared among multiple broadband providers and their customers. This would maximize the use of the spectrum and also consumer choice. It would thus be far better than Tom Hazlett’s concept of total exclusive licensing (which would just allow the big cell phone providers to warehouse yet more spectrum) or the New America plan for totally unlicensed spectrum (which would turn the spectrum into a useless “junk band”). A simple understanding of physics, radio technology, and consumer needs leads inevitably to this as the best possible use of the TV “white spaces.”
–Brett Glass
P.S. — As for the Broadway theaters and churches who have been using the band illegally: All I can say is, “suck it up and get legal.” The cost of doing so is modest compared to the benefits that this spectrum would bring if it were devoted to the provision of wireless broadband. And the illegal users would be better off on a higher frequency anyway, because the potential for interference from sources outside the building would be much, much lower.
Today’s the deadline for FCC comments. The docket number is 04-186, and the ECFS form is here. I’ve filed mine already.
Brett Glass:
Yes, it is exactly for this purpose. The problem with the other bands is that the signal does not travel far at those frequencies.
No, it doesn’t. There is no requirement that the bandwidth be contiguous. To illustrate this, take your contiguous spectrum, filter out the upper half and modulate it onto a carrier at frequency fc_1; modulate the other chunk of the spectrum onto a carrier at fc_2. Add them and send the result over the channel. At the receiver, reverse the operation. The end result is an identity operation, so there can be no change in capacity.
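The same conclusion falls out of the standard AWGN capacity formula: split the band in half, put half the power in each piece, and the capacities add back up to the original. A quick numerical check, with illustrative figures:

```python
import math

def capacity(bw_hz, power_w, n0=1e-7):
    """AWGN capacity C = B * log2(1 + P/(N0 * B)), with an assumed noise density."""
    return bw_hz * math.log2(1 + power_w / (n0 * bw_hz))

B, P = 20e6, 1.0                     # illustrative contiguous block and power
whole = capacity(B, P)
split = capacity(B / 2, P / 2) + capacity(B / 2, P / 2)   # two non-adjacent halves
print(f"contiguous: {whole/1e6:.3f} Mbps, split in two: {split/1e6:.3f} Mbps")
```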
But that is beside the point. It is senseless to talk about the theoretical capacity of the channel when most practical communication systems do not come anywhere close to approaching this bound. Never mind that AWGN is nowhere near a good model for a real channel. In real channels, frequency diversity can provide gains in system performance (see OFDM, FHSS).
The whole talk about “above the knee” in the Shannon curve is nonsensical. Besides being a bad model/benchmark, as explained above, you cannot just maximize channel capacity (or spectral efficiency). It ignores a plethora of design constraints (BER and power constraints are the two most prominent). For most channels the capacity is not even known, so it is a meaningless goal in any case.
I do not comprehend the basis of your assertion that lower-rate signals contribute more interference than high-rate signals for the same transmission power. It would be foolish to mandate that reliability (BER) necessarily be sacrificed for transmission rate. And it is pointless to talk about a given spectral efficiency without fixing a transmission power.
Jason, you obviously fail to comprehend the problem vis-a-vis interference. Signals in today’s unlicensed spectrum carry more than far enough to allow the use of wireless devices within households. But the “white space” spectrum will cause consumer devices to interfere with everyone else in the neighborhood. Forget about the broadcasters; you’ll interfere with anything else that anyone else is doing on the band within several blocks.
As for contiguous spectrum: Chopping the spectrum up creates waste. No filter is perfect, and you cannot transmit right up to the edge of a channel, so you can lose 25% or more of the utility of your spectrum if you chop it into pieces.
Finally, as regards Shannon’s Law: you may want to consult any good textbook regarding its implications. It is a very simple corollary of Shannon’s Law that signals transmitted at lower data rates are more resistant to interference, and another that the error rate NECESSARILY increases with the data rate.
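One way to make this concrete is the capacity bound itself: a spectral efficiency of R/B is only achievable above a minimum SNR of 2^(R/B) − 1, which collapses as the rate drops. A sketch, with an illustrative 1 MHz channel:

```python
import math

def min_snr_db(rate_bps: float, bandwidth_hz: float) -> float:
    """Minimum SNR (dB) at which rate R in bandwidth B is achievable: SNR >= 2^(R/B) - 1."""
    return 10 * math.log10(2 ** (rate_bps / bandwidth_hz) - 1)

B = 1e6  # 1 MHz channel, purely illustrative
for rate in (9_600, 100_000, 1_000_000, 5_000_000):
    print(f"{rate/1e3:7.1f} kbps needs at least {min_snr_db(rate, B):6.1f} dB SNR")
```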
The same is true for every wireless communication system. What is your point?
I demonstrated that it does not follow from Shannon’s theorem that “the way to get the greatest utility out of spectrum is not to slice it up,” as you claimed. Moreover, you can combine imperfect filters into an equivalent filter without “loss” (cf. the Nyquist pulse-shaping criterion). Finally, every bandlimited transmission system requires transmission filters, and creating wide-band filters is impractical above a certain bandwidth, so one does not, in general, simply use huge “swaths” of bandwidth without sub-dividing it. GSM, for example, uses 200 kHz sub-channels.
But this is moot because available bandwidth in the whitespaces cannot be increased due to the adjacent broadcast channels.
That wasn’t your claim. You claimed that transmitting at a lower data rate increases interference for other users. Please substantiate that claim.
(Also, the fact that you can trade rate for robustness does not follow from Shannon’s theorem.)
The point that Shannon’s theorem for the AWGN channel is a bad model, and channel capacity a meaningless design criterion for practical system design, also still stands.
White spaces advocates claim this spectrum has better propagation characteristics than the higher-frequency ISM and U-NII bands used by Wi-Fi. This isn’t entirely correct, of course, but if it were, the argument for superior propagation vs. Wi-Fi would certainly follow, Jason.
Regarding channel width, surely you’re aware that RF signals roll off into adjacent channels, since no transmit filter is perfect and modulation produces spectral sidelobes. To protect against interference caused by this roll-off, we have guard bands. So we only get 22 MHz of usable channel in each 25 MHz chunk devoted to a Wi-Fi channel. The wider the channel, the less effect the guard band has on the signal.
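The arithmetic behind that, assuming a roughly fixed few megahertz lost at the edges regardless of channel width (the 3 MHz figure matches the 22-of-25 example; the other widths are hypothetical):

```python
# Sketch of the guard-band overhead argument: if roughly the same few MHz are
# lost at the channel edges regardless of channel width, the fractional loss
# shrinks as channels get wider.
GUARD_MHZ = 3.0
for channel_mhz in (5, 10, 25, 40):
    usable = channel_mhz - GUARD_MHZ
    print(f"{channel_mhz:>2} MHz channel -> {usable:>4.1f} MHz usable "
          f"({100 * GUARD_MHZ / channel_mhz:4.1f}% overhead)")
```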
There is a way to clean up the white space spectrum, of course: just re-allocate the TV channels so they’re contiguous. Sounds messy, because transmitters have to be re-tuned, but it’s probably easier than it looks. And how many people really watch TV OTA anyhow?
The real issue here, in my estimation, is that the FCC’s traditional analog model dictates power levels and duty cycles only, and that’s not good enough any more. We need to be able to retire obsolete devices like 802.11b when newer technologies like 11n come along that make more efficient use of the air. The White Spaces plan doesn’t allow for that, and it’s a major shortcoming.
Jason, I think you’re being intentionally dense here. The point is that, like the 900 MHz band (which penetrates walls a bit less), the unlicensed “white spaces” will become completely congested with signals that leak from buildings.
You also seem intentionally to have misconstrued my remarks, claiming that I asserted that “transmitting at a lower data rate increases interference for other users.” That’s not what I said. What I said was that, as a basic corollary of Shannon’s Law, transmissions at slower data rates are more robust in the face of interference than those at higher data rates. There is thus a “race to the bottom” in which the slowest (and, hence, least efficient) equipment “wins” the interference battle. So, your baby monitor or wireless speaker system (which could just as well have been on another unlicensed frequency) will work OK, but it will render the spectrum useless for broadband.
We clearly see a “race for the bottom” in Wi-Fi, where rate selection algorithms drop to lower and lower speeds when retrying lost frames. Clearly, if the frame was lost because of a collision, the lower-rate retry is not inherently productive, but it does result in greater occupancy of the shared channel. This isn’t an inherent property of packet radio, just a universal implementation choice. The net effect on congested channels is to make the delay vs. load graph go non-linear.
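Here is the airtime arithmetic behind that, as a rough sketch that ignores preambles, ACKs, and contention (all of which make the disparity worse):

```python
# The same 1500-byte frame sent at successively lower 802.11 PHY rates
# occupies more and more of the shared channel; at 1 Mbps, 1 bit takes 1 us.
FRAME_BITS = 1500 * 8
for rate_mbps in (54, 24, 11, 2, 1):
    airtime_us = FRAME_BITS / rate_mbps
    print(f"{rate_mbps:>2} Mbps -> {airtime_us:7.0f} us per frame")
```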
The same logic would imply that the GSM band would be “completely congested” with signals that leak through buildings also. Or any other long-range multiple-access wireless communication medium. The fact is that people want to communicate through walls and buildings and specifically want to use this spectrum to do so. Power control can mitigate interference from those who do not intend to transmit over long distances.
This is obviously false. It does not matter to User A at what rate User B transmits, only how much interference User B produces. If there is much interference, then users will be forced to transmit at correspondingly lower rates to achieve the same robustness, but this has nothing to do with the rates that other users choose; the same problem would exist if other users decided to select high-rate, low-robustness transmissions.
Also, although it is not relevant, I would like to know how the rate-robustness trade-off follows specifically from Shannon’s channel capacity theorem. I have access to several communications theory textbooks, and in none of them is this consequence apparent. I’m always happy to learn something new.
Again, Jason, you’re missing several important points. Firstly, with GSM, there is one licensee per channel, and therefore there is no random interference from millions of babbling consumer devices. Secondly, GSM uses time division multiplexing, so the devices speak in turn — not all at once — and do not interfere with one another. Thirdly, GSM devices only transmit with as much power as necessary; unlicensed consumer devices do not do this.
Richard points out a minor sort of “race to the bottom,” in which a group of similar rate-adaptive devices cause one another to drop to the lowest speed they all support. But this isn’t common, especially if they can avoid one another via CSMA. What’s much worse — and, alas, a much bigger problem — is when a user of unlicensed spectrum chooses a piece of equipment that ignores whether other equipment is transmitting and just tries to “conquer” all of the others. This occurs, for example, on the 900 MHz band, where railroads use very slow (but very robust) frequency hopping radios for down-the-track signaling. Wireless ISPs literally cannot use the 900 MHz band for broadband within a mile of a railroad track as a result of these noisy radios.
As for why this follows from Shannon’s Law: it’s an obvious corollary. Since the maximum data rate of a channel depends upon the signal to noise ratio, equipment that can get away with transmitting at a much slower data rate (e.g. that railroad telemetry equipment, which can work at 9600 baud instead of megabits per second) can “win” the battle when there is interference. But in doing so, it acts as a spoiler. An ISP who needs to do broadband can’t expect its users to tolerate slower-than-dialup data rates.
The point was that multiple-access communication can be done with many users. There is no reason multiple access cannot function with loosely coordinated or even uncoordinated devices.
This is not fundamental. Take CDMA instead.
I believe the FCC order mandates that they must.
I hate to keep harping on this because it is not really germane to the discussion, but the channel capacity is not the rate at which practical systems transmit, only an upper bound on it, and rate upper bounds would not be terribly useful in comparing two systems. More importantly, Shannon’s theorem only talks about communication with (in the limit) zero probability of error, and therefore effectively says nothing about the non-arbitrarily-small error rates that real communication systems have. Normally what would be meant by a rate-robustness tradeoff is that, for a fixed level of interference, lower data rates result in lower bit-error rates and vice versa. *This* does not follow from the channel capacity theorem.
How does it act as a spoiler? The only thing that would help the WISP is if the railroad telemetry produced less interference (or didn’t transmit at all), not a change in the rate at which it transmits. You may be saying that railroad telemetry is not important and shouldn’t be transmitted, in order to improve the performance of more “important” systems, but that is a different discussion.
The corollary of Shannon’s Law is that lower data rates require less bandwidth, Jason. Work the problem of low signaling rates in a shared channel, and I think you’ll see Brett’s point. If it’s not immediately obvious, consider that many of our popular wireless systems use redundant coding to increase robustness, simply adding or subtracting redundancy at constant modulation.
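For example, the 802.11a/g OFDM rate set holds the symbol rate constant and varies only the constellation and convolutional code rate; a quick sketch of those published pairings:

```python
# 802.11a/g OFDM rates: the symbol rate never changes; a more redundant
# convolutional code (or simpler constellation) buys robustness at the cost
# of data rate, which is the trade-off described above.
rates = [
    ("BPSK",   "1/2",  6), ("BPSK",   "3/4",  9),
    ("QPSK",   "1/2", 12), ("QPSK",   "3/4", 18),
    ("16-QAM", "1/2", 24), ("16-QAM", "3/4", 36),
    ("64-QAM", "2/3", 48), ("64-QAM", "3/4", 54),
]
for modulation, code_rate, mbps in rates:
    print(f"{modulation:>6}  code rate {code_rate}  ->  {mbps:>2} Mbps")
```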
Jason, you should read the Part 15 regulations. They absolutely do not say that a device should transmit with no more power than necessary. In fact, many manufacturers of Wi-Fi devices tout their power output — 500 mW or more — even though it’s useless, because the other devices with which they’re communicating cannot answer with that much power and so are the limiting factor.
As for CDMA vs. GSM: CDMA is an entirely different scheme which differentiates between simultaneous transmitters by encoding rather than requiring them to transmit one at a time. But their bandwidth is constrained by the coding, so that the effective throughput is the same in either case. Also, CDMA requires EXTREMELY careful regulation of power levels. And it cannot be used except on licensed spectrum, because it is very sensitive to in-band interference.
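A rough way to see that coding constraint is in processing-gain terms: spreading a data stream over a much faster chip rate buys interference rejection in proportion to chip rate over bit rate, but the total throughput across users is still bounded by the coding. A sketch (the chip rate is an IS-95-like figure, used purely for illustration):

```python
import math

# Processing gain Gp = chip_rate / bit_rate: slower data streams get more
# spreading gain from the same chip rate.
chip_rate = 1.2288e6          # chips per second (IS-95-like figure, assumed)
for bit_rate in (9_600, 19_200, 153_600):
    gp = chip_rate / bit_rate
    print(f"{bit_rate:>7} bps -> processing gain {gp:7.1f}x ({10*math.log10(gp):.1f} dB)")
```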
Finally, as for the applicability of Shannon’s Law: I give up. You don’t seem to be capable of getting it, or maybe you just do not want to because you have some preconceived notion of how things “ought” to be.
I understand perfectly well that you can trade robustness for rate. I was disputing that this is a consequence of the channel capacity theorem. My opinion is that working with capacity and interference is not informative: rate and BER are what is meaningful. But this is unimportant to the following point:

All else being equal, the rate at which other users transmit has no impact on the performance of a given user’s communication. The rate-race-to-the-bottom assertion is not valid.
The whitespaces order says:
Sorry, Jason, but this is nonsense:
On a shared channel, only one frame can be successfully transmitted at a time, so the length of time my neighbors occupy the channel directly affects my ability to use the channel. You don’t need Shannon’s Law to understand that; it’s simple arithmetic.
Besides not being true (cf. any non-TDMA system like CDMA or FDMA), the implicit model being discussed is one of non-cooperating users. That is, other users of the channel are external interferers.
Like I said in the previous post, this has nothing to do with Shannon’s Law (the premise used is the rate-robustness tradeoff, which I do not dispute).
Yes, I’m talking about non-cooperating neighbors, the model for both Wi-Fi and Google’s white spaces proposal.
Richard: Yes, non-cooperating neighbors — without equipment that cooperates automatically — is the biggest weakness of Part 15. I’ve long been an advocate of spectrum etiquettes. However, the guys who just want all spectrum to be exclusively owned have done their best to get the FCC not to require them. Why? Because without etiquettes on the unlicensed bands, bedlam ensues and their licensed spectrum is more valuable.
Jason: Part 15 contains no requirement that minimum power be used. It will after the final rules for the white spaces are codified — and that will be ONLY for the white spaces. However, there is an inherent problem in it: to determine the minimum power required to communicate, you have to turn the power down regularly so as to cause communications to fail. Broadband providers’ equipment cannot do this; it needs to work perfectly, 24×7. Another reason why the release of the white spaces as unlicensed is a horrible idea.
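A sketch of why that probing is unavoidable (hypothetical names; `link_ok` stands in for whatever success test real gear would use):

```python
# To learn the minimum workable power, a device has to keep stepping down
# until the link actually degrades; the probe that finally fails is exactly
# the disruption an always-on broadband link cannot afford.
def find_min_power(link_ok, max_dbm=30.0, step_db=1.0, floor_dbm=0.0):
    power = max_dbm
    while power - step_db >= floor_dbm and link_ok(power - step_db):
        power -= step_db          # only step down while the lower power still works
    return power                  # lowest power that still closed the link

if __name__ == "__main__":
    # Toy channel: assume the link needs at least 17 dBm to close.
    print(find_min_power(lambda p: p >= 17.0))   # -> 17.0
```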