Catherine Rosenberg, a professor with the University of Waterloo’s Department of Electrical and Computer Engineering, has written a great synopsis of the Internet for our cousins to the North:
The founding principle of the Internet is resource sharing and hence to deliver an appropriate end-to-end service, some level of co-ordination and traffic control is needed to ensure network performance does not collapse. This is even more true now as the last few years have seen massive increases in Internet traffic due in large part to the proliferation of “bandwidth hungry” applications such as games, peer-to-peer file transfers and increasingly complex, enriched web pages. Added to this is the “all you can eat” economic model promoted by the ISPs, an approach that entices users to always consume more, and of course the fact that the number of Internet users keeps on increasing.
So what does controlling the traffic mean? It means keeping the traffic entering the network under a certain threshold to avoid performance collapses that would affect everyone. And this is what traffic shaping does, by, for example, limiting the bandwidth available for certain types of applications that are less time sensitive in order to keep more bandwidth available for other applications that are more time sensitive, and used by the greater number of subscribers.
While some would argue that this is done “naturally” with Transmission Control Protocol, the reality is that TCP alone is not enough to avoid congestion and spread the burden of congestion as fairly as possible to all those using the congested area.
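The shaping Rosenberg describes is commonly done with a token-bucket rate limiter. Here’s a minimal sketch in Python; the class name and parameters are mine for illustration, not anyone’s production shaper:

```python
import time
from collections import deque

class TokenBucketShaper:
    """Minimal token-bucket shaper: a packet may be sent only when
    enough byte-credits ("tokens") have accumulated at the configured rate."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # sustained rate, bytes per second
        self.capacity = burst_bytes   # maximum burst credit
        self.tokens = burst_bytes     # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def send(self, packet_len):
        """Return True if the packet may go now, False if it must wait
        (i.e., sending it would exceed the configured rate)."""
        self._refill()
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False

# ~1 Mbit/s sustained, with a 10 KB burst allowance
shaper = TokenBucketShaper(rate_bps=125_000, burst_bytes=10_000)
print(shaper.send(1500))  # burst credit available -> True
```

A real shaper would queue the packet and schedule it for later rather than just returning False, but the core idea — bounding a flow’s rate while still permitting short bursts — is the same.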
It’s so refreshing to read something like this after slogging through all the nonsense that our law professors have written about the Internet for our net neutrality debate. I highly recommend you read the Whole Thing.
The terror attack on Mumbai is an outrage, of course; it’s India’s 9/11 and 7/7. The terrorists attacked India’s most open city, entering by boat and killing random people at locations carefully chosen for traffic and impact. Indian security forces and heroic hotel service workers put down the terrorists, restoring order in a few days. This was kind of personal for me, since I’ve been through Mumbai (or “Bombay,” as we used to call it) something like 50 times over the years, occasionally staying in the hotels that the terrorist scum attacked.
The press reports are now saying that the terrorist attack squad consisted of a mere 10 people. That’s a pretty small number to kill 200 people over the course of three days, so they must have had some local help. I’m waiting to see the rest of the story unfold.
Twitter played an essential role in increasing the terror and the confusion over the attack, as it served as the amplifier for every bogus rumor in circulation and offered exactly zero help with the fundamentals of the “story”: who, where, and why. Nonetheless, the “citizen media” crowd is crowing about the greatness of Twitter-enabled mobs. Sad. The Economist that came in the mail Friday was more authoritative than Twitter as to what actually happened in Mumbai and why.
The appropriate response to this massacre is to take a trip to Mumbai, and failing that to at least go eat at an Indian restaurant. The latter is symbolic only, but if that’s all you can do, at least do that. The civilized world has to hang together in the face of religious-fanatic barbarity, or surely we’ll hang separately.
During this period of hiatus for the world’s finest reality competition, please enjoy the Dancing Scientists:
Six weeks ago, the Gonzo Scientist challenged researchers around the world to interpret their Ph.D. research in dance form, film the dance, and share it with the world on YouTube (Science, 10 October, p. 186). By the 11 p.m. deadline this past Sunday, 36 dances–including solo ballet and circus spectacle–had been submitted online. A panel of nine judges–the three winners of the first “Dance Your Ph.D.” contest, three scientists from Harvard University, and three artistic directors of the dance company Pilobolus–scored the dances on their ability to bridge the art and science worlds. Today, Science announces the winners of the 2009 AAAS Science Dance Contest in four categories: Graduate Students, Postdocs, Professors, and Popular Choice.
Here’s a little speech I gave to members of the EU Parliament in Brussels on Oct. 14th. The cousins are contemplating a set of Internet access account regulations that would mandate a minimum QoS level and also ban most forms of stream discrimination. This explains why such rules are a bad (and utterly impractical) idea.
The Internet is a global network, and regulating it properly is a matter of global concern. I’d like to share a view of the technical underpinnings of the question, to better inform the legal and political discussion that follows and to point out some of the pitfalls that lie in wait.
Why manage network traffic?
Network management, or more properly network traffic management, is a central focus of the current controversy. The consumer-friendly statements of policy, such as the Four Freedoms crafted by Senator McCain’s technology adviser Mike Powell, represent lofty goals, but they’re constrained by the all-important exception for network management. In fact, you could easily simplify the Four Freedoms as “you can do anything you want except break the law or break the network.” Network management prevents you from breaking the network, which you principally do by using up network resources.
Every networking technology has to deal with the fact that the demand for resources often exceeds supply. On the circuit-switched PSTN, resources are allocated when a call is set up, and if they aren’t available your call doesn’t get connected. This is a very inefficient technology that allocates bandwidth in fixed amounts, regardless of the consumer’s need or his usage once the call is connected. A modem connected over the PSTN sends and receives at the same time, but people talking generally take turns. This network doesn’t allow you to save up bandwidth and use it later, for example. Telecom regulations are based on the PSTN and its unique properties. In network engineering, we call it an “isochronous network” to distinguish it from technologies like the old Ethernet that was the model link layer technology when the DoD protocol suite was designed.
The Internet uses packet switching technology, where users share communications facilities and bandwidth is allocated dynamically. Dynamic bandwidth allocation, wire-sharing, and asynchrony mean that congestion appears and disappears on random, sub-second intervals. Packets don’t always arrive at switching points at the most convenient times, just as cars don’t run on the same rigorous schedules as trains.
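The point about congestion appearing on random, sub-second intervals can be shown with a toy slotted-time simulation (all parameters invented for illustration): even when average demand is well under link capacity, bursty sources occasionally pile up.

```python
import random

def simulate(slots=1000, sources=10, p_burst=0.12, link_capacity=2):
    """Each of `sources` independently offers a packet with probability
    p_burst in each time slot; the shared link serves link_capacity
    packets per slot. Count the slots where demand exceeds capacity."""
    random.seed(42)  # deterministic for repeatability
    congested = 0
    for _ in range(slots):
        offered = sum(random.random() < p_burst for _ in range(sources))
        if offered > link_capacity:
            congested += 1
    return congested

# Average load is 1.2 packets/slot against a capacity of 2 (60% utilization),
# yet roughly one slot in ten is still momentarily congested.
print(simulate())
```

This is the statistical-multiplexing trade-off in miniature: packet switching uses the wire far more efficiently than circuit switching, at the price of transient congestion that something — TCP, queue management, traffic shaping — has to handle.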
It depends on whose numbers you like. Andrew Odlyzko claims it’s up 50-60% over last year, a slower rate of growth than we’ve seen in recent years. Odlyzko’s method is flawed, however, as he only looks at public data, and there is good reason to believe that more and more traffic is moving off the public Internet and its public exchange points to private peering centers. Nemertes collects at least some data on private exchanges and claims a growth rate somewhere between 50-100%.
The rate of growth matters to the ongoing debates about Internet regulation. If Odlyzko is right, the rate of growth is lower than the rate at which Moore’s Law makes digital parts faster and cheaper, so there’s no problem: routine replacement of equipment will keep up with demand (leaving out the analog costs that aren’t reduced by Moore’s Law). If Nemertes is right, user demand outstrips Moore’s Law and additional investment is needed in network infrastructure. Increased investment needs to be covered by government subsidies or by the extraction of additional value from the networks by their owner/operators. Subsidy isn’t going to happen while the economy teeters on the edge of collapse, so the high-growth conclusion argues against regulations designed to preserve the legacy service model. It’s a vital question.
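The comparison driving this argument is compound-growth arithmetic, which a few lines make concrete (the rates are the ones quoted above; the Moore’s Law doubling period is the commonly cited two years):

```python
import math

def years_to_multiple(rate, multiple):
    """Years for compound annual growth `rate` to reach `multiple`x."""
    return math.log(multiple) / math.log(1 + rate)

# Traffic doubling time at the two disputed growth rates:
print(years_to_multiple(0.50, 2))  # ~1.71 years at 50%/yr (Odlyzko's low end)
print(years_to_multiple(1.00, 2))  # 1.0 year at 100%/yr (Nemertes' high end)

# Moore's Law stated as a doubling every ~2 years is an annual rate of:
print(2 ** 0.5 - 1)                # ~41%/yr
```

So at 50% annual growth, traffic doubles a bit more slowly than every two years — roughly in step with equipment improvement — while at 100% it doubles twice as fast, which is the gap that would have to be closed by new investment.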
A couple of new data points emerged this week. Switch and Data, operator of PAIX public exchange points in Palo Alto and New York, says its traffic grew 112% last year:
International networks are making the decision to peer in the United States to reduce transit time between countries and accelerate the performance of U.S. and other global websites in their home markets. This is important due to the explosive growth of Web 2.0 with its bandwidth intensive websites for social networking, rich digital content, and business software applications. Exchanging traffic directly between content and end user networks also significantly reduces Internet transit expense which has been a rapidly growing cost for companies as their traffic volumes soar.
At the Switch and Data New York peering center, traffic was up an astonishing 295%.
Combining these numbers with what we know about the Content Delivery Networks that deliver as much as half of the Internet’s traffic, I think we can reasonably conclude that comprehensive measurement of Internet traffic would support the theory that traffic still grows at an increasing rate. One side effect of the increased use of CDNs and private peering is less certainty about the overall state of Internet traffic. Studies confined to public data are less and less useful, as many researchers have been saying for years.
At any rate, there’s considerable uncertainty about this question at the moment, which argues that the Internet needs a Nate Silver to pierce the fog of conflicting polls.
The recently-published Nemertes study, Internet Interrupted: Why Architectural Limitations Will Fracture the ’Net, includes a fine overview of the Internet, explaining public and private peering, content delivery networks, and overlay networks. It was necessary for the study to cover this ground as it had to correct the mistaken picture of Internet traffic that’s been foisted off on the regulating public by the MINTS study published by Andrew Odlyzko. MINTS only studies data gathered from public peering centers, a part of the Internet at which traffic growth is significantly lower than it is at private peering centers. Nemertes has a controversial model of traffic growth, but for understanding the way the Internet is put together, it’s excellent.
According to presidential historian Davis Logsdon of the University of Minnesota, some Americans might find it “alienating” to have a president who speaks English as if it were his first language.
“Every time Obama opens his mouth, his subjects and verbs are in agreement,” says Mr. Logsdon. “If he keeps it up, he is running the risk of sounding like an elitist.”
The historian said that if Mr. Obama insists on using complete sentences in his speeches, the public may find itself saying, “Okay, subject, predicate, subject predicate — we get it, stop showing off.”
The president-elect’s stubborn insistence on using complete sentences has already attracted a rebuke from one of his harshest critics, Gov. Sarah Palin of Alaska.
“Talking with complete sentences there and also too talking in a way that ordinary Americans like Joe the Plumber and Tito the Builder can’t really do there, I think needing to do that isn’t tapping into what Americans are needing also,” she said.
I wasn’t paying attention, but the Patent Office awarded me a second patent in July, United States Patent #7,394,813 for “Systems and methods for implementing an acknowledgement mechanism for transmission of a real-time data stream.”
Systems and methods for implementing an acknowledgement mechanism for transmission of a real-time data stream from a sending system to a receiving system are disclosed. An exemplary method implemented in a sending system involves establishing a connection with a receiving system for transmitting a real-time data stream to the receiving system. The method also involves transmitting a current segment of the real-time data stream to the receiving system. If an acknowledgement of the current segment is not received from the receiving system during a predetermined time period, the method also involves discontinuing any retransmission attempts for the current segment and transmitting a next segment to the receiving system.
It’s a way of tricking TCP into letting a sender drop a segment that’s no longer interesting. My first patent was #7,239,626, “System clock synchronization in ad hoc and infrastructure wireless networks.” All about synchronizing our watches so we can strike at dawn.
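The abstract’s mechanism — send a segment, wait a bounded time for an acknowledgement, and if none arrives, skip retransmission and move on — can be sketched like this. This is my illustration of the idea over UDP, not the patented implementation; the framing and ack format are invented:

```python
import socket

def send_realtime_stream(segments, dest, timeout=0.05):
    """Transmit each segment once with a 4-byte sequence number, wait at
    most `timeout` seconds for an echo of that sequence number as an ack,
    and on timeout abandon the segment (no retransmission) and continue.
    Returns the sequence numbers that were acknowledged."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    delivered = []
    for seq, data in enumerate(segments):
        sock.sendto(seq.to_bytes(4, "big") + data, dest)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                delivered.append(seq)
        except socket.timeout:
            pass  # stale real-time data: skip it rather than retransmit
    sock.close()
    return delivered
```

The contrast with TCP is the whole point: TCP would retransmit the unacknowledged segment indefinitely, stalling everything behind it, while a real-time stream is better served by abandoning data that has aged out of usefulness.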