Note about UDP

One of the more amusing criticisms of my article on BitTorrent over UDP is that I’m a clueless dork for saying UDP was designed for real-time applications since there was no such thing as VoIP back in the day. This is generally accompanied by the charge that I don’t know the first thing about the Internet, etc. So for the record, here’s a statement of the design goals for UDP by one of the people involved, the lovable David Reed:

A group of us, interested in a mix of real-time telephony, local area networks, distributed operating systems, and communications security, argued for several years for a datagram based network, rather than a virtual circuit based network…[UDP] was a placeholder that enabled all the non-virtual-circuit protocols since then to be invented, including encapsulation, RTP, DNS, …, without having to negotiate for permission either to define a new protocol or to extend TCP by adding “features”.

Any questions?
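
For anyone who hasn't looked at UDP up close, here's a minimal sketch (mine, not Reed's) of the datagram model he's describing: no connection setup, no handshake, no delivery guarantee, just a packet handed to the network. The loopback address and port below are placeholders for illustration.

    import socket

    # A complete datagram round trip on the loopback interface: no connection
    # setup, no handshake, no delivery or ordering guarantees. The application
    # layers its own reliability and timing on top, which is exactly what
    # protocols built on UDP, like RTP and DNS, do.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))            # port chosen arbitrarily

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 9999))  # fire and forget

    data, addr = receiver.recvfrom(1024)
    print(f"got {data!r} from {addr}")

    sender.close()
    receiver.close()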

23 thoughts on “Note about UDP”

  1. Yes, I have one: If David Reed was able to see far enough ahead to imagine something similar to today's VoIP protocols 28 years ago, how did he fail to see that overuse of his UDP protocol could bring down the entire network? Not even Vint Cerf was able to imagine that IPv4 would be exhausted one day, and look, they are all scrambling from server to server trying to implement IPv6 because IP addresses are running out.

    You have said all this because it's about BitTorrent, and nothing more. If every phone on the planet started using VoIP tomorrow, it would have the same effect (or worse), and I can't imagine you arguing against VoIP.

  2. Lysander,

    If every broadband subscriber used VoIP at the same time, despite 20-to-1 over-subscription, the network would be fine at all levels. Normally, though, we wouldn't expect more than 5% of the VoIP phones to be in use at any given time.

    That's because VoIP uses 30-90 Kbps while the sky's the limit for P2P. With a 15 Mbps Internet connection, it's possible to see P2P operate close to 500 times faster than VoIP, so one P2P user takes the equivalent bandwidth of 500 VoIP users.
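
    To put rough numbers on that, using only the figures in this comment:

        # Rough arithmetic using the figures in this comment.
        VOIP_KBPS = 30            # low end of the 30-90 Kbps codec range
        BROADBAND_KBPS = 15_000   # a 15 Mbps connection
        OVERSUBSCRIPTION = 20     # 20-to-1 over-subscription
        DUTY_CYCLE = 0.05         # at most ~5% of phones in use at once

        print(f"One saturated P2P connection = {BROADBAND_KBPS / VOIP_KBPS:.0f} VoIP calls")
        print(f"Average per-subscriber share at {OVERSUBSCRIPTION}:1 oversubscription: "
              f"{BROADBAND_KBPS / OVERSUBSCRIPTION:.0f} Kbps")
        print(f"Average VoIP load per subscriber: {VOIP_KBPS * DUTY_CYCLE:.1f} Kbps")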

  3. What you said about VoIP is accurate, but there are “a lot” of phones out there, and some day someone will have the idea of implementing video too, converting VoIP into some kind of “ViVoIP” or “Video and Voice over IP” (just like this guy at uTorrent had the idea of putting all torrent traffic on UDP). Then your 30-90 Kbps range will end up being far too small.

    If the Internet collapses, torrenting will be just one of the possible causes, not the only one.

  4. Essentially, you’re saying that if VoIP carried the same load as BitTorrent, it would overload the network. I don’t disagree, assuming nobody bothers to do any backoff.

    Back in the day, the designers of the Internet believed they were building a better telecom network than the one the phone company had hacked together, so it’s not surprising that telephony was part of the mix, along with a little protocol called “telnet,” short for “telephone network emulator.”

  5. Well, that is the problem: backing off. If every single BitTorrent user said, “I will download from the BitTorrent network two hours per day and that's it,” then we wouldn't be having this conversation. You are a defender of putting bandwidth limits on the net. I respect that; you have your reasons (technical or not). But some people put the Internet on the same level as water and electricity.

    True, both things can be metered and everyone pays according to their usage, but those things don't have limits. No water company will knock on anyone's door saying, “We'll send you a gallon per hour for half a day because you have exceeded your water limit.” It sounds like a line from a bad sci-fi flick.

    By writing your article, you have become an enemy of what some people might call “the American way,” which is basically “I paid for it, I want it, and I will do whatever I want with it.” If that idea threatens the Web, what can we do?

    And about telnet, well, I'll put it this way: some people think that Moore is a jerk, but his “transistors double every two years” has held up quite well. What the Internet's designers failed to make was a similar prediction. They had ideas about extending and expanding the net, but they never considered an oversaturated-network scenario. Now the possibility exists. BitTorrent is not going to go away. If the ISPs block it, it will mutate. You are facing a hydra here.

  6. We frequently ration water and electricity where I live in California, “Lysander.” Just because we pay by volume doesn’t mean we always get to consume as much as we want, damn the consequences. Rainfall has been light this winter, so we’ll probably be rationing water next summer.

    In networks where a small number of users consume the lion’s share of the bandwidth, it’s often sensible to cut them off pending expensive upgrades.

  7. By the time video phones are common (whenever that may be) and we're talking about 1200 Kbps per 720p HD conference, broadband will be able to support nonstop usage for every single user. Furthermore, people generally don't spend 24 hours a day on the phone or video phone; the duty cycle is usually 1 hour a day AT MOST. We can typically leverage 20:1 overbooking for communications, and it's common to see 5 phone circuits allocated for 100 people in a company.

    By the time video phones are common, P2P will be pulling in 90,000 Kbps saturating the maximum available bandwidth and it will still use 75 times more bandwidth per user. Furthermore, P2P is more of a 12×7 or 24×7 activity and the duty cycle is extremely high.

    No matter how you slice it, P2P will always push any network to its limit and it needs to obey the existing traffic lights.
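
    For anyone who wants to check the arithmetic, here it is with the figures above:

        # Peak-rate and duty-cycle figures from this comment.
        VIDEO_CALL_KBPS = 1200       # one 720p HD conference
        VIDEO_HOURS_PER_DAY = 1      # "1 hour a day AT MOST"
        P2P_KBPS = 90_000            # P2P saturating the pipe
        P2P_HOURS_PER_DAY = 24       # effectively always on

        avg_video = VIDEO_CALL_KBPS * VIDEO_HOURS_PER_DAY / 24
        avg_p2p = P2P_KBPS * P2P_HOURS_PER_DAY / 24

        print(f"Peak: P2P uses {P2P_KBPS / VIDEO_CALL_KBPS:.0f}x the bandwidth of one video call")
        print(f"Daily average: {avg_p2p:,.0f} Kbps vs {avg_video:.0f} Kbps "
              f"({avg_p2p / avg_video:,.0f}x per user)")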

  8. Obeying traffic lights, yes, I agree. Denying Internet access completely, that's out of the question. There are many non-profit organizations saving thousands of dollars in bandwidth costs thanks to the BitTorrent network (every Linux distro I have seen has the option of downloading through BitTorrent).

    It’s been a nice conversation so far, thanks for reading.

  9. “Denying Internet access completely, that’s out of the question.”

    Enough with the straw men. No one is talking about blocking BitTorrent. The concern about going over to UDP, which bypasses some existing traffic lights, is legitimate. Your attempts to equate VoIP or Video over IP with BitTorrent are not.

  10. Too many people are using BitTorrent today for any ISP in a competitive scenario to block it – that would be commercial suicide. The most celebrated claim of BitTorrent blocking, at Comcast, wasn’t really blocking, it was a quota system that reserved half of the upstream for non-BitTorrent uses.

  11. Hello Richard, Hello George (I’ll try not to typo ‘Georgia’ again!)

    First of all, your last comment, George.
    “By the time video phones are common, P2P will be pulling in 90,000 Kbps saturating the maximum available bandwidth” There is a simple way to deal with that, and that's two ways. Either providers can bump up their network capacity to cope, or they can physically limit the maximum speed to a level they can cope with.

    The problem here, and it's one Richard addresses, is that network providers are being slammed by the use of their networks. The problem is not caused by BitTorrent Inc., and it's not caused by you and me (the end users); the cause is the network companies themselves, the ISPs.

    You might remember the practice, back when airline flights were so popular (before we let the terrorists win and had a collective panic over showy, pointless security, but I digress), of flights getting overbooked. A 400-seat jumbo would have 440 seats sold for the flight, in the hope that 40 people would not use what they had paid for. Sometimes, though, everyone does turn up, or at least more people want to use what they have paid for than there is capacity.

    You both have done consulting and events, and have flown. I'm sure you've been bumped by overbooking before; I know I have. There is now a federal requirement that compensation be given if you are the victim of what is essentially fraud. They did, after all, take your money, binding them to a contract, for a service they knew they might not be able to deliver. Is this not exactly what we're talking about here?

    Now, I live in Georgia, and we've had water shortages since last summer. But the net connection I pay for is not like the water connection to my house. With water there is a fixed, finite quantity of an asset that you are paying to receive. If there is no more water, none can be created. That is not the same as a net connection. There, I don't pay for specific content to be transmitted into my house; I just pay for the transportation method. I create my packets going out, and a third party creates the packets coming in. There is no finite number of packets that exist for the ISP to transfer, as there is with water.

    I pay for a 6 Mbit/0.5 Mbit connection. If my local network infrastructure is so heavily subscribed that sustaining that is not feasible, then how ethically has my telco (BellSouth) behaved in selling me a product/service that it knows it cannot provide, and can only provide if everyone else who has also paid for this service is not using it?

    Mr Bennett, your argument that uTP will ‘kill the net’ also stems from this. Your argument is that with this change, ISPs won't be able to ‘manage’ things as efficiently. Again, management is needed because of oversell, because the companies are fraudulently overselling service.

    At the end of the day, it doesn't matter whether you think some users are using more than their ‘fair share’; there is no requirement to use a ‘fair share’. The ISP contracted a service and made no fair-share requirements, instead stipulating a continuous average throughput. I'll tell you what, though: if my ISP only wants me to use about 25% of my maximum theoretical data transfer per month, then I'll happily pay them 25% of their monthly fee for my connection. That sounds very FAIR to me.

    Andrew Norton
    Pirate Party International

  12. “The most celebrated claim of BitTorrent blocking, at Comcast, wasn’t really blocking, it was a quota system that reserved half of the upstream for non-BitTorrent uses.”

    Richard, that's not at all how Comcast's system worked. Comcast started to insert RST packets as soon as it detected eight unidirectional (seeding) BitTorrent connections per node (around 450 households). That value was fairly arbitrary and not at all based on the actual bandwidth consumed by those eight connections at the time of each instance of blocking. And those RST packets affected all seeding connections, not just the ones that went above any quota.

  13. Do the math, Janko, and you’ll see that you and I are saying the same thing. The Comcast upstream was 10 Mb/s minus DOCSIS overhead or roughly 6 Mbps a year ago. Eight BitTorrent seeders can consume, worst case, 8 * 384 Kbps or 3.072 Mbps.

    But I've never seen any data that says the management was triggered by 8 seeders; if you have some, please share.
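
    For reference, here's the worst-case arithmetic spelled out, using the per-seeder upstream rate from the previous paragraph:

        # Checking the worst-case arithmetic from the comment above.
        DOCSIS_UPSTREAM_MBPS = 6.0    # ~10 Mb/s channel minus DOCSIS overhead
        SEEDER_UPLOAD_KBPS = 384      # per-seeder upstream rate used above
        SEEDERS = 8

        worst_case_mbps = SEEDERS * SEEDER_UPLOAD_KBPS / 1000
        print(f"{SEEDERS} seeders -> {worst_case_mbps:.3f} Mbps, "
              f"{100 * worst_case_mbps / DOCSIS_UPSTREAM_MBPS:.0f}% of the "
              f"{DOCSIS_UPSTREAM_MBPS:.0f} Mbps usable upstream")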

  14. Norton, the Internet is overbooked by design; that's the essence of packet-switched networking. The economics of the entire system depend on facilities being shared among all of us.

    Carriers will happily sell you dedicated bandwidth if you don't want to share with others, but the prices are much higher than you imagine. A dedicated T1, 1.544 Mbps, will run you $400/mo if you're lucky.

    When you pay $30-40/mo for an Internet connection, you're required to play well with others on pain of being cut off. Protecting the collective Internet experience from the damage done by pirates is a social good.

    Richard, that number is straight out of Comcast's final disclosure filings with the FCC, which kind of proves the point that it's good to have regulators demanding the publication of such details, doesn't it? You can read all about it here:

    http://newteevee.com/2008/09/20/comcast-how-weve-been-slowing-down-bittorrent/

    And no, quotas are not about worst case assumptions, but about actual measured bandwidth. And that was never the case with Comcast.

  16. Your article says: “The filing also reveals that Comcast is still seeing around 50 percent of its upstream bandwidth consumed by P2P applications. In some areas, file swapping even causes up to two-thirds of all upstream bandwidth.”

    I don't have a problem with a system that limits one application to half of network capacity, but it's pretty much academic now that Comcast is rolling out a system that's strictly volume-based.

    The new volume-based system doesn’t distinguish a given user’s BitTorrent traffic from his own VoIP traffic, however; if he’s over quota for excessive bandwidth consumption, his VoIP will work less well. Do you think that’s a step forward for the Internet?
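
    For what it's worth, here is roughly what a strictly volume-based trigger looks like in outline. This is not Comcast's implementation, just a sketch with invented names and thresholds; the point is that nothing in it can tell a torrent byte from a VoIP byte:

        # Hypothetical per-subscriber byte counter; names and thresholds are
        # invented for illustration, not taken from Comcast's filings.
        MONTHLY_CAP_BYTES = 250 * 10**9   # e.g. a 250 GB monthly quota

        usage = {}  # subscriber id -> bytes counted this month

        def account(subscriber, nbytes):
            """Count traffic and report whether the subscriber is over quota.

            Note what is *not* here: no port numbers, no packet inspection,
            no way to tell whether the bytes were BitTorrent, VoIP, or web.
            """
            usage[subscriber] = usage.get(subscriber, 0) + nbytes
            return usage[subscriber] > MONTHLY_CAP_BYTES

        print(account("subscriber-42", 64_000))        # a short VoIP call: False
        print(account("subscriber-42", 300 * 10**9))   # a month of heavy seeding: True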

  17. “‘By the time video phones are common, P2P will be pulling in 90,000 Kbps saturating the maximum available bandwidth’ There is a simple way to deal with that, and that’s two ways. Either providers can bump up their network capacity to cope, or they can physically limit the maximum speed to a level they can cope with.”

    Andrew, this is a really BAD solution. Limiting the maximum speed to deal with the worst-case scenario is exactly what we moved away from when we abandoned circuit-switched networks for packet-switched networks. If you have a network that can always guarantee 1 Mbps of performance, then it can easily support a peak speed of 20 Mbps, and people will be able to hit 20 Mbps most of the time because most people statistically aren't saturating their bandwidth. Once in a while, when it's really busy, the bandwidth might shrink to 10 Mbps, but that's still a hell of a lot better than the worst-case 1 Mbps.

    What you (and many other anti-network-management people) are arguing is that limiting people to the worst-case scenario is better than letting them have 10 to 20 times the performance, because you don't understand how statistical multiplexing works.

    People who believe this silly idea think that overbooking bandwidth is bad and that it's merely a greedy business model. The truth is that this overbooking model is precisely the reason broadband connections are more than 20 times cheaper per Mbps than dedicated circuit connections.
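
    If you want to see why, here's a toy simulation of statistical multiplexing with made-up but typical-looking numbers: 100 users on a shared 100 Mbps link, a 20 Mbps peak tier, and a 5% chance that any given user is transmitting at a given instant.

        import random

        # Toy model of statistical multiplexing on one shared link.
        # All numbers are illustrative, not measurements of any real network.
        USERS = 100          # subscribers sharing the link
        LINK_MBPS = 100.0    # shared capacity: a 1 Mbps worst-case guarantee each
        PEAK_MBPS = 20.0     # advertised peak speed per user (20:1 oversubscription)
        ACTIVE_PROB = 0.05   # chance any given user is transmitting right now

        samples = 10_000
        at_peak = 0
        worst = LINK_MBPS
        for _ in range(samples):
            # From the point of view of one user who is transmitting right now,
            # how many of the other 99 are also transmitting?
            others = sum(random.random() < ACTIVE_PROB for _ in range(USERS - 1))
            share = min(LINK_MBPS / (1 + others), PEAK_MBPS)
            worst = min(worst, share)
            if share >= PEAK_MBPS:
                at_peak += 1

        print(f"Full {PEAK_MBPS:.0f} Mbps available {100 * at_peak / samples:.0f}% of the time; "
              f"worst observed share {worst:.1f} Mbps "
              f"(vs. the {LINK_MBPS / USERS:.0f} Mbps worst-case guarantee)")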

  18. There seems to be a slight disconnect here. There is a big difference between using a packet-switched network to enable burst speeds in excess of guaranteed speeds, and fantastically overselling said network to meet marketing's demands and increase turnover.

    There is also a standard decision in business. When demand far exceeds supply, you can either try to increase supply or raise the price to decrease demand. That's economics 101. With the advent of digital rights management and the like, there is another option: control and dictate your customers' usage of your product, to try to prevent demand exceeding supply. This is where we're at.

    The best case for this is transparent management of the demand and the supply, so that it is not noticed by the end consumer. With Comcast, this failed, due in part to the sloppy use of Sandvine.

    I'll also let you in on some personal experience. When I was younger, I lived in the UK (I moved to the US almost 6 years ago). I had a number of friends who worked for British Telecom, and sometimes, when they had work in exchanges, I would go with them to lend a hand. I've also helped out with the occasional T1 install (the last time was November '01, at building 3 (think it was 3) on Treasure Island (formerly the Nash Bridges studios), as we were prepping to film BattleBots). You get talking to people, and certainly with BT, talk of infrastructure upgrades was laughed off. It wasn't on the cards then, despite the huge take-up of DSL.

    Similarly, last summer I had to call BellSouth out for on-site support (lightning had fried some of our phone line, knocking out the DSL), and the technician let slip that the local main office, Covington (here in Georgia), had just got a big delivery of fibre. The plan, he was told, was that Covington would have it installed by Christmas, and by the end of the next year (i.e. now) it would be installed down by me. It's not in Covington, and it's not near me. From talking to various infrastructure installers at BellSouth (now AT&T), the cabling is still sitting there in the offices, as it's not ‘cost effective’ to install it, not after the merger. I can't speak for Covington, but this is a fairly rural area (see My Cousin Vinny for how rural), so our lines might not be overloaded.

    Regardless, it would appear that network infrastructure is not being upgraded as much as it might be. The net now is very different from that of ten years ago, or even six. There is more data, web sites are larger, and there is more to do: streaming video (either static, like YouTube or iPlayer, or live, like Ustream), gaming, and so on. In 2001, I played Delta Force 2 and Land Warrior online against American players using a dial-up connection. Now dial-up isn't enough for most games, even though they may only have 16 players rather than the 32 or 50 I played against. I have a webcam and would like to try video calls over Skype or Yahoo with other party members; it works about as well now as it did 8 years ago.

    Another anecdote, and then I'll conclude. Back in 1990/91, the area I grew up in, the east-central side of Liverpool, finally got cable TV. They dug up all the roads and installed it. Ten years ago, they'd been merged into Telewest, and cable internet was the new hot thing. Except we couldn't get it, because the wiring put in initially was one-way only (a Cabletime system). We also couldn't have digital cable, because that also wasn't supported by the network infrastructure. When I moved to my own house in 2002, it still hadn't been upgraded, and finally my father cancelled the service and went to Sky Digital. We had been customers for 11 years, and the infrastructure hadn't been upgraded in that time. The cable-modem-capable infrastructure had been installed in other parts of the city by the cableco since '93-'94.

    This, not uTorrent changing from TCP to UDP, or the growth in peer-to-peer, is the problem. Sky got my father's television custom because they kept modernising their infrastructure. I even remember standing with my father in a Dixons store, watching the Astra test signal just before Sky started. I remember counting who had more, Sky or BSB (Sky dishes were round, BSB had a ‘squarial’), going down the streets. They kept upgrading the back end, and that's why Sky offered more features back in 2001/2002 than the likes of Dish Network do right now (I spent Thanksgiving at a friend's house; they'd just upgraded to the brand-new HD DVR box).

    In an advancing field, you must constantly upgrade the hardware that powers your business. In the late 80s and early 90s, I was using a Dragon 32 (think TRS-80 CoCo 1) to handle the family business. I couldn't do that now. Putting off upgrades might save money, but it builds frustration and slows things down. This I know from experience: my main computer is a 1 GHz Athlon, and year on year, doing the same work with newer software takes more time and effort. OpenOffice 3, for instance, takes more of my system to do the same kind of job than StarOffice 5.4, which I bought in 2001. I'm finally going to bite the bullet and spend some money on upgrading the equipment necessary to perform my job. While my equipment hasn't changed, and the job I do hasn't changed, over the past two years alone it has become increasingly deficient for the task. It is time the ISPs admitted the same.

  19. Minor nit: “TELNET” stands for — depending upon whom you ask — “telecommunications network” or “teletype over the network” (which is probably more accurate, since it’s used to log in to a text-based terminal interface of the kind used by teletypes before CRT “dumb terminals” became widespread). It didn’t have anything to do with telephony. Remember: at the time, many institutions’ connections to the network were 56 Kbps — enough to carry one phone call — and very expensive even for that small amount of bandwidth.

    As for oversale of bandwidth: As an ISP myself, I can state unequivocally that it is the ONLY way any ISP can provide affordable service to consumers. In my area, the typical price of dedicated backbone bandwidth is $100 per megabit per second per month. Want to saturate your link? I’ll have to charge you that or go out of business.
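
    To make that concrete, here's the back-of-the-envelope version. The $100 per Mbps per month transit price is my local figure; the 6 Mbps tier and the $40/month retail price are just examples borrowed from numbers mentioned earlier in the thread.

        # Why oversubscription is what makes $30-40/mo broadband possible.
        TRANSIT_PER_MBPS_MONTH = 100.0   # $ per Mbps per month of dedicated backbone
        LINK_MBPS = 6.0                  # an example 6 Mbps consumer tier
        SUBSCRIPTION = 40.0              # $ per month the subscriber pays

        cost_if_saturated = LINK_MBPS * TRANSIT_PER_MBPS_MONTH
        breakeven_utilization = SUBSCRIPTION / cost_if_saturated

        print(f"Transit cost of one user saturating the link: ${cost_if_saturated:.0f}/mo")
        print(f"Break-even average utilization at ${SUBSCRIPTION:.0f}/mo: "
              f"{100 * breakeven_utilization:.0f}% "
              f"(roughly {cost_if_saturated / SUBSCRIPTION:.0f}:1 sharing)")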

  20. If you keep commenting here, Andrew, I’m going to have to impose a limit on words-per-comment. You do carry on.

    It’s not good business to spend money to upgrade networks in the interest of applications used by microscopically small numbers of people; it’s much more efficient to cap, throttle, and kick them out.

    Only when highly consumptive applications become mainstream will the carriers upgrade their capacity, and we’re just now at the point where that’s becoming reality for P2P and video-streaming in general. If you’re at the leading edge of a new app, you’re going to be frustrated by the rate of change. That’s life.

  21. I have been told I tend to go on. That might be because I try to provide justification for my statements rather than have people take them on faith. I can see, though, that your ‘people shouldn't use so much data’ stance applies to everything, including comments.

    “It’s not good business to spend money to upgrade networks in the interest of applications used by microscopically small numbers of people”

    Microscopic now, macroscopic in a few years' time. I'd also call 5% more than ‘microscopic’ – “When five per cent of users consume half the network's resources” (your words) – but if that is what you term microscopic, what counts as tiny? 15%? And small, 35%?

    5% is not small; it's significant. In growth technology areas, that's the segment you look to address, because 5% now could be 55% in a year or two. Ten years ago, Napster had a million users; now The Pirate Bay alone has over 25 million.

    In the end, Richard, your entire article boils down to ‘the way ISPs have done business in the past will not work any more, which is a problem, as they don't want to change their ways, so this is bad’. That's what I read in your article and in your responses above.

    I'll end this short (for me) post with something I was told by someone at BattleBots (not sure if it was Edward Roski, Michael Mulden, or Will Wright): “Today's early adopters are tomorrow's mainstream consumers; plan for that.” In short, what's hot and new now will be common next year.

  22. Trite, Andrew. Everybody has observed, myself included, that “the bandwidth hog of today is the mainstream user of tomorrow.” I’ve used that line in speeches.

    But that doesn't mean that the network of today needs to have the bandwidth of the network of tomorrow. ISPs upgrade their networks as users and markets require. If they spend too much money on upgrades, they get hammered by the stock market; if they spend too little, they get hammered by their competitors.

    In the end, these things tend to work out. If the Pirate Party, International (which you represent) is unhappy with the present state of affairs, that’s a good indication that things are on track.
