Alarming Title: BitTorrent declares war on the Internet

See The Register for my analysis of the latest tweak in Bittorrent:

Gamers, VoIP and video conference users beware. The leading BitTorrent software authors have declared war on you – and any users wanting to wring high performance out of their networks. A key design change in the P2P application promises to make the headaches faced by ISPs so far look like a party game. So what’s happened, and why does it matter?

Upset about Bell Canada’s system for allocating bandwidth fairly among internet users, the developers of the uTorrent P2P application have decided to make the UDP protocol the default transport protocol for file transfers. BitTorrent implementations have long used UDP to exchange tracker information – the addresses of the computers where files could be found – but the new release uses it in preference to TCP for the actual transfer of files. The implications of this change are enormous.

As BitTorrent implementations follow uTorrent’s lead – and they will, since uTorrent is owned by BitTorrent Inc, and is regarded as the canonical implementation – the burden of reducing network load during periods of congestion will shift to the remaining TCP uses, the most important of which are web browsing and video streaming.
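The dynamic behind this claim can be sketched in miniature. The following toy simulation is purely illustrative (the `simulate` function and all its numbers are invented, not a model of any real network): a TCP-style AIMD sender halves its rate when the link is over capacity, while an unresponsive constant-rate sender never backs off, so the TCP flow ends up with whatever capacity is left over.

```python
def simulate(capacity=100.0, udp_rate=60.0, rounds=50):
    """Toy model: one AIMD (TCP-style) flow shares a fixed-capacity link
    with a constant-rate (unresponsive, UDP-style) flow. Illustrative only."""
    tcp_rate = 1.0
    samples = []
    for _ in range(rounds):
        if tcp_rate + udp_rate > capacity:
            tcp_rate /= 2.0   # loss detected: multiplicative decrease
        else:
            tcp_rate += 1.0   # no loss: additive increase (probe upward)
        samples.append(tcp_rate)
        # The unresponsive flow never adjusts udp_rate, so the AIMD flow
        # oscillates around whatever capacity is left over.
    return sum(samples) / len(samples)

avg_tcp_share = simulate()  # well under a fair 50/50 split of the link
```

The asymmetry is the whole point: only one side of the contest is obliged to slow down.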

Several commenters are upset with the article, mostly because ISPs don’t provide them with unlimited bandwidth. There’s not much I can do for those folks.

A few others claim that BitTorrent over UDP has a congestion control algorithm that they feel is in some way equivalent to the TCP algorithm, but this argument is flawed on a couple of levels. For one, many routers have tweaks in their discard logic that prefer UDP over TCP. This is a key problem with the widespread use of UDP for purposes other than those for which it was intended.

UPDATE: The explicit goal of this new tweak is actually to create a more friendly variant of the existing TCP congestion avoidance algorithm, and it was the pirates at Broadband Reports who said otherwise.

What remains to be seen, however, is what the actual effects will be in large-scale deployment, and whether the genie can be forced back in the bottle if they’re undesirable.

UPDATE 2: I’ve added “Alarming Title” to the title. This piece is getting a lot of people excited.


21 thoughts on “Alarming Title: BitTorrent declares war on the Internet”

  1. I read your piece on the Register.

    “Some of the people who use this system are spoiled children with no more concern for the greater good than junkies looking for their next fix. ”

    The stupidity of that is just mind-boggling. Maybe you were just trying to be sensational, but looking around here I see you’re a douche bag.

  2. Consumers urgently need Net Neutrality legislation in order to kick ISPs that do traffic shaping and prefer some packets (UDP or TCP) over others. Neither the US nor the EU has such legislation. ISPs should be forbidden to use the word Internet if they filter ports, shape traffic and so on. They should be called rISPs (restricted Internet Service Providers).

  3. You are straight up wrong. TCP has more overhead; BitTorrent manages its own connections, including packet loss. UDP simply sends less extraneous data. It is the ISPs’ problem. Data should flow regardless of the protocol.

  4. The idea that using UDP is an effort to thwart Bell Canada throttling
    is wrong. As you mentioned in the article it would be trivial to match
    Bittorrent-over-UDP packets by using DPI and, in fact, Bell Canada is
    implementing their throttling using DPI — their filters are simply not
    updated yet to detect this new protocol variation.

  5. There’s nothing wrong with DPI when you think about it. You have to go much deeper with intrusion detection systems that prevent the Internet from melting down from worm and malware attacks. You have to go even deeper with anti-spam gateways which are used by nearly every ISP. DPI for protocol type analysis is far less intrusive than parsing every sentence and word in every one of your emails yet no one has a problem with anti-spam systems. Heck, you can just look at the traffic pattern to determine if something is truly a self-throttling real-time application like VoIP telephony or online gaming.

  6. Rather shocked at how misinformed this article is and how much total BS it contains.
    We live in the age of VoIP, VPNs, YouTube, etc.

    There is nothing wrong with managing the network you’re running, but these days DPI and the like are massively abused.
    A prime example would be the internet in Canada, which used to be a world leader and is now a joke due to the ISPs doing whatever they wish.
    Now go to Hong Kong or Japan and you have an example of how it should work, with fast speeds, fair caps, and a cheaper price.

    There has to be a balance between network management and fair service, and these days that’s not the case.
    Most ISPs are doing it to improve their bottom line and add new customers without upgrading hardware.
    Some are even doing it to run a monopoly by killing off competitors and competing services.

  7. I see that we have some ISP bashers here. Have any of you ever thought that maybe — just maybe — ISPs manage traffic and restrict P2P because it is necessary to provide reasonable quality of service to their customers?

    The fact is that uTP was explicitly stated — by its developers — to be an end run around providers’ reasonable network management practices, and Richard is absolutely correct when he notes that it could cause severe network problems.

    In fact, Richard understates the case, because he neglects to mention one extremely important point. By switching to UDP, BitTorrent will not only compete with VoIP and some video and audio applications but also with DNS.

    This could well be catastrophic, because DNS (domain name service), as ISPs know all too well, is a “critical path” protocol in virtually every application. If DNS is slow, EVERYTHING ELSE that users do will also be slow. Remember, most network applications, including Web browsers, have to stop and wait — unable to do anything else — until they resolve one or more domain names. So, they’ll hang frustratingly if DNS packets are dropped due to congestion. And what underlying transport protocol does DNS use by default? UDP. (It can use TCP as well; however, it does so if, and only if, it has a lot of data to transfer. And TCP, due to its complex handshaking and “slow start” flow control, is much less efficient and much slower.)

    So, what we’re talking about is not just congestion but sand in the gears of the entire Internet.

    Also, because uTP does not conform to any explicit congestion management protocol that could detect congestion BEFORE packets are dropped, the only way it would be able to detect congestion in the network would be after packets were dropped. Which means that by the time it did anything — IF it did anything — to mitigate the congestion it caused, it already would have damaged the network.

    Finally, do you actually trust downloaders — who already, in the vast majority of cases, are brazenly engaging in illegal activity — to be courteous to anyone? There’s no honor among thieves, folks.
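The DNS point in comment 7 is concrete: a standard DNS query is a single small UDP datagram, and if that datagram is dropped during congestion the stub resolver can only wait out a timeout and retransmit while the application stalls. A minimal sketch of what such a datagram looks like (header layout per RFC 1035; the transaction ID here is an arbitrary example value):

```python
import struct

def dns_query(hostname, txid=0x1234):
    """Encode a minimal DNS A-record query as one UDP payload (RFC 1035)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1)
    return header + qname + struct.pack(">HH", 1, 1)

payload = dns_query("example.com")
# The whole query fits in one small datagram; if it is lost to congestion,
# the resolver (and the application waiting on it) stalls until the
# retransmit timer fires.
```

Nothing in the datagram itself provides delivery assurance; that burden falls entirely on the resolver's timers.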

  8. What are you talking about, Jason? Everything goes through the ISP’s spam filter by default and the Intrusion Detection System (IDS) always has to be left on. You might be able to opt out of spam filtering, but you’re not going to opt out of the IDS and the deep packet inspection (DPI) it uses. So not only is DPI commonplace, it’s essential on the Internet.

    Moreover, network management protocol type identification and traffic analysis can hardly be classified as DPI, nor should DPI be considered some sort of evil four-letter word. Those who would cry foul at the slightest hint of DPI are grossly misguided and they haven’t a clue how the Internet operates.

  9. The fact is that uTP was explicitly stated — by its developers — to be
    an end run around providers’ reasonable network management practices

    Reference please.

  10. The explicit goal of this new tweak is actually to create a more friendly variant of the existing TCP congestion avoidance algorithm, and it was the pirates at Broadband Reports who said otherwise.

    What remains to be seen, however, is what the actual effects will be in large-scale deployment, and whether the genie can be forced back in the bottle if they’re undesirable.

  11. The weird thing about this article, and the related discussion, is how much blame seems to be going on. The internet is an organic, ever-evolving environment. There is no “social contract” for network application developers or ISP engineers. At best there is passive resistance on both sides to doing anything too untoward.

    The link given by Brett demonstrates that fact – that forum topic was a response to network engineers at an ISP abusing the TCP protocol via packet injection in order to deprioritise a service they felt was unimportant.

    The point is that any of these actions has a natural reaction. You screw with protocols, people will take steps to develop apps that utilise mechanisms that are more difficult to screw with. The Napster thing being the classic example of how this stuff works.

    Long rants blaming BitTorrent engineers for moving to mechanisms that give them more flexibility in providing the service their users want are futile. It’s an app developer’s job to do better for his or her users.

    As always, the consequence of this is that there will be a reaction at the network engineer level. DPI, IP-based stuff, etc etc will all come out to play as usual and life will go on. It’s not the end of the world, it’s not even the end of VoIP or other low-latency stuff, it’s just another evolution.

    Quite frankly I’m astonished it took this long. We were already aware that the TCP protocol was being stretched into funny shapes as a result of its evolution from the tiny pipes of yesteryear to today’s systems – prompting a wide variety of bizarre “accelerator” hacks like downloaders that would connect to the same server with 20 TCP streams and download a different bit of the same file via each stream. Think about that for a moment.

    It makes sense that one of the most bandwidth demanding applications out there would take steps to introduce a more compelling solution. We’ve learned a lot about reliable protocols since the 1980s and the only thing that prevented much of this from making it onto the net as a whole was legacy devices and application changes. Bittorrent is in a relatively unique position to make the first move on this and good on them. Perhaps in a year or two we’ll see it in Firefox too and we’ll be able to deliver web stuff in better ways.

    Network engineers will do what we’ve always done in the face of this stuff, we’ll upgrade routers and come up with new ways of managing ever increasing traffic levels and more diverse service requirements. This genie won’t be “forced back into the bottle”, it – or something like it – is the way forward.

    You’re raging against the sea and it doesn’t care what you think. Build a better boat.

  12. Richard (Clark): There are indeed “social contracts” governing the Internet. They’re called RFCs.

    You are correct, however, when you say that engineers may have to come up with new ways of managing misbehavior. But in the meantime, users will suffer due to degradation of the network. And there’s a huge threat of government regulation — being pushed by corporate interests – that will PREVENT engineers from managing their networks. In which case, you can’t build a better boat (or a better network). You are, metaphorically speaking, sunk. Just watch as the same people who condemned BitTorrent connection pruning via RST packets lobby the government to prohibit management of UDP traffic.

    I predict that one of the first problems we’ll see, if BitTorrent shifts to UDP, is overloading of the NAT tables in firewalls. NAT firewalls are capable of telling when a TCP session is over, and can purge the entry for that connection from the table when it closes. But UDP is a connectionless protocol, and so the only technique that the firewall can use to purge the table is a somewhat arbitrary timeout. Set the timeout too short, and programs won’t get responses to their DNS queries, etc. Set it too long, and the router runs out of memory or behaves erratically. The problem is that uTorrent’s abuse of UDP for extended file transfers violates the in-built principle that UDP is to be used for short transactions where the setup and teardown time of TCP make it inefficient for the task at hand.
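The NAT-table concern above can be expressed as a toy model. This is purely illustrative (`NatTable` is a made-up class, not any vendor's implementation): TCP mappings can be purged the moment the NAT sees the connection close, while UDP mappings can only age out via a timer.

```python
class NatTable:
    """Toy model of a NAT's mapping table. TCP entries can be dropped when
    the NAT observes FIN/RST; UDP entries linger until a timeout expires."""
    def __init__(self, udp_timeout=120.0):
        self.udp_timeout = udp_timeout
        self.entries = {}  # (src_ip, src_port, dst_ip, dst_port) -> (proto, last_seen)

    def touch(self, key, proto, now):
        self.entries[key] = (proto, now)

    def tcp_close(self, key):
        # The NAT can see the connection end and free the mapping at once.
        self.entries.pop(key, None)

    def expire(self, now):
        # UDP (and idle) mappings are only freed when the timer runs out.
        self.entries = {k: (p, t) for k, (p, t) in self.entries.items()
                        if now - t < self.udp_timeout}

nat = NatTable()
nat.touch(("10.0.0.2", 5000, "8.8.8.8", 53), "udp", now=0.0)
nat.touch(("10.0.0.2", 5001, "1.2.3.4", 80), "tcp", now=0.0)
nat.tcp_close(("10.0.0.2", 5001, "1.2.3.4", 80))  # TCP mapping gone at once
nat.expire(now=60.0)                              # UDP mapping still held
```

The tension described above is exactly the `udp_timeout` knob: too short and legitimate replies are dropped, too long and the table fills with stale entries.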

  13. With respect Brett, the people breaking the RFCs are not the bittorrent people. They’re the people engineering traffic solutions by inserting RST packets into perfectly normal TCP flows.

    Ignoring ideas like that, nobody is violating RFCs here. This isn’t even a grey area like TCP SYN cookies. It’s a perfectly valid use of the UDP protocol – your assertion that UDP has some “built in principle” regarding short transactions would place VoIP and almost all gaming – in fact, nearly all actual popular uses of UDP – in violation.

    The vast majority of the common uses in fact last longer than the average TCP transaction. The reason they’re using it is nothing to do with length and everything to do with the U in the name, “Unreliable”. These are applications that can afford to have packets dropped, usually because the information is timely.

    Hell, RFC 768 lists (trivial) file transfer as one of its two major applications. Sure, that’s way out of date, but I think you (and others) have been ascribing intent to UDP that simply never existed at design time.

    There is nothing inherently RFC violating, nor bizarre, in doing reliable transfers over UDP – indeed it’s been done plenty of times before by libraries attempting to create broadcast reliable transmissions etc – a perfectly valid, and traffic-saving, use.

    I don’t really have an argument with your second paragraph – my reading of it is that you are asserting that we do not have the tools to manage this “new” form of transfer. This is patently incorrect, every router on my networks is capable of deep inspection. There may be a bit of patching and people generally fiddling to work out best practice but it’s not like we are starting from scratch here.

    The argument regarding regulation is entirely irrelevant to this debate – if you can’t manage it, you can’t manage it, regardless of whether it’s UDP or TCP.

    Regarding overload of NAT tables, the UDP timeout thing is nonsense. We use DNS *everywhere*, and with the new (and sane) obsession with randomising source ports the use DNS makes of NAT tables is vastly more significant – every single DNS request needs a UDP association. This is a far cry from something that manages maybe 10 clear associations. Open half a dozen web pages in tabs and you’re hurling DNS packets out like crazy, all of which have to hang around for the timeout and never see reuse.

    We’re talking about stuff that’s either going full-bore, or it’s not. RFC 4787 requires a minimum timeout of 2 minutes for that – even assuming the most dramatic jumping between seeds we can’t be talking about many mappings on the firewall; certainly there won’t be any more than there would be with TCP, and at worst the ones that are there are going to hang out for a minute or two longer than TCP’s.

    Overall, for the reasons listed, I’m unconvinced that we’re looking at some kind of mass degradation of network performance, even temporarily. I haven’t seen any compelling arguments for the apocalypse, thus far (with respect) the scenarios put forward in support of it appear to be..unreasonably extreme.

    There is no suggestion (that I have seen) that the BT guys are deliberately trampling well-known ports in order to obscure the protocol use, nor are they attempting to disguise themselves at the packet level as VoIP or similar to reduce the effectiveness of DPI.

    In the medium term it’s a complete non-event. If the BT guys standardise as they’re likely to, they’ll reinvent almost everything about TCP, including RST and FIN packets, and our tools will gain the ability to utilise them just as they do with TCP today.
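The "reinvent almost everything about TCP" point shows up even in the smallest possible sketch. Below is a toy stop-and-wait transfer over a simulated lossy link (entirely hypothetical; real uTP is far more sophisticated): even this minimal version needs retransmission, and a real protocol would also need ACK-loss handling, duplicate detection, and adaptive timers.

```python
import random

def transfer(chunks, loss_rate=0.3, seed=42):
    """Stop-and-wait delivery over a simulated lossy link: each chunk is
    retransmitted until it survives the link. ACK loss, duplicate
    detection, and adaptive timers -- all things TCP already has -- are
    omitted for brevity."""
    rng = random.Random(seed)
    received, sends = [], 0
    for chunk in chunks:
        delivered = False
        while not delivered:
            sends += 1                      # one datagram on the wire
            if rng.random() >= loss_rate:   # datagram survived the link
                received.append(chunk)      # receiver takes it and ACKs
                delivered = True
            # on loss, the retransmit timer fires and we send again
    return received, sends

data, sends = transfer([b"a", b"b", b"c"])
# data arrives complete and in order; sends exceeds len(chunks) whenever
# the link dropped anything
```

Each omitted feature listed in the docstring is exactly the kind of machinery network tools already know how to observe in TCP, which supports the point that management will catch up.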

  14. Richard Bennett is either purposely lying or he has no idea what he is talking about. Either way, the Register needs to stop paying him to spread telecom industry propaganda.

  15. Richard Bennett (judging by his resume) knows better than anybody here that this article is one big pile of nonsense. The question here is WHO has paid him to publish it? And HOW MUCH?
    And if this is not the case, then Mr. Bennett, you need to tear up all your diplomas, certificates, patents, resumes, etc., make a pile, put some salt on it, and eat it – you don’t deserve to have them!

  16. Since when was the usage of UDP reserved for specific types of applications or subject to approval by any governing body, or Mr. Bennett? Any application that can live with the unreliability and statelessness of UDP, or that wants to do its own session management, can use UDP. For large file transfers using BitTorrent (e.g., Linux distributions would be one legal example), the model is usually someone with bits dumping them to someone who doesn’t have them. It’s understood the transfer will take some time. Packet drops and retransmissions are acceptable. UDP for the win.

    Pretty simple.

    If bittorrent will melt the Internet by using UDP because of tweaks by router manufacturers to prefer UDP, and the routers are preferring UDP beyond what any RFC or standard calls for, the problem lies with those router manufacturers, not with people making legitimate use of UDP. The real headline should then be “Non-standards compliant routing devices fail under increased UDP usage models.”

    Either way, I think Mr. Bennett should actually give a data-driven analysis based on an examination of the algorithms or with real data versus engaging in speculation and causing FUD.

  17. See, the problem is, my ISP _did_ sell me unlimited bandwidth. They sell me a traffic rate, not a total volume of data.

    I’m simply using the service I purchased. I didn’t agree to “share” my connection with anyone. And if I have to, I should pay a reduced rate.

    The problem is overselling of bandwidth, many hundreds to one, and you know it.
