Networks need Quality of Service mechanisms if they are required to carry low-jitter traffic such as voice or video-conferencing alongside a large volume of other traffic, such as large pre-recorded HDTV files. It doesn’t matter how much bandwidth the network has at maximum. File transfers on packet networks are designed to use all available bandwidth, so you’re always going to have situations where bursts of file data add jitter to voice.
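To see why this is true regardless of link speed, here’s a toy back-of-the-envelope sketch (not a real queueing model): on a first-in-first-out link, a voice packet that arrives behind a burst of bulk packets has to wait for every one of them to serialize. The 100 Mbps link rate and 1500-byte bulk packet size are illustrative assumptions, not figures from the post.

```python
# Toy model: queueing delay a voice packet sees on a FIFO link.
LINK_BPS = 100e6   # assumed access link: 100 Mbps
BULK_BITS = 1500 * 8  # assumed bulk data packet: 1500 bytes

def wait_us(queued_bulk_packets):
    """Microseconds a voice packet waits behind N bulk packets."""
    return queued_bulk_packets * BULK_BITS / LINK_BPS * 1e6

# A file transfer in slow-start sends bursts, so the queue depth a
# voice packet lands behind swings wildly from one 20 ms frame to the next:
delays = [wait_us(n) for n in (0, 4, 40, 0, 60)]
jitter = max(delays) - min(delays)  # 7200 µs of jitter from bursts alone
```

The absolute delays are small here, but the *variation* is what kills voice quality, and it scales with burst depth, not with how fast the link is on average.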
This is an inconvenient truth for the “fat pipe, always free” folks, so they generally try to define it out of existence. Here’s a good example from the U of Oregon’s head network admin, Dr. Joe St. Sauver:
• At the same time I oppose metering, you should also know that I oppose artificial/unrealistic “tests” or “challenges” of converged networks.
• For example, a classic example of an unrealistic network demand for a converged network is uncompressed high definition video over IP – that can run 1.2-1.5 gigabit per second. At that rate, dedicated video networks make sense.
• There’s no problem handling MPEG1 video (at 1.5Mbps) however, or even reasonable amounts of MPEG2 video at 1.5 to 20Mbps (on a fast ethernet connection going into a gig core).
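For what it’s worth, the uncompressed figure checks out. A quick sanity calculation, assuming 24 bits per pixel and ignoring blanking intervals and audio (my assumptions, not Dr. Joe’s):

```python
def raw_bps(width, height, fps, bits_per_pixel=24):
    """Raw bit rate of uncompressed video: pixels/frame x frames/s x bits/pixel."""
    return width * height * fps * bits_per_pixel

hd_1080p30 = raw_bps(1920, 1080, 30)  # 1,492,992,000 bps, about 1.49 Gbps
hd_720p60 = raw_bps(1280, 720, 60)    # 1,327,104,000 bps, about 1.33 Gbps
```

Both common HD formats land squarely in the quoted 1.2–1.5 Gbps range, which is exactly why nobody ships uncompressed HD over a shared IP network in the first place.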
Are you getting that? Dr. Joe’s non-priority scheme only works if each user has 100 Mbps straight to a 1 Gbps “core” and never uses it to transfer large HDTV files, compressed or otherwise.
Gee, that’s nice. But here on Planet Earth, we’re trying to figure out how to move multiple HDTV streams over connections much less broad than that, because, you know, bandwidth isn’t free off the University of Oregon’s campus.
And even in this scenario, what happens if 10 people are using their 100 Mbps Ethernet connections to the Gig core to transfer big video files? The core is maxed out. And then somebody comes along and tries to use VoIP while all this traffic is flowing. Boom.
That’s what QoS is for. No matter how fast a link is, it can always be overloaded because a billion people use the Internet, and not always at the ideal time.
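Here’s a minimal sketch of what a QoS scheduler buys you, using strict priority as the example discipline. This is a toy illustration, not a real scheduler; the 120 µs and 16 µs serialization times assume 1500-byte bulk and 200-byte voice packets on a 100 Mbps link.

```python
# Per-packet serialization time in µs at 100 Mbps (assumed sizes):
# voice = 200 bytes (16 µs), bulk = 1500 bytes (120 µs).
SERIALIZE_US = {"voice": 16, "bulk": 120}

def voice_wait_us(queue, priority=False):
    """Microseconds the first queued voice packet waits before it transmits."""
    if priority:
        # Strict priority: drain all voice-class packets before any bulk.
        order = sorted(queue, key=lambda pkt: pkt != "voice")
    else:
        order = list(queue)  # FIFO: strict arrival order
    wait = 0
    for pkt in order:
        if pkt == "voice":
            return wait
        wait += SERIALIZE_US[pkt]
    return wait

# One voice packet arrives behind a 50-packet file-transfer burst:
q = ["bulk"] * 50 + ["voice"]
fifo = voice_wait_us(q)                 # waits 6000 µs behind the burst
prio = voice_wait_us(q, priority=True)  # jumps the queue: 0 µs
```

Under FIFO the voice packet eats the whole burst; with priority scheduling it waits at most one in-flight bulk packet, no matter how big the burst is. That property holds whether the link is 1 Mbps or 1 Gbps, which is the whole point: QoS protects the jitter-sensitive traffic precisely when the link is full.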