Herding the Bandwidth Hogs

5 Dec

I don't think Herman and I expected this to become so big, so fast! My post Is the Bandwidth Hog a Myth? and his Congestion Neutrality were picked up by Ars Technica, Slashdot, Gizmodo, Techdirt and DSL Reports, and are being discussed on various fora. Thanks to all who picked them up, whether they agreed with our assertions or not.

I've read the comments here and tried to read most of those on Ars Technica and Slashdot, although I must admit to being overwhelmed. Still, I've identified a number of recurring themes in what I've read, and I'll try to briefly address some of the questions raised and misinterpretations made.

First of all, I guess it's worth stressing a number of things as a bit of context: neither Herman nor I am affiliated with anyone or getting money from anyone here. Since most of the comments and emails implying otherwise seem to be coming from the US, it might be worth noting that we're both European and that, although we're intellectually interested in what's happening in the US around the FCC's broadband plan and Net Neutrality rulings, we have no stake in them whatsoever.

I should also add that, as far as I'm concerned, this post was written on my own initiative and is in no way related to my employer. Whether you think my thinking or counting skills are deficient (as one commentator here seems to) or that I'm spot on, the blame or praise is mine alone.

Overall, I've found the debate to be of a high level so far, so don't let these first few paragraphs suggest that it's all mudslinging. Thanks to all who have kept things civil so that we can hopefully make progress, either in debunking the myth or in being shamefully proven wrong.

Anyway, as far as the various comments and questions go, here are a few clarifications. Most importantly, allow me to focus the debate on the core point of our original posts and set aside for now some of the other issues raised (some of which are crucially important).

What our posts were not about:

  • Our point was not to suggest that congestion didn't happen (this was clearly stated in Herman's post in particular) but that congestion could not be consistently attributed to the "excessive" usage of a small number of users.
  • Our point was also not to suggest that caps were bad (which seems to have been how Ars Technica read it), but we certainly suggest that they won't solve congestion issues. There is an interesting question and debate around capped/tiered/variable pricing models, which I've addressed here in the past (Ruminations About Broadband Pricing).
  • Finally, our point was not to suggest that some ISPs, especially the small and rural ones, aren't in a terrible squeeze due to the costs of backhaul. This is another crucially important issue, but it's not what we're talking about. As I've recently noted, there have been important and efficient policy efforts (Intelligent Ways to Solve the Middle Mile Problem) to introduce competition in backhaul, and this is perhaps an area even more important than access regulation when it comes to driving broadband penetration. But none of that implies that a small number of users are responsible for most of the growth in bandwidth usage, and hence for the increase in cost.

At the core, our posts were about answering the question: "Are a small number of well-identified users responsible for bandwidth congestion?". The existing literature (see Kenjiro Cho's The Impact and Implications of the Growth in User to User Residential Traffic and the more recently updated Observing Slow Crustal Movement in Residential User Traffic) suggests that while at any given time a small number of users account for most of the aggregate bandwidth usage, these are never the same users at different times.
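
To make that test concrete, here is a minimal sketch in Python (the data layout is hypothetical: usage[window][user] holding bytes transferred) of how one would check both halves of the claim, i.e. the concentration of traffic at any instant and the churn in who sits at the top:

    def top_share_and_churn(usage, fraction=0.1):
        """For each time window, report the share of aggregate traffic
        carried by the top `fraction` of users, and the overlap of that
        top group with the previous window's top group."""
        prev_top = None
        for window in sorted(usage):
            counts = usage[window]
            ranked = sorted(counts, key=counts.get, reverse=True)
            k = max(1, int(len(ranked) * fraction))
            top = set(ranked[:k])
            share = sum(counts[u] for u in top) / sum(counts.values())
            line = f"{window}: top {fraction:.0%} carry {share:.0%} of traffic"
            if prev_top is not None:
                line += f", overlap with previous top group: {len(top & prev_top) / k:.0%}"
            print(line)
            prev_top = top

A consistently high share combined with a low overlap is Cho's "slow crustal movement"; a consistently high overlap would instead support the hog reading.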

Herman's analysis of the chokepoints in the network (oversubscribed links) and of the congestion management mechanisms of TCP/IP suggests to us that the responsibility for congestion and its effects is a design factor of the network. This, together with the existing literature, suggests that looking at the aggregate data usage of a given user over a period of time as an indication of network disruption is misguided. Herman's analysis also suggests that there is room for more intelligent implementations of congestion management which would mitigate the congestion problem. And given the growth of video on the Net, the chances of congestion occurring on oversubscribed links are increasing.
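
To illustrate what we mean by a design factor, here is a toy model (mine, for illustration only, not Herman's analysis): under ideal per-user fair sharing, the rate an active user gets on an oversubscribed link depends only on how many others are active at that moment, never on anyone's past volume.

    def fair_share(link_capacity, access_rates):
        """Max-min fair allocation of link_capacity among active users,
        each limited by their own access-line rate."""
        alloc = {u: 0.0 for u in access_rates}
        remaining = dict(access_rates)
        capacity = float(link_capacity)
        while remaining:
            share = capacity / len(remaining)
            # Users whose access line is slower than the fair share get
            # their full line rate; the leftover is redistributed.
            capped = {u: r for u, r in remaining.items() if r <= share}
            if not capped:
                for u in remaining:
                    alloc[u] = share
                break
            for u, r in capped.items():
                alloc[u] = r
                capacity -= r
                del remaining[u]
        return alloc

    # Ten users on 10 Mbit/s lines behind a 20 Mbit/s uplink:
    print(fair_share(20.0, {f"user{i}": 10.0 for i in range(10)}))
    # -> every active user gets 2.0, regardless of past usage

In this idealised picture the "hog" vanishes: a user who saturates an empty link at 3 a.m. imposes nothing on the 8 p.m. peak. Whether real TCP approximates this ideal is, of course, part of what is being debated.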

And as I pointed out the first time, our interest is not in being right on this one, it's in understanding the facts and analysing whether pricing and network management policies are based on sound data points or on suppositions. This is an opportunity to hopefully get to the bottom of this…

So far I've had two companies get in touch; I hope to hear from more. I will publish the data set requirements early next week, and hope to spark a discussion about how feasible it is for ISPs to gather such data, and about my ability to do a compelling data dive on that basis.

Again, thanks for the interest and discussions!

9 Responses to “Herding the Bandwidth Hogs”

  1. Richard Bennett December 5, 2009 at 3:24 pm #

    First, it’s unfortunate that you’ve chosen not to address the fundamental error in your original post, the assertion that TCP is fundamentally fair. You said: “As Herman explains in his post, TCP/IP is by definition an egalitarian protocol. Implemented well, it should result in an equal distribution of available bandwidth in the operator’s network between end-users; so the concept of a bandwidth hog is by definition an impossibility. An end-user can download all his access line will sustain when the network is comparatively empty, but as soon as it fills up from other users’ traffic, his own download (or upload) rate will diminish until it’s no bigger than what anyone else gets.”
    This is, of course, not the way TCP actually works; it does not ensure that each user’s “download (or upload) rate will diminish until it’s no bigger than what anyone else gets.” Quite the opposite, TCP ensures that each user will be successful in consuming bandwidth in proportion to the number of TCP streams he has open at any given time. Your assumption that TCP automatically assigns bandwidth fairly is so far from the truth that it undermines your entire argument. And in fact that faulty assumption contradicts your new spin to the effect that “there is room for more intelligent implementations of congestion management which will mitigate the congestion problem.” Why on earth would there be room for “more intelligent implementations of congestion management” if TCP were fair to begin with, ensuring that “each user has a share no bigger than what anyone else gets?” These contradictions, zigs and zags are enormous.
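    In code terms, the per-stream point is simple arithmetic (a minimal sketch; the link capacity and flow counts are illustrative):
        def per_user_rates(link_capacity, flows_per_user):
            # Idealized TCP: the bottleneck is shared equally per *flow*,
            # so a user's aggregate rate scales with his number of flows.
            total_flows = sum(flows_per_user.values())
            return {user: link_capacity * n / total_flows
                    for user, n in flows_per_user.items()}
        # One user running 20 parallel streams against four single-stream users:
        print(per_user_rates(100.0, {"heavy": 20, "a": 1, "b": 1, "c": 1, "d": 1}))
        # -> "heavy" gets ~83.3, the others ~4.2 each: equal per flow, not per user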
    The fact is that ISPs already employ multiple strategies to deal with congestion at the same time because TCP is not an adequate mechanism to ensure fair access to bandwidth. These include over-provisioning at many points in their networks, adding bandwidth more or less continually, managing short periods of congestion by weighted packet drop, longer periods of congestion by QoS demotion, and persistent congestion by taking action against hogs, who are typically defined as people who cause the most congestion for three months in a row. There aren’t many such people consuming in the 99+ percentile month after month, but they do exist; in many cases, they’re malware victims and a threat to kick them off the net is the only thing that causes them to run an anti-virus.
    The faulty analysis here is consistent over-simplification of the subject matter; don’t assume that the people who run ISP networks are working from a similar level of simplicity.

  2. I couldn't possibly comment... December 5, 2009 at 11:41 pm #

    Would it be out of place to link to some operator comment? http://www.merit.edu/mail.archives/nanog/msg02545.html
    Further, if "neither Herman nor I am affiliated with anyone or getting money from anyone here" is an issue, will the very learned Herr Bennett, whom God preserve, perhaps so far unbend as to divulge the odd hint about ITIF's sources of funding?

  3. Richard Bennett December 6, 2009 at 1:00 am #

    I always find it amusing to see people who aren’t willing to disclose their names demanding that I account for my income.
    Thanks, but I’ll pass on that and let the facts speak for themselves.

  4. I couldn't possibly comment... December 6, 2009 at 6:15 pm #

    You haven’t stated any.

  5. Colin December 8, 2009 at 3:30 pm #

    Unfortunately the debate seems to focus on whether the assessment by Herman of the workings of TCP is accurate or not. It is unfortunate because it distracts from the discussion of whether or not data caps or other measures alleviate the 'financial crunch' ISPs experience. And yes, they do experience a squeeze, and yes, congestion occurs.
    Richard Bennett contends that there is a small percentage of users that clog the network, or rather he says: "persistent congestion by taking action against hogs, who are typically defined as people who cause the most congestion for three months in a row". Interestingly, he attributes some of that behaviour to "malware victims". If that were the cause, then data caps would not help to change the behaviour of people who do not install appropriate security measures (some simply do not understand the hazards, do not renew their anti-virus software subscriptions, or do not update their software with patches). If you want to change this behaviour, then why not offer them anti-virus software subscriptions and automatically inform them to take action?
    The added value for ISPs is that they can even generate some more revenue from doing this.
    The point Benoit is trying to make is that "heavy users" do not necessarily cause a problem. The problem is that congestion occurs at very specific points in time. Any ISP has utilisation (load) graphs and trend graphs showing how the bandwidth is used at each given moment of a typical day of the week, and how the aggregate load develops. These graphs show the growth as well as the types of traffic involved. But interestingly, congestion occurs at specific moments of the day, periods when many people "surf the web". If we wanted to democratise usage, then everyone would need to be 'taxed' according to their actual use. Forget about TCP streams and the supposed fairness of TCP. Data caps or not, if most users still download, upload or surf the net at the same moments, congestion still occurs.
    Richard Bennett seems to support data caps or other mechanisms. I, for one, doubt the usefulness of data caps to alleviate congestion. However, they may help with the real issue of balancing costs and revenues, simply because ISPs could use them to effectively increase the pricing of Internet access.
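    To make the volume-versus-peak distinction concrete, a sketch (assuming a hypothetical hourly[user][hour] byte-count layout): rank users by monthly volume, which is what a cap targets, and by usage during the congested hour, and see how little the two lists need to overlap:
        def top_users(counts, k):
            return set(sorted(counts, key=counts.get, reverse=True)[:k])
        def cap_vs_peak(hourly, peak_hour, k=10):
            """hourly[user][hour] = bytes. Compare the top k by total
            monthly volume with the top k during the congested hour."""
            monthly = {u: sum(hours.values()) for u, hours in hourly.items()}
            at_peak = {u: hours.get(peak_hour, 0) for u, hours in hourly.items()}
            overlap = top_users(monthly, k) & top_users(at_peak, k)
            print(f"{len(overlap)} of the top {k} by monthly volume are also "
                  f"in the top {k} during the peak hour")
    An overnight bulk downloader can top the monthly ranking without ever touching the peak.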

  6. Phillip Dampier December 8, 2009 at 7:43 pm #

    OK, then I’ll ask. Of course, you won’t answer. As you yourself have stated, “ITIF’s policy is not to release information about sponsorship, period.” That’s a warning bell if there ever was one.
    The truth is, ITIF is yet another corporate front group run out of K Street in Washington to advocate for cable and telco interests without leaving telltale fingerprints all over the public policy discussions.
    The facts speak best for themselves when they are actually provided, not obscured behind a corporate veil of secrecy.
    Phillip Dampier
    Stop the Cap!
    http://stopthecap.com

  7. Richard Bennett December 9, 2009 at 2:48 am #

    The question of Herman and Benoit's understanding of TCP and congestion is actually critical to the challenge Benoit has issued to ISPs. He's asking them to supply him with data so that he can analyze it and determine whether caps could possibly have any effect on congestion. Given that their understanding of TCP's behavior is clearly deficient, there's no reason to expect their analysis of any captured traffic would be sound. You can't very well demonstrate that you lack analytical skills and then demand people supply you with data to analyze; it's a waste of time for any ISP to participate in such a challenge. They're saying, in effect, "give us your data so we can mangle it." It's a stupid challenge.
    Every report on Internet traffic I’ve ever seen – the number is in the dozens – shows a clear power law distribution. Cisco presented a report to the FCC today that shows the “Top 1% and Top 10% of Global Broadband Subscribers Create 20% and 60% of Internet Traffic Respectively.” (http://www.openinternet.gov/workshops/docs/ws_tech_advisory_process/Cisco%20FCC%20Network%20Management%20Presentation%20120809.pdf , pg. 8) This isn’t news.
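    For what it's worth, shares of that general order fall out of any sufficiently heavy-tailed distribution; a quick sketch (the Pareto shape parameter is illustrative, not fitted to Cisco's data):
        import random
        def top_shares(alpha=1.4, n=100_000):
            """Draw Pareto-distributed usage and print the aggregate share
            carried by the top 1% and top 10% of users."""
            usage = sorted((random.paretovariate(alpha) for _ in range(n)),
                           reverse=True)
            total = sum(usage)
            for frac in (0.01, 0.10):
                k = int(n * frac)
                print(f"top {frac:.0%} of users carry {sum(usage[:k]) / total:.0%}")
        top_shares()  # shares in the same ballpark as the Cisco figures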
    The only question for caps is whether there's a significant number of the top 1% who are large consumers month after month. Felten says it's a different group every month, not the same group. That is a black-and-white analysis applied to a matter that has shades of gray. Some of the top 1% are persistent, and some are not.
    Caps are a way of dealing with the portion of the top 1% who are persistent, and there are other ways of dealing with the other part of the 1% and the 10%. There is no one-size-fits-all congestion strategy, but there is a set of tools that, taken together, manage the overall distribution of network resources. Some of these tools are threatened by the regulations the FCC has under consideration.
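    The "shades of gray" question is itself measurable. Given monthly top-1% membership sets (hypothetical data), a sketch of the persistent/transient split, using the three-consecutive-months definition mentioned earlier:
        def persistent_hogs(monthly_top, run=3):
            """monthly_top: chronologically ordered list of sets of user ids,
            each set being one month's top 1%. Returns the users who stay
            in the top group for `run` consecutive months."""
            persistent = set()
            for i in range(len(monthly_top) - run + 1):
                persistent |= set.intersection(*monthly_top[i:i + run])
            return persistent
    The size of that set, relative to the union of all the monthly top groups, is exactly what the two readings disagree about.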

  8. Richard Bennett December 9, 2009 at 2:52 am #

    I have no personal or financial interest in any Internet service provider, carrier, service provider, application provider, CDN, or similar company. Like Benoit Felten, I analyze the industry, provide consulting, and collect fees for speaking at public events and for writing. My views on network management have been well-formed for many, many years and are not the result of any current financial relationship.
    Thanks for bringing this up, Phillip, your questions display a firm grasp of the subject matter.

  9. Colin December 11, 2009 at 1:48 pm #

    Richard, I appreciate your comment. And indeed there is an issue with the understanding of TCP in the assumptions Herman and Benoit use.
    I am not debating the existence of a power law distribution in traffic loads at the backhaul. There is also a consistent traffic load profile, so we know when congestion is most likely to occur.
    As I see it, the real question Benoit poses is whether the top 1% (or 5%, or even 10%) are consistently the same individuals (or, more generically, the same access lines). First, why would I worry about the top x% of users consuming data? In fact I do not care except when the network gets congested and I feel forced to invest in additional capacity. If a user continuously downloads overnight when the traffic loads are very light, I do not care; the capacity is there anyhow. So all I should be interested in is how I, as an ISP, ensure some fairness when peaks in traffic load occur and the network experiences increased levels of congestion affecting all users (all users logged on at that moment, or those on a subnet). As you pointed out, TCP does not ensure this. But neither do caps. The point is that 'everyone' pays at the moment of congestion; we are all affected by increased latency.
    To use an analogy: this does not differ from traffic jams on roads. Capping how much someone is allowed to drive has no direct link to, or effect on, the number and length (duration) of traffic jams. But when you are in a traffic jam, you pay with wasted time and wasted gas. That does not stop us from driving, even at times when we know the network is congested. I am based in Europe, and some governments there have tried to persuade people to drive less by increasing taxes on gas. That did not work. Think about it: at toll roads we usually do not see a "congestion charge" (there is such a charge in London, and it kind of works). However, we apply different charges for cars and for trucks, because the latter cause more wear and tear on the roads (more trucks usually require more frequent repairs and maintenance work). But more traffic from one user, in and of itself, does not require additional investments or more maintenance. That is why caps provide the wrong incentive, if they provide one at all.
    On topic: if we need to somehow ensure fairness and avoid consistently heavy investments in capacity that is only used a fraction of the time, what we really want to do is motivate people to use the network capacity at different moments; hence we want to lower the peak, not necessarily lower the amount of traffic (in fact, most ISPs are encouraging users to do more as they roll out DOCSIS 3.0 and fiber!).
    Caps work only for those users that are consistently part of the major traffic peaks and that consume an extraordinary amount of traffic (e.g. streaming content and heavy BitTorrent use). A better way would be along the lines of "congestion charges" or a "right of way" charge. But then again, fines do not work (even if we provide all the required transparency so users understand they are causing congestion), because once fined it becomes a business transaction; in other words, you have paid to cause congestion.
    Indeed, it requires a set of tools. And some of these will indeed be banned by regulation. In itself that is not necessarily a bad thing, because there are other factors that regulators need to take into account. That is the result of living in a democracy.
    The real problem is that ISPs have priced their product too low and that they encourage people to use it as much as possible.
