Do data caps punish the wrong users?

28 Nov

In late 2009, Herman Wagter and I wrote an article entitled “Is the ‘bandwidth hog’ a myth?” postulating that the way most ISPs were looking at their users’ consumption patterns was misaligned with what really happens in access networks. In particular, we suggested that the limitations imposed on heavy users (the so-called “bandwidth hogs”) by ISPs hoping to alleviate congestion were unlikely to work because the ISPs’ worldview confused data consumption and bandwidth usage, i.e. how much data was downloaded over a whole period with how much bandwidth capacity was used at any given point in time.

The piece ended with a request for honest-minded ISPs to submit usage data from their networks for analysis, and a data set was specified. A number of ISPs responded that they were keen to collaborate, but many of them didn’t have access to the data that would have made the analysis possible. Finally, one such ISP, a mid-size company from North America, agreed to share a data set for analysis. We have now published the results from that study in a Diffraction Analysis report entitled Do data caps punish the wrong users? A bandwidth usage reality check. That report is for sale, but as promised I reproduce the executive summary here.

In the last couple of years, the considerable growth in internet traffic has pushed an increasing number of internet service providers (ISPs) around the world to implement strategies to limit the usage of broadband services by their customers. Most of these strategies revolve around data caps: a level of monthly data consumption that triggers pay-as-you-go mechanisms at steep per-megabyte rates.

Our thesis was that most of these strategies are implemented without an accurate understanding of the customers’ real time usage patterns, and as a result such strategies are neither accurate in targeting disruptive users, if they exist, nor fair to users who may consume a lot of data overall but not in a disruptive way. Further, our analysis aimed at assessing whether a very small number of users could indeed be considered to degrade quality for all other users.

In order to investigate these issues, we took real user data for all the broadband customers connected to a single aggregation link and analyzed the network statistics on data consumption in five-minute time increments over a whole day. The data was shared by an ISP in North America who wanted to understand its own network usage. Our analysis tracked both data consumption (i.e. total MB downloaded) and bandwidth usage (i.e. Mbps being used).

Our analysis confirms that data consumption is at best a poor proxy for bandwidth usage:

  • The top 1% of data consumers (hereafter Very Heavy consumers) account for 20% of the overall consumption.
  • Average data consumption over the period is 290 MB, while consumption for Very Heavy consumers is 9.6 GB. This roughly equates to data consumption of 8.7 GB and 288 GB per month, respectively.
  • However, only half of these Very Heavy consumers are customers of the highest service tier (6 Mbps), which implies that half of them have bandwidth usage restricted to 3Mbps (the next service tier) or lower.
  • 61% of Very Heavy data consumers download 95% of the time or more, but only 5% of those who download at least 95% of the time are Very Heavy data consumers.
  • While 83% of Very Heavy data consumers are amongst the top 1% of bandwidth users during at least one five minute time window at peak hours, they only represent 14.3% of said Top 1% of users at those times.
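The distinction the report draws between the two metrics can be illustrated with a short sketch. All sample values and user names below are invented for demonstration and are not the report's data; with only three toy users, a top-third cutoff stands in for the top 1%:

```python
# Illustrative sketch: data consumption (total MB over the day) vs.
# bandwidth usage (MB moved in any single 5-minute window).
# All numbers are made up for demonstration.

def total_consumption(samples):
    """Sum per-window MB into a per-user daily total."""
    return {user: sum(windows) for user, windows in samples.items()}

def top_fraction(totals, fraction):
    """Return the set of users in the top `fraction` by total."""
    ranked = sorted(totals, key=totals.get, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return set(ranked[:cutoff])

# Per-user MB downloaded in each 5-minute window (288 windows per day).
samples = {
    "steady": [10] * 288,             # 10 MB every window -> 2880 MB total
    "bursty": [500] * 4 + [0] * 284,  # four huge windows  -> 2000 MB total
    "light":  [1] * 288,              #                        288 MB total
}

totals = total_consumption(samples)
heavy = top_fraction(totals, 0.34)  # "top third" stands in for the top 1%

# "steady" is the heaviest *consumer* yet never exceeds 10 MB per window;
# "bursty" consumes less overall but dominates four windows outright.
peak_user = max(samples, key=lambda u: max(samples[u]))
print(heavy, peak_user)
```

The toy data makes the report's point concrete: the heaviest consumer over the day is not the user who loads the link hardest in any given window.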

Bandwidth usage outside of periods when the aggregation link is heavily loaded (which we arbitrarily set at 75% load for this study) has no impact on costs or other users. Therefore our analysis of bandwidth usage focused on the three hours in the day where the link was loaded above 75%. The results show that while the number of active users does not vary significantly between 8 AM and 1 AM, the average bandwidth usage does vary significantly, especially around late afternoon and evening. This suggests that the increase in the aggregation link load is not a result of more customers connecting at a given time, but a result of customers having a more intensive use of their connections during these hours.
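The peak-window selection described above can be sketched in a few lines; the link capacity and per-window traffic figures are invented for illustration, with only the 75% threshold taken from the study:

```python
# Illustrative sketch: find the 5-minute windows in which the aggregation
# link is loaded above 75% of capacity. Capacity and traffic are toy values.

CAPACITY_MBPS = 100.0
THRESHOLD = 0.75

# Average Mbps on the link in each 5-minute window of a (shortened) day.
link_load = [40, 55, 60, 78, 82, 90, 76, 60, 30]

peak_windows = [i for i, mbps in enumerate(link_load)
                if mbps / CAPACITY_MBPS > THRESHOLD]

# Only behaviour inside these windows can affect other users or costs.
print(peak_windows)
```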

An analysis of customers contributing to peak bandwidth usage yielded some interesting results:

The proportion of bandwidth allocated to Very Heavy data consumers diminishes when the aggregation link load is above 75%. While this suggests a fairer resource allocation during peak times, the link was never loaded enough in our data set to assess whether or not that resource allocation continues to be fair when there are no more resources to allocate (95% load or higher).

42% of all customers (and nearly 48% of active customers) are amongst the top 10% of bandwidth users at one point or another during peak hours.

6% of all customers (and 7.5% of active customers) are amongst the top 1% of bandwidth users at one point or another during peak hours.

We assumed that if disruptive users exist (which, as mentioned above, we could not prove), they would be amongst those that populate the top 1% of bandwidth users during peak periods. To test this theory, we crossed that population with users who are over cap (simulating AT&T’s established data caps) and found that only 78% of customers over cap are amongst the top 1%, which means that one fifth of the customers being punished by the data cap policy cannot possibly be considered disruptive (even assuming that the remaining four fifths are).
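The cross-check described above is essentially a set intersection. A minimal sketch, with hypothetical user IDs and an invented over-cap list standing in for the simulated caps:

```python
# Illustrative sketch: which over-cap users are also peak-time top-1%
# bandwidth users? All user IDs and proportions are invented.

over_cap = {"u1", "u2", "u3", "u4", "u5"}    # exceeded the monthly cap
peak_top1 = {"u1", "u2", "u3", "u4", "u9"}   # top 1% bandwidth at peak

both = over_cap & peak_top1
capped_but_not_disruptive = over_cap - peak_top1

share = len(both) / len(over_cap)
print(f"{share:.0%} of over-cap users are peak top-1% bandwidth users")
print(f"{len(capped_but_not_disruptive)} over-cap user(s) never hit the peak top 1%")
```

In this toy population, one of five capped users never appears in the peak top 1%, mirroring the "one fifth punished but not disruptive" finding in shape, though not in the actual figures.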

Data caps, therefore, are a very crude and unfair tool when it comes to targeting potentially disruptive users. The correlation between real-time bandwidth usage and data downloaded over time is weak and the net cast by data caps captures users that cannot possibly be responsible for congestion. Furthermore, many users who are "as guilty" as the ones who are over cap (again, if there is such a thing as a disruptive user) are not captured by that same net.

In conclusion, we state that policies honestly implemented to reduce bandwidth usage during peak hours should be based on better understanding of real usage patterns and should only consider customers’ behavior during these hours; their behavior when the link isn’t loaded cannot possibly impact other users’ experience or increase aggregation costs. Furthermore, data caps as currently implemented may act as deterrents for all users at all times, but can also spur customers to look for fairer offerings in competitive markets.

19 Responses to “Do data caps punish the wrong users?”

  1. Serge November 30, 2011 at 6:50 pm #

    Just to understand what was being measured — what is a “single aggregation link”? At what place, relative to end-users, does this occur — i.e. is it the first layer-2 / layer-3 aggregation link upstream from the access segment?

  2. IamME November 30, 2011 at 7:23 pm #

    I’ve said this a number of times on various boards. I have a 60G/month data cap. I pay $60 (+tax)/month for this paltry amount. So every gig I do not use is essentially a wasted dollar (+tax) out of my pocket. So, that being said, I make a concerted effort to use up every bit of my data every month. If I had no data caps, I would actually consume less data because that perceived monetary loss would no longer apply. I’m sure I would go over 60G some months, but overall I wouldn’t even come close most months.

  3. Bob November 30, 2011 at 8:08 pm #

    The discussion does not dig into the underlying cost issues. Typically, these are a) build-out of last-mile capacity, and b) purchasing transit
    regarding b)
    Networks that cater to the content-providers are often eager to install private peering with the eyeball-owner networks (the last-mile operator networks). These eyeball-owner networks frequently refuse these free connections and indeed ask the content providers to PAY to connect to their network. (The eyeball-owner is trying to get money on both sides: charging the consumer and charging the content-provider, by charging the content-provider’s network operator.) In many cases, the cost of “transit” is a myth, because the networks serving content-providers will pay the cost of direct-peering circuits.
    regarding a)
    Build-out of last-mile capacity should only plague legacy providers (cable), who have to restructure their outside plant in order to substantially increase their bandwidth per home design point. (e.g. sub-split nodes, clean-up the upstream, replace and align amps, etc) NEW builds, like FiOS and UVerse shouldn’t have these issues. Even DSL shouldn’t have this issue. These designs inherently have either dedicated BW per home (DSL and UVerse) or large capacity per home (e.g. Gigabit or more shared by 32 homes). There is no justification for any sort of BW cap for these customers. (DOUBLY so for Verizon and AT&T, both of whom directly own and operate very large international IP backbone networks, each of which has settlement-free peering; they don’t pay to connect to anyone).
    For the cable operators to move into a world with 100% HD video, and increasingly greater usage of on-demand video, they will be forced to rebuild their plants. They are just dragging their feet to milk as much money out of the customer as possible. Same problem as the financial industry, greed at the top.

  4. Serge December 1, 2011 at 3:22 am #

    Bob, can you explain what you mean? Ignoring the bit about transit, which is irrelevant to any of this, your reasoning boils down to saying that access networks either (a) don’t have shared aggregation, or (b) have really big access links and aggregators that nobody could ever fill ’cause they’re, like, really big. Both are wrong, but which is it?

  5. bobbyjimmy December 1, 2011 at 6:27 am #

    “While 83% of Very Heavy data consumers are amongst the top 1% of bandwidth users during at least one five minute time window at peak hours, they only represent 14.3% of said Top 1% of users at those times. ”
    this seems to imply that the top 1% of data users are MASSIVELY overrepresented in bandwidth usage during peak hours. i.e. they are top 1% of data users, but represent 14% of top bandwidth users. So tackling high data users would actually be fairly effective at reducing bandwidth utilised.
    The other question is that if we agree that overall usage is driving costs to the ISP because of bandwidth usage – regardless of who does it, what methods could be used to more fairly allocate bandwidth?
    A charging method that has a data cap might be a fairly crude mechanism with a whole lot of false positives and false negatives, but it is at least understandable and relatively easily measurable: a file that is 5GB costs $x.
    A more fair method might be to just charge for bandwidth actually used during times of congestion, but that would be extremely impractical to measure and perceived as very unfair as a consumer cannot generally control how fast they download something. a file that is 5GB would cost nothing if downloaded at night, but if it was slowly downloading during the day it would cost more than if it was downloaded quickly between 7 and 8pm, for example. This method might be most fair in matching price to costs, but I bet customers wouldn’t perceive it as fair – charging for usage at the time when they most want to use the internet!
    As an analogy, can you imagine if electricity companies tried to charge $2 for the electricity used to boil a kettle during the ad breaks of the Super Bowl, but it was free when the match was actually being played? This would be more fair, but extremely difficult to understand and bill for, as well as perceived as being unfair by the customer.

  6. Benoît FELTEN December 1, 2011 at 11:18 am #

    Serge,
    What’s being measured is traffic from the first router from the customer line, sitting in the Central Office (what we in France would call the NRA).

  7. Benoît FELTEN December 1, 2011 at 11:23 am #

    Bob, the report does go into the cost issues. The build out (in my opinion) is not really the issue because it will never be decided on the basis of increase in traffic demand alone (or if it is, it’ll be too late…) In other words, targeting “bandwidth hogs” will not change the case for building or not building a fiber access infrastructure.
    The whole reasoning in our report is that the variable cost structure is tied solely to peak traffic (whether in the access, the aggregation and core or transit and peering) and therefore the only way in which disruptive users (if they exist) affect costs is during these times.

  8. Benoît FELTEN December 1, 2011 at 11:30 am #

    Bobbyjimmy, yes and no. The point being that if top 1% bandwidth users are considered disruptive (which we didn’t find any proof of in the data set), why single out these 14% ?
    Your view is the cynical view: they’re easier to target, and it’s easier to explain why. But that doesn’t make it fair. One of our goals in this was to unravel the rhetoric of data caps with real data. The rhetoric (that these guys are hurting all the other guys and therefore should be punished) doesn’t really come through as accurate.
    I’m not naïve though, I don’t think it’ll stop some ISPs at least of going for the easy solution, no matter how unfair.
    As to your second question, it’s clearly not easy. A “fair” way to do it in my opinion would be to set up a real-time ceiling on bandwidth available for all users during actual peaks. That doesn’t require any DPI or similar mechanisms and would be easy to spell out to customers without punishing anyone in particular.
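    A minimal sketch of such a real-time ceiling, with invented per-user rates and an assumed 2 Mbps ceiling:

```python
# Illustrative sketch of the real-time ceiling idea: during a detected
# peak, clamp every user's bandwidth to the same ceiling. Values invented.

def apply_ceiling(usage_mbps, ceiling_mbps):
    """Clamp each user's current bandwidth to the ceiling."""
    return {user: min(mbps, ceiling_mbps) for user, mbps in usage_mbps.items()}

usage = {"a": 0.5, "b": 4.0, "c": 2.5}  # Mbps per user right now
capped = apply_ceiling(usage, 2.0)       # 2.0 Mbps ceiling during the peak

# Light users ("a") are untouched; only users above the ceiling are clamped,
# so nobody is singled out by their behaviour outside the peak.
print(capped)
```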

  9. ISP-Marketier December 1, 2011 at 12:03 pm #

    So, metro-ethernet aggregation networks and backbones grow on trees, that’s why there is no justification for a usage price?

  10. ISP-Marketier December 1, 2011 at 12:42 pm #

    What about attaching an additional bandwidth price (“pay as you use”) to services customers are willing to pay for? Service or content providers could decide: if they need a dedicated service quality, they should be able to pay a little money for the uncongested transport quality.
    To ask the customer to pay for high quality access in order to use the premium service of another service provider will just not work: it’s like going to the movies and paying at one booth for the movie itself and then going to another cashier to buy a premium seat in the back row – only to find that the “movie ticket” you have just bought unfortunately is out of back-row seats…
    Those guys who are selling a service to the customer that depends on a certain service quality should be held responsible for making sure the service is delivered in sufficient quality. If overall best-effort transportation does not fit their needs, they should be made responsible for that.
    In this way, there is no need to punish bandwidth hogs just because the business model is lacking fair pricing rules. I find it very disturbing to sell a product to a customer and then, if he really has fun with it and likes to intensify his usage of my product, instead of profiting from his fun I have to cut it down because I don’t have a business model where I can refinance the cost of the grown fun.

  11. Brian December 1, 2011 at 3:27 pm #

    My electricity company DOES bill me more to boil a kettle during peak time windows (which shift seasonally) than non-peak windows. They charge me more to do laundry during the day than in the evening or at night and they charge me more to run my A/C during the hot afternoon when everyone else is doing that too.
    Everyone understands and accepts “peak period” billing for electricity.
    Electricity is not really an apples to apples comparison though since electricity is a finite resource that has a $$/unit cost factor. Bandwidth does not.
    A network connection costs no more to run full than it does to sit empty so there really is no actual $$/unit cost.
    That’s not to say that there cannot be congestion. What I don’t understand is why networks don’t operate on the simple “equal share” philosophy when there is congestion.
    If a pipe has X bandwidth and there are N users, each user gets X/N bandwidth, assuming all N users want that much. If some want/need less than X/N then they get as much as they need and actually result in the X/N value being higher for those that are reaching X/N.
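    The equal-share philosophy described here is known as max-min fairness; a minimal sketch, with invented link capacity and per-user demands:

```python
def max_min_fair(capacity, demands):
    """Allocate `capacity` across `demands` so that users wanting less than
    the fair share get exactly what they ask for, and the leftover raises
    the share for everyone else (max-min fairness)."""
    remaining = capacity
    alloc = {}
    # Satisfy users in order of increasing demand; each fully satisfied
    # user leaves more headroom for those still unserved.
    pending = sorted(demands.items(), key=lambda kv: kv[1])
    while pending:
        fair = remaining / len(pending)
        user, want = pending[0]
        if want <= fair:
            alloc[user] = want
            remaining -= want
            pending.pop(0)
        else:
            # Everyone left wants at least the fair share: split evenly.
            for user, _ in pending:
                alloc[user] = fair
            pending = []
    return alloc

# Toy example: a 10 Mbps pipe, one light user and two heavy users.
print(max_min_fair(10.0, {"a": 2.0, "b": 6.0, "c": 6.0}))
```

With these invented numbers, the light user gets its full 2 Mbps and the two heavy users split the remaining 8 Mbps evenly, which is exactly the X/N-with-redistribution behaviour the comment describes.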

  12. Martin Barry December 1, 2011 at 9:40 pm #

    Taking data consumed as a proxy for bandwidth used is a tried and tested method which is both:
    - easy for the consumer to understand and work with
    - relatively simple for the ISP to implement
    I don’t think anyone has ever argued that there is a perfect correlation between the two. However searching for a more accurate way to add a “pricing signal” to customer’s usage tends to end up with solutions that are hard to comprehend and/or hard to implement.
    Most data caps are a single ceiling for the entire billing period however, just like the electricity example mentioned above, they can differentiate peak and off-peak periods and have a higher cap (or no cap) during off-peak. That’s about the limit of complexity that seems to work before the customer gets buried by the technical jargon and legalese in their contract.

  13. Angry Voter December 3, 2011 at 3:28 pm #

    Bandwidth can’t be stockpiled.
    Imagine having more food than you can eat and letting it spoil instead of sharing it.
    Only total scumbags would do that.
    Network monopolists are those scumbags.

  14. TE December 5, 2011 at 3:51 pm #

    Isn’t the issue really that “most” providers are attempting to discourage/stifle bandwidth usage due to the fact that their network CAN’T really deliver more bandwidth if everyone were using more. Even those with capacity – FTTx networks – would prefer the current status quo because delivering more bandwidth to consumers at lower prices would inevitably cause SMB customers to ask why they have to pay much, much more for similar products.

  15. Benoît FELTEN December 5, 2011 at 3:54 pm #

    TE,
    You're correct that they see this as a threat. Then again said threat suggests that they've been gouging business customers forever. If an operator has nothing better to sell an SMB than best effort bandwidth at a higher price than the equivalent residential product, they'd really be calling it on themselves…

  16. CB February 10, 2012 at 5:29 am #

    @Benoit Felten, you’re right on the money….
    “The point being that if top 1% bandwidth users are considered disruptive (which we didn’t find any proof of in the data set)” and “disruptive users (if they exist)” ~ and you never will find proof because they do not exist. Though I firmly expect ‘proof’ to be engineered in a vain attempt to keep their pathetic and dishonest pricing schemes. We now have enough history to see exactly what they will do. They will use fees, taxes to legislate against consumers to hold on to their monopoly / oligopoly. Just keep up the honest analysis, we need more voices like yours.
    Anyone in the United States that has used a Cable Internet provider can attest to the truth of your assertions. They just need to install tools to KNOW (dd-WRT, OpenWRT, or Tomato firmware on a supported firewall/router, and you can see your bandwidth in real time, 24 x 7 x 365. Hint: the SpeedTests lie to consumers and should NEVER be trusted as honest or accurate!)
    If they regularly access their network in the wee hours of the morning, from 3am – 5am, or in the middle of the morning or afternoon when 60 – 80% of their neighbors are at work, school, or out of their homes, then they will KNOW…assuming they have tools to show them the truth.
    They will see their bandwidth restricted, limited, throttled for no legitimate reason.
    For some ludicrous reason, the cable providers actually expect us to believe that we have multiple neighbors that are ‘disruptive’ or are somehow ‘stealing’ our precious bandwidth…what a joke (on us consumers).
    Hey Cable companies, you honestly expect American cable Internet consumers to believe that on every trunk these phantoms “of your imagination” exist? really, Really, REALLY… So crazy that it feels like a bad South Park episode.
    You want us to believe that on every block; in every neighborhood; in every zip code; in every community; in every town; in every county; in every state of this HUGE country that there is one or more crazed big bandwidth users stealing from their neighbors… When you think about it, the entity that asserts that crazy hypothesis is the one who is insane.
    I am surprised they can lie with a straight face!
    There is only one solution for consumers. Move! Move to a community that offers the same bandwidth upstream as downstream without throttling, restrictions, limits, artificially below the limit of your plan of any kind. In such a community the plan becomes the cap, a cap that they will never saturate, ever. Extra points to those that realize not all fiber is equal, FIOS does not qualify as they offer less bandwidth upstream than downstream, ie. 50Mb/5Mb. Anyone on FIOS running dd-WRT that can share with the rest of us what they are throttled back to?
    Move to a Fiber To The Home (FTTH) community! And not one who ‘promises’ to do it, but one who has already done it. The real telling story is how few communities offer FTTH. In Chattanooga, they started this process in 1990 and did not finish until December 2010. They could have finished faster except the Telcos and Cable companies kept creating BS lawsuits in a vain attempt to prevent progress. To limit competition. To prevent them from giving their citizens decent service. The Telco/Cable oligopoly failed, EPB, Chattanooga politicians and especially Chattanooga Internet customers have succeeded! And they offered to the oligopoly to provide the service first and as in Wilson NC, the incumbent oligopoly refused.
    With FTTH has come increase in small business, increase in jobs and an increase in the local economic community, as they knew it would. In fact they are creating jobs in this depression/recession thanks specifically to FTTH.
    Does it even make sense to purchase a home without FTTH today? They have determined that FTTH adds $5,000.00 to the value of the home. That is above the $3,000.00 cost of an individual FTTH connection in Utah via Utopia.
    There are currently less than 30 communities http://is.gd/HCi80q in the United States that have FTTH build-outs offering synchronous, bi-directional bandwidth. Specifically, whatever your Internet plan, you will have the exact same bandwidth upstream as downstream. When Google completes all 5 of their planned FTTH communities (Kansas City being the first) there will be barely over 30 communities in the USA.
    Less than 30 in 2012, http://is.gd/HCi80q , that is just plain anti-American. But it is our reality, like it or not.
    As for me and mine, we want opportunity, we want freedom, we want jobs, we want hope, we don’t want caps, and should I actually use more than 10Mb/10Mb of bandwidth, well that’s okay as they offer 30Mb/30Mb; 50Mb/50Mb; 100Mb/100Mb and even 1Gb/1Gb. There is no need for caps of any kind. So I have valid options, and that, my friends, is what capitalism was supposed to be all about!
    Even better there is no reason to wrongfully slander my neighbors and try to make me think one of them is evil, when none are.
    Remember, if you have only two options (Cable, DSL), you have no options. WAKE UP, analyze, then move! Your children’s children will thank you!
    Another parting thought, if your local Telco, cable and cellular companies are doing this to you, they can not do it to you without the help of your local politicians! They are the 1% and you are part of the 99%. If they can do this to you, so can other corporations.
    Which other corporations are buying off your elected officials to give you the shaft? If one is, others are….
    What you need to ask yourself is, what are you going to do about it. Why not research, learn and run for office and fix it for your community. And when you do I will happily add your community to the Fiber FTTH Synchronous Internet Access in the USA map. Just make sure your providers offer as much bandwidth, unthrottled, upstream as downstream via a Fiber To The Home Internet connection and you will qualify.
    And your community will create even more jobs…as we have learned by following Chattanooga TN success. Because they did what was right for their community. Don’t you deserve it, of course you do.

  17. Fortress of Solipsism March 3, 2012 at 5:39 pm #

    You assume that capping the heavy users will curb their peak usage. As a hypothetical, let’s say I’m a frequent torrent user. I get a notice from my ISP saying I used too much data so I decide to stop torrenting so much. However, I still expect to use Netflix at 7pm when I want to see a TV show or movie and I’m still going to watch the random internet video. All the cap has done is lower my total data usage while I still consume just as much data in the peak hours when it actually matters.

  18. willbalsham June 25, 2013 at 9:28 pm #

    is there any way to get the raw data you used in this analysis? Is it just a table of monthly bandwidth per user?

    • fiberevolution June 26, 2013 at 10:40 am #

      The data is proprietary, and it’s much more complex than bandwidth per user. Depending on who you represent and why you want access to the data, we can talk. Send me an email at contact at diffractionanalysis dot com.
