It's a net neutrality whodunnit: Boffins devise way to detect who's throttling transit

Back when net neutrality was a thing, engineers at the Center for Applied Internet Data Analysis (CAIDA) tested US interdomain links, and found them mostly flowing freely. News of Verizon throttling a California fire department's data suggests things have already changed in America, but if CAIDA's work gets legs, the world will …

  1. DonL

    "an excessively congested link will see packets dropped when their time-to-live (TTL) expires."

    I don't think that's true. When a packet passes through a router the TTL is decreased by one, and when the TTL reaches zero the packet is discarded. This is done primarily to prevent packets from ending up in an endless loop. Additional time spent in the buffer does not decrease the TTL any further, as the TTL is not actually time related.

    What happens with congestion is that the buffer of the router fills up because the packets cannot be forwarded fast enough; when the buffer is completely full, new packets are discarded as there is no free memory to store them in.
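    A rough Python sketch of both behaviours, purely for illustration (the hop count and buffer depth are made-up numbers):

    from collections import deque

    QUEUE_SIZE = 4  # made-up output buffer depth

    def forward(ttl):
        """Router behaviour: decrement TTL per hop, discard at zero (stops routing loops)."""
        ttl -= 1
        return None if ttl <= 0 else ttl

    def enqueue(queue, packet):
        """Congestion behaviour: tail drop when the output buffer is full."""
        if len(queue) >= QUEUE_SIZE:
            return False  # no free buffer memory, the new packet is dropped
        queue.append(packet)
        return True

    ttl = 3
    while ttl is not None:
        ttl = forward(ttl)  # TTL only changes per hop, not per second spent queued
    print("packet discarded once TTL hit zero")

    queue = deque()
    for i in range(6):
        if not enqueue(queue, f"pkt{i}"):
            print(f"pkt{i} tail-dropped, buffer full")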

    1. Jellied Eel Silver badge

      TTL=Time To Lie?

      Normal TTL is to decrement TTL per hop, so if it exits the egress interface on a link and TTL's decremented to zero, the router at the other end should see TTL exceeded and drop the packet. Which is kinda inefficient, but that's IP.

      I guess you could try decrementing TTL based on time spent in a queue or buffer as a possible congestion avoidance mechanism, but AFAIK that would be non-standard and not implemented. Congestion's usually a simple tail drop, give or take any queueing prioritisation using IP precedence bits. Which would be verboten in a neutral net.

      Time does play a part in TCP sessions as part of their congestion avoidance mechanisms, but that's an application thing using Karn's algorithm rather than TTL. And to add to the fun, in some MPLS implementations IP TTL values are copied to MPLS so can be decremented per hop, but that can also be disabled, so the number of MPLS LSPs is not visible at the IP level. And from memory, I think you can also decrement TTL by 2 per hop instead of 1, but I can't remember why that may be a thing.
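      For the curious, Karn's rule plus RFC 6298-style smoothing boils down to something like this rough sketch (nothing to do with TTL, and the constants are just the usual RFC values):

      class RtoEstimator:
          """Sketch of TCP's retransmission timer: Karn's rule + RFC 6298-style smoothing."""
          ALPHA, BETA = 1/8, 1/4  # standard smoothing constants

          def __init__(self):
              self.srtt = None
              self.rttvar = None
              self.rto = 1.0  # seconds

          def on_ack(self, rtt_sample, was_retransmitted):
              if was_retransmitted:
                  return  # Karn's rule: the sample is ambiguous, so ignore it
              if self.srtt is None:
                  self.srtt, self.rttvar = rtt_sample, rtt_sample / 2
              else:
                  self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt_sample)
                  self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt_sample
              self.rto = self.srtt + 4 * self.rttvar

          def on_timeout(self):
              self.rto *= 2  # exponential backoff until a clean sample comes through

      est = RtoEstimator()
      est.on_ack(0.120, was_retransmitted=False)
      print(round(est.rto, 3))  # 0.36: SRTT + 4*RTTVAR after the first clean sample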

      1. bombastic bob Silver badge
        Devil

        Re: TTL=Time To Lie?

        well if network admins thought they could hide 'bad acting' by mucking with probe packets, I bet they would...

        Anyway, I think it's a good idea to run probes like this during peak periods just to know how the network is doing, maybe re-route things around bottlenecks, do some load balancing and so on.

        The 'traceroute' algorithm uses something like this already, but it's based on the number of hops and the receipt of a control packet that tells you that you exceeded the number of hops. In this case the packet is just being dropped altogether when TTL is exceeded, so if it's not somehow indicating that it was dropped, the process becomes a bit more difficult to track.
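        Something along these lines with scapy, say (a rough sketch, not a polished tool: it needs root for raw packets, and 'example.com' is just a placeholder target):

        from scapy.all import IP, ICMP, sr1  # pip install scapy; needs root/admin

        TARGET = "example.com"  # placeholder destination

        for ttl in range(1, 31):
            # Send an echo request with an artificially small TTL; the router at hop
            # `ttl` should drop it and send back an ICMP Time Exceeded (type 11).
            reply = sr1(IP(dst=TARGET, ttl=ttl) / ICMP(), timeout=2, verbose=0)
            if reply is None:
                print(f"{ttl:2d}  *  (dropped silently or rate-limited)")
            elif reply.type == 11:
                print(f"{ttl:2d}  {reply.src}")
            else:  # echo reply: destination reached
                print(f"{ttl:2d}  {reply.src}  (destination)")
                break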

        Still, I'd like to see some 'standard POSIX tool' that could do this, at any rate. Let's see if it can become a new internet standard.

        Oh and one more thing, with respect to the article: Verizon throttling bandwidth of firefighters - it was in their contract, unfortunately (and I blame both sides for that). Cellular contracts with data caps and throttling (if you go over the cap) have been around throughout the so-called 'net neutrality' regulation period from the FCC. So nothing changed at all, with respect to FCC de-regulation. Connecting the 'net neutrality' de-reg at the FCC with Verizon's data plan throttling practice is FUD, at best. Come on, El Reg, you can do better than THAT!

        1. Jellied Eel Silver badge

          Re: TTL=Time To Lie?

          Anyway, I think it's a good idea to run probes like this during peak periods just to know how the network is doing, maybe re-route things around bottlenecks, do some load balancing and so on.

          That's where the fun begins. CAIDA's measuring across multiple networks, or ASNs in 'net speak. When traffic's traversing multiple networks, ISPs have limited influence over how traffic behaves once it's left their ASN. So your request for say, 4K kitteh videos could go out via an uncongested link, but the video stream comes back in via a congested one. Asymmetric traffic flow is very common. Then there's load balancing, which IP doesn't support, and if it's tried, can just lead to interesting problems with packet re-ordering. Then it can get even more interesting when the traffic's mostly UDP, which is a very dumb protocol. So the easiest solution is to throw bandwidth at the problem, but that's back to arguments over who pays..

          Still, I'd like to see some 'standard POSIX tool' that could do this, at any rate. Let's see if it can become a new internet standard.

          CAIDA's methods are kinda standard, but their measurements rely on probes. For user-level monitoring, the best tool to use is probably Smokeping, which is available in most flavors of binary.

  2. David 132 Silver badge
    Flame

    News of Verizon throttling a California fire department's data suggests things have already changed in America

    Can we please stop this misconception right now?

    The Cali fire dept thing was a case of them going over the data allowance on their plan, and had nothing to do with Net Neutrality. It would have been a NN issue if Verizon had been, say, allowing them to access FireSupplyCo-Who-Have-A-Deal-With-Verizon.com at full speed, but throttling their traffic to BobsFireSupplies.com and other sites.

    Now you can argue that either the fire dept didn’t pay for the right level of service, or that Verizon mis-sold their package, or Verizon lied about what “unlimited” data meant...but none of that is NN.

    There are MANY reasons to loathe Verizon (and AT&T) but let’s not undermine the Net Neutrality fight by conflating it with unrelated dickish behavior.

    1. Inventor of the Marmite Laser Silver badge

      I thought the issue was the fire dept had an "unlimited" plan, which Verizon decided was actually limited.

      1. Jellied Eel Silver badge

        (Bit of an accidental vote cos I couldn't figure out how to cancel it)

        The FD had a data SIM with a 25GB allowance. For whatever reason, that became interpreted (or misinterpreted) as 'unlimited', even though their filing mentions the allowance, and what happened if that was exceeded.

        Somehow, this has become part of the 'Net Neutrality' holy war and mythology, even though it was a simple case of the customer not understanding their contract. And personally I think the 'U' word should be banned in telco service contracts, because they never are unlimited, nor could be in all circumstances. But hey, that's marketing (obligatory RIP Bill Hicks).

        1. Ian Michael Gumby
          Boffin

          @Jellied Eel

          It depends on the contract.

          The 'unlimited' plans have contract language saying that if the customer exceeds X GB per month, their network connection will be throttled. However, they are still able to use the network to send and receive data.

          The alternative is either to cut them off when the amount of data sent/received hits X, or to charge a premium for the next Y GB over and automatically bill that premium again for every further Y GB they go over on their plan.
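          In pseudo-billing terms, something like this (all the numbers are made up, not anyone's actual tariff):

          def bill_throttled_plan(used_gb, cap_gb=25, monthly_fee=40.0):
              """'Unlimited' style: flat fee, but speed gets throttled past the cap."""
              throttled = used_gb > cap_gb
              return monthly_fee, throttled

          def bill_overage_plan(used_gb, cap_gb=25, monthly_fee=40.0, per_gb_over=10.0):
              """Metered style: every GB over the cap is billed at a premium."""
              overage_gb = max(0, used_gb - cap_gb)
              return monthly_fee + overage_gb * per_gb_over, False

          print(bill_throttled_plan(30))  # (40.0, True)  -- still connected, just slow
          print(bill_overage_plan(30))    # (90.0, False) -- full speed, bigger bill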

          If you think this is bad, try having commercial grade service on the same network that provides residential service for your broadband. I would be pissed, except that I can survive the downtime and, if necessary, use my cell as a router, where the penalty for going over my data plan is much less than the cost of going with an ISP and trying to get a fiber pulled to the building. (Aren't SOHOs great! :-P )

      2. David 132 Silver badge

        I thought the issue was the fire dept had an "unlimited" plan, which Verizon decided was actually limited.

        Exactly (and that’s what I was trying to convey). Nothing to do with NN, and everything to do with telcos’ arbitrary bastardization of the word “unlimited”.

        1. doublelayer Silver badge

          This is completely correct. You could put the original blame on Verizon selling an annoying package or the fire department for not paying attention. To the extent there was blame, it would be Verizon not lifting the throttling during an emergency. This is definitely not a net neutrality thing, and I think our writer may have mixed these things together. I've reported this to tips and corrections, in the hope that it was a mistake or the reporter was just tired at the time.

          1. Alan Brown Silver badge

            "You could put the original blame on verizon selling an annoying package or the fire department for not paying attention."

            Or you could invoke laws about truth in advertising.

            Saying "Unlimited" in large print and then imposing strict limits in the small print is the kind of thing that generally makes regulators very unhappy.

            The fact that people have leapt to Verizon's defence on this shows how many Stefs(*) there are reading El Reg (or that Verizon has a number of stooges here). There have been a huge number of legal decisions around the world saying you can't contract out of advertising offers using weasel terms like Verizon has done.

            (*)As in Userfriendly.org

    2. David 132 Silver badge
      Facepalm

      Ah, 4 downvotes for my factual, explanatory comment (as of the time of writing; bring it on).

      I can only assume that there's a few Verizon/AT&T employees reading this site... don't worry, I'm sure your mommas don't know what you do for a living.

    3. Ian Michael Gumby
      Boffin

      @David 132 Not exactly.

      Yes, you are correct that this isn't a Net Neutrality issue.

      However your example isn't correct.

      Net Neutrality deals with the peering agreement between two networks and their traffic.

      A better example would be if the Fire Department's central command were on L3 and, because of all the real-time high-def video being shared, news reports, weather, etc. (meaning a lot of traffic) flowing one way to the fire stations that use Verizon, Verizon throttled L3's traffic coming from the Fire Department's HQ, or all of L3's traffic to Verizon.

      Many people don't understand how the internet actually works, including politicians who are supposed to get the facts before they vote.

      Note: I tried to give you a better example, I am sure that you can poke holes in it. But you get the idea.

  3. Sureo

    If user A is being throttled because of exceeding some cap, how can they measure this? Other users on the same link would be fine. Couldn't they only tell if the link was being throttled for everyone?

    1. Jellied Eel Silver badge

      Terminological inexactitude, and cold, hard commercial reality.

      Couldn't they only tell if the link was being throttled for everyone?

      Yup. Or if an ISP was being exceptionally devious, it could try prioritising CAIDA's test traffic to skew results.

      It's a neat paper, but caution is needed. It uses the term 'transit' to indicate flows across networks, ie packets in transit, rather than the more general ISP usage, where transit indicates a paid connection as opposed to a peering connection. Which may still end up being paid for, or at least cost. Or cost some serious $$$.

      Then if 'Net Neutrality' is thrown into the mix, it can get complicated by a general lack of neutrality from various interested parties. So if a connection between ISPs is congested, who's going to pay for any upgrade? That then gets even more political if you consider content providers like Google, Netflix etc as ISPs.

      But the paper shows a method and data for determining where congestion's occurring across a multi-party link. That's pretty much to be expected. Assume you're connected to an ISP that has a 10Gbps link to a peer or transit ISP, who's then connected to YouTube. The 10Gbps is full, packets get dropped. Issue is.. what happens next?

      In a peering connection, ISPs might figure this is bad, and agree to increase their peering connection to 100Gbps, or Nx10Gbps via link aggregation. Congestion vanishes across that link, everyone's happy. Or, if the ISPs can't agree on commercial terms, it stays congested.

      If it's a transit connection, it can be simpler: contact your sales rep at the transit ISP and they'll quote you for an upgrade. But that can get controversial. The transit ISP may also have YT as a customer, so your ISP may effectively be having to pay to deliver that customer's traffic. Which is where the politics and lobbying come in.

      If you're a content provider, then you want to pay as little as possible for connectivity, because that cost obviously eats into your profit margins. And if your business is content streaming, like YT, Netflix etc, you're generating huge volumes of traffic.

      This is where the economic arguments and lobbying come in. Problem is there's a fundamental disconnect between the flow of money between customer and content provider, and paying for the cost of delivering that traffic. So that's mostly carried by the access ISPs. They bill you for a generic Internet connection, Netflix bills you for your video service. Generally none of that Netflix sub goes towards paying any network costs for the ISP that's delivering that traffic.

      Cable TV kinda manages some of that cost sharing by charging TV channels carriage fees for being on their network, as do satellite TV providers. Which again gets political when operators and content providers can't agree on those deals.

      But that's also what's behind Net Neutrality from a political/lobbying standpoint. The content providers are very much against the creation of any mechanism that may result in them having to pay more for carriage. Their position can be a bit anti-consumer, because it just means that any costs would have to be carried by users via their Internet connection charge.

  4. John Smith 19 Gold badge
    Unhappy

    So delay == proxy for throttling. Simple idea.

    Maybe too simple?

    1. Jellied Eel Silver badge

      Re: So delay == proxy for throttling. Simple idea.

      Kinda. CAIDA's been doing network analysis for years, but it's a wicked problem. Especially if you're on the outside looking in. And even more so if there's various shenanigans going on around peering disputes.

      But it's a way to detect congestion. That doesn't necessarily mean throttling; sometimes it's just that trying to shove 15Gbps of traffic down a 10Gbps pipe isn't going to work. Behind that is the age-old problem of who pays to upgrade those links. The paper seems to indicate a lot of the congestion's at YT's end. And behind all that are the challenges of ISPs' control over traffic routing vs, say, Google's ability to detect congestion and direct sessions to servers and connections that may have spare capacity.

      Then there's the real neutrality stuff. Technically, it makes sense to be able to prioritise real-time or time-sensitive sessions like voice or live video (quick sketch of DSCP marking below). That's all common in private MPLS VPNs, but key to the holy war around 'Net Neutrality', because of the fear that if you start classifying traffic, you then start charging premiums for carrying it. Which is also complicated by content providers obfuscating their session control protocols so ISPs couldn't act on them even if they wanted to/were allowed to.

      But without an ability to manage congestion, the Internet will remain fundamentally best efforts.
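      For what it's worth, marking traffic is trivial from the host side; honouring the mark is the contentious bit. A minimal Python sketch of setting DSCP EF on a UDP socket (the address and port are placeholders, and this assumes Linux-style IP_TOS behaviour):

      import socket

      EF_TOS = 46 << 2  # DSCP Expedited Forwarding (46) in the top six bits of the TOS byte

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
      # Datagrams from this socket now carry the EF marking; whether any network
      # between here and the far end honours it is entirely up to them.
      sock.sendto(b"rtp-ish payload", ("192.0.2.1", 5004))  # placeholder address/port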

  5. StuntMisanthrope

    adamsmith.com

    Bit like asymmetric finance, fake news and investigative journalism. The story says productivity, but the delay is self-serving, short on fact and the headline is boom. #todaysnewsisyesterdaysforecastandtomorrowsworld

    1. Steve Knox
      Unhappy

      Re: adamsmith.com

      I miss amanfrommars

      1. David 132 Silver badge

        Re: adamsmith.com

        No. No-one misses amanfrommars. He/it turns up, vomits sub-markov-chain textual turds onto the discussion, and yet for some reason has a fanclub around here.

  6. Dave Bell

    Why do I feel smarter than a journalist this morning?

    This whole story is riddled with misconceptions, and where it isn't, it's all rather obvious anyway. It's essentially automating traceroute and ping, and saying that when the RTT and packet loss jump, the problem is between the last good node and the first bad one (rough sketch below).

    I was doing that over dial-up internet through Demon in the last century.

    This isn't rocket science. And Kerbal Space Program feels more realistic than this article.
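    That sketch, for the record (the per-hop figures and thresholds are invented, in the style of an `mtr --report`):

    # (hop name, avg RTT in ms, loss %) -- made-up figures for illustration
    hops = [
        ("gw.local",       1.2,  0.0),
        ("isp-core-1",     8.5,  0.0),
        ("peering-router", 9.1,  0.0),
        ("transit-hop",   95.4, 12.0),  # RTT and loss both jump here
        ("content-edge",  96.0, 11.0),
    ]

    RTT_JUMP_MS, LOSS_JUMP_PCT = 40.0, 5.0  # arbitrary thresholds

    for (prev_name, prev_rtt, prev_loss), (name, rtt, loss) in zip(hops, hops[1:]):
        if rtt - prev_rtt > RTT_JUMP_MS or loss - prev_loss > LOSS_JUMP_PCT:
            print(f"suspect link: {prev_name} -> {name}")
            break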

    1. Jellied Eel Silver badge

      Re: Why do I feel smarter than a journalist this morning?

      It's a bit more complicated than that, hence why it won at SIGCOMM. The voodoo's done using CAIDA's TSLP, where the probes craft their TTL values to generate more link/hop-specific data, plus a bunch of time-series analysis to interpret the results. Which is arguably a more accurate way to determine where congestion's occurring. It can't explain why, though, ie whether it's throttling, congestion, or sometimes just the way routers deal with ICMP packets when they're stressed. For throttling/congestion though, the results are generally the same, ie dropped packets.
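      Stripped to the bone, the idea is something like the sketch below: send TTL-limited probes that expire on the near and far routers of the suspect interdomain link, and watch the difference between the two RTT time series over the day. A rough scapy sketch of the concept, not CAIDA's actual code; the target address and TTL values are placeholders, and it needs root:

      import time
      from scapy.all import IP, ICMP, sr1  # pip install scapy; needs root/admin

      TARGET = "198.51.100.10"  # placeholder address somewhere beyond the link
      NEAR_TTL, FAR_TTL = 7, 8  # placeholder hop counts expiring either side of the link

      def rtt_at_ttl(ttl):
          """RTT (ms) to whichever router answers with ICMP Time Exceeded at this TTL."""
          t0 = time.time()
          reply = sr1(IP(dst=TARGET, ttl=ttl) / ICMP(), timeout=2, verbose=0)
          return None if reply is None else (time.time() - t0) * 1000

      # A diurnal rise in (far - near) RTT points at queueing on the interdomain
      # link itself, rather than somewhere closer to home.
      while True:
          near, far = rtt_at_ttl(NEAR_TTL), rtt_at_ttl(FAR_TTL)
          if near is not None and far is not None:
              print(f"{time.strftime('%H:%M')}  near={near:.1f}ms  far={far:.1f}ms  delta={far - near:.1f}ms")
          time.sleep(300)  # sample every five minutes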

  7. Spanners Silver badge
    Linux

    Can it tell...?

    How well does it differentiate between overloaded systems, system failure, configuration errors and deliberate poor performance?
