DNS devastation: Top websites whacked offline as Dyn dies again

An extraordinary, focused attack on DNS provider Dyn continues to disrupt internet services for hundreds of companies, including online giants Twitter, Amazon, Airbnb, Spotify and others. The worldwide assault started at approximately 11am UTC on Friday. It was a massive denial-of-service blast that knocked Dyn's DNS anycast …


  1. Barry Rueger

    Inevitable

    Arguably this sort of "bring the Internet to its knees" attack was pretty much inevitable.

    And arguably that has been the case for most of a decade.

    Right now a lot of companies must be quaking in their boots wondering what's next.

    1. raving angry loony

      Re: Inevitable

      Or salivating at the profits to be had from marketing "protection" to those who feel vulnerable. The whole thing smells of a protection racket in the making.

    2. John Smith 19 Gold badge
      Unhappy

      "Right now a lot of companies must be quaking in their boots wondering what's next."

      Certainly those who understood what just happened.

      Meanwhile, the suppliers of the IoT products that enabled it should be ashamed.

    3. jobst

      Re: Inevitable - but not because of DNS

      It seems a lot of people are talking about DNS at the moment ... but we must not forget who is really to blame - it's the stupid security programmed into many IoT devices because of greed. The cause of this problem is not any inherent weakness in the DNS system(s) but the stupidity of device manufacturers like D-Link, Netgear, Avtech and so on.

    4. Anonymous Coward
      Anonymous Coward

      Re: Inevitable

      Not as inevitable as someone shouting "Blockchain - that's the answer!"

  2. Anonymous Coward
    Anonymous Coward

    not the only one

    No-IP appeared to be offline at times on Wednesday.

  3. Anonymous Coward
    Anonymous Coward

    I guess commercial considerations have replaced the "internet routes round damage" idea.

    1. Yes Me Silver badge
      Unhappy

      "routes round damage" idea.

      Routing works just fine during a DNS outage; the problem is that you can't find the addresses that you want to route to. DDoS against DNS authoritative servers has always been scary. If only every ISP supported ingress filtering, as they're supposed to, tracking down and killing DDoS bots would be that much easier.
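
      If anyone's wondering what ingress filtering actually involves: BCP 38 just means an ISP dropping packets whose claimed source address couldn't legitimately have come from that customer port. A toy sketch in Python - the prefix and the packet fields are purely illustrative:

        # Sketch of BCP 38-style source-address validation: drop any packet whose
        # claimed source address is outside the prefixes assigned to this customer.
        # The prefix and packet dicts are illustrative, not from any real config.
        import ipaddress

        CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

        def accept(packet: dict) -> bool:
            src = ipaddress.ip_address(packet["src"])
            return any(src in prefix for prefix in CUSTOMER_PREFIXES)

        print(accept({"src": "203.0.113.42"}))  # True - plausible customer source
        print(accept({"src": "198.51.100.7"}))  # False - spoofed source, dropped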

    2. ecofeco Silver badge

      Actually it did. Many sites were affected but not all. I could easily reach most websites I use during the day.

      It worked exactly like it should.

    3. Frank Oz

      Well, yeah ...

      Bottom line is that it's the current DNS process, which is in place to ensure that the domain name issuers get their little piece of the pie, that makes attacks like this possible. In the good old days the ENTIRE DNS data file/database was mirrored on a number of servers around the planet ... so a DDoS would have to hit them all to cause things to fall over this badly.

      These days, all the hacker has to do is find out which commercial domain name provider hosts which mega-huge internet presence's domain names ... and just hit that provider. If anything this attack should result in a number of Dyn clients going elsewhere (presumably to a less visible DNS provider).

      What should happen is that ICANN points out that the current DNS verification and validation processes (which are only in place to protect the IP of the DNS provider) actually make things easier for the Ungodly ... and that perhaps total replication of the database across multiple providers and locations might be a good idea to revert to.

      But that's unlikely - because nowadays ICANN represents the DNS providers.

      1. John Brown (no body) Silver badge

        "In the good old days the ENTIRE DNS data files/database was mirrored on a number of servers around the planet ... so a DOOS would have to hit them all to cause things to fall over this badly."

        I was wondering the same thing. Where are the master root servers these days? I take it they either no longer exist or people like Dyn don't bother with them.

        1. Danny 14

          Most companies run their own DNS caching as part of their proxies (I imagine; we do, to cut down on requests), so this will hit the general public more. In fact I only noticed the issues when I switched from wifi to mobile data on my phone.

    4. Steven Jones

      It's not a routing issue...

      The Internet does route around damage, but this isn't an attack on routing. It's an attack on a network service. That's rather a different thing.

      However, it's certainly true that far too little effort has been put into fundamentally hardening network services of all sorts against these sorts of attacks. Unfortunately far too many Internet protocols and services are built around assumptions of good behaviour.

      1. Anonymous Coward
        Anonymous Coward

        Re: It's not a routing issue...

        "Routing" has many more meanings than "IP routing". The Internet was designed as a distributed systems - kill a node, it would still keep on working. Now we're turning it into a big centralized system where a few big data centers, the grandsons of mainframes, hold everything. And when they become the easy target of huge distributes attacks like this, they are kaputt... the old saying "when you put all your eggs in one basket...".

        If all those DNS records were widely distributed, good luck to take down all of them.

  4. Dave Pickles

    DNS wouldn't be so vulnerable if folks set really long TTLs on their entries and didn't use DNS for load-balancing - caches around the internet could then ride out any feasible attack.
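
    To illustrate (a toy sketch, nothing to do with any real resolver): a cache that keeps answering from memory until the TTL runs out means an upstream outage shorter than the TTL goes completely unnoticed by clients.

      # Toy TTL-honouring cache: entries keep being served until their TTL expires,
      # so an upstream outage shorter than the TTL is invisible to clients.
      # resolve_upstream is a stand-in for a real lookup, not a real API.
      import time

      cache = {}  # name -> (address, expiry timestamp)

      def lookup(name, resolve_upstream, ttl=86400):
          entry = cache.get(name)
          if entry and entry[1] > time.time():
              return entry[0]                      # served from cache, upstream not needed
          addr = resolve_upstream(name)            # only hit upstream once the TTL has lapsed
          cache[name] = (addr, time.time() + ttl)
          return addr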

    1. ParasiteParty
      Facepalm

      I wish this was true...

      Many ISPs - BT, to name one, but probably most - simply do not respect published TTLs.

      We've seen issues where we set low TTLs during a site migration but the ISPs simply don't go back for another lookup. You end up having to fanny about with DNS providers until enough time has passed for the provider's DNS to redo the lookup in its own time.

    2. efinlay

      Except...

      ...with really long TTLs, how do you manage regional or global load balancing? Failovers? Switching records in general? Migrations?

      (admittedly, that last one is generally less frequent)

      Honest question - not snarky, I'm curious.

      1. Anonymous Coward
        Holmes

        Re: Except...

        Good question, that commentard: "with really long TTLs, how do you manage regional or global load balancing".

        https://en.wikipedia.org/wiki/Anycast - Anycast addresses. This is what Google's 8.8.8.8 and 8.8.4.4 and OpenDNS 208.67.222.222 and 208.67.220.220 use.

        Cheers

        Jon

        I suspect I've answered your question as posted but probably not what you intended or the full story.

        1. Nate Amsden

          Re: Except...

          Anycast only covers a subset of load balancing needs (a small subset at that).

        2. patrickstar

          Re: Except...

          Anycast is not suitable for TCP since TCP is dependent on all packets of a connection going to the same place. It might very well work for specific applications and short-lived connections but it's definitely not something you want to deploy on a website that's supposed to be reachable by 100% of all users.

          You can even encounter scenarios where, for some subset of users, it works very poorly if at all - equal cost multipath for example, where every other packet ends up at a different anycast instance.

        3. TeeCee Gold badge
          Facepalm

          Re: Except...

          Except that it isn't a good question as the original comment did specify that a prerequisite would be that people didn't use DNS for load-balancing.

          People are using something for a purpose it wasn't designed for and have found out that it's not really suitable? Who Could Possibly Have Seen That Coming?

      2. Alister

        Re: Except...

        ...with really long TTLs, how do you manage regional or global load balancing? Failovers? Switching records in general? Migrations?

        You can do it by not using DNS to switch between sites. Instead you have one or more load balancers with fixed IPs which you point the DNS at, and then redirect the traffic to the sites and servers as you want.

        We do DR failover this way, as well as load balancing and migrations between hosting environments.

        There is a slightly increased latency, obviously, but not enough to impact normal traffic.
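
        To make the idea concrete, here's a bare-bones sketch of the pattern (not our actual kit; the addresses are made up): DNS points permanently at the box running this, and failover is done by changing the backend address here rather than touching DNS.

          # Bare-bones TCP forwarder: clients always connect to this fixed address;
          # failover/migration is done by changing BACKEND here, not in DNS.
          # Addresses are illustrative only.
          import socket, threading

          BACKEND = ("192.0.2.10", 80)   # swap this to move traffic between sites
          LISTEN = ("0.0.0.0", 8080)

          def pipe(src, dst):
              try:
                  while data := src.recv(4096):
                      dst.sendall(data)
              finally:
                  dst.close()

          def handle(client):
              backend = socket.create_connection(BACKEND)
              threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
              threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

          srv = socket.socket()
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(LISTEN)
          srv.listen()
          while True:
              conn, _ = srv.accept()
              handle(conn)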

    3. streaky
      Boffin

      TTL is nothing really to do with it. Sites would go offline under sustained attack sooner or later.

      The main issue here is that these large companies are doing DNS wrong on a more fundamental level. We learned years ago that people were attacking DNS providers and this could be leveraged to take out fundamental infrastructure and sites of all sorts of sizes. The fix is obvious and it's something I've recently pointed out to github:

      If you're an attack target do not just use a single DNS provider. Use 2.

      If you do that it's much easier not to be caught in the crossfire. It's also much more difficult for adversaries to take you out via DNS - they have to take out two entirely separate networks to achieve that, requiring double the attack assets. (A quick way to check your own delegation is sketched at the end of this comment.)

      The internet was designed very insecurely, but they did build it in a way that makes it easy to mitigate attacks like the one today, and everybody running DNS services at the companies that were taken out looks like a complete clown in retrospect. It's like the people who expect AWS zones to be up 100% of the time despite them not being designed to be survivable, and despite Amazon giving people the tools to avoid depending on a single zone.

      Also fwiw, using anycast to balance large sites is a really bad idea. If anycast were a solution to the problem we wouldn't be sitting here talking about anycasted DNS providers being taken out.
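
      As promised, a quick way to see whether a zone's delegation already spans more than one provider - just a sketch, assuming the dnspython package is installed, and the domain below is only an example:

        # Sketch: list a zone's NS records and group them by provider domain (a crude
        # "last two labels" heuristic), to see whether the delegation spans more than
        # one DNS provider. Requires the dnspython package; the domain is an example.
        import dns.resolver

        def providers(zone):
            answer = dns.resolver.resolve(zone, "NS")
            # e.g. "ns-1283.awsdns-32.org." -> provider "awsdns-32.org"
            return {".".join(str(rr.target).rstrip(".").split(".")[-2:]) for rr in answer}

        provs = providers("example.com")
        print(provs)
        if len(provs) < 2:
            print("All NS records sit with a single provider - a single point of failure.")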

      1. bazza Silver badge

        @Streaky,

        "The internet was designed very insecurely"

        Security wasn't a consideration at all in those days.

        Fundamentally everything we have security-wise is a bodge. Ultimately, no matter what security mechanism one contrives, it always boils down to the following: machines are hopeless at identifying people.

        1. streaky

          The security thing wasn't really a complaint, just a fact of life. We can rebuild it, we have the technology - though I wasn't really arguing for that. I wouldn't mind burning UDP to the ground, but that's an entirely separate subject.

          TCP is dependent on all packets of a connection going to the same place

          TCP anycast is a thing (indeed it's how a lot of HTTP DDoS protection works). That doesn't mean it's a sensible use of resources when your DNS provider can do useful things for you; it's all cost/benefit - DNS providers are cheap, anycasted HTTP isn't. As I said, anycast isn't really a solution to the problem; not relying on a single provider's servers in case they get hit, or just plain go down, is.

          1. patrickstar

            Yes, it can be done, but it's not a general solution the way DNS anycast is.

            When anycasting DNS, you can just plop down anycast instances in whatever locations will host you, with no special routing policies in place, no prepending/community games, and have them exported as widely as possible.

        2. Anonymous Coward
          Anonymous Coward

          ...and how much it costs to implement offset against profit and how much the PHB will earn for his/her yearly bonus.

      2. Anonymous Coward
        Anonymous Coward

        > If you're an attack target do not just use a single DNS provider. Use 2.

        If *you* are an attack target, it is *your* infrastructure that is going to be targeted, not the DNS providers (they may throw that into the deal as well, but expect your own infrastructure to become suddenly popular with IP cameras and stuff).

        1. streaky
          Coffee/keyboard

          it is *your* infrastructure that is going to be targeted, not the DNS providers

          Yeah, but TCP attacks the average toddler can deal with: they're blatant, their source is easy to identify, and they can be mitigated quite quickly. UDP attacks against DNS infrastructure are very difficult to deal with, which is why they're popular for taking out large targets - and regardless of that, "you" as the target can mean you're one of many large US sites and the attacker would be happy to take you out as collateral.

        2. Doctor Syntax Silver badge

          "If *you* are an attack target, it is *your* infrastructure that is going to be targeted,"

          For some values of "you". If "you" means the US internet business community then DNS is part of that infrastructure and, from what's happened, appears to be a single point of failure for quite a large portion of "you".

      3. leenex

        It's mainly an attack on port 53, right?

        What if DNS servers were able to agree on a different, free port as part of the protocol? There would be no telling which port would be used, and an attacker would have to scan 65,535 ports to find it, right?

        Any client scanning all ports would be easy to identify.

        (This was a brain fart from someone who can hardly configure a Cisco router.)

        1. streaky

          It's not a question of scanning or whatever; the attack was against a "shared" DNS provider where all these sites were using common infrastructure. It's not as if you can "hide" your DNS servers either, because resolvers have to be able to find them, so they have to be pointed at from the parent zone's servers.
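
          To see what "pointed at from the parent zone" means in practice, here's a sketch (dnspython assumed; the gTLD server is just one of the .com servers) that asks the parent directly for a delegation:

            # Sketch: ask a .com gTLD server directly for github.com's delegation.
            # The referral in the authority section is how every resolver finds the
            # zone's nameservers - which is why they can't be hidden. Needs dnspython.
            import dns.message, dns.query, dns.rdatatype, dns.resolver

            # a.gtld-servers.net is one of the .com servers; resolve its address first
            gtld_ip = next(iter(dns.resolver.resolve("a.gtld-servers.net", "A"))).address

            query = dns.message.make_query("github.com.", dns.rdatatype.NS)
            response = dns.query.udp(query, gtld_ip, timeout=5)

            for rrset in response.authority:   # the referral: NS records for github.com
                print(rrset)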

  5. Pen-y-gors

    Sort out priorities...

    I suspect so long as El Reg and a few other specialist interest sites are unaffected then this readership won't be too worried.

  6. Anonymous Coward
    Anonymous Coward

    The outage is actually doing a fab job at functioning as an ad-blocker.

    1. ecofeco Silver badge

      I noticed that as well.

  7. John Savard

    Helpful Article

    Hopefully, all DNS sites will start caching; I wish my computer would cache the IP address of sites I visit so that I wouldn't even notice a DNS failure - it could even warn me if an IP address changes, to help prevent IP spoofing.

    Anyways, I switched my DNS to that given in your article, and I could connect to the RuneScape servers once more! Quite saved my morning. Also I was reading old issues of U&lc, and that too was restored.
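
    For what it's worth, the "warn me if an address changes" bit is easy enough to knock together - a sketch using only the standard library, with a made-up cache file name:

      # Sketch: keep a local cache of name -> addresses and warn when the set of
      # addresses changes between runs. The cache file name is made up.
      import json, pathlib, socket

      CACHE = pathlib.Path("dns_cache.json")

      def check(name):
          cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
          current = sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
          if name in cache and cache[name] != current:
              print(f"WARNING: {name} changed from {cache[name]} to {current}")
          cache[name] = current
          CACHE.write_text(json.dumps(cache))
          return current

      print(check("theregister.co.uk"))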

    1. hmv

      Re: Helpful Article

      DNS already does caching, but those who run amazon.com and github.com have chosen to minimise the caching time to make it easier to switch things around, disregarding the usefulness of caching.

      1. 404
        Facepalm

        Re: Helpful Article

        Ayup... mid to late 90's with NT DNS(!) servers, then later with Solaris x86 Bind DNS servers, when setting up new websites, you always had to stop Internet Exploder, close Netscape, restart your DNS client, just to check to see if the scripts took...

        I'm sorry, I don't know where I was going with this -> wife pops in with 'Oh look, a portable ski lodge!'... thought evaporated.

      2. Anonymous Coward
        Childcatcher

        Re: Helpful Article

        "DNS already does caching"

        Not really - each record has a Time To Live (TTL) in seconds. Your DNS server should honour that but it can go a bit mad when people ignore the standards to fix things.

        For example I've just looked up github.com via 2001:4860:4860::8888 (Google public DNS - IPv6) four times in quick succession and got the following TTLs: 117, 160, 144 and 18. I then looked up the NS records (AWS, four NS records) and queried those directly - the A records round-robin between two IP addresses with a TTL of 300. (The experiment is repeatable with the sketch below.)

        The world can be a nasty place
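
        The experiment above is easy to repeat - a sketch assuming the dnspython package, using the IPv4 sibling of the same Google resolver:

          # Sketch reproducing the experiment above: query the same name repeatedly
          # via Google public DNS and print the TTL each answer comes back with.
          # Requires dnspython.
          import time
          import dns.resolver

          resolver = dns.resolver.Resolver(configure=False)
          resolver.nameservers = ["8.8.8.8"]   # Google public DNS (IPv4 flavour)

          for _ in range(4):
              answer = resolver.resolve("github.com", "A")
              print(answer.rrset.ttl, [rr.address for rr in answer])
              time.sleep(1)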

        1. patrickstar

          Re: Helpful Article

          Completely normal, expected, and perfectly TTL-obeying behavior.

          DNS resolvers don't reply with the TTL as originally specified in the response from the authoritative DNS server (i.e. what the guy who set up the domain specified) - they respond with the time remaining until the record expires from their cache.

          Your 4 queries ended up at 4 different servers. Google's DNS servers exist at multiple locations with the same IP address - this is what's known as anycast instances. Each anycast instance then consists of multiple actual servers behind a load balancer. The load balancer just picked a server at random for each of your queries. Nothing mystical about any of this - everyone big does the same thing in almost exactly the same way, including the root servers.

          These 4 servers had cached the record (i.e. received a query for it when it wasn't cached and subsequently looked it up and stored the result in the cache) at different times. Hence different times remaining until expiration in their caches.

    2. Gary Bickford

      Re: Helpful Article

      > Hopefully, all DNS sites will start caching; I wish my computer would cache the IP address of sites I visit so that I wouldn't even notice a DNS failure - it could even warn me if an IP address changes, to help prevent IP spoofing.

      I have a local DNS server running in cache mode on all my computers - desktops and servers. These are all Linux machines. IDK if Windows has that capability, but I think the default configuration for Ubuntu is to run BIND as a caching DNS server if it is turned on. So my net config uses 127.0.0.1 as the DNS source, and my BIND configuration forwards to 8.8.8.8 plus another one.

      One additional benefit is that when I'm on a cable connection this bypasses the cable company's default DNS that it sets up in my cable modem's DHCP config, which they use for various nefarious purposes such as inserting their own ads in websites, selling my traffic info, and "fixing" domain name typos by routing to their own advertising sites. I've seen all of those tricks at various times when visiting people who use Comcast or Optimum.
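
      A quick way to spot the "typo fixing" trick, by the way, is to ask a resolver for a name that shouldn't exist: an honest resolver returns NXDOMAIN, a tampering one hands back an ad server's address. Just a sketch, assuming dnspython; the resolver addresses and the nonsense name are placeholders:

        # Sketch: detect NXDOMAIN redirection by asking a resolver for a name that
        # should not exist. Resolver IPs and the nonsense name are placeholders.
        # Requires dnspython.
        import dns.resolver

        def redirects_nxdomain(resolver_ip, bogus="no-such-host-q3x9z7.example"):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [resolver_ip]
            try:
                r.resolve(bogus, "A")
            except dns.resolver.NXDOMAIN:
                return False    # honest: the missing name is reported as missing
            return True         # got an address back: the resolver is "fixing" typos

        print(redirects_nxdomain("127.0.0.1"))   # the local caching resolver
        print(redirects_nxdomain("8.8.8.8"))     # Google public DNS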

  8. Scott 26
    Trollface

    No irony at all...

    ... at an article about a DDoS attack that has Twitter affected, with a Twitter screenshot in it....

  9. ma1010
    Mushroom

    ENOUGH!

    You know, this is really enough of this crap. What's the point? It's like the assbags that went out and wrote viruses and sent them around to screw up computers of people they didn't even know. What was that for? And now why try to bugger up the whole Internet?

    I'm not smart enough to figure out a solution (and there may not be one), but it seems to me that something should be possible.

    Technically, we need some geniuses to figure out a way to trace this crap back to its source. Politically, we need international treaties which provide that anyone who screws up the Internet, regardless of where they are, will be arrested and tried for it. Once found guilty, give them a nice, LONG prison sentence. And maybe a permanent, non-dischargeable judgment (for many millions) that follows them around for the rest of their life to make sure they're pauperized to the point they can't AFFORD a computer.

    1. Mark 85

      Re: ENOUGH!

      You raise a good point with: "What's the point?". Perhaps a live fire test of the botnets? A warning? Not sure from here.

      With the IoT crap getting whipped into botnets, this could be a harbinger: "Remember this? Pay us or you're next."

      Or a state actor group flexing its muscles as a warning....?

      I just don't think it's being done for fun.

      1. Dan 55 Silver badge
        Black Helicopters

        Re: ENOUGH!

        Or a state actor group flexing its muscles as a warning....?

        Just after the big important IoT meeting in the US...

    2. Martin Summers Silver badge

      Re: ENOUGH!

      Have you never watched films? There's always some evil dude who wants to destroy the planet in some way. It's an ego thing, nothing rational.

      1. Martin-73 Silver badge

        Re: ENOUGH!

        The chap should be easy enough to track down. He'll be covered in long white persian cat hair

        1. CrazyOldCatMan Silver badge

          Re: ENOUGH!

          The chap should be easy enough to track down. He'll be covered in long white persian cat hair

          Phew! The only cats I have with white hair are all short-hairs. Maybe I'm only a little bit evil?

      2. Haku
        Coat

        Re: ENOUGH!

        "Have you never watched films?"

        I've heard of them. Doesn't Samuel L Jackson always play the black guy in those?

    3. Anonymous Coward
      Anonymous Coward

      Re: ENOUGH!

      Guess eventually the major internet companies or even government agencies will get proactive and start releasing bot-killers targeting vulnerable devices.


