Summoners of web tsunamis have moved to layer 7, says Cloudflare

Attackers have noticed that the world is getting better at fending off massive distributed denial-of-service attacks, and are trying to overwhelm application processes instead. So says DDoS-deflector Cloudflare, which reckons it's seen a spike in cyber-assaults trying to exhaust high-level server resources, such as per-process …

  1. doublelayer Silver badge

    Please, not a captcha

    To anyone out there considering this, please don't base it on a captcha. Those things break too often. I'm tired of fighting with them, whether it's to get them to work on something mildly unusual, to let people who have difficulty seeing things use the audio one (if one is even provided), or to stop the provider deciding that, since it's seen us try a few times, we must be a bot and should be blocked. Captchas are evil things.

    1. Nick Kew

      Re: Please, not a captcha

      Aren't you fighting yesteryear's battles?

      Or is someone out there really still using or advocating the vile things?

      1. Mark 85

        Re: Please, not a captcha

        Seems I run into the "captcha" quite a bit. So yeah, they're still out there. For many sites, it's probably a legacy thing that they figure still works and doesn't piss people off. For others it seems to be a case of "oh look.... we can use this" instead of investing in something better.

      2. Charlie Clark Silver badge

        Re: Please, not a captcha

        Or is someone out there really still using or advocating the vile things?

        They still tend to exist for signup services but otherwise have largely disappeared. They're a bit of a Catch-22 for signups: being able to create accounts is exactly what a lot of attackers want, so signup forms get attacked heavily, and 2FA sort of needs bootstrapping before it can help. They are still evil, but I can understand why some sites use them in these cases.

  2. Kevin McMurtrie Silver badge

    Good luck

    Blocking after too many 404s? As if it's difficult to find slow and resource-intensive features on web sites?

    1. ghp

      Re: Good luck

      It seemed a good idea to me. Care to enlighten me?

      1. Killfalcon Silver badge

        Re: Good luck

        Some services return 404s for content that's been removed (while leaving links to said content intact) - it's very easy to rack up a lot of them in a hurry. If your application/website/etc is set up that way, then your attack mitigation probably shouldn't use that rule - "consider your own architecture" probably isn't that surprising a rule, mind.
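
        For anyone who hasn't seen one, the DIY version of that sort of rule is usually a fail2ban jail watching the access log - the tool, paths and numbers here are just an illustrative sketch (not what Cloudflare does), assuming a stock combined-format log:

          # filter, e.g. /etc/fail2ban/filter.d/http-404.conf
          [Definition]
          failregex = ^<HOST> .* "(GET|POST|HEAD)[^"]*" 404
          ignoreregex =

          # jail.local - ban any IP that racks up 20 404s inside a minute
          [http-404]
          enabled  = true
          port     = http,https
          filter   = http-404
          logpath  = /var/log/nginx/access.log
          maxretry = 20
          findtime = 60
          bantime  = 600

        Point thresholds like that at a site that 404s legitimately all over the place and you end up banning your own users.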

        That said, I don't see how a service being slow leads to 404s. 404 is a response; slowness gets you a time-out, not a 404.

  3. Sheepykins

    This is where the power of Linux and IPTables comes into play.

    I built a website, can't say which, that was constantly under heavy DDoS attacks, and when that didn't work they went for resource starvation - couldn't get a remote shell, it was so slow I had to run down to the server room :(

    Anyway, with attacks against HTTP servers the real idiots usually use a common element in their scripted efforts, and using an iptables string match to silently drop that traffic is very easy to do.
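
    Something along these lines, for instance - the matched string here is a made-up stand-in for whatever junk marker the script keeps repeating in its requests, so adjust to whatever your attacker is actually sending:

        iptables -I INPUT -p tcp --dport 80 -m string --algo bm --string "some-botnet-marker" -j DROP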

    1. Charlie Clark Silver badge

      Actually, to survive a dedicated DDoS attack you really do need to hide behind a CDN like Cloudflare and let them manage all that stuff for you. I've seen a whole data centre taken out by a traffic flood: your firewall won't help you much there.

  4. Anonymous Coward
    Trollface

    Simple solution

    "OSI layer 7 attacks"

    Simply use the DoD four-layer model and avoid the problems at layers 5, 6 and 7! Job done.

  5. ds6 Silver badge

    I really like Cloudflare's leadership and they're doing a great service to the Web—but at what cost? How much of the Internet runs through them again? How difficult would it be for some three-letter org to grease some palms to get essentially full, undetectable MITM access to whatever big service they want, if it hasn't already been done?

    We need to stop mitigating and design a secure, stable infrastructure... But that's probably well and truly impossible at this point.

    1. Charlie Clark Silver badge
      Facepalm

      We need to stop mitigating and design a secure, stable infrastructure

      And what are we supposed to do in the meantime while the new secure and stable infrastructure is designed and built?

      If you're running a business-relevant website and are worried about attacks, then getting behind something like Cloudflare is one of the best investments you can make, both for yourself and your customers.

  6. Daniel Garcia 2

    Amateurs

    "Logging in four times in one minute is hard - I type fast, but couldn't even do this." Nobody who needs to log in often on many devices would type credentials; ctrl+c >>> ctrl+v is the true way.

    1. Pascal Monett Silver badge

      Unless ctrl-v is blocked by the password-input UI. Then you just have to type it out normally again.

      Not common, but it has happened to me.

  7. Claptrap314 Silver badge

    Another day, another marketing blurb as article

    At least this time, it's for a serious company (Cloudflare) talking about a genuine emerging issue (L7 attacks). But the solution they are talking about is strictly for script-kiddie level attacks. Login attempts? Seriously? What am I missing? And why would this be significantly different from other traffic to mitigate?

  8. Nate Amsden

    old news, but good news?

    https://en.wikipedia.org/wiki/Slowloris_%28computer_security%29

    (just what I could remember off the top of my head)

    anyway the good news is that there should be significantly less collateral damage caused by application layer attacks since you don't have to flood all of the pipes to kill the service.

    I was at one place that I would consider "high traffic" (several years ago anyway): they processed a few billion requests per day. They were ad-tracking pixels, so performance was high: when I was there the dual-socket servers could sustain 3,000 requests per second in Tomcat. Anyway, before I started, AOL had added their pixel to AIM, and AIM wasn't good about closing connections for some reason, so they got millions of requests that were exhausting the capacity of their systems just on open connections. They later tuned their load balancers to force-terminate connections after something like 2 seconds (the average request was maybe under 100ms), which fixed that issue.
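
    (For illustration only - I'm not saying that's what they ran - but the equivalent knobs on something like haproxy would be:

        # defaults or frontend section: tear down clients that idle or never finish a request
        timeout client       2s
        timeout http-request 2s

    i.e. cap how long a connection can sit there open without doing anything useful.)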

    At another company I was at, the app was so bad that sometimes even 1 request per second (of a certain kind, I don't remember what) would tip it over. The executives would freak out, claim DDoS, and want to manually block each inbound IP (and the IPs kept changing, at a low rate of speed). I just laughed; I mean, come on, that is just pathetic. They expressed no real interest in fixing the app, just in blocking the bad requests. That company died off several years ago. I don't think that situation was even an attack, because if your app can't handle more than a few requests per second you have bigger problems.

    I've never personally been on the receiving end of what I would call a DDoS, though I have been collateral damage (including the Dyn incident a couple of years ago).
