RAM, bam, awww ... man! Boffins defeat Rowhammer protections

Ever since Rowhammer first emerged, there's been something of an arms race between researchers and defenders, and the boffins firing the latest shot reckon they've beaten all available protections. In the two years since Google first showed how forced bit-flipping could cause memory errors and create a takeover vector, boffins …

  1. Anonymous Coward
    Anonymous Coward

    Why the emphasis on software mitigations?

    Unless I misunderstand how and why rowhammer works, the root cause of the problem is in the hardware: frequent access to neighbouring data speeds up charge leakage, while the refresh times for the memory cells are deliberately chosen to keep the data safe for the average access pattern, rather than the worst possible scenario.
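
    To make that mechanism concrete, the classic user-space demonstration is just a tight loop that reads two addresses and flushes them from the cache so every access goes out to the DRAM chips. A minimal sketch along those lines (x86-specific; the buffer size, offsets and iteration count are placeholders, and picking addresses that really land in adjacent rows of the same bank is the hard part):

    #include <stdint.h>
    #include <stdlib.h>
    #include <emmintrin.h>   /* _mm_clflush */

    /* Repeatedly activate two DRAM rows by reading two addresses and
     * flushing them from the CPU cache each time, so every read goes out
     * to the chips. Rows physically adjacent to the hammered rows leak
     * charge faster than the refresh interval assumes - that is what
     * produces the bit flips. */
    static void hammer(volatile uint8_t *a, volatile uint8_t *b, unsigned long n)
    {
        while (n--) {
            (void)*a;                      /* activate the row holding a */
            (void)*b;                      /* activate the row holding b */
            _mm_clflush((const void *)a);  /* evict so the next read hits DRAM */
            _mm_clflush((const void *)b);
        }
    }

    int main(void)
    {
        /* A real attack must pick addresses in different rows of the same
         * bank; these offsets are placeholders for illustration only. */
        uint8_t *buf = malloc(64u * 1024 * 1024);
        if (!buf) return 1;
        hammer(buf, buf + 8 * 1024 * 1024, 1000000UL);
        free(buf);
        return 0;
    }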

    To me, this appears to be a bug in the hardware specification. The appropriate response is to correct the hardware specification, not to try to patch it up with complicated and error-prone software workarounds - which undoubtedly create vulnerabilities of their own. If the proper fix makes memory somewhat slower on average - so be it. I'd rather have the correct data slightly later.

    1. bazza Silver badge

      Re: Why the emphasis on software mitigations?

      Unfortunately, it seems that the reason the hardware is "vulnerable" in the first place is that the operating margins of SDRAM are pared back so far to give us what we also want: high-speed, low-power memory. AFAIK there's no real hardware fix for this; high-speed, higher-power memory doesn't work (the speed is achieved in part because of the lower operating voltage).

      So yes, we can have memory resilient to Rowhammer attacks, but it's likely that this would also be slower; and that's a tough marketing proposition at the moment. ECC memory helps somewhat - it becomes harder to exploit the physical effect undetected - but it is still vulnerable to a denial-of-service style attack (the memory can still be changed, but now you have memory faults cropping up and a crashed computer).
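
      To make the ECC point concrete, here's a toy single-error-correct, double-error-detect (SECDED) code over four data bits - the same principle ECC DIMMs apply to each 64-bit word, shown only as a sketch rather than any vendor's actual scheme. One flipped bit gets silently corrected; two flipped bits are detected but can't be fixed, which is exactly the crash-rather-than-compromise outcome described above:

      #include <stdint.h>
      #include <stdio.h>

      #define BIT(w, i) (((w) >> (i)) & 1u)

      enum ecc_result { ECC_OK, ECC_CORRECTED, ECC_UNCORRECTABLE };

      /* Toy SECDED: 4 data bits -> 8-bit codeword (extended Hamming). */
      static uint8_t ecc_encode(uint8_t nibble)
      {
          unsigned d0 = BIT(nibble, 0), d1 = BIT(nibble, 1);
          unsigned d2 = BIT(nibble, 2), d3 = BIT(nibble, 3);
          uint8_t cw = (uint8_t)(((d0 ^ d1 ^ d3) << 1) |   /* p1 */
                                 ((d0 ^ d2 ^ d3) << 2) |   /* p2 */
                                 (d0 << 3) |
                                 ((d1 ^ d2 ^ d3) << 4) |   /* p4 */
                                 (d1 << 5) | (d2 << 6) | (d3 << 7));
          unsigned parity = 0;                 /* overall parity in bit 0 */
          for (int i = 1; i <= 7; i++) parity ^= BIT(cw, i);
          return (uint8_t)(cw | parity);
      }

      static enum ecc_result ecc_decode(uint8_t cw, uint8_t *nibble)
      {
          unsigned syndrome = (BIT(cw,1) ^ BIT(cw,3) ^ BIT(cw,5) ^ BIT(cw,7))
                            | (BIT(cw,2) ^ BIT(cw,3) ^ BIT(cw,6) ^ BIT(cw,7)) << 1
                            | (BIT(cw,4) ^ BIT(cw,5) ^ BIT(cw,6) ^ BIT(cw,7)) << 2;
          unsigned overall = 0;
          for (int i = 0; i < 8; i++) overall ^= BIT(cw, i);

          enum ecc_result r = ECC_OK;
          if (syndrome && !overall)
              return ECC_UNCORRECTABLE;   /* two bits flipped: detect, can't fix */
          if (syndrome && overall) {      /* one bit flipped: fix it silently    */
              cw ^= (uint8_t)(1u << syndrome);
              r = ECC_CORRECTED;
          } else if (overall) {
              r = ECC_CORRECTED;          /* only the parity bit flipped         */
          }
          *nibble = (uint8_t)(BIT(cw,3) | BIT(cw,5) << 1 |
                              BIT(cw,6) << 2 | BIT(cw,7) << 3);
          return r;
      }

      int main(void)
      {
          uint8_t cw = ecc_encode(0xA), out;
          printf("one flip:  %d\n", ecc_decode(cw ^ 0x20, &out));        /* 1 = corrected      */
          printf("two flips: %d\n", ecc_decode(cw ^ 0x20 ^ 0x40, &out)); /* 2 = uncorrectable  */
          return 0;
      }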

      There are other deficiencies in our computing hardware. The behaviour of the cache subsystems in almost all CPUs means that Address Space Layout Randomisation can be defeated pretty easily - The Register has carried articles about this being achieved in under a minute using JavaScript in a browser.

      ASLR is important in defeating things like browser exploits, and its defeat may eventually cause JavaScript to be seen as being as dangerous as Flash and Java plugins. That would be a disastrous outcome. The vulnerability is also in the hardware - in how caches permit timing attacks against ASLR - but again the fix is unpalatable; it means a slower CPU.
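
      For the curious, the timing side channel those attacks rely on is easy to see for yourself. A rough x86-only sketch (rdtsc and clflush via compiler intrinsics; real ASLR-bypass attacks are far more elaborate, this just shows that a cached load is measurably faster than one served from DRAM):

      #include <stdint.h>
      #include <stdio.h>
      #include <x86intrin.h>   /* __rdtsc, _mm_clflush, _mm_mfence */

      /* Time a single load of *p in TSC ticks. A short time means the cache
       * line was already in the CPU cache; a long time means it came from
       * DRAM. Cache-timing attacks on ASLR boil down to observing exactly
       * this kind of difference for carefully chosen addresses. */
      static uint64_t time_load(volatile uint8_t *p)
      {
          _mm_mfence();
          uint64_t t0 = __rdtsc();
          (void)*p;
          _mm_mfence();
          return __rdtsc() - t0;
      }

      int main(void)
      {
          static uint8_t target;

          target = 1;                          /* warm: pull the line into cache */
          printf("cached load:   %llu ticks\n",
                 (unsigned long long)time_load(&target));

          _mm_clflush((const void *)&target);  /* evict it again */
          _mm_mfence();
          printf("uncached load: %llu ticks\n",
                 (unsigned long long)time_load(&target));
          return 0;
      }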

      Stop Executing Everyone Else's Code

      To me the real fix is to stop allowing other people to execute any code they like on our computers. Browsers are a major vector for this - JavaScript in web pages - and it's asking for trouble.

      Yes, that changes the web a lot - it means server-side execution is all that is "safe" - but ultimately it's the only way to guarantee that exploitative software does not get run on our vulnerable hardware.

      1. S4qFBxkFFg

        Re: Why the emphasis on software mitigations?

        "it's defeat may eventually cause Javascript to become to be seen as dangerous as things like Flash, Java plugins. That would be a disastrous outcome."

        Others may disagree, but I wouldn't miss JavaScript - the good/useful things that it can do are far outweighed by the bad/stupid things that it's used for.

        So I won't get a "rich user experience" and might have to click refresh more often - I can live with that.

        1. Anonymous Coward
          Anonymous Coward

          Re: Why the emphasis on software mitigations?

          What you'd get would be the same as what you have now if you tried to browse the web with JavaScript disabled. Do you really think web servers are going to stop using it just because JavaScript can cause theoretical security concerns on the client end? That's pretty naive; look at how some STILL use Flash despite attacks that are much more than theoretical and have been an issue for a decade or more - and despite the fact that the entire mobile world and an ever-increasing part of the desktop world can't run Flash!

          1. PNGuinn
            Thumb Up

            Re: Why the emphasis on software mitigations?

            So, what would be the downside if we could ban Flash, Java, JavaScript and a host of other similar technologies from the web?

            Lightning fast 486 machines, no "enhanced" advertising, back to fast 56k dialup .....

            ..... Google bleeding all over?

            Wot's not to like?

      2. Steve Todd

        Re: Why the emphasis on software mitigations?

        @bazza - "AFAIK there's no real hardware fix for this"

        Nonsense. The fix is already implemented by Xeon CPUs with pTRR-compliant memory at no speed penalty, or by simply doubling the refresh rate at a 2-4% performance cost.

        The issue is the memory controller logic on the CPU. Adding some extra logic on the CPU or RAM side of the equation can spot the potential for Rowhammer and increase the refresh rate accordingly.
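
        In spirit it's something like this toy model of that extra logic: count activations per row within a refresh window and refresh the neighbours of any row that gets hammered past a threshold (the per-row counters, threshold and window here are illustrative guesses, not how pTRR or any vendor actually implements it):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define ROWS         65536
        #define HAMMER_LIMIT 50000      /* activations per refresh window (made up) */

        static uint32_t activations[ROWS];

        static void refresh_row(uint32_t row)
        {
            /* Stand-in for issuing an early refresh command to this row. */
            printf("early refresh of row %u\n", row);
        }

        /* Modelled memory-controller hook: called on every row activation. */
        static void on_row_activate(uint32_t row)
        {
            if (++activations[row] >= HAMMER_LIMIT) {
                /* Row is being hammered: refresh its physical neighbours
                 * before they leak enough charge to flip, then restart the count. */
                if (row > 0)        refresh_row(row - 1);
                if (row + 1 < ROWS) refresh_row(row + 1);
                activations[row] = 0;
            }
        }

        /* Called at the end of each normal refresh window (e.g. every 64 ms). */
        static void on_refresh_window_end(void)
        {
            memset(activations, 0, sizeof activations);
        }

        int main(void)
        {
            for (int i = 0; i < 200000; i++)    /* simulate a hammering loop */
                on_row_activate(1000);
            on_refresh_window_end();
            return 0;
        }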

    2. Solmyr ibn Wali Barad

      Re: Why the emphasis on software mitigations?

      Yup. JEDEC and hardware companies have some work to do.

      Servers have had ECC memory for ages. In desktop computers it's also possible to use ECC DRAM sticks. Higher end motherboards usually support ECC just fine. It'll be a bit slower and twice as expensive, but at least it's an option.
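
      For anyone wondering whether the ECC in their box is actually reporting anything, Linux exposes corrected/uncorrectable error counts through the EDAC driver. A small sketch that reads them (this assumes an EDAC driver is loaded and the usual sysfs layout, which varies by platform):

      #include <stdio.h>

      /* Read one EDAC counter from sysfs; returns -1 if it can't be read
       * (no EDAC driver loaded, or a different controller layout). */
      static long read_counter(const char *path)
      {
          long v = -1;
          FILE *f = fopen(path, "r");
          if (f) {
              if (fscanf(f, "%ld", &v) != 1)
                  v = -1;
              fclose(f);
          }
          return v;
      }

      int main(void)
      {
          /* Typical paths for the first memory controller; adjust as needed. */
          long ce = read_counter("/sys/devices/system/edac/mc/mc0/ce_count");
          long ue = read_counter("/sys/devices/system/edac/mc/mc0/ue_count");

          if (ce < 0 && ue < 0) {
              puts("No EDAC counters found - ECC reporting may not be enabled.");
              return 1;
          }
          printf("corrected errors:     %ld\n", ce);
          printf("uncorrectable errors: %ld\n", ue);
          return 0;
      }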

      Laptops, fondleslabs, smart TVs, low-end routers and IoT tat are stuck with non-parity memory for the foreseeable future. That's the reason for rushing with software fixes - there are untold millions of vulnerable devices out there.

      Not every non-parity memory chip is vulnerable to Rowhammer, though - it became an issue with high densities and high speeds.

      1. hmv

        Re: Why the emphasis on software mitigations?

        I think it would be marginally slower and slightly more expensive rather than twice as expensive.

        Of course I'm prejudiced as I'm one of those nuts who insist on running ECC memory in his main workstations.

  2. Christian Berger

    Can't we just admit that sandboxes don't work?

    Can't we just ban Turing-complete code from untrustworthy sources from our computers? Can't we just change the web so websites aren't Turing-complete any more?

    1. Charles 9

      Re: Can't we just admit that sandboxes don't work?

      No, because that means computers can't do what we want any more. How do we get new programs if we can't download them? They come from potentially untrustworthy sources (because even if they SAY they're trustworthy, can we BELIEVE them?)

      And websites became interactive and Turing-complete to meet consumer demand. Not to mention that ANY protocol, not just the web, can be similarly vulnerable to the right confluence of events. If you don't want to be attacked from the Internet, your only guaranteed option is to unplug, just as the only way to keep a computer from being hacked is to unplug it.

      1. Christian Berger

        Re: Can't we just admit that sandboxes don't work?

        Well that's actually rather easy:

        1. Use distributions that share the same values as you do.

        2. Have you ever seen the web before JavaScript and Flash? Everything worked much faster, despite browsers that choked on some GIFs and despite dialup connections.

        Things don't magically work just because you want them to work. Sandboxes have been proven over and over again to not work.

        1. Charles 9

          Re: Can't we just admit that sandboxes don't work?

          "2. Have you ever seen the web before Javascript and Flash? Everything worked much faster, despite of Browsers that choked on some GIFs and dialup connections."

          Not things like eBay because of the round-trip issues. That bus left long ago. Plus it was pretty, well, boring.

        2. Charles 9

          Re: Can't we just admit that sandboxes don't work?

          PS. NO distribution matches all my values, so compromises need to be made. Thing is, those compromises can end up compromising YOU, and for me, there's no way around that. Welcome to the Jungle.

    2. DropBear

      Re: Can't we just admit that sandboxes don't work?

      It would change almost nothing. You would still be vulnerable to all sorts of things that are supposed to be pure non-executable data, but contain instructions carefully crafted to trip up the parsers that are supposed to display them to you.

      1. Christian Berger

        Re: Can't we just admit that sandboxes don't work?

        "but contain instructions carefully crafted to trip up the parsers that are supposed to display them to you."

        You can formally verify parsers for decent languages, and you can make your language simple enough that your parser will be so trivial that it won't have a bug.
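
        As a sense of scale, a parser for a deliberately tiny format - say, a line of comma-separated unsigned integers - can be small enough to audit every path by eye. A sketch of the idea (the format itself is made up for illustration):

        #include <ctype.h>
        #include <limits.h>
        #include <stdio.h>

        /* Parse a line of comma-separated unsigned integers, e.g. "1,25,300".
         * Returns the number of values written to out[] (at most max), or -1
         * on any malformed input. Small enough to check every path by hand. */
        static int parse_uint_list(const char *s, unsigned long *out, int max)
        {
            int n = 0;
            while (*s) {
                if (!isdigit((unsigned char)*s))
                    return -1;                    /* each field starts with a digit */
                unsigned long v = 0;
                while (isdigit((unsigned char)*s)) {
                    unsigned d = (unsigned)(*s - '0');
                    if (v > (ULONG_MAX - d) / 10)
                        return -1;                /* overflow */
                    v = v * 10 + d;
                    s++;
                }
                if (n >= max)
                    return -1;                    /* too many values */
                out[n++] = v;
                if (*s == ',') {
                    s++;
                    if (*s == '\0')
                        return -1;                /* trailing comma */
                } else if (*s != '\0') {
                    return -1;                    /* junk after a number */
                }
            }
            return n;
        }

        int main(void)
        {
            unsigned long vals[8];
            int n = parse_uint_list("1,25,300", vals, 8);
            for (int i = 0; i < n; i++)
                printf("%lu\n", vals[i]);
            return 0;
        }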

        1. Charles 9

          Re: Can't we just admit that sandboxes don't work?

          Formal verifications tend to have very narrow scopes (seL4, for example, doesn't allow DMA, so it can't be used for performance-intensive applications). Plus there's always the specter of hardware pwnage, in which case all bets are off.

  3. Tom Paine

    Is that, this?

    Did the RotW just catch up with El Reg?

    http://pythonsweetness.tumblr.com/post/169166980422/the-mysterious-case-of-the-linux-page-table?platform=hootsuite
