Microsoft, Google: We've found a fourth data-leaking Meltdown-Spectre CPU hole

A fourth variant of the data-leaking Meltdown-Spectre security flaws in modern processors has been found by Microsoft and Google researchers. These speculative-execution design blunders can be potentially exploited by malicious software running on a vulnerable device or computer, or a miscreant logged into the system, to …

  1. Christopher Reeve's Horse
    Holmes

    Well well well well...

    ...well well well well well then.

  2. razorfishsl

    Anyone else get the idea this is a fuck feast, where Intel and co are out to find more Spectre flaws to muddy the waters and stave off any lawsuits against Intel?

    1. Anonymous Coward
      Anonymous Coward

      re: Intel and co are out ... to muddy the waters

      You might very well think that. I couldn't possibly comment.

      Be careful out there.

    2. This post has been deleted by its author

    3. snifferdog_the_second

      @razorfishsl: No

  3. Waseem Alkurdi

    I bet my five dollars/euros/pounds/$currency that we're going to count 13 variants, no more, no less, by December.

    1. YetAnotherJoeBlow

      Spectre

      I'm aware of a group selling a Spectre vuln. They won't disclose the source as that would be giving it away for free. One has to buy on faith. The government would buy it that way - who's going to con the NSA? The price is in the stratosphere. The government price is too low.

      13 by year's end? Easily, but we will never know how many there were, will we? Best wait for new dies.

      1. hplasm
        Coat

        Re: Spectre

        "I'm aware of a group selling a Spectre vuln.

        SPECTRE?

  4. Boris the Cockroach Silver badge
    FAIL

    Its quite depressing really

    We have thousands of people with the smarts in CPU design... and things like this pop out.

    And they not only pop out in Intel chips, but in everything else out there too.

    Almost as if they are all copying each other.......

    1. Brian Miller

      Re: Its quite depressing really

      They aren't copying each other, it's just that there's only so many ways to make something execute more instructions faster. And yes, speed is freaking important.

      There are a lot of timing attacks and other side channels that yield information. One of the important points of all of this is that too many applications don't encrypt sensitive data, even with minimal encryption.

      1. jaduncan

        Re: Its quite depressing really

        In this context encryption outside of the CPU doesn't really matter; the compromised processor is the thing that must touch decrypted data to, well, process it.

        1. Anonymous Coward
          Anonymous Coward

          Re: Maths

          Mathematics seems to suggest it is impossible to prevent all these types of side channels. Wherever execution is time- or resource-limited, you can correlate that to data.

          1. anonymous boring coward Silver badge

            Re: Maths

            "Mathematics seems to suggest it is impossible to prevent all these types of side channels."

            Perhaps, but by limiting the accuracy available to the attacker, I think we should be able to make these sorts of attacks infeasible.
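
            For a concrete sense of what "limiting the accuracy" means: a minimal sketch, assuming an x86 machine and GCC/Clang intrinsics, of coarsening a timestamp so that the roughly 100-cycle gap between a cache hit and a miss vanishes below the granule. The 1000-cycle granule is an illustrative value, not a recommendation.

              #include <stdint.h>
              #include <x86intrin.h>

              enum { GRANULE = 1000 };  /* cycles; chosen to swamp the hit/miss delta */

              uint64_t coarse_timestamp(void)
              {
                  uint64_t t = __rdtsc();    /* raw cycle counter */
                  return t - (t % GRANULE);  /* round down: sub-granule detail is gone */
              }

            This is essentially the route the browsers took after Spectre went public (coarsening performance.now() and the like), though an attacker can claw back some resolution by averaging many samples, so it raises the cost of the attack rather than eliminating it.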

        2. Michael Wojcik Silver badge

          Re: Its quite depressing really

          In this context encryption outside of the CPU doesn't really matter; the compromised processor is the thing that must touch decrypted data to, well, process it.

          Not necessarily true - that is, if the data is being loaded prior to decryption (for example, if decryption is being done by the core being probed), then encryption in memory would prune the Spectre attack tree somewhat. It's not a perfect defense by any means, but it narrows the scope for usefully probing that particular data.

          This is simply a specific case of the more general observation that a Spectre probe sequence will reveal much low-value data, possibly in addition to some high-value data. Encrypted data (which the attacker cannot economically decrypt) is low-value.

          Of course, the attacker may be able to find the key by probing elsewhere. It's a very partial measure.

      2. Anonymous Coward
        Anonymous Coward

        Re: Its quite depressing really

        Half right, half wrong:

        "They aren't copying each other, it's just that there's only so many ways to make something execute more instructions faster. And yes, speed is freaking important."

        Right. Someone comes up with an idea that solves a perceived problem, and it gradually becomes standard practice, unless someone else comes up with a better solution. That's just the way a successful society makes progress.

        "One of the important points of all of this is that too many applications don't encrypt sensitive data, even with minimal encryption."

        Wrong. In general, things have to be in comprehensible form for processing. There are a few ways of doing certain limited operations on encrypted data, but this is orders of magnitude slower than operating on the unencrypted data. Better to just scrap speculative execution, as it is a much lower performance hit.

    2. chuckufarley Silver badge

      Re: Its quite depressing really

      Well, think of it like this:

      Every modern CPU that suffers from these vulnerabilities has literally billions of transistors. Your higher-end CPUs (and GPUs) have more transistors per chip than there will be people on Earth tomorrow, or twenty years from now. It's amazing that we don't have more of these flaws to deal with, and that they are not worse. Perhaps there will be more that come to light soon, or in the next decade. What matters is that we find the flaws and learn how to fix them. It's a case of not being able to make progress until we fail and learn from our mistakes.

    3. anonymous boring coward Silver badge

      Re: Its quite depressing really

      Computing is still in its infancy.

      We have only (relatively) recently started accepting that malicious stuff will run on our computers (invited in by making the web able to run stuff locally). We used to think it was the exception that malicious code ran, whereas now it's the norm.

      Most security still stems from only running stuff from trusted sources. The main security holes are "run everything" platforms such as browsers and Flash.

      1. Anonymous Coward
        Anonymous Coward

        Re: Its *very* depressing really

        "We have only (relatively) recently started accepting that malicious stuff will run on our computers (invited in by making the web able to run stuff locally)."

        Are you serious? And, by the by, who is this "we"?

        Long before the era of ubiquitous web access, it was quite popular for someone to send someone else this week's spreadsheet (or whatever) with a macro in it that did the equivalent of "format c:". I'm thinking that goes back to the 1990s, when MS and IT departments had discovered email but the unprotected web hadn't become ubiquitous.

        And because the commodity software people and commodity sysadmin people typically had no systematic concept of protecting their important resources (files and filesystems, for example) against inappropriate access (stuff visible that shouldn't be, stuff writable that shouldn't be), such matters being seen as the preserve of outdated relics from a forgotten era where security and access controls *had* to be considered as part of a bigger picture, the rest of us end up two decades later with an industry literally subject to Meltdown.

        Maybe the IT crowd should switch it off and switch it on again and see if it works better afterwards. It seems to be the industry standard approach.

        1. Anonymous Coward
          Anonymous Coward

          Re: Its *very* depressing really

          "Maybe the IT crowd should switch it off and switch it on again and see if it works better afterwards. It seems to be the industry standard approach."

          It is a standard approach because resetting a system in an unknown state to a consistent starting configuration is a logical and efficient way to start.

          As the complexity and interconnectedness of computing environments, both obvious and invisible, increase, and the costs and impact of extended downtime on our lives soar, fast solutions, or at least fast diagnosis, become ever more logical.

          1. Anonymous Coward
            Anonymous Coward

            Re: Its *very* depressing really

            "As the complexity and interconnectedness, both obvious and invisible, of computing environments increases, and the costs and impact of extended downtime to our lives soars, fast solutions or at least fast diagnosis becomes ever more logical."

            These "costs of extended downtime" you mention.

            Who's picking up the costs? The system (hardware, software, etc) suppliers, the end users, the magic money tree?

            E.g. Do readers think the TSB IT people, as part of the diagnosis of the recent and ongoing issues, or the IBM people who were ordered in, by the CEO or whoever, might have tried "switching it off and on again"? Does the process seem to have helped resolve the issues?

            Have readers (some of whom must be TSB customers or TSB staff) been asked whether they care about complexity and interconnectedness, or whether they just might perhaps prefer to get at their money again so they can move it somewhere safer (e.g. a shoe box under the bed)?

            Complexity is not a valid excuse.

            "on-off-on provides fast recovery to a known state" might be admissible as a plea for leniency in certain very restricted circumstances.

          2. anonymous boring coward Silver badge

            Re: Its *very* depressing really

            "It is a standard approach because resetting a system in an unknown state to a consistent starting configuration is a logical and efficient way to start."

            It's the standard conditioned approach we use since we have been forced to use fragile systems where various components are allowed to affect each other in unpredictable ways.

            It's a sad state of affairs, that I mainly blame "ctrl-alt-del MicroSoft" for.

        2. anonymous boring coward Silver badge

          Re: Its *very* depressing really

          "Are you serious? And, by the by, who is this "we"?"

          It was a generalisation, of course.

          Obviously people have been tricked into running bad stuff on their machines for a very long time. Thanks, MS, for helping facilitate this... Why not just run emailed stuff when people click on it? Brilliant!

          With "we" I meant your average home PC user, who, by the way, wouldn't even have had email facilities back in the pre-WWW era. (Yes, I know that _some_ would have had that.)

          It's a fact that the WWW has opened up the possibilities for trojans and viruses massively.

    4. anonymous boring coward Silver badge

      Re: Its quite depressing really

      "Almost as if they are all copying each other......."

      CPU design has been openly discussed in fora since day one.

      Most performance enhancement methods are very well known, and subject of research at universities etc.

      Developers get poached between companies.

      All currently used mainstream CPUs follow the same basic design pattern.

      Performance improvements are in the details of implementation, more than overall architecture.

      Pressure to make the fastest processors would lead to designers doing similar things, perhaps ignoring some obscure and unlikely to be exploited side effects (if they even considered them in the first place).

    5. steviebuk Silver badge

      Re: Its quite depressing really

      Might be because, as an engineer said to me the other day, "Maybe I don't see these things as I'm not criminally minded". This was as I was talking about potential exploits in some software we were using.

      Maybe the engineers that design the CPUs think the same. They just want to design the fastest chip possible and not have to think about the security of it.

      In my mind, as an engineer these days you do, unfortunately, need to think criminally in your work, but only in order to protect yourself from what you think criminals might exploit.

      1. anonymous boring coward Silver badge

        Re: Its quite depressing really

        A better word might be that we have to be paranoid.

        We used to look at things from above: privilege, etc. Now we need to become paranoid and consider all the ways people can get to us from all sorts of obscure angles.

        1. Anonymous Coward
          Anonymous Coward

          Re: Its quite depressing really

          "A better word might be that we have to be paranoid"

          No, paranoia is believing things that are not true. Paranoids don't generally worry about real risks.

          Criminal minded is the way to go. Criminals look on everything as a potential opportunity for theft. They seek advantage, not fear.

          1. anonymous boring coward Silver badge

            Re: Its quite depressing really

            "No, paranoia is believing things that are not true. Paranoids don't generally worry about real risks."

            Well, before you have found the security flaw, you are indeed worrying about things that may or may not be true.

      2. Anonymous Coward
        Anonymous Coward

        Re: Its quite depressing really

        "the fastest chip possible and not have to think about the security of it."

        That's probably a reasonable starting point for designing a system to run the DOS version of Crysis - no need to consider data security or data integrity, no need for access controls like real computers used to have, just make that frame rate the fastest you can.

        For anything more realistic, there may be other fundamental considerations, along the lines of "should this instruction in this process with these access rights be permitted this kind of access to this kind of object".

        I'm struggling with some of the published descriptions of "rolling back" the consequences of mispredicted speculative execution.

        As far as I understood it, one of the fundamentals of getting speculative execution to work right in the real world (it's not easy, but it's not impossible either given sufficiently clear thinking) is that the results cannot become visible 'elsewhere' (e.g. to other applications), directly or indirectly, until the speculation up to that point is fully confirmed as correct. Hence multiple 'shadow' register sets and reservation stations and other such well documented and (I thought) well understood stuff.

        Shadow register sets provide multiple virtual (god I hate that word) copies of the real internal processor registers for speculative instructions to play with. Once it's determined which instruction stream gets to execute to completion, all the now-irrelevant copies aren't "rolled back", they're marked as outdated, and only the successful values are allowed to be used for further work. In any case, speculative values *must not* be used for anything that will become visible in the outside world, e.g. a speculative load that pulls data into the real cache - such an operation cannot be "thrown away", and in the right circumstances potentially becomes a route for data leakage.
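
        To make that leakage route concrete, here is the measurement half of such a channel: a minimal sketch, assuming x86 and GCC/Clang intrinsics, with a hit/miss threshold of 80 cycles that is machine-dependent and would need calibrating.

          #include <stdint.h>
          #include <x86intrin.h>

          /* A cache hit returns in tens of cycles, a miss in hundreds, so the
             latency of one load reveals whether the line was fetched - e.g. by a
             speculative load whose result was later "thrown away". */
          static int was_cached(volatile uint8_t *addr)
          {
              unsigned int aux;
              uint64_t start = __rdtscp(&aux);
              (void)*addr;                      /* the timed load */
              uint64_t cycles = __rdtscp(&aux) - start;
              return cycles < 80;               /* calibrate per machine */
          }

          static void evict(volatile uint8_t *addr)
          {
              _mm_clflush((const void *)addr);  /* flush the line before the next trial */
          }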

        Part of this is about processor architecture, part of it is about OS security. All of it requires clear thinking, not just a focus on 'how do we make this code sequence run faster' while forgetting the bigger picture - e.g. should this code sequence be permitted to execute at all.

        There used to be people who understood these things.

        1. Michael Wojcik Silver badge

          Re: Its quite depressing really

          There used to be people who understood these things.

          There still are. This is not a problem of understanding. It's a problem of economics.

          Things will change if and when a group of people representing a sufficient concentration of market power come to value particular security measures more highly than other attributes of whatever they're buying.

          And that's how things have always worked. A Honeywell running Multics was a hell of a lot more secure, under many reasonable threat models, than an Apple II. That didn't stop people from buying an Apple II to do their financial analysis with - because security was not an overwhelming economic advantage.

      3. Claptrap314 Silver badge

        Re: Its quite depressing really

        "Maybe I don't see these things as I'm not criminally minded".

        THIS. This is the mentality that made my time in microprocessor validation so...fruitful. This is the same mentality I tried to beat out of my calculus students. It's not lack of criminality, it's lack of rigor.

        I don't know how engineers are trained, but the important part of a mathematician's training is to find the edge cases that you missed the first time around. And the second.

      4. Daniel 18

        Re: Its quite depressing really

        "Maybe the engineers that design the CPUs think the same. They just want to design the fastest chip possible and not have to think about the security of it."

        In part, it's a matter of metrics. Engineers are not particularly rewarded for producing theoretically secure chips, they are rewarded for producing faster chips on time for the sales types to hype them as faster than the competition.

        In part, it's because a few engineers have months or years to design incredibly complicated chips, while many, many attackers (some lavishly supported by nation states, some by criminal organisations, some in a quiet basement somewhere) have decades to find the small flaws that can be exploited.

    6. Tom 7

      Re: Its quite depressing really

      Federico Faggin designed the Z80 in the mid-1970s. It was, I'd bet, the last non-RISC CPU that one person could get their head round. Since then people have designed bits of CPUs, but how the whole thing works, along with the none-too-simple problem of the operating system running on it, is beyond one person's ability to fully understand. If you look at the way these things are being hacked you have to give some kudos to the people doing the hacking - just before you seriously deform their nasal passages.

      I would imagine, now these mechanisms have been uncovered, they will be added to a long list of things to check for in future designs.

      Having said that, I can easily see a bright engineer at Intel having spotted this already, but the bean counters deciding performance figures were more important than a hopefully-sufficiently-obscure security flaw.

      1. Daniel 18

        Re: Its quite depressing really

        "Federico Faggin designed the Z80 in 1974. It was, I'd bet, the last non-risc CPU that one person could get their head round. Since then people have designed bits of CPUs but how the hole thing works, along with the non too simple problem of the operating system running on it, is beyond one persons ability to fully understand."

        Offhand I don't know the exact date or chip generation, but it's been decades since CPUs were designed directly by humans, rather than by human-guided design tools. That has to translate to a lessened understanding of what is going on 'under the hood' in detail... not that humans could do all the circuit analysis the tools do, even in a lifetime, for a chip with tens of billions of transistors, data paths, etc.

    7. Michael Wojcik Silver badge

      Re: Its quite depressing really

      And they not only pop out in Intel chips, but in everything else out there too.

      This is not at all surprising if you understand the basic concepts of information thermodynamics.

      A system that dissipates energy, where that dissipation is not a completely unbiased random function, is leaking information. In other words, it has side channels.

      If 1) any of those channels are detectable within the system, and 2) the system contains components with different security domains, then you have a potential violation of security boundaries.

      1 & 2 are true of essentially all general-purpose computing, and much embedded (dedicated-purpose) computing, today. The Spectre class has focused specifically on the side channels created by speculative execution, but that's simply because there are a number of ways in which those channels are detectable from within the system.

      Also, again, and contra Chris: These are not "blunders". They are deliberate design trade-offs. Arguably "oversights" is valid; those trade-offs were made based on incomplete risk analysis. But they were deliberate, and made to achieve the explicit goals of the project.

  5. John Brown (no body) Silver badge

    off by default...

    ...might also mean the fixes slow things down even more, and the risk is low enough that they don't want people complaining of slow(er) systems. Until an attack happens; then another update will make it on by default.
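
    For what it's worth, kernels carrying the Variant 4 patches report the current mitigation state under sysfs, so you can see which way your own box is set. A minimal sketch, assuming a Linux kernel new enough (the 4.17-era patches) to expose that interface:

      #include <stdio.h>

      int main(void)
      {
          /* One file per published vulnerability; this one covers Variant 4. */
          FILE *f = fopen(
              "/sys/devices/system/cpu/vulnerabilities/spec_store_bypass", "r");
          if (!f) {
              puts("kernel predates the Variant 4 reporting patches");
              return 1;
          }
          char line[256];
          if (fgets(line, sizeof line, f))
              printf("spec_store_bypass: %s", line);  /* "Vulnerable" or "Mitigation: ..." */
          fclose(f);
          return 0;
      }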

  6. chuckufarley Silver badge

    So who else...

    ...runs NoScript and is glad that they do?

    1. bombastic bob Silver badge
      Unhappy

      Re: So who else...

      "The fourth variant can be potentially exploited by script files running within a program"

      and I, too, run 'NoScript' for reasons that now include THAT --^^^

  7. Anonymous Coward
    Anonymous Coward

    At this point I think Meltdown-Spectre is like herpes

    There is no permanent cure. All you can do is manage and limit damage.

  8. Adrian Harvey
    Go

    Analogy in video incomplete

    It would have been nice if the Red Hat video had extended the quite nice analogy of how speculative execution works to how this vulnerability exploits it. It kind of felt like it leapt from a helpful, high-level analogy - useful for explaining an obscure subject - to "and bad people could exploit this..". It would have been helpful to have an expanded analogy that explained how the speculatively produced bill could lead to another customer receiving your order (or something).

    I can't immediately think of a good way though - anyone else want to have a crack at stretching the analogy to its limits?

  9. Anonymous Coward
    Anonymous Coward

    Anyone wanna buy an abacus??

    In stock and ready to dispatch.

    1. Anonymous Coward
      Anonymous Coward

      Re: Anyone wanna buy an abacus??

      I hope you have paid my license fees on that. I have patent 000,000,001 that covers...

      "Adding or subtracting by movement of device(s) attached to another device(s)"

    2. Tom 7

      Re: Anyone wanna buy an abacus??

      Unfortunately using an abacus in a mobile computing environment is a little insecure.

    3. Rusty 1

      Re: Anyone wanna buy an abacus??

      Yeah, well I have a table made of logs. #0000000002.

      1. GrumpenKraut
        Coat

        Re: Anyone wanna buy an abacus??

        Number 2: logs. Check.

        The one that is somewhat smelly --------->

  10. anonymous boring coward Silver badge

    "Also, to exploit these flaws, malware has to be running on a device"

    Unfortunately, just visiting a website starts all sorts of cra*p running. Draining the battery, flashing useless ads, and other oh-sooo important stuff going on.

    But, yes, these information leak bugs aren't exactly low hanging fruit. Much easier to just fool a gullible user to do something stupid.

    BTW, was it just me who found the explanatory Red Hat video not very useful? (I can't quite map waiters running around with how a CPU works..)

    1. Anonymous Coward
      Anonymous Coward

      For we know the breed.

      There's a whole criminal industry around persuading suckers to download and run malware, but the crooks aren't that clever. They can be traced. But nobody seems to bother; we never hear of anyone even getting to court.

      Is it an investigation failure or a reporting failure?

    2. Alan Brown Silver badge

      "I can't quite map waiters running around with how a CPU works.."

      Bistromathics

  11. Anonymous Coward
    FAIL

    Utter speculation to....

    presume that there is any security in Intel chips at all.

    Show me where!?

    The lack of a password at the front door, perhaps, is about all.

  12. Jamie Jones Silver badge
    Boffin

    I'm confused..

    I still don't see how this valuable secret data that is now in the cache can be accessed by a third party. Even if it's based on a timing attack, if someone attempts to access data they aren't allowed to read, I'd have thought the cache wouldn't affect the speed of response, because the cached data would be unavailable anyway.

    Also, generally, they say virtual servers are likely to be badly affected, but I'd assume most of the hosts of these servers are not going to be idling, so the CPU shouldn't ever do 'idle-time' speculation - and just to be sure, wouldn't running something like SETI or crack in idle time solve that?

    Which leads me to another thought... CPU idle speculation must have an impact on kernel process scheduling, imagine:

    Case 1: A heavy job runs - not much else running on system. CPU speculates during brief idle times.

    Case 2: A heavy job runs. SETI etc. set to run at idleprio only - so shouldn't ever impact on the heavy job. However, in this case, the heavy job now loses the potential CPU speculative advantage, as the CPU is no longer idling as much.

    Argggh, too much to handle, and I've not had my coffee yet...

    Kittens.

    I like kittens.

    Furry, purry, cuddly kittens...

    Ahhh. Much better!

    1. anonymous boring coward Silver badge

      Re: I'm confused..

      "I'd assume most of the hosts of these servers are not going to be idling, so the CPU shouldn't ever do 'idle-time' speculation"

      It's not that sort of idle time (on a very macro scale). It's running some instructions while waiting for data for other instructions: out-of-order execution. Making your system busier on a process-level scale won't make any difference.
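
      A hedged illustration of that micro-scale gap, in C (the compiler and the core decide the real ordering; the comments describe what an out-of-order machine is free to do):

        int overlap(volatile int *p, int a, int b)
        {
            int x = *p;         /* load may miss in cache: hundreds of cycles */
            int y = a * 3 + b;  /* independent of x: can execute during the miss */
            int z = y * y - b;  /* still independent: the core stays busy */
            return x + z;       /* the first point that actually needs the load */
        }

      That window is far below timeslice granularity, which is why keeping the box busy with other processes doesn't close it.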

      1. Jamie Jones Silver badge
        Pint

        Re: I'm confused..

        Ah, so it's during a cycle where whatever process is holding the CPU would be holding it anyway, not for any time period the OS could slice up?

        Makes sense.

        Cheers! ----------------------------->

    2. Michael Wojcik Silver badge

      Re: I'm confused..

      I still don't see how this valuable secret data that is now in the cache can be accessed by a third party.

      It can't. Or, at any rate, that's not what Spectre-class attacks are about.

      Spectre-class attacks use speculative execution to alter the observable state of the system, then observe those state changes to infer what "secret" (not directly accessible) data was subject to their probes.

      In variant 1, for example, the attacker mistrains the branch predictor so that it will reliably take a path that tries to load from an out-of-bounds address (having found a suitable gadget in memory). That causes a speculative load into cache. The results of that branch are thrown away, but the cache remains warm, and the attacker can then time some loads to see whether a given address was cached or not. That, in turn, tells the attacker about the address computed by the code on the mispredicted branch; and that leaks some information about whatever went into computing that address.

      So the attacker gets the gadget code to read the "secret" memory (which it has access to) and use it in creating those addresses, gradually leaking information.

      That's only one variant (and rather simplified). The original Spectre paper explains variants 1 and 2, and other side channels that might be exploitable, in some detail.

      But the point is that the attack code never sees the secret data directly. It sees what effects the secret data had on the rump post-spec-ex system state, when that secret data was misused to alter that state.
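
      The gadget described above has a well-known shape; this is the bounds-check-bypass pattern from the original Spectre paper, stripped to its skeleton (the real attack also needs the predictor training, the cache flushing, and the timing probe):

        #include <stddef.h>
        #include <stdint.h>

        uint8_t array1[16];
        unsigned array1_size = 16;   /* flushed, so the bounds check resolves slowly */
        uint8_t array2[256 * 4096];  /* probe array: one page per possible byte value */

        void victim(size_t x)
        {
            if (x < array1_size) {   /* the attacker mistrains this branch */
                /* Runs speculatively even for out-of-bounds x: the secret byte at
                   array1[x] selects which page of array2 is pulled into cache, and
                   that footprint survives the rollback for the attacker to time. */
                volatile uint8_t tmp = array2[array1[x] * 4096];
                (void)tmp;
            }
        }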

      1. anonymous boring coward Silver badge

        Re: I'm confused..

        This sounds like a well informed explanation and doesn't remind me of waiters running around a restaurant at all! ;-)

      2. Jamie Jones Silver badge
        Pint

        Re: I'm confused..

        @Michael: Belated thanks - that makes it clearer.

  13. ForthIsNotDead
    Facepalm

    Sing like it's 1985...

    I want my... I want my... I want my Motorola 68000.

  14. snifferdog_the_second

    Show some understanding, people

    We're going to see more of these. To get the performance that users have come to expect, modern CPUs are so fiendishly complicated that nobody (even the people who design them) can possibly know how they will behave in all possible situations. I have every sympathy with the chip designers. Getting your head around CPU design these days must be extremely challenging.

    And, to some extent, I blame software developers like myself. We have got lazy. "CPUs are fast," we say, "we don't need to bother about the efficiency of our code." I installed Windows 10 recently on a machine that, a few years ago, was state of the art and always had excellent performance. It ran like a dog, even doing something mundane like popping up a menu. Draw your own conclusions.

    1. Nick Ryan Silver badge

      Re: Show some understanding, people

      For many years the Microsoft path to software "efficiency" has been to throw more hardware resources at it. I don't recall any real instances where they've genuinely made something faster and more efficient.

      If you've ever had cause to step through code at the CPU level you realise that not only does the shitty x86 instruction set waste huge amounts of time juggling and swapping registers around, but much of the Microsoft code (i.e. libraries, variant hell, .NET string handling, etc.) spends huge numbers of CPU instructions not doing anything particularly constructive for the code it's meant to be running. While we don't really have to have efficiency everywhere, the level of inefficiency is staggering, and where this is in lower-level libraries it rapidly escalates to affecting the entire system.

    2. Jamie Jones Silver badge
      Megaphone

      Re: Show some understanding, people

      And, to some extent, I blame software developers like myself. We have got lazy. "CPUs are fast," we say, "we don't need to bother about the efficiency of our code."

      Speak for yourself!

      Of course, this is all true, especially so when Microsoft was dominant - they wanted to bloat the system so that their hardware partners would get more sales, and they'd sell more licenses.

      In the mid-to-late '90s we used to HATE this. The philosophy was there with newbie programmers, even managers.

      It became the culture - the 'norm'.

      "Running slow? Nothing to do with inefficent software - you need a faster machine."

      "Low on memory? You need more, obviously! It's perfectly reasonable for "hello world" to need 8Mb!"

      And of course, this all led to comments I'm sure we've all heard: "Well, all computers crash, or need to be rebooted every few days"

      The same "couldn't care less" attitude gave us "this site best viewed in IE6 - update your browser, loser - microsoft is da shizz"

      But we were just old shites who were pleased to shave code size in bytes, and speed in op cycles. What would we know?

      A bit of a wakeup with Y2K, and a bigger wakeup when mobile phones became more capable, but still relatively underpowered... Of course, as mobile CPU/RAM got better, that hope was soon lost.

      So now we have a similar "we know better" attitude related to the internet and "cloud".

      Many of us facepalm at the lack of security in IoT shite, house door-locks that require internet access, shoving all data onto someone else's servers etc. but still, those of us who have been using the internet since the '80s and are intimate with its design principles... what would we know..... Door lock not working? Get more memory! Toaster slow? It needs a faster CPU! Your whole system has stopped working? Not our fault, someone has deleted the GitHub account we were live-linking to!

      </get off my lawn you kids>

    3. David Nash Silver badge

      Re: Show some understanding, people

      Agreed, and should we be demanding perfection? The world isn't perfect and people aren't perfect. Just look at the various "Who-Me?" and "On Call" articles on this very website.

    4. Michael Wojcik Silver badge

      Re: Show some understanding, people

      And, to some extent, I blame software developers like myself. We have got lazy.

      This may be true, but it has absolutely nothing to do with the existence of Spectre-class vulnerabilities. The economic forces driving faster CPU designs would still be present if software were, say, three orders of magnitude less resource-hungry on average. People would just be running three orders of magnitude more work.

      Work will expand to fill available resources. Faced with a glut of cheap compute resources, companies would do more optimization, more speculative modeling, more whatever.

    5. Anonymous Coward
      Anonymous Coward

      Re: Show some understanding, people

      "I installed Windows 10 recently on a machine that, a few years ago, was state of the art and always had excellent performance. It ran like a dog, even doing something mundane like popping up a menu. Draw your own conclusions."

      Did it, thanks. I was forced to buy a new computer with Windows 10 on it. The day I spent installing and customizing Linux was far more productive than the weeks I would have struggled with Windows trying to secure it and tune it.

      As a bonus, the one program that I didn't have a Linux native replacement for turned out to run 'out of the box' in WINE, which was installed by default. It was a game, nice to have, but not crucial or time sensitive to get it running.

  15. Anonymous Coward
    Anonymous Coward

    Backdoors

    Meltdown-Spectre security flaws? Or designed-in backdoors, more likely.

    1. ArrZarr Silver badge
      Holmes

      Re: Backdoors

      If it were a backdoor, there would be just one, to hide it as much as possible. Not four.

  16. Anonymous Coward
    Anonymous Coward

    Why is anyone surprised ?

    This is what happens when you build down to a budget, rather than up to a spec.

  17. Miss_X2m1

    Can't win the horse race.

    So basically a consumer buys a computer for its blazing processor speed, spends top dollar for that speed, and then ends up with a half-dead horse after all the patching is done with.....LOL!!! :P

  18. Anonymous South African Coward Bronze badge

    More fun and games.

  19. ds33d8977JH3%3£1

    [s]So far, no known exploit code is circulating in the wild targeting the fourth variant.[/s]

    Given that the major antivirus companies have been compromised, is anyone really surprised at the above statement?

  20. adam payne

    The fourth variant can be potentially exploited by script files running within a program – such as JavaScript on a webpage in a browser tab – to lift sensitive information out of other parts of the application – such as personal details from another tab.

    Potentially exploitable by scripts #thanksnoscript

  21. Terje

    I have always subscribed to the principle that if you can run code on the computer you have access to anything and everything on it. I think that rather than losing x% performance by disabling speculative execution etc. we should ask ourselves why on earth we allow JavaScript and similar technologies to actually run code able to snoop on cache memory in the first place. I believe the only sensible solution is to take a step back and lock down the remote code being executed on your machine, and take the slowdown from interpreting code instead of running it through a JIT compiler and letting it run amok on the CPU any way it wants to.

    Of course this leaves open any number of holes to snoop data if you are a native code program, but that is something I already assume it is able to do, by virtue of healthy paranoia.

    1. Jamie Jones Silver badge

      we should ask ourselves why on earth we allow JavaScript and similar technologies to actually run code able to snoop on cache memory in the first place

      A few years ago I would have been indifferent to this - subject to the usual security provisions of course.

      But now, seeing the stuff that runs on Android, most of it, even from "reputable" companies, is basically spyware, and so blatant they don't give a shit about hiding it.

      With my programmer's hat on, I have no sympathy for those complaining about GDPR etc. - they brought it on themselves.

    2. Michael Wojcik Silver badge

      why on earth we allow JavaScript and similar technologies to actually run code able to snoop on cache memory in the first place

      Well, in the first place, we don't. If anyone does, that's a bug. And it has nothing to do with any Spectre variant. These are side channel vulnerabilities. They're not about "snooping"; they're about detecting state using the inevitable effects of a complex system.

      (The sheer amount of misunderstanding about Spectre after these past four months is depressing. Not surprising, but depressing.)

      1. Terje

        I have to admit I have a hard time distinguishing all the different attacks by this time, and I have not read up on most of the newer ones enough to tell exactly how they work, but if you manage in some way, shape, or form to learn things that have been speculatively loaded into cache, how is that not snooping on cache, regardless of what method you use to do it?

        I believe my main point still stands: if you disallow everyone and his mother from running what is in reality arbitrary code on the CPU, they will not be able to exploit the side channel attacks, because they have no ability to run the code needed to do so.

    3. anonymous boring coward Silver badge

      "I think that rather then losing x% performance by disabling speculative execution etc. we should ask ourselves why on earth we allow javascript and similar technologies"

      Well, yes, but Google, MS etc don't care about you. They care about what information they can monetise. Running stuff, and letting advertisers run stuff, on your computer, using your electricity and CPU time, is what gives them more money.

  22. Sil

    Removing speculative execution

    Does anybody know / can anybody make a very educated guess as to how big a performance hit processors would suffer if speculative execution were scrapped from their design altogether?

    1. Jamie Jones Silver badge
      Thumb Up

      Re: Removing speculative execution

      I've no idea.

      Interesting question though... If you're wasting so many cycles on disabling/strengthening/kludging these fixes, there must be a point where simply removing all trace of it would be more efficient, and leave more "CPU space" for other improvements.

      Mind you, maybe a redesign is needed, but I don't see why it's so hard for speculative execution to be done securely. Timing attacks are nothing new.

    2. Korev Silver badge
      Thumb Up

      Re: Removing speculative execution

      >Does anybody know / can anybody make a very educated guess as to how big a performance hit processors would suffer if speculative execution were scrapped from their design altogether?

      I was wondering the same thing

    3. Anonymous Coward
      Anonymous Coward

      Re: Removing speculative execution

      The ARM A53 (and I think the A55) don't have speculative execution; the A57 onwards do.

      Of course there are other differences, but quite a lot of mobile phones in the lower tiers use all-A53 designs, usually with four slower cores and four faster ones.

      So it might be possible to get a rough idea.

      One thing I think is clear: most people, for phones and tablets, do not actually need speculative execution. That's most people, not users of Photoshop.

    4. Claptrap314 Silver badge

      Re: Removing speculative execution

      There is a huge difference between turning off a core architectural feature on an existing product and comparing product A, designed with the feature and product B, designed without.

      Turning off speculative execution entirely on a modern processor will be REALLY expensive. I would speculate > 4x slowdown. > 10x would not surprise me. Given the implementation parallels between supporting out-of-order execution and speculative execution, you might end up turning off OOO as well. If so, you could see slowdowns > 50x.

      1. Michael Wojcik Silver badge

        Re: Removing speculative execution

        > 10x would not surprise me

        I think that's optimistic, at least for x86/x64, and other CISCy architectures such as z. You might get away with only one order of magnitude hit on Power. ARM might do even better (i.e. less than an order of magnitude).

        But x86? Those pipelines are deep. Kill spec-ex (for a general-purpose workload) and you'll be in a world of pain. And it's worth noting that JITted managed languages tend to do even more branching than traditional procedural, compiled 3GLs did.

  23. Anonymous Coward
    Anonymous Coward

    Bad Software Design Rather than Bad Electronics Design

    The Meltdown issue was that information about the contents of memory not accessible by a process is available. That is a serious electronics design flaw, and I think applicable just to Intel processors.

    The various Spectre variants allow information about the contents of memory which is accessible within a process to become known within that process by software which does not directly access that memory. The issue is that software architects and designers have assumed that this information was inaccessible unless directly accessed. This is just very poor software design and not a hardware bug at all.

    I have always assumed that anything within a process is accessible to anything else within that process, and neither the hardware nor the operating systems literature/specifications have ever, to my knowledge, said anything other than this. Ignoring Spectre, an application software error can expose this information. The fact that software has been written which makes a whole set of assumptions about what can and can't be accessed, going way beyond the specifications and statements about processors and operating systems, is a problem with the software, not a bug in the hardware, and not a bug in the OS.

    If you want to control access to something, stick it in a separate process. That has always been the rule, and if you do that then, apart from Meltdown, which WAS a stupid piece of electronics design, you are OK.
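
    A minimal sketch of that "stick it in a separate process" rule, assuming POSIX and with error handling trimmed: the secret only ever exists in the child's address space, and only a derived answer crosses the pipe.

      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void)
      {
          int fds[2];
          pipe(fds);

          if (fork() == 0) {              /* child: the only holder of the secret */
              close(fds[0]);
              const char secret[] = "hunter2";
              char answer = (strlen(secret) > 5) ? 'y' : 'n';
              write(fds[1], &answer, 1);  /* only the derived result leaves */
              _exit(0);
          }

          close(fds[1]);                  /* parent: never maps the secret at all */
          char answer;
          read(fds[0], &answer, 1);
          printf("secret longer than 5 chars? %c\n", answer);
          return 0;
      }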

  24. Anonymous Coward
    Anonymous Coward

    Another NSA back door!!!

    So it's official now, they've discovered another NSA back door!!! When they make 'em, they make 'em big!!

  25. FIA Silver badge

    Old coder rant

    I get that The Reg is irreverent, and the red top of the IT world, but for some reason the constant use of 'design blunder' to describe a subtle interaction between disparate parts of a CPU that went unnoticed for well over 20 years seems a tad disingenuous.

    I know we now live in a world where all commentators are perfect and mistakes are to be vilified but still...

    1. GrumpenKraut
      Pint

      Re: Old coder rant

      For anyone with security in mind it absolutely is a design blunder. The real marvel is that none of the CPU makers found the issues in, what?, two decades???

      Beer. Safe because no speculative execution ---->

      1. Michael Wojcik Silver badge

        Re: Old coder rant

        For anyone with security in mind it absolutely is a design blunder

        This is flat-out wrong, and I'll note that IT security is part of my day job. Your comment misunderstands side-channel information exposure, and it misunderstands design, and it misunderstands security.

  26. Randy Hudson

    How does JavaScript read any memory that hasn't been previously initialized by the interpreter for its use? If that were possible, then your script could have non-deterministic behavior?

    1. chuckufarley Silver badge

      This might not answer your question...

      ...but it might set you down the path to finding an answer.

      https://www.theregister.co.uk/2017/03/14/outdated_javascript_libraries_weaken_web_security/

      https://www.theregister.co.uk/2017/11/20/open_web_application_security_project_2017_report/

      https://www.theregister.co.uk/2018/05/15/microsoft_acg_mitigation_missed_memory_bug/

    2. Michael Wojcik Silver badge

      How does JavaScript read any memory that hasn't been previously initialized by the interpreter for its use?

      It doesn't - at least not in any of the published Spectre attacks.

      The original Spectre paper explains this, and there are other explanations online (and I've posted explanations in comments to Reg stories, as have some others, though you have to filter for accuracy).

      That's the whole point of a side-channel attack. You don't have direct access, so you find a proxy that leaks information about what you want to see.

  27. Claptrap314 Silver badge

    Spectre how & why

    Despite my previous comment, I am not inclined to be overly harsh on the designers for these issues.

    The thing to understand is the difference between architectural state (a-state) and microarchitectural state (m-state). The m-state of a processor is everything that is needed to determine, for any input, what the m-state of the processor will be in the next cycle. This is not a tautology or circularity. We see, for instance, that we need to know the state of the L1 cache to know what will be in the register file. Therefore, the L1 cache is part of the m-state, and we need to include everything that affects the L1 cache as part of the m-state as well. The a-state is everything needed to know the result of executing the next instruction. The difference between the two is mostly caches, but there is another matter.

    Given the m-state and a set of inputs, you can know the final m-state. But the a-state is not closed. In particular, performance registers and clocks are part of the a-state, but they are not predicted by the a-state. You can load from one of these registers, but the final state of that register is not known. Therefore, the a-state is NOT enough to predict the result of a series of instructions. Did they teach you that in school? Probably not in a way that stood out.

    For consumer-grade processors, the contract is strictly about the a-state. The m-state might be presented, but it is subject to change at any time. In particular, if a bug is found in a processor, a patch might be issued to the microcode in the processor to fix the bug. This fix is extremely likely to impact the performance of the processor under at least some circumstances. That is, the m-state behavior is thrown out to fix the a-state. Of course, manufacturers are strongly motivated to keep the m-state changes minimal.

    Design teams have been told to deliver a-state promises at maximum speed.

    Spectre is not a violation of the a-state promises. It is therefore not a "bug" in the sense that the processor is failing to behave as advertised. It is a failure to isolate state, and therefore a security failure in the presence of untrusted code.

    Note that at the front of every manual I saw in the 1996-2006 timeframe, there was a big notice just inside the cover that the processor was not cleared for use with information classified "confidential" or higher. Perhaps they could have been a bit more explicit, but processor designers were already disclaiming side-channel-free products.

    ---

    So, what to expect? 1) Variants of these bugs are going to continue to dribble out. The only way to avoid them on existing product is to turn off speculative execution entirely, which might not even be possible. If it is possible, expect huge drops in performance; 50x would not surprise me. 2) Designs to get around this issue are going to require huge reworking of the caches. Expect cache memory sizes to halve. This will be a major performance hit. 3) Given the size of the performance hit, I expect compute utilization to bifurcate. In trusted computing environments, the benefits of speculative execution are going to support a continuing market for it. In general computing, not. I anticipate this split appearing in the cloud.

    ---

    In another discussion, someone mentioned targeting contention for execution units as a variant. Execution-unit contention might even happen with designs that are merely pipelined. Defending against that would involve adding enough execution units to ensure that it cannot happen. Given my experience, don't hold your breath.

  28. Amorphous
    Meh

    Side-channel timing attacks on Humans

    Reminds me of those psych experiments revealing racial bias in humans who react slower/faster to flash-cards of different races. How to patch that?

    1. Anonymous Coward
      Anonymous Coward

      Re: Side-channel timing attacks on Humans

      Technically you are incorrect - the flash card test detects racism by showing people of different ethnicities. There is only one human race, and racism is believing this to be incorrect.

      But yes, there are a lot of side channel attacks on people. One used to be to catch suspected deserters representing themselves as civilians by having an NCO shout a word of command in their ear unexpectedly.

    2. Claptrap314 Silver badge

      Re: Side-channel timing attacks on Humans

      Funny thing happened when they analyzed those tests based on the political views of the takers. Turns out, conservatives & libertarians often show almost no bias.

      Presumably, it is because we don't see people primarily as members of groups.

      But yeah, if you want to fix it in yourself, stop being a liberal. :D

  29. MaximusM

    I think everybody is missing the real point!

    I have been a software developer for over 30 years and have great insight into the issue this brings up. Most of these processors were designed to be used with a single-user operating system (OS), where you don't have concerns about snooping by other users or applications because it's a single-user system! Once code is running on that CPU there are many, many ways to snoop (which can be considered hacking): looking at memory locations, or at data written to disk, or at deleted file space. I believe the issues they are concerned about are really about multi-user operating systems run on CPUs that are designed for a single user. This is where the CPU does not provide protection against snooping between users.

    Heck, look at the languages these days. The developers are lazy and don't need to release (delete) memory that was allocated; they let a garbage collector do the cleanup in Java and C#. I really consider this garbage collection scheme to be a security leak in itself. In my book this is far worse than the CPU issues they are reporting.

  30. LordHighFixer

    Two roads diverged in a yellow wood....

    The really, truly sad part of all of this is that there was much discussion over these types of "features" in a CPU 40 years ago. It was determined that they were a "Bad Idea". The favoured idea at the time was that multiprocessing could solve the problem, if we could find a few people smart enough to write the code for the hardware we designed (z80/8080/68000 era). We are still waiting. CPU design went down a path that put money in pockets now. Screw the future. It's not like this stuff will still be running when anyone notices the issues.....

  31. osmarks

    Can we just not base our CPUs on the awfully slow execution model of C?

  32. Miss_X2m1

    My Machine Now Runs SLLLLLLLLOOOOOOWWWWW

    Since my machine has been "patched" by Microsoft, it runs amazingly slow. I'm angry.
